Naftiko Signals pulls publicly available blog posts, press releases, job postings, and social media to gather the signals coming out of companies across a diverse range of industries, looking to understand the types of investments companies are making. We evaluate these signals across 25+ areas to understand where each enterprise is in its digital journey.
API - Measuring the overall API investment, from being API-first to design-first to full lifecycle API management, to understand where each company is in its API journey.
FinOps - Measuring the maturity of cloud, SaaS, and AI cost management: user and usage plans, token-level cost tracking, inference spend forecasting, model cost-performance optimization, GPU utilization monitoring, and chargeback models for shared AI infrastructure.
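Token-level cost tracking and chargeback of the kind described above can be approximated directly from usage logs. A minimal sketch in Python, where the model names, per-token prices, and log format are all illustrative assumptions, not figures from any provider:

```python
# Hypothetical per-1K-token prices in USD; real prices vary by provider and model.
PRICES = {
    "model-a": {"input": 0.0005, "output": 0.0015},
    "model-b": {"input": 0.0030, "output": 0.0150},
}

def inference_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of a single inference call, given its token counts."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

def chargeback(usage_log: list[dict]) -> dict[str, float]:
    """Aggregate spend per team so shared AI infrastructure can be charged back."""
    totals: dict[str, float] = {}
    for call in usage_log:
        cost = inference_cost(call["model"], call["input_tokens"], call["output_tokens"])
        totals[call["team"]] = totals.get(call["team"], 0.0) + cost
    return totals

# Made-up usage log entries for two teams.
log = [
    {"team": "search", "model": "model-a", "input_tokens": 1200, "output_tokens": 300},
    {"team": "search", "model": "model-b", "input_tokens": 800, "output_tokens": 400},
    {"team": "support", "model": "model-a", "input_tokens": 5000, "output_tokens": 2500},
]
print(chargeback(log))
```

The same per-team totals feed naturally into forecasting and cost-performance comparisons across models.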
Artificial Intelligence - Measuring the AI investment occurring, from ChatGPT usage to MCP to agentic automation, evaluating each company's grasp of the technology.
Automation - Measuring the automation investment in all of its forms to understand how sophisticated it is and how broadly it is being applied across operations.
Containers - Measuring the container investment, beginning with Docker, moving to the cloud, and assessing where Kubernetes fits in the overall platform journey with containers.
Observability - Measuring the state of observability: how companies are monitoring, testing, tracing, and reporting on their operations via dashboards and other approaches.
Integrations - Measuring the integration investment, involving iPaaS and embedded iPaaS, but also legacy approaches such as ETL, batch, and other common ways of integrating.
Virtualization - Measuring the virtualization investment, including data virtualization, synthetic and example data, API mocking, and other ways companies are virtualizing resources.
Data - Measuring the data investment: how strong the data teams are and what they are focused on, from access, quality, and analytics to governance and compliance.
Databases - Measuring the database investment: which database platforms are in use, and what database tooling is in place across teams to provide data access.
Platform - Measuring the platform investment and where a company is in its platform journey, evaluating what common services, guardrails, and roles are in place.
Operations - Measuring the operational investment: how much companies think about the big-picture strategy of their operations, and how they can improve.
Event-Driven - Measuring the event-driven investment, looking at the types of APIs in use and the technology that is steering companies toward event-driven architecture.
Alignment - Measuring the business alignment investment: whether companies are doing the work to bridge engineering with business and investing more in the productization of APIs.
Open Source - Measuring the open-source investment: how much open source companies use, whether they contribute back, and whether they are investing in inner source.
Standardization - Measuring the standardization investment, beginning with which standards companies intentionally or unintentionally use, along with their strategic approach to standards.
Patterns - Measuring the different patterns in use across the different types of APIs, as well as the parts and pieces of integrations, to understand the diversity of patterns.
Specifications - Measuring the specifications in use, such as OpenAPI, AsyncAPI, and JSON Schema, but also newer formats like A2A, MCP, and other AI specs.
Governance - Measuring the governance that is occurring: how focused it is on APIs, and how aligned it is with wider security, compliance, and other aspects of governance.
Security - Measuring the security investment: whether it remains application-focused or has evolved to be more API-centered, and how it is beginning to account for AI.
Code - Measuring the code investment: what libraries and frameworks are in use, as well as any software development kits that are provided or being applied for integrations.
Apache - Measuring the Apache tooling investment, and what projects are in use, and how they are leveraged as part of operations, including involvement in community.
CNCF - Measuring the CNCF tooling investment, and what projects are in use, and how they are being leveraged as part of operations, including involvement in community.
Cloud - Measuring the cloud investment, beginning with which clouds they use, then looking at their approach to managing the technical and business sides.
Services - Measuring the entire SaaS portfolio of each company, beginning with the number of services, then evaluating which are infrastructure, platform, or more business-focused.
Languages - Measuring which programming languages are used by teams, understanding the diversity of languages in use and their relationship to services and tooling.
Mergers & Acquisitions - Measuring how many mergers and acquisitions are conducted, and how years of this M&A approach to innovation have shaped operations.
Data Pipelines - Measuring investment in training and fine-tuning data pipelines — how organizations curate, label, version, and govern the proprietary datasets used to customize models, including text, image, audio, and video corpora.
Model Registry & Versioning - Measuring whether enterprises are tracking which models (base, fine-tuned, adapted) are deployed where, including version lineage, performance baselines, and rollback capabilities.
Multimodal Infrastructure - Measuring the investment in processing non-text data — document extraction (OCR, PDF parsing), image and video analysis, audio transcription, and the pipelines that normalize these inputs for model consumption.
Domain Specialization - Measuring the degree to which organizations are building or procuring domain-specific models versus relying on general-purpose models, and the regulatory or compliance drivers behind that choice.
Testing & Quality - Measuring the investment in AI-specific testing — eval frameworks, regression benchmarks, hallucination detection, RAG accuracy scoring, and agent task completion rates as part of CI/CD and production monitoring.
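Task completion rates like those mentioned above can be computed with a very small eval harness. A sketch under the assumption that each eval case pairs a prompt with a pass/fail check on the model's output; the cases and the stand-in model here are made up:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]  # returns True if the model output passes

def run_evals(model: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Run every case through the model and return the task completion rate."""
    passed = sum(1 for case in cases if case.check(model(case.prompt)))
    return passed / len(cases)

# Hypothetical stand-in for a real model call.
def toy_model(prompt: str) -> str:
    return "4" if prompt == "2+2" else "unknown"

cases = [
    EvalCase("2+2", lambda out: out.strip() == "4"),
    EvalCase("capital of France", lambda out: "paris" in out.lower()),
]
rate = run_evals(toy_model, cases)
print(f"task completion rate: {rate:.0%}")  # 50%
```

Wiring a harness like this into CI/CD turns the completion rate into a regression benchmark: a drop below a threshold fails the build.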
Developer Experience - Measuring how enterprises are instrumenting AI developer workflows — adoption metrics for coding assistants, productivity baselines, internal satisfaction surveys, and the feedback loops between developers and AI platform teams.
ROI & Business Metrics - Measuring whether organizations have connected AI system performance to business outcomes — time saved, error reduction, customer satisfaction, cost avoidance — or whether measurement remains purely technical.
Regulatory Posture - Measuring how enterprises are responding to AI regulation — EU AI Act classification, risk assessments, model documentation, and whether compliance is proactive or reactive.
AI Review & Approval - Measuring whether formal AI review processes exist — review boards, use case approval workflows, model risk tiering, and the speed at which new AI use cases move from proposal to production.
Privacy & Data Rights - Measuring investment in AI-specific privacy infrastructure — consent management for training data, right-to-deletion compliance across memory systems, data lineage tracking, and cross-border data flow management for model training and inference.
Provider Strategy - Measuring the deliberateness of model provider and infrastructure choices — single vs. multi-provider strategies, contractual terms, switching costs, and the balance between proprietary APIs and self-hosted open-source alternatives.
Partnerships & Ecosystem - Measuring the strategic AI partnerships announced and in practice — cloud AI partnerships (Azure OpenAI, AWS Bedrock, GCP Vertex), model provider relationships (OpenAI, Anthropic, Cohere, Mistral), and how these partnerships shape or constrain architectural choices.
Talent & Organizational Design - Measuring how enterprises are staffing AI initiatives — new roles (ML platform engineer, AI product manager, prompt engineer), team structures (centralized AI teams vs. embedded), and the skills gaps that job postings reveal about organizational readiness.
We evaluate the different concepts at play in each area of investment, and we profile the APIs available across the commercial services and open-source tooling companies are investing in. Our goal is to understand what each company needs to be capable of when it comes to integration and automation, grounded in where they are in their overall digital journey and how they fit into the industry landscape.