Motion

These are all of the moving parts of the Naftiko go-to-market motion, providing a complete list of what is going on within each area. Each area is aggregated and expressed as a main page that helps guide stories and conversations across multiple channels.

1 - Overview

This is the overview of the go-to-market motion, and how we use this site and the YAML behind it to drive Naftiko marketing forward in a way that is accessible to the community, helping make our go-to-market a shared effort from the beginning, doing it all out in the open.

As we work to define our go-to-market motion at Naftiko, we felt there was no reason to keep the process a secret. We have an initial plan in place, but our go-to-market is something that will continue to evolve and change over time based upon the stories we tell and the conversations we have, so we thought it would help increase the velocity of our go-to-market flywheel if we just did it all out in the open.

The majority of the conversations we are having in the ecosystem are public, so it makes sense that we define, develop, publish, and syndicate those stories out in the open. A big part of our go-to-market effort is to openly figure out what go-to-market means for commercial open-source software, which is something we are more than happy to share with the wider open-source and API ecosystems.

This work is managed using Hugo/Docsy, but more importantly using YAML in GitHub. All of the stories, conversations, services, channels, and other information used as part of the go-to-market motion is stored simply as YAML and then synced with other platforms like Google Docs, Notion, and wherever the work for go-to-market activities occurs, with this documentation site being the central reference and point of automation for all of this work.
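
To make the mechanics concrete, here is a minimal sketch of how a single story might be represented in that YAML; the field names are illustrative assumptions, not the actual schema behind this site.

```yaml
# Hypothetical entry in a stories YAML file -- field names are
# illustrative, not the actual schema used behind this site.
- name: API Reusability
  type: blog-post
  channel: Naftiko Blog
  status: in-motion        # moves to the archives once published
  tags:
    - API Management
    - API Reuse
  synced-with:
    - google-docs
    - notion
```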

2 - Stories

These are all of the stories currently in motion as part of Naftiko go-to-market. Once published, stories move into the archives, keeping this list an active representation of the work currently on the table, but managed across multiple platforms, with multiple community stakeholders.

2.1 - API Reusability - Naftiko Blog

This is a blog post on the API Reusability use case, concerned with encouraging the discoverability and reuse of existing APIs, leveraging existing API infrastructure to quantify what API, schema, and tooling reuse looks like, and incentivizing reuse of APIs and schema across the software development lifecycle, reducing API sprawl and hardening the APIs that already exist.

Title

  • API Reusability

Tagline

  • Embracing your legacy and meeting the demands of AI integration using your existing API investments is how you will get the work done.

Description

This use case is concerned with encouraging the discoverability and reuse of existing APIs, leveraging existing API infrastructure to quantify what API, schema, and tooling reuse looks like, and incentivizing reuse of APIs and schema across the software development lifecycle—reducing API sprawl and hardening the APIs that already exist.

Teams need to be able to easily use existing API paths and operations as part of new integrations and automation, ensuring that paths, operations, and schema are available within IDE and copilot tooling—meeting developers where they already work. API reusability enables developers while also informing leadership regarding what API reuse looks like and where opportunities to refine exist.

Benefits

  • Unlock access to legacy data
  • Right-size & unify APIs
  • Foundation for AI initiatives

Pain

  • Build on existing internal APIs
  • Reuse 3rd-party APIs already used
  • Need to leverage existing OpenAPIs
  • We do not understand what API reuse is
  • We aren’t able to communicate API reuse

Gains

  • Leverage existing internal API catalog
  • Establish API catalog for 3rd-party APIs
  • Extend existing OpenAPI for MCP delivery
  • We are able to communicate reuse to leadership
  • We are able to meet developers where they work

Connects

  • Internal APIs
  • Infrastructure APIs
  • SaaS APIs
  • Partner APIs
  • Paths
  • Schema

Adapters

  • HTTP
  • MCP
  • OpenAPI

Tags

  • API Management
  • API Reuse
  • Developer Tooling
  • Developer Enablement
  • Developer Experience
  • API Governance
  • AI Governance
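
The fields above translate directly into a YAML record; this is a hypothetical sketch of how the API Reusability use case could be captured, using only the values already listed in this entry.

```yaml
# Hypothetical YAML capture of this use case, built only from the
# fields listed above; the structure is a sketch, not a fixed schema.
title: API Reusability
tagline: >-
  Embracing your legacy and meeting the demands of AI integration
  using your existing API investments is how you will get the work done.
benefits:
  - Unlock access to legacy data
  - Right-size & unify APIs
  - Foundation for AI initiatives
connects:
  - Internal APIs
  - Infrastructure APIs
  - SaaS APIs
  - Partner APIs
  - Paths
  - Schema
adapters:
  - HTTP
  - MCP
  - OpenAPI
```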

2.2 - Getting On The MCP Bullet Train Without Leaving Governance Waiting At The Platform

As organisations rush headlong into the AI revolution, a familiar pattern is emerging. For integrations, we’ve seen it before with APIs, with microservices, and now with the Model Context Protocol (MCP). The technology arrives, excitement builds, adoption accelerates—and then the cracks begin to show. Today, many enterprises find themselves in precisely this position with MCP, caught between ambitious AI investments and the sobering realisation that their governance practices are failing to keep pace.

2.3 - Going from being on top of public APIs to feeling your way with MCPs

If you’re running an API product business—one where APIs are your primary revenue stream—you’re likely grappling with a critical question right now. What should our strategy be for MCP (Model Context Protocol) in the age of agentic AI? This has moved on from being a theoretical concern. The signals are everywhere. Your established API business is humming along nicely—you’ve got sophisticated infrastructure, mature governance practices, strong customer onboarding, and the revenue numbers to prove it all works. But then the landscape shifts. Agentic AI arrives. Suddenly, you’re fielding questions about how agents will discover and use your APIs. You’re watching competitors experiment with MCP servers. You’re wondering if your current approach is future-proof.

2.4 - Pivoting AI-enabled integration to what customers really want

For software solution providers in the SaaS space, the journey towards artificial intelligence integration has become increasingly complex. Many organisations have invested heavily in building their own branded AI experiences—co-pilots, assistants, and intelligent features designed to showcase their deep industry expertise. However, a quiet revolution is taking place in how AI agents interact with services, and it’s forcing even the most API-mature companies to reconsider their strategic priorities.

2.5 - AI Orchestration Use Case - Naftiko Blog

This is a blog post on the AI Orchestration use case, focusing on the data, skills, and capabilities that internally deployed artificial intelligence agents can use to automate and orchestrate tasks while discovering and negotiating with other agents to accomplish specific goals. This use case employs the open-source Agent-2-Agent specification to securely and confidently enable agentic activity across operations.

Title

  • AI Orchestration

Tagline

  • Planning ahead for teams when it comes to discovery, security, and other properties of the Agent-2-Agent specification helps steer the fleet in the same direction.

Description

This use case provides the data, skills, and capabilities that internally deployed artificial intelligence agents can use to automate and orchestrate tasks while discovering and negotiating with other agents to accomplish specific goals. This use case employs the open-source Agent-2-Agent specification to securely and confidently enable agentic activity across operations.

As teams focus on responding to this AI moment and deploying MCP servers on top of existing APIs and other tooling, they need to begin understanding how to implement agentic automation and orchestration on top of MCP servers. Teams need structure and guidance when it comes to authentication and authorization, discovery, governance, and all the standardization required to deploy agents at scale.
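
For illustration, this is a rough sketch of the kind of agent descriptor the Agent-2-Agent specification centers on, shown as YAML for readability; the field names approximate the spec's agent card, the agent itself is hypothetical, and both should be checked against the current A2A release before being relied upon.

```yaml
# Rough, hypothetical A2A-style agent descriptor, shown as YAML for
# readability; field names approximate the spec's agent card and the
# agent is invented for illustration.
name: order-status-agent
description: Looks up order status across internal and SaaS APIs.
url: https://agents.example.com/order-status
version: 0.1.0
skills:
  - id: get-order-status
    name: Get order status
    description: Returns the current status for a given order number.
    tags:
      - orders
      - status
```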

Tags

  • Automation
  • Agentic
  • Agent-2-Agent
  • Compliance

Benefits

  • Discover skills & capabilities
  • Internal & external agents
  • Implement A2A protocol
  • Apply policy-driven governance

Pains

  • High complexity in standardizing message formats and handshakes.
  • Difficult to cap liability; risk of hallucinated agreements.
  • Requires vetting external agents; security risks.
  • High debugging difficulty; “Black Box” interactions.

Gains

  • Universal connectivity; “Write once, talk to many.”
  • Removal of human bottlenecks in approval chains.
  • Access to dynamic markets and real-time supply chains.
  • Modular system; easy to swap out underperforming agents.

Connects

  • Internal APIs
  • Infrastructure APIs
  • SaaS APIs
  • Partner APIs
  • MCP Servers

Adapters

  • HTTP
  • OpenAPI
  • MCP
  • A2A

2.6 - Capabilities - API Evangelist

This is a blog post on capabilities meant for the API Evangelist blog, providing an opinionated look at what capabilities are and why now is the time we need them to align engineering with business outcomes, while also providing the much-needed context for powering AI copilots and agents, helping increase the likelihood that we will achieve the outcomes we desire.

2.7 - Cost - Naftiko Blog

This is a blog post on managing costs when it comes to integrations and automation using a capabilities-driven approach, focusing on the cost, spend, and budget management aspects of integrating primarily across 3rd-party services, but also possibly with internal APIs, helping bring more attention to the cost of operating integrations and automation.

2.8 - Data Sovereignty Use Case - Naftiko Blog

This is a blog post on the Data Sovereignty use case, focusing on empowering companies to take control of their data that resides across the third-party SaaS solutions they use regularly. The data sovereignty movement is centered on establishing more control over the data generated across the different services you depend on, ensuring data is integrated, migrated, and synced to data and object stores where a company has full control and access.

Title

  • Data Sovereignty

Tagline

  • Govern, encrypt, and audit how data moves through your entire stack.
  • APIs, SaaS tools, and AI all touch sensitive data, but governance rarely keeps up. Shadow IT, unencrypted transfers, and compliance risk abound.

Description

This use case focuses on empowering companies to take control of their data that resides across the third-party SaaS solutions they use regularly. The data sovereignty movement is centered on establishing more control over the data generated across the different services you depend on, ensuring data is integrated, migrated, and synced to data and object stores where a company has full control and access.

Data sovereignty enables teams to localize data and train local AI models using the data they produce across third-party platforms. This use case may be aligned with country or regional regulations, or it may simply be part of enterprise compliance programs. Data sovereignty investments have increased as part of the growth of AI integrations and the need for context across third-party systems, as well as the increasing value of data itself.
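
A hypothetical sync job makes the pattern easier to picture; the service names, schedule, and properties below are illustrative only, not part of any specific product configuration.

```yaml
# Hypothetical sync job illustrating the data sovereignty pattern;
# names, schedule, and properties are illustrative only.
sync:
  name: crm-contacts-to-object-store
  source:
    type: saas-api            # a 3rd-party SaaS API the company depends on
    adapter: HTTP
  destination:
    type: object-store        # storage the company fully controls
    path: sovereign-data/crm/contacts/
  schedule: hourly
  encrypt-in-transit: true
  encrypt-at-rest: true
  audit-log: enabled
```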

Tags

  • Data
  • Regulation
  • Compliance
  • Sovereignty
  • Control

Benefits

  • Aggregate 3rd-party SaaS data
  • Increase visibility of SaaS data
  • Allow for more SaaS data discovery
  • Encourage the reusability of SaaS data
  • Enable ETL/ELT access to SaaS data

Pain

  • Difficulty in Accessing 3rd-Party Data Sources
  • Regulatory Mandate for Control Over All Data
  • GraphQL Was Difficult to Adopt Across Teams
  • Lack of Data Available for AI Pilot Projects

Gains

  • Provide SQL Access Across 3rd-Party Data Sources
  • Satisfy Government Regulatory Compliance Requirements
  • Speak SQL Across All Data Sources For Any Teams
  • Universal Access to Data for Use in AI Projects

Connects

  • Infrastructure APIs
  • SaaS APIs
  • Partner APIs

Adapters

  • HTTP
  • OpenAPI

2.9 - Innovation - Naftiko Blog

This is a blog post focused on innovation, helping shine a light on how managing cost, velocity, and risk can lead to more innovation, and helping enterprises reach an agreed-upon understanding of what innovation looks like by focusing on capabilities that consistently drive conversations around the cost, velocity, and risk of integration and automation.

2.10 - Naftiko Signals - API Evangelist

This is a blog post announcing the Naftiko Signals program on the API Evangelist blog, providing a different perspective on how the program came to be and why it is turning into something more, helping provide a behind-the-scenes snapshot of what is going on with the research, but also the conversations we are having with design, service, and market partners.

2.11 - Naftiko Signals White Paper - Naftiko Blog

This is a blog post about the Naftiko Signals white paper that was published in December, providing an overview of the paper and the program behind it, and why Signals provides an important way of looking at the enterprise system whether you are inside or outside of that system, helping generate more leads using the white paper.

2.12 - Risk - Naftiko Blog

This is a business outcomes blog post focused on managing risk when it comes to integrations and automation, helping demonstrate how a capabilities-driven approach can help with security, privacy, compliance, and other common approaches to managing risk across enterprise operations, from the dimensions businesses care about.

2.13 - SQL Data Access Use Case - Naftiko Blog

This is a blog post on the SQL Data Access use case, focusing on consistently unlocking the data companies currently depend upon across multiple third-party SaaS providers and a variety of existing database connections via JDBC and ODBC to ensure AI integrations have the data they require. Data today is spread across many internal and external systems, and making it consistently available as part of AI integrations has significantly slowed the delivery of new products and features.

Title

  • SQL Data Access

Tagline

  • Data lives in silos. Teams want insights now but every new API means another custom connector.

Body

This use case seeks to consistently unlock the data companies currently depend upon across multiple third-party SaaS providers and a variety of existing database connections via JDBC and ODBC to ensure AI integrations have the data they require. Data today is spread across many internal and external systems, and making it consistently available as part of AI integrations has significantly slowed the delivery of new products and features.

Teams benefit from consistent SQL access to data sources via ODBC/JDBC interfaces, and expanding this access to third-party SaaS will help teams provide the context, resources, tooling, and data needed to deliver AI integrations across the enterprise. The capability and resulting engine deployment for this use case provides a unified, consolidated, and simplified approach to providing the data needed to power individual AI integrations within specific business domains.
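
As a sketch of what this can look like in practice, the hypothetical configuration below maps a SaaS API and an existing JDBC database into SQL tables and shows the kind of query an analyst could then run; the source names, tables, and sample query are illustrative assumptions.

```yaml
# Hypothetical federated SQL configuration; source names, driver,
# tables, and the sample query are illustrative only.
sources:
  - name: crm
    type: saas-api          # 3rd-party SaaS exposed to analysts as SQL tables
    adapter: HTTP
  - name: warehouse
    type: jdbc
    driver: postgresql
tables:
  - crm.contacts
  - warehouse.orders
example-query: >-
  SELECT c.company, COUNT(o.id) AS order_count
  FROM crm.contacts c
  JOIN warehouse.orders o ON o.contact_id = c.id
  GROUP BY c.company;
```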

Tags

  • SQL
  • Data
  • SaaS
  • Dashboards
  • Analytics
  • Copilots

Benefits

  • Unlock SaaS data
  • JDBC / ODBC drivers
  • Federated SQL processing

Pain

  • Limited or No Access to SaaS Data for Analytics Teams
  • No Access to Data Sources Across MCP-Enabled AI Integration
  • Demand for 3rd-Party Data for Business Intelligence in Dashboards
  • Demand for 3rd-Party Data by Data Science for ML Engineering

Gains

  • Access to SaaS Data via SQL
  • Access to Internal APIs via SQL
  • Easy Connections to Existing Dashboards
  • Plug-and-Play Connectors for Data Science

Connects

  • Internal APIs
  • Infrastructure APIs
  • SaaS APIs
  • Partner APIs
  • Legacy APIs

Adapters

  • HTTP
  • OpenAPI
  • ODBC
  • JDBC
  • MCP

2.14 - Velocity - Naftiko Blog

This is a business outcomes blog post focused on the velocity associated with integrations and automation, helping shine a light on moving at the right velocity when it comes to the 3rd-party and internal systems we use, and helping ensure we have a solid map of the services and the domains in which they are used so teams can move faster when using them.

3 - Conversations

These are the active conversations on the table for the Naftiko go-to-market. Once a podcast occurs or a conversation gets used as part of podcasts, shorts, and other storytelling, it will be put into the archive, keeping this an active representation of the conversations currently in play.

3.1 - Naftiko Capabilities Podcast - January 13th, 2025

This is a placeholder for an upcoming episode of the Naftiko Capabilities podcast, ensuring that we are planning ahead, with the work being published here as it becomes available, at different rates depending on the conversations we have and the topics we are planning.

This is a placeholder for an upcoming episode, with work added when ready.

Table of Contents

  • Intro - Kin Lane
  • Topic - ????
  • Segue - Kin Lane
  • Topic - ????
  • Segue - Kin Lane
  • Closing - Kin Lane

3.2 - Naftiko Capabilities Podcast - January 15th, 2025

This is a placeholder for an upcoming episode of the Naftiko Capabilities podcast, ensuring that we are planning ahead, with the work being published here as it becomes available, at different rates depending on the conversations we have and the topics we are planning.

This is a placeholder for an upcoming episode, with work added when ready.

Table of Contents

  • Intro - Kin Lane
  • Topic - ????
  • Segue - Kin Lane
  • Topic - ????
  • Segue - Kin Lane
  • Closing - Kin Lane

3.3 - Naftiko Capabilities Podcast - January 20th, 2025

This is a placeholder for an upcoming episode of the Naftiko Capabilities podcast, ensuring that we are planning ahead, with the work being published here as it becomes available, at different rates depending on the conversations we have and the topics we are planning.

This is a placeholder for an upcoming episode, with work added when ready.

Table of Contents

  • Intro - Kin Lane
  • Topic - ????
  • Segue - Kin Lane
  • Topic - ????
  • Segue - Kin Lane
  • Closing - Kin Lane

3.4 - Naftiko Capabilities Podcast - January 22nd, 2025

This is a placeholder for an upcoming episode of the Naftiko Capabilities podcast, ensuring that we are planning ahead, with the work being published here as it becomes available, at different rates depending on the conversations we have and the topics we are planning.

This is a placeholder for an upcoming episode, with work added when ready.

Table of Contents

  • Intro - Kin Lane
  • Topic - ????
  • Segue - Kin Lane
  • Topic - ????
  • Segue - Kin Lane
  • Closing - Kin Lane

3.5 - Naftiko Capabilities Podcast - January 26th, 2025

This is a placeholder for an upcoming episode of the Naftiko Capabilities podcast, ensuring that we are planning ahead, with the work being published here as it becomes available, at different rates depending on the conversations we have and the topics we are planning.

This is a placeholder for an upcoming episode, with work added when ready.

Table of Contents

  • Intro - Kin Lane
  • Topic - ????
  • Segue - Kin Lane
  • Topic - ????
  • Segue - Kin Lane
  • Closing - Kin Lane

3.6 - Naftiko Capabilities Podcast - January 29th, 2025

This is a placeholder for an upcoming episode of the Naftiko Capabilities podcast, ensuring that we are planning ahead, with the work being published here as it becomes available, at different rates depending on the conversations we have and the topics we are planning.

This is a placeholder for an upcoming episode, with work added when ready.

Table of Contents

  • Intro - Kin Lane
  • Topic - ????
  • Segue - Kin Lane
  • Topic - ????
  • Segue - Kin Lane
  • Closing - Kin Lane

3.7 - Naftiko Capabilities Podcast - January 6th, 2025

The first episode is with Mike Amundsen and Christian Posta talking about capabilities, coming at them from the hypermedia and semantics side as well as the artificial intelligence and gateway perspective, beginning to explore what capabilities are with two thought leaders in the space who have done a lot of thinking and writing on the subject.

The first episode focused on capabilities with Mike Amundsen and Christian Posta.

Table of Contents

  • Intro - Kin Lane
  • What is a capability? - Mike Amundsen
  • Segue - Kin Lane
  • What is a capability? - Christian Posta
  • Segue - Kin Lane
  • Closing - Kin Lane

3.8 - Naftiko Capabilities Podcast - January 8th, 2025

This is a placeholder for an upcoming episode of the Naftiko Capabilities podcast, ensuring that we are planning ahead, with the work being published here as it becomes available, at different rates depending on the conversations we have and the topics we are planning.

This is a placeholder for an upcoming episode, with work added when ready.

Table of Contents

  • Intro - Kin Lane
  • Topic - ????
  • Segue - Kin Lane
  • Topic - ????
  • Segue - Kin Lane
  • Closing - Kin Lane

3.9 - Conversation with Sam Newman in November 2025

Sat down with Sam Newman to talk microservices, and learn more about what he is seeing with the companies he consults with, as well as how microservices have evolved as part of the artificial intelligence shift in the landscape, drawing some parallels between the domain work and right-sizing of microservices and language models.

Very relevant discussion about microservices and capabilities that provides a good deal of material for Naftiko storytelling.

Questions

  • Who are you?
  • What is a microservice?
  • What percentage of microservices are people vs. technology?
  • How are microservices dependent on organizational structures?
  • What have you learned from the last ten years of microservices?
  • How do you assess if an organization is ready for microservices?
  • What is the impact of AI you’ve seen on organizations?
  • Is AI supposed to be controlled by us, or control us?
  • Do you feel like small language models are the future?
  • What role does ownership and accountability play in AI?
  • What recommendations do you have to help right-size capabilities?
  • How do changes in team structures impact success?
  • What is the role that STEM plays in your work?
  • What recommendations do you have for people just starting?

3.10 - Conversation with Simon Wardley in November 2025

Had a great conversation with Simon Wardley about Wardley Mapping, but also the politics in technology and how he compares the AI evolution to the cloud shift in the computing landscape, and got his views on the geopolitical aspects of artificial intelligence, ending with a focus on the way he uses LinkedIn.

Very relevant discussion about mapping and the evolution of AI that provides a good deal of material for Naftiko storytelling.

Questions

  • What is Wardley mapping?
  • Is there politics in technology?
  • What do you say to people when they say AI will be cheaper?
  • How do we make AI visible?
  • What are the implications of AI at the geopolitical level?
  • Can you talk to how you use LinkedIn?
  • What gives you hope right now?

3.11 - Conversation with Christian Posta of Solo October 2025

This is a conversation with Christian Posta about MCP and capabilities, approaching the capabilities conversation from a very technical network view, but also as defined by artificial intelligence and Christian’s focus on MCP and authentication, helping provide valuable input on what we do at Naftiko.

A view of the capabilities discussion from the network perspective and artificial intelligence.

Questions

  • What is a Capability? 2:08
  • What does a natural language description of capabilities mean? 4:27
  • What is the role of identity and access management?
  • What is the role of the gateway when it comes to AI?
  • Are Agent Gateways centralized or federated?
  • What is the commercial open-source strategy of the Agent Gateway?
  • What do you need from the community?

3.12 - Conversation with David Boyne of EventCatalog October 2025

This is a conversation with David Boyne about EventCatalog, and the role that events play in discovery and visibility of our infrastructure, working to align event-driven architecture with how Naftiko is approaching the signals coming from enterprises, and understanding where EventCatalog fits into the picture.

Walk through the different uses of EventCatalog and how it helps with integrations.

Questions

  • What is Event Catalog?
  • What role does event-driven architecture play in discoverability and visibility?
  • Can events help us document our systems in real-time?
  • What thinking goes into your support of open-source specifications?
  • How does Event Catalog help map your business domain?
  • Does Event Catalog help out at the tactical level?
  • How do you approach commercial open-source?
  • Can event-driven architecture help with ephemeral API discovery?
  • Do you feel event-driven will provide the richness and semantics AI agents will need?
  • How do we get more business people involved with Event Catalog?
  • What is your biggest need right now?

3.13 - Conversation with Kevin Swiber October 2025

Sat down with Kevin Swiber to talk about hypermedia, but also the difference between the hypermedia view of things and MCP, understanding more about the semantics we will need for AI agents to work, building on top of Kevin’s great work around MCP, but coming from his wealth of experience when it comes to API management over the years.

Very relevant discussion about hypermedia and capabilities that provides a good deal of material for Naftiko storytelling.

Questions

  • What is your experience with hypermedia? 2:04
  • What has changed from hypermedia to MCP? 5:53
  • Do we have the semantics needed for AI agents?
  • Is the world messy?
  • What keeps you going with your work?

3.14 - Conversation with Mike Amundsen of Apiture October 2025

Spent an hour with Mike Amundsen looking at microservices and hypermedia, but also went in deep on the role that ontology and taxonomy play in what we are trying to accomplish with artificial intelligence, walking through the cast of characters who have invested in what we need to make sense of information at scale.

Very relevant discussion about hypermedia and capabilities that provides a good deal of material for Naftiko storytelling.

Questions

  • What has changed since you wrote your first API book?
  • What’s the state of the web today?
  • What is the state of microservices?
  • What is hypermedia? 1:29
  • What is the role of information architecture in providing the meaning we need?
  • What is the role of ontology and taxonomy in hypermedia? 2:04
  • Why was Ted Nelson a literary radical?
  • Who was Wendy Hall?
  • Who was Leonard Richardson?
  • What is a capability? 4:13
  • Who should be in the room when crafting capabilities? 1:53
  • How do we reconcile traditional automation and orchestration with agentic?
  • How do you help people understand and apply semantics?
  • How do you recommend folks stay grounded in hype cycles like today?
  • How do you help people be successful with AI in their work when they don’t have much control?

3.15 - Conversation with David Biesack of Apiture October 2025

A very compelling conversation with David Biesack about JSON Schema, and a deep dive into how they are using it for managing OpenAPI, but also their change log and pipelines, offering a compelling look at the role that JSON Schema can and should play when it comes to APIs, but also specifically schema.

A great discussion of the spectrum of uses by Apiture of JSON Schema to validate and stabilize operations.

Questions

  • How are you using JSON Schema?
  • Should your average developer have to deal with schema references?
  • How do you manage schema discovery?
  • How do you generate schema change logs?
  • When do you use YAML vs JSON?
  • How do you extend schema?
  • How do extensions help share how people use schema?
  • How do you educate people about schema?

4 - Capabilities

These are the capabilities we are actively developing based upon Naftiko Signals conversations we are having, producing real-world artifacts using capabilities, OpenAPI, JSON Schema, Bruno Collections, MCP, and other standards, to deliver and iterate upon each capability.
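
As a rough illustration of how those artifacts come together, here is a hypothetical capability definition; the file paths and property names are assumptions used for the sketch, not the working capability schema.

```yaml
# Hypothetical capability definition tying together the artifacts named
# above; paths and property names are assumptions for this sketch.
capability:
  name: manage-events
  use-case: ai-context
  artifacts:
    openapi: openapi/manage-events.openapi.yaml
    json-schema: schemas/event.schema.json
    bruno-collection: bruno/manage-events/
    mcp-server: mcp/manage-events/
  adapters:
    - HTTP
    - OpenAPI
    - MCP
```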

4.1 - API Reusability

Prototyping an API reusability capability to help drive conversation with different stakeholders when it comes to reusing APIs. We are still understanding what exactly this means, and this capability is driving that conversation out in the open on GitHub, our blog, and social media.

API Reusability

This is an exploratory proof of concept to quantify what API reuse is across a catalog of APIs for a domain, report it to the rest of the company using existing services, and then incentivize API reuse in VSCode, encouraging developers to reuse existing patterns across the APIs they are producing and consuming.

API reusability has been identified as a need across multiple conversations Naftiko is having with companies, and this repository is meant to explore what is possible across many different providers, helping better understand what API reuse means in a way that others can use.

Use Case

This is an implementation of the API Reusability use case for multiple pilot customers, leveraging the use case schema being developed to drive use case conversations, as well as how they are applied to each individual capability.

Capabilities

This end-to-end use case has six separate capabilities: five individual capabilities that can be applied on their own, as well as an aggregate capability that brings them all together to provide the right-size context window for an MCP server incentivizing reuse in VSCode, while also updating leadership and other teams on that reuse.

As many of the steps as possible are executed and validated using Bruno when an HTTP adapter is used, which was pushed further with this iteration, using Bruno pre- and post-request scripts to calculate the API reuse definition from the API catalog data gathered.

Image

This is an image of this aggregate API reusability capability to try and capture everything going on in the visual language we already use for our deck.


4.2 - Manage Events

Prototyping an events management capability to assist in conversations we are having, but also to explore how Naftiko could manage the events and meetups we are planning on producing, helping develop a capability out in the open as we work to build the product, iterating on capabilities we can use.

This is an exploratory proof of concept to explore what an end-to-end event management capability could look like, assembling all the existing standards in a single place to help inform what the capability schema might look like to support our AI Context use case, while providing governance along the way.

Use Case

This is an application of our AI Context use case, leveraging the use case schema being developed to drive use case conversations, as well as how they are applied to each individual capability.

  • AI Context - Using a capability as the context window for producing MCP servers.

Capabilities

This end-to-end use case has six separate capabilities: five individual capabilities that can be applied on their own, as well as an aggregate capability that brings them all together to provide the right-size context window for an MCP server.

Image

This is an image of this aggregate events AI context capability to try and capture everything going on in the visual language we already use for our deck.


5 - Use Cases

These are the use cases we are focused on as part of go-to-market activities, helping standardize the business outcomes of the capabilities we are developing, ensuring that we are aligning the business and engineering details for any capability we are developing as part of this work.

6 - Services

These are all of the services we are currently supporting when it comes to building capabilities, offering a variety of services to support conversations with different companies, adding new ones as they are requested and needed as part of our ongoing Naftiko Signals work.
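
Each service entry below carries the same handful of fields, which might be captured in YAML along these lines; the structure is a hypothetical sketch built from the listing, repository, APIs, and properties shown in the entries.

```yaml
# Hypothetical YAML record for one supported service, using the fields
# shown in the entries below; the list structure is a sketch.
- name: Anthropic
  listing: https://contracts.apievangelist.com/store/anthropic/
  repo: https://github.com/api-evangelist/anthropic
  apis: []          # populated as API contracts are gathered
  properties: []    # supporting links and metadata
```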

6.1 - Anthropic

Claude is an AI assistant created by Anthropic that helps people with a wide variety of tasks through natural conversation. It can assist with writing and editing, answer questions on many topics, help with analysis and research, provide coding support, engage in creative projects, and offer explanations of complex concepts.

Listing: https://contracts.apievangelist.com/store/anthropic/

Repo: https://github.com/api-evangelist/anthropic

APIs

Properties

6.2 - Atlassian

Atlassian is a software company that develops collaboration, productivity, and project management tools to help teams work more efficiently. Its products are designed to enhance teamwork, streamline workflows, and support project tracking across a wide range of industries.

Listing: https://contracts.apievangelist.com/store/atlassian/

Repo: https://github.com/api-evangelist/atlassian

APIs

6.3 - Avalara

Avalara is a tax compliance software company that automates sales tax, VAT, and other transaction taxes for businesses. It calculates the correct tax rates for each transaction based on location and product type across thousands of jurisdictions, then handles tax return filing and compliance monitoring. Businesses use it because sales tax rules are extremely complex and constantly changing, especially when selling across multiple states or online, and Avalara’s automation saves them from having to manually track and comply with thousands of different tax requirements.

Listing: https://contracts.apievangelist.com/store/avalara/

Repo: https://github.com/api-evangelist/avalara

APIs

Properties

6.4 - BigCommerce

BigCommerce is a NASDAQ-listed ecommerce platform that provides software-as-a-service offerings to retailers. The company’s platform includes online store creation, search engine optimization, hosting, marketing, and security for small to enterprise-sized businesses.

Listing: https://contracts.apievangelist.com/store/bigcommerce/

Repo: https://github.com/api-evangelist/bigcommerce

APIs

Properties

6.5 - Cvent

Cvent is a leading event management software company that helps organizations plan, promote, and execute successful events. Their comprehensive platform allows users to easily create event websites, manage registrations, and track attendee engagement. With features such as event budgeting, email marketing, and attendee analytics, Cvent streamlines the event planning process and helps businesses maximize their return on investment. Additionally, their mobile app and on-site check-in tools ensure a seamless experience for both event organizers and attendees. Overall, Cvent empowers organizations to deliver impactful and memorable events that drive business results.

Listing: https://contracts.apievangelist.com/store/cvent/

Repo: https://github.com/api-evangelist/cvent

APIs

Properties

6.6 - Datadog

Datadog is a monitoring and analytics platform that helps organizations gain insight into their infrastructure, applications, and services. It allows users to collect, visualize, and analyze real-time data from a variety of sources, including servers, databases, and cloud services. Datadog’s platform enables companies to track performance metrics, troubleshoot issues, and optimize their systems for peak efficiency. With its customizable dashboards and alerting system, Datadog empowers teams to proactively monitor their environments and ensure smooth operations. Ultimately, Datadog helps businesses make data-driven decisions and improve the overall performance of their technology stack.

Listing: https://contracts.apievangelist.com/store/datadog/

Repo: https://github.com/api-evangelist/datadog

APIs

Properties

6.7 - Docker

Docker is a software platform that allows developers to package, distribute, and run applications in containers. Containers are lightweight, standalone, and portable environments that contain everything needed to run an application, including code, runtime, system tools, libraries, and settings. Docker provides a way to streamline the development and deployment process by isolating applications in containers, making it easier to manage dependencies, scale applications, and ensure consistency across different environments. Docker simplifies the process of building, deploying, and managing applications, ultimately leading to increased efficiency and productivity for developers.

Listing: https://contracts.apievangelist.com/store/docker/

Repo: https://github.com/api-evangelist/docker

APIs

6.8 - Figma

Figma’s mission is to make design accessible to everyone. Our two products help people from different backgrounds and roles express their ideas visually and make things together.

Listing: https://contracts.apievangelist.com/store/figma/

Repo: https://github.com/api-evangelist/figma

APIs

Properties

6.9 - GitHub

GitHub is a cloud-based platform for software development and version control, built on Git. It enables developers to store, manage, and collaborate on code. In addition to Git’s distributed version control, GitHub offers access control, bug tracking, feature requests, task management, continuous integration, and wikis for projects. Headquartered in California, it has operated as a subsidiary of Microsoft since 2018.

Listing: https://contracts.apievangelist.com/store/github/

Repo: https://github.com/api-evangelist/github

APIs

Properties

6.10 - Google

Google Cloud APIs are programmatic interfaces to Google Cloud Platform services. They are a key part of Google Cloud Platform, allowing you to easily add the power of everything from computing to networking to storage to machine-learning-based data analysis to your applications.

Listing: https://contracts.apievangelist.com/store/google/

Repo: https://github.com/api-evangelist/google

APIs

Properties

6.11 - Grafana

Grafana is a powerful open-source platform for data visualization and monitoring. It allows users to create interactive, customizable dashboards that display real-time data from multiple sources in a visually appealing way. With Grafana, users can easily connect to databases, cloud services, and other data sources, and then display that data in various chart types, tables, and histograms. Grafana also offers advanced alerting capabilities, enabling users to set up alerts based on specified conditions and thresholds. Overall, Grafana is a versatile tool that helps organizations make sense of their data and monitor the performance of their systems in a centralized, user-friendly interface.

Listing: https://contracts.apievangelist.com/store/grafana/

Repo: https://github.com/api-evangelist/grafana

APIs

6.12 - HubSpot

HubSpot is a leading CRM platform that provides software and support to help businesses grow better. Our platform includes marketing, sales, service, and website management products that start free and scale to meet our customers' needs at any stage of growth. Today, thousands of customers around the world use our powerful and easy-to-use tools and integrations to attract, engage, and delight customers.

Listing: https://contracts.apievangelist.com/store/hubspot/

Repo: https://github.com/api-evangelist/hubspot

APIs

Properties

6.13 - Kong

Kong provides the foundation that enables any company to securely adopt AI and become an API-first company speeding up time to market, creating new business opportunities, and delivering superior products and services.

Listing: https://contracts.apievangelist.com/store/kong/

Repo: https://github.com/api-evangelist/kong

APIs

Properties

6.14 - LinkedIn

LinkedIn is a social networking site for professionals to connect with colleagues, employers, and other professionals. It’s a place to share ideas, information, and opportunities, and to find jobs, research companies, and learn about industry news.

Listing: https://contracts.apievangelist.com/store/linkedin/

Repo: https://github.com/api-evangelist/linkedin

APIs

Properties

6.15 - Mailchimp

Mailchimp’s developer tools provide everything you need to integrate your data with intelligent marketing tools and event-driven transactional email.

Listing: https://contracts.apievangelist.com/store/mailchimp/

Repo: https://github.com/api-evangelist/mailchimp

APIs

Properties

6.16 - Meta

Meta Platforms, Inc., doing business as Meta, and formerly named Facebook, Inc., and TheFacebook, Inc., is an American multinational technology conglomerate based in Menlo Park, California. The company owns and operates Facebook, Instagram, Threads, and WhatsApp, among other products and services.

Listing: https://contracts.apievangelist.com/store/meta/

Repo: https://github.com/api-evangelist/meta

APIs

Properties

6.17 - Microsoft Graph

Microsoft Graph is the gateway to data and intelligence in Microsoft cloud services like Microsoft Entra and Microsoft 365. Use the wealth of data accessible through Microsoft Graph to build apps for organizations and consumers that interact with millions of users.

Listing: https://contracts.apievangelist.com/store/microsoft-graph/

Repo: https://github.com/api-evangelist/microsoft-graph

APIs

Properties

6.18 - New Relic

New Relic is a software analytics company that helps businesses monitor and analyze their applications and infrastructure in real-time. By providing detailed insights into the performance and user experience of their systems, New Relic enables organizations to identify and fix issues quickly, optimize performance, and ultimately deliver better digital experiences to their customers. With a range of products and services, including application performance monitoring, infrastructure monitoring, and synthetic monitoring, New Relic empowers businesses to make data-driven decisions and drive digital transformation.

Listing: https://contracts.apievangelist.com/store/new-relic/

Repo: https://github.com/api-evangelist/new-relic

APIs

Properties

6.19 - Notion

Notion is a versatile all-in-one workspace tool that helps individuals and teams organize their tasks, projects, and ideas in a centralized and collaborative platform. With features such as databases, boards, calendars, and documents, Notion allows users to create personalized workflows, track progress, and manage information efficiently. Users can customize their workspace to fit their unique needs, whether it be for project management, note-taking, or knowledge sharing. Notion aims to streamline workflows and enhance productivity by providing a flexible and intuitive platform for organizing and managing projects and information.

Listing: https://contracts.apievangelist.com/store/notion/

Repo: https://github.com/api-evangelist/notion

APIs

Properties

6.20 - OpenAI

OpenAI is a research organization that focuses on artificial intelligence (AI) and machine learning. Their mission is to ensure that AI benefits all of humanity, and they work on developing AI technology in a way that is safe and beneficial for society. OpenAI conducts cutting-edge research in fields such as natural language processing, reinforcement learning, and robotics. They also develop and release tools and models that help advance the field of AI and are open-source and accessible to the public. Additionally, OpenAI engages in outreach and advocacy efforts to promote the responsible development and deployment of AI technologies.

Listing: https://contracts.apievangelist.com/store/openai/

Repo: https://github.com/api-evangelist/openai

APIs

Properties

6.21 - Salesforce

Salesforce is a cloud-based customer relationship management (CRM) platform that helps businesses manage and track their interactions with customers and leads. It provides a range of services including sales automation, marketing automation, customer service and analytics. Salesforce allows businesses to store all customer data in one centralized location, making it easier to collaborate and communicate with team members and provide personalized experiences for customers. With Salesforce, businesses can streamline their processes, increase efficiency, and ultimately drive growth and success.

Listing: https://contracts.apievangelist.com/store/salesforce/

Repo: https://github.com/api-evangelist/salesforce

APIs

6.22 - SendGrid

SendGrid is a cloud-based customer communication platform that provides tools for email marketing and transactional email delivery. It helps businesses of all sizes easily create and send emails to their customers, enabling them to build stronger relationships and drive engagement. SendGrid also offers analytics and reporting tools to track the success of email campaigns, as well as features for managing subscriber lists and personalizing emails for targeted communications. Overall, SendGrid’s platform allows businesses to streamline their email marketing efforts and improve their overall communication strategies.

Listing: https://contracts.apievangelist.com/store/sendgrid/

Repo: https://github.com/api-evangelist/sendgrid

APIs

Properties

6.23 - ServiceNow

ServiceNow is a cloud-based platform that provides a wide range of services for businesses to manage their IT operations, customer service, human resources, and other functions. The platform allows organizations to automate and streamline their workflows, improving efficiency and productivity. ServiceNow offers various applications and modules that help companies track and resolve issues, manage projects, and enhance collaboration among employees. Additionally, ServiceNow provides tools for data analytics, reporting, and monitoring to help businesses make informed decisions and optimize their operations. Overall, ServiceNow helps organizations simplify and improve their processes, leading to better customer satisfaction and business outcomes.

Listing: https://contracts.apievangelist.com/store/servicenow/

Repo: https://github.com/api-evangelist/servicenow

APIs

Properties

6.24 - Shopify

Shopify is an e-commerce platform that enables businesses to create and operate their online stores. It provides a wide range of tools and features that help merchants manage their inventory, process payments, track shipments, and create customized storefronts. With Shopify, businesses can easily set up their online presence, sell products, and reach customers all over the world. The platform also offers various marketing and analytics tools to help businesses grow and succeed in the competitive online marketplace. Overall, Shopify simplifies the process of building and running an online store, making it a popular choice for businesses of all sizes.

Listing: https://contracts.apievangelist.com/store/shopify/

Repo: https://github.com/api-evangelist/shopify

APIs

Properties

6.25 - Slack

Slack is a cloud-based collaboration tool that brings teams together to work more efficiently and effectively. It allows team members to communicate in real-time through instant messaging, group chats, and video calls. Users can share files, collaborate on projects, and stay organized with task management features. Slack also integrates seamlessly with other tools and services, making it easy for teams to streamline their workflow and stay connected, no matter where they are located. With its user-friendly interface and robust features, Slack has become a go-to collaboration platform for teams of all sizes.

Listing: https://contracts.apievangelist.com/store/slack/

Repo: https://github.com/api-evangelist/slack

APIs

Properties

6.26 - Snowflake

Snowflake is a cloud-based data platform that provides data warehousing, data lake, and data sharing capabilities. It enables organizations to store, process, and analyze large volumes of structured and semi-structured data using SQL, while offering scalability, concurrency, and performance across multiple cloud providers. Snowflake is widely used for analytics, business intelligence, and data collaboration.

Listing: https://contracts.apievangelist.com/store/snowflake/

Repo: https://github.com/api-evangelist/snowflake

APIs

Properties

6.27 - Stripe

Stripe is a technology company that provides a platform for online payment processing. They offer a secure and seamless way for businesses to accept payments from customers, handling transactions in multiple currencies and payment methods. Stripe’s software and APIs make it easy for businesses of all sizes to manage their online payments, track transactions, and analyze their revenue streams. With features such as fraud prevention, subscription billing, and mobile payment options, Stripe is a valuable tool for e-commerce businesses looking to streamline their payment processes and provide a better user experience for their customers.

Listing: https://contracts.apievangelist.com/store/stripe/

Repo: https://github.com/api-evangelist/stripe

APIs

Properties

6.28 - Twilio

Twilio is a cloud communications platform that enables developers to integrate voice, messaging, and video capabilities into their applications. Through its APIs, Twilio allows businesses to easily build and scale communication solutions, such as customer support helplines, appointment reminders, and two-factor authentication services. By partnering with Twilio, organizations can enhance their customer engagement strategies and streamline their communication channels, ultimately driving greater efficiency and customer satisfaction. In essence, Twilio empowers developers to create innovative and personalized communication experiences that connect people in new and meaningful ways.

Listing: https://contracts.apievangelist.com/store/twilio/

Repo: https://github.com/api-evangelist/twilio

APIs

Properties

6.29 - YouTube

The YouTube API provides the ability to retrieve feeds related to videos, users, and playlists. It also provides the ability to manipulate these feeds, such as creating new playlists, adding videos as favorites, and sending messages. The API is also able to upload videos.

Listing: https://contracts.apievangelist.com/store/youtube/

Repo: https://github.com/api-evangelist/youtube

APIs

Properties

6.30 - Zendesk

Zendesk provides customer service and engagement software that helps businesses manage support tickets, automate workflows, and offer multi-channel support, including email, chat, social media, and phone, through a unified platform.

Listing: https://contracts.apievangelist.com/store/zendesk/

Repo: https://github.com/api-evangelist/zendesk

APIs

Properties

6.31 - Zoom

Zoom is a video conferencing platform that allows users to connect with others through virtual meetings, webinars, and chat features. It enables individuals and businesses to communicate and collaborate remotely, making it easier to work together from different locations. With its user-friendly interface and high-quality audio and video capabilities, Zoom has become a popular tool for businesses, schools, and other organizations to stay connected and productive. Whether it’s hosting a team meeting, conducting a virtual workshop, or catching up with friends and family, Zoom provides a seamless and reliable way to communicate in real-time.

Listing: https://contracts.apievangelist.com/store/zoom/

Repo: https://github.com/api-evangelist/zoom

APIs

7 - Conversations Archive

These are the conversations that have already been published, providing a rich archive of the conversations we have had that is searchable and available as part of our overall go-to-market motion, helping keep track of the conversations that have occurred so we remain consistent as we move forward.

8 - Stories Archive

These are the stories that have already been published, providing a rich archive of the stories we have told that is searchable and available as part of our overall go-to-market motion, helping keep track of the stories we have told and be consistent in the stories we tell moving forward.

8.1 - AI Context Use Case - Naftiko Blog

This is a blog post on the AI Context use case, focusing on providing Model Context Protocol (MCP) servers on top of common private, public/1st-party, and 3rd-party APIs, as well as local SQL databases, employing a domain-driven, declarative, and governed approach to right-sizing the context windows via MCP while providing integrations for use across AI copilots and agents.

Title

  • AI Context

Tagline

  • AI without context is guesswork. Valuable data lives in SaaS tools, files, and systems your models can’t reach safely.

Description

This use case focuses on providing Model Context Protocol (MCP) servers on top of common private, public/1st-party, and 3rd-party APIs, as well as local SQL databases, employing a domain-driven, declarative, and governed approach to right-sizing the context windows via MCP while providing integrations for use across AI copilots and agents.

Teams need a reliable way to deliver MCP servers from internal and third-party APIs without having to discover and learn about each API and the technical details of integration. This use case provides the fundamentals for safely integrating existing data and systems into artificial intelligence copilots and agents.

Benefits

  • Add data and tools to agents
  • Compose MCP servers
  • Aggregate & curate context

Pain

  • Copilot Leadership Mandate
  • MCP Leadership Mandate
  • Unmanaged Encryption
  • Unmanaged Discovery
  • Unmanaged Authentication
  • Unmanaged Usage
  • Unmanaged Cost

Gains

  • 3rd-Party Data in Copilot
  • 3rd-Party MCP Available
  • Manage Budget Across
  • Managed Risk Involved
  • Optimize SaaS Usage
  • Create More Visibility
  • Create More Discovery
  • Create More Reusability

Connects

  • Internal APIs
  • Infrastructure APIs
  • SaaS APIs
  • Partner APIs

Adapters

  • HTTP
  • MCP
  • OpenAPI

8.2 - Hypermedia Automating Capabilities in this AI Moment

This is a blog post capturing the conversation with Mike Amundsen and Kevin Swiber on the Naftiko Capabilities podcast, helping connect our hypermedia past with this AI integration moment.

8.3 - Exploring What Schema Tools Are Available

Do a fresh round-up of the schema tooling available, specifically for the standards we are supporting, beginning with JSON Schema, then focusing on each of the specs and how their schemas are managed (or not). Round up the tooling, evaluate and apply metadata, add it to the tooling section, link to any relevant standards, and then publish a story on the subject.

8.4 - We've Been Wrong About API Reuse All Along

Established thinking in large organisations has been to focus API reuse goals and metrics on internal, in-house produced APIs. The logic is straightforward: build once, reuse everywhere, and maximise the return on investment for internally developed services. However, recent insights suggest we’ve been looking at API reuse through the wrong lens. The real opportunity—and the real risks—lie not in producer-side reuse, but in how we enable engineers to consume our and third-party APIs efficiently, securely, and with proper governance.

8.5 - Avalara Developer Experience Review

Avalara asked me to take a look at their API portal and provide feedback on their API experience, which resulted in a pretty compelling review and will continue with a deeper dive on their APIs and MCP servers. I will then explore the possibilities for using them as part of our AI orchestration use case, because this is where they appear to be in their journey.

8.6 - Capabilities - The New Stack Blog

This is a blog post being written and published by The New Stack on capabilities. We have already done the interview with them and shared relevant capabilities-thinking stories from the ecosystem; they will be working on it the first part of January, then publishing it with a link to our Naftiko Signals onboarding landing page to drive conversations.

8.7 - Capabilities Podcast - Naftiko Blog

This is a blog post announcing the Naftiko Capabilities podcast, providing an introduction to why we are doing the podcast and the central role that capabilities will play in the conversation. Our goal is to talk about the technical details of integration and automation, while ensuring we are aligned with business outcomes.

8.8 - Engine - Naftiko Blog

This is an introductory blog post to help us begin sharing stories of what the Naftiko Engine could be, helping stimulate more conversation around gateways and what we are looking to build with engines, helping shape our roadmap, but also the go-to-market effort around what is possible with engines.

8.9 - Fabric - Naftiko Blog

This is an introductory blog post on the Naftiko Fabric to get the conversation going around what we are building, helping connect the dots with capabilities, supporting Jerome’s talk at APIDays, and the conversations we are having with folks about what is needed when it comes to integration and automation.

8.10 - Naftiko Launch - API Evangelist Blog

This is a blog post for API Evangelist announcing Naftiko, building upon what we did in December, helping continue to get the word out about what we are building, taking another shot at explaining what we do to the community, and catching more people within the API Evangelist community who may not have seen the original announcement and press release.

8.11 - Naftiko Launch - EIN Presswire

This was the press release announcing the launch of Naftiko as a company and the funding, introducing the team and what we are building, drawing a public line in the sand for when Naftiko was launched at APIDays, helping provide material for other channels and outlets to syndicate, while building attention to what we are doing.

8.12 - Naftiko Signals - Naftiko Blog

This is a blog post announcing the Naftiko Signals program on the Naftiko Blog, providing an official line in the sand for when we started the program while we are still building it out in public, helping share the backstory and the data, and bringing in more design, service, and market partners to help us invest in the research.

8.13 - Naftiko Signals White Paper

This was a white paper to support the Naftiko Signals workshop at APIDays and provide more content for the website as we work to launch the company, helping share why signals from enterprise systems can help us understand what is happening, and can be used to steer any system in the desired direction and achieve the business outcomes required.

8.14 - Capabilities - Naftiko Blog

This is an introductory blog post on the Naftiko blog sharing our perspective on capabilities, publishing early views on what capabilities are to help stimulate other conversations as we continue to work to shape what a capability could be and develop our schema that can be used to support the Naftiko Framework, Engine, and Fabric.

8.15 - Naftiko Launch - Naftiko Blog

This is a blog post announcing Naftiko in the lead-up to APIDays and after the launch of our website, announcing the company, funding, and team behind it, providing a snapshot of what we are building, and getting the conversation started with the ecosystem about what is needed when it comes to API and AI integration.

9 - Standards

These are the standards we are tuning into as part of the go-to-market effort, ensuring that we are building on the go-to-market efforts of these standards while we help amplify and contribute, keeping the core of Naftiko centered on the standards that power the Web.

9.1 - OpenAPI

The OpenAPI Specification (OAS) is a formal standard for describing HTTP APIs. It enables teams to understand how an API works and how multiple APIs interoperate, generate client code, create tests, apply design standards, and more.

Describing the surface area of HTTP APIs and Webhooks.

OpenAPI was formerly known as Swagger. In 2015, SmartBear donated the specification to the Linux Foundation, establishing the OpenAPI Initiative (OAI) and a formal, community-driven governance model that anyone can participate in.

An OpenAPI document can be written in JSON or YAML and typically defines elements such as: Info, Contact, License, Servers, Components, Paths and Operations, Parameters, Request Bodies, Media Types and Encoding, Responses, Callbacks, Examples, Links, Headers, Tags, Schemas, and Security.
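As a rough illustration of that shape, here is a minimal, hypothetical OpenAPI 3.1 description built as a Python dictionary and written out as YAML. The API title, path, and schema are invented for the example, and the snippet assumes the PyYAML package is installed.

```python
import yaml  # PyYAML, assumed to be installed

# A minimal, hypothetical OpenAPI 3.1 description: one path, one operation, one schema.
openapi_doc = {
    "openapi": "3.1.0",
    "info": {"title": "Example Contacts API", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com/v1"}],
    "paths": {
        "/contacts": {
            "get": {
                "summary": "List contacts",
                "responses": {
                    "200": {
                        "description": "A list of contacts",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "array",
                                    "items": {"$ref": "#/components/schemas/Contact"},
                                }
                            }
                        },
                    }
                },
            }
        }
    },
    "components": {
        "schemas": {
            "Contact": {
                "type": "object",
                "required": ["id", "name"],
                "properties": {
                    "id": {"type": "string"},
                    "name": {"type": "string"},
                    "email": {"type": "string", "format": "email"},
                },
            }
        }
    },
}

# Serialize to YAML, the format most OpenAPI tooling expects.
print(yaml.safe_dump(openapi_doc, sort_keys=False))
```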

OpenAPI has an active GitHub organization, blog, LinkedIn page, and Slack channel to encourage community participation. In addition, OAI membership helps fund projects and events that drive awareness and adoption.

The OpenAPI Specification can be used alongside two other OAI specifications: (1) the Arazzo specification for defining API-driven workflows, and (2) OpenAPI Overlays, which allow additional information to be overlaid onto an OpenAPI document.

License: Apache

Tags: HTTP APIs, Webhooks

Properties: Info, Contact, License, Servers, Components, Paths and Operations, Parameters, Request Bodies, Media Types and Encoding, Responses, Callbacks, Examples, Links, Headers, Tags, Schemas, and Security

Website: https://www.openapis.org

9.2 - OpenAPI Overlays

The Overlay Specification is an auxiliary standard that complements the OpenAPI Specification. An OpenAPI description defines API operations, data structures, and metadata—the overall shape of an API. An Overlay lists a series of repeatable changes to apply to a given OpenAPI description, enabling transformations as part of your API workflows.

Define metadata, operations, and data structures for overlaying on top of OpenAPI.

OpenAPI Overlays emerged from the need to adapt APIs for varied use cases, from improving developer experience to localizing documentation. The first version was recently released, and the roadmap is being developed within the OpenAPI Initiative.

The specification provides three constructs for augmenting an OpenAPI description: Info, Overlays, and Actions. How these are applied is being worked out across different tools and industries to accommodate the diversity of APIs being delivered.
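To make those three constructs concrete, here is a minimal sketch of a hypothetical Overlay document expressed as a Python dictionary. The target expressions and the values being injected are invented for the example, not taken from any real API description.

```python
import json

# A minimal, hypothetical Overlay document: info plus a list of actions,
# each action targeting part of an OpenAPI description and changing it.
overlay_doc = {
    "overlay": "1.0.0",
    "info": {"title": "Add internal docs details", "version": "1.0.0"},
    "actions": [
        {
            # Target expression pointing at the info object to update.
            "target": "$.info",
            "update": {"description": "Maintained by the platform team."},
        },
        {
            # Remove deprecated operations from the published description.
            "target": "$.paths.*.*[?(@.deprecated == true)]",
            "remove": True,
        },
    ],
}

print(json.dumps(overlay_doc, indent=2))
```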

To get involved, participate via the GitHub repository, where you’ll find discussions, meeting notes, and related topics. There’s also a dedicated channel within the broader OpenAPI Initiative Slack.

OpenAPI Overlays offer a robust way to manage the complexity of producing and consuming APIs across industries, regions, and domains. As the specification matures, it presents a strong opportunity to ensure documentation, mocks, examples, code generation, tests, and other artifacts carry the right context for different situations.

License: Apache License

Tags: Overlays

Properties: info, overlays, and actions

Website: https://spec.openapis.org/overlay/v1.0.0.html

Standards: JSON Schema

9.3 - Arazzo

The Arazzo Specification is a community-driven, open standard within the OpenAPI Initiative (a Linux Foundation Collaborative Project). It defines a programming-language-agnostic way to express sequences of calls and the dependencies between them to achieve a specific outcome.

Describing your business processes and workflows using OpenAPI.

Arazzo emerged from a need identified in the OpenAPI community for orchestration and automation across APIs described with OpenAPI. Version 1 of the specification is available, and work on future iterations is guided by a public roadmap.

With Arazzo, you can define elements such as: Info, Sources, Workflows, Steps, Parameters, Success Actions, Failure Actions, Components, Reusables, Criteria, Request Bodies, and Payload Replacements—providing a consistent approach to delivering a wide range of automation outcomes.
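As a sketch of how those elements fit together, here is a minimal, hypothetical Arazzo description as a Python dictionary. The source description URL, workflow, step, and payload are all invented for the example, and the snippet assumes the PyYAML package for output.

```python
import yaml  # PyYAML, assumed to be installed

# A minimal, hypothetical Arazzo description: one source OpenAPI document and
# one workflow with a single step and a success criterion.
arazzo_doc = {
    "arazzo": "1.0.0",
    "info": {"title": "Contact onboarding", "version": "1.0.0"},
    "sourceDescriptions": [
        {"name": "contactsApi", "url": "https://api.example.com/openapi.yaml", "type": "openapi"}
    ],
    "workflows": [
        {
            "workflowId": "createContact",
            "summary": "Create a contact and confirm it was stored",
            "steps": [
                {
                    "stepId": "create",
                    "operationId": "createContact",
                    "requestBody": {"payload": {"name": "Ada Lovelace"}},
                    "successCriteria": [{"condition": "$statusCode == 201"}],
                }
            ],
        }
    ],
}

print(yaml.safe_dump(arazzo_doc, sort_keys=False))
```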

You can engage with the Arazzo community via the GitHub repository for each version and participate in GitHub Discussions to stay current on meetings and interact with the specification’s stewards and the broader community.

Arazzo is the logical layer on top of OpenAPI: it goes beyond documentation, mocking, and SDKs to focus on defining real business workflows that use APIs. Together, Arazzo and OpenAPI help align API operations with the rest of the business.

License: Apache 2.0

Tags: Workflows, Automation

Properties: Info, Source, Workflows, Steps, Parameters, Success Actions, Failure Actions, Components, Reusable, Criterion, Request Bodies, and Payload Replacements

Website: https://spec.openapis.org/arazzo/latest.html

9.4 - AsyncAPI

AsyncAPI is an open-source, protocol-agnostic specification for describing event-driven APIs and message-driven applications. It serves as the OpenAPI of the asynchronous, event-driven world—overlapping with, and often going beyond, what OpenAPI covers.

Describing the surface area of your event-driven infrastructure.

The specification began as an open-source side project and was later donated to the Linux Foundation after the team joined Postman, establishing it as a standard with formal governance.

AsyncAPI lets you define servers, producers and consumers, channels, protocols, and messages used in event-driven API operations—providing a common, tool-friendly way to describe the surface area of event-driven APIs.
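A minimal, hypothetical AsyncAPI document might look like the following sketch, built as a Python dictionary and dumped to YAML. The broker host, channel, and message are invented, and the snippet assumes the PyYAML package is installed.

```python
import yaml  # PyYAML, assumed to be installed

# A minimal, hypothetical AsyncAPI 3.0 document describing one Kafka channel
# and an operation that sends a signup event to it.
asyncapi_doc = {
    "asyncapi": "3.0.0",
    "info": {"title": "Account Events", "version": "1.0.0"},
    "servers": {
        "production": {"host": "broker.example.com:9092", "protocol": "kafka"}
    },
    "channels": {
        "userSignedUp": {
            "address": "user.signedup",
            "messages": {
                "UserSignedUp": {
                    "payload": {
                        "type": "object",
                        "properties": {
                            "userId": {"type": "string"},
                            "signedUpAt": {"type": "string", "format": "date-time"},
                        },
                    }
                }
            },
        }
    },
    "operations": {
        "publishUserSignedUp": {
            "action": "send",
            "channel": {"$ref": "#/channels/userSignedUp"},
        }
    },
}

print(yaml.safe_dump(asyncapi_doc, sort_keys=False))
```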

To get involved, visit the AsyncAPI GitHub repository and blog, follow the LinkedIn page, tune into the YouTube or Twitch channels, and join the conversation in the community Slack.

AsyncAPI can be used to define HTTP APIs much like OpenAPI, and it further supports multiple protocols such as Pub/Sub, Kafka, MQTT, NATS, Redis, SNS, Solace, AMQP, JMS, and WebSockets—making it useful across many approaches to delivering APIs.

License: Apache

Tags: Event-Driven

Properties: Servers, Producers, Consumers, Channels, Protocols, and Messages

Website: https://www.asyncapi.com

9.5 - APIOps Cycles

APIOps Cycles is a Lean and service design–inspired methodology for designing, improving, and scaling APIs throughout their entire lifecycle. Developed since 2017 and continuously refined through community contributions and real-world projects across industries, APIOps Cycles provides a structured approach to API strategy using a distinctive metro map visualization where stations and lines represent critical aspects of the API lifecycle.

Aligning engineering with products when it comes to APIs.

The method is built around a collection of strategic canvas templates that help teams systematically address everything from customer journey mapping and value proposition definition to domain modeling, capacity planning, and risk assessment. As an open-source framework released under the Creative Commons Attribution–ShareAlike 4.0 license, APIOps Cycles is freely available for anyone to use, adapt, and share, with the complete method consisting of localized JSON and markdown files that power both the official website and open tooling available as an npm package. Whether you’re a developer integrating the method into your products and services, or an organization seeking to establish API product strategy and best practices, APIOps Cycles offers a proven, community-backed approach supported by a network of partners who can provide guidance and expertise in implementing the methodology effectively.

License: Creative Commons Attribution–ShareAlike 4.0

Tags: Products, Operations

Website: https://www.apiopscycles.com/

APIOps Cycles Canvases Outline

  1. Customer Journey Canvas
  • Persona
  • Customer Discovers Need
  • Customer Need Is Resolved
  • Journey Steps
  • Pains
  • Gains
  • Inputs & Outputs
  • Interaction & Processing Rules
  2. API Value Proposition Canvas
  • Tasks
  • Gain Enabling Features
  • Pain Relieving Features
  • API Products
  3. API Business Model Canvas
  • API Value Proposition
  • API Consumer Segments
  • Developer Relations
  • Channels
  • Key Resources
  • Key Activities
  • Key Partners
  • Benefits
  • Costs
  4. Domain Canvas
  • Selected Customer Journey Steps
  • Core Entities & Business Meaning
  • Attributes & Business Importance
  • Relationships Between Entities
  • Business, Compliance & Integrity Rules
  • Security & Privacy Considerations
  5. Interaction Canvas
  • CRUD Interactions
  • CRUD Input & Output Models
  • CRUD Processing & Validation
  • Query-Driven Interactions
  • Query-Driven Input & Output Models
  • Query-Driven Processing & Validation
  • Command-Driven Interactions
  • Command-Driven Input & Output Models
  • Command-Driven Processing & Validation
  • Event-Driven Interactions
  • Event-Driven Input & Output Models
  • Event-Driven Processing & Validation
  6. REST Canvas
  • API Resources
  • API Resource Model
  • API Verbs
  • API Verb Example
  7. GraphQL Canvas
  • API Name
  • Consumer Goals
  • Key Types
  • Relationships
  • Queries
  • Mutations
  • Subscriptions
  • Authorization Rules
  • Consumer Constraints
  • Notes / Open Questions
  8. Event Canvas
  • User Task / Trigger
  • Input / Event Payload
  • Processing / Logic
  • Output / Event Result
  9. Capacity Canvas
  • Current Business Volumes
  • Future Consumption Trends
  • Peak Load and Availability Requirements
  • Caching Strategies
  • Rate Limiting Strategies
  • Scaling Strategies
  10. Business Impact Canvas
  • Availability Risks
  • Mitigate Availability Risks
  • Security Risks
  • Mitigate Security Risks
  • Data Risks
  • Mitigate Data Risks
  11. Locations Canvas
  • Location Groups
  • Location Group Characteristics
  • Locations
  • Location Characteristics
  • Location Distances
  • Location Distance Characteristics
  • Location Endpoints
  • Location Endpoint Characteristics

9.6 - Postman Collections

A Postman Collection is a portable JSON artifact that organizes one or more API requests—plus their params, headers, auth, scripts, and examples—so you can run, share, and automate them in the Postman desktop or web client application. Collections can include folders, collection- and environment-level variables, pre-request and test scripts, examples, mock server definitions, and documentation.

Executable artifact for automating API requests and responses for testing.

Postman Collections started as a simple way to save and share API requests in the early Postman client (2013), then grew into a formal JSON format with the v1 schema published in 2015. The format then stabilized as v2.0.0 and shortly after as v2.1.0 in 2017, which remains the common export/import version today.
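For reference, a stripped-down collection in the v2.1.0 format looks roughly like the sketch below, written from Python. The collection name, request, and URL are invented; the schema URL is the published v2.1.0 identifier.

```python
import json

# A minimal, hypothetical Postman Collection (v2.1.0 format): metadata plus a
# single GET request stored as an item, with one collection-level variable.
collection = {
    "info": {
        "name": "Contacts API smoke test",
        "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
    },
    "item": [
        {
            "name": "List contacts",
            "request": {
                "method": "GET",
                "header": [{"key": "Accept", "value": "application/json"}],
                "url": "https://api.example.com/v1/contacts",
            },
        }
    ],
    "variable": [{"key": "baseUrl", "value": "https://api.example.com/v1"}],
}

# Write the collection so it can be imported into the Postman client.
with open("contacts.postman_collection.json", "w") as f:
    json.dump(collection, f, indent=2)
```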

Owner: Postman

License: Apache 2.0

Properties: Metadata, Requests, Scripts, Variables, Authentication, Methods, Headers, URLs, Bodies, Events, Responses

Website: https://postman.com

9.7 - Postman Environments

Postman environments are collections of variables that let you easily switch between different configurations (like development, staging, and production server URLs) without manually changing values throughout your API requests.

Storing variables for running along with Postman Collections.

Postman environments are a powerful feature that allow you to manage different sets of variables for your API testing and development workflow. An environment is essentially a named collection of key-value pairs (variables) that you can switch between depending on your context—such as development, staging, or production. For example, you might have different base URLs, authentication tokens, or API keys for each environment. Instead of manually updating these values in every request when you switch from testing locally to hitting a production server, you can simply select a different environment from a dropdown menu, and all your requests will automatically use the appropriate variables. This makes it much easier to maintain consistency, avoid errors, and streamline your workflow when working across multiple environments or sharing collections with team members.
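An exported environment is just a small JSON file of key-value pairs; the sketch below shows a hypothetical staging environment written from Python, with invented names and placeholder values.

```python
import json

# A minimal, hypothetical Postman environment export: a name plus a list of
# variables with values, enabled flags, and an optional secret type.
environment = {
    "name": "Staging",
    "values": [
        {"key": "baseUrl", "value": "https://staging.api.example.com/v1", "enabled": True},
        {"key": "apiKey", "value": "REPLACE_ME", "enabled": True, "type": "secret"},
    ],
}

with open("staging.postman_environment.json", "w") as f:
    json.dump(environment, f, indent=2)
```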

Owner: Postman

License: Apache 2.0

Properties: Variables, Variable name, Initial value, Current value, Type, Environment Name, Environment ID, Scope, State

Website: https://learning.postman.com/docs/sending-requests/variables/managing-environments/

9.8 - Bruno Collection

Bruno collections are organized sets of API requests and environments within the Bruno API client, allowing developers to structure, test, and share their API workflows efficiently.

Open source client specification.

Bruno collections are structured groups of API requests, variables, and environments used within the Bruno API client to help developers organize and manage their API workflows. Each collection acts as a self-contained workspace where you can store requests, define authentication, set environment values, document behaviors, and run tests. Designed with a filesystem-first approach, Bruno collections are easy to version-control and share, making them especially useful for teams collaborating on API development or maintaining consistent testing practices across environments.

License: MIT license

Tags: Clients, Executable

Properties: Name, Type, Version, Description, Variables, Environment, Folders, Requests, Auth, Headers, Scripts, Settings

Website: https://www.usebruno.com/

9.9 - Bruno Environment

A Bruno environment is a set of key–value variables that let you switch configurations—such as URLs, tokens, or credentials—so you can run the same API requests across different contexts like development, staging, or production.

An open-source client environment.

A Bruno environment is a configurable set of key–value variables that allows you to run the same API requests across different deployment contexts, such as local development, staging, and production. Environments typically store values like base URLs, authentication tokens, headers, or other parameters that may change depending on where an API is being tested. By separating these values from the requests themselves, Bruno makes it easy to switch contexts, maintain cleaner collections, and ensure consistency when collaborating with others or automating API workflows.

License: MIT license

Tags: Clients, Environments

Properties: Name, Variables, Enabled, Secret, Ephemeral, Persisted Value

Website: https://www.usebruno.com/

9.10 - Open Collections

A modern, developer-first specification pioneered by Bruno for defining and sharing API collections. Designed for simplicity and collaboration.

Open-source collection format.

The OpenCollection Specification is a format for describing API collections, including requests, authentication, variables, and scripts. This specification enables tools to understand and work with API collections in a standardized way.

License: Apache License

Tags: Collections

Website: https://www.opencollection.com/

9.11 - gRPC

gRPC is a high-performance, open-source remote procedure call (RPC) framework originally developed at Google and now hosted by the Cloud Native Computing Foundation (CNCF). It uses HTTP/2 for transport and Protocol Buffers as its interface definition language, generating client and server code across many languages.

High-performance RPC built on HTTP/2 and Protocol Buffers.

Services and messages are defined in .proto files, and the gRPC tooling generates strongly typed client stubs and server skeletons for languages such as Go, Java, Python, C++, and more. The framework supports unary calls as well as server, client, and bidirectional streaming, along with deadlines, cancellation, metadata, and standardized status codes.

License: Apache 2.0

Tags: RPC, Streaming

Properties: services, methods, messages, channels, stubs, streaming, deadlines, cancellation, metadata, status codes

Website: https://grpc.io

Standards: HTTP/2, Protocol Buffers

9.12 - JSON RPC

JSON-RPC is a lightweight, transport-agnostic remote procedure call (RPC) protocol that uses JSON to encode requests and responses. A client sends an object with jsonrpc “2.0”, a method name, optional params (positional or named), and an id; the server replies with either a result or an error (including standardized error codes), and it also supports notifications (no id, no response) and request batching.

Lightweight transport-agnostic remote procedure call protocol.

JSON-RPC emerged in the mid-2000s as a community-driven, lightweight RPC protocol using JSON, with an informal 1.0 spec (c. 2005) that defined simple request/response messaging and “notifications” (no reply). A 1.1 working draft (around 2008) tried to broaden and formalize features but never became canonical. The widely adopted JSON-RPC 2.0 specification (2010) simplified and standardized the model—introducing the mandatory “jsonrpc”:“2.0” version tag, clearer error objects, support for both positional and named parameters, and request batching—while remaining transport-agnostic (HTTP, WebSocket, pipes, etc.).
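The request and response shapes are small enough to sketch directly; the method names and params below are invented for illustration, using only the Python standard library.

```python
import json

# A hypothetical JSON-RPC 2.0 request: version tag, method, named params, and an id.
request = {
    "jsonrpc": "2.0",
    "method": "contacts.list",
    "params": {"limit": 10},
    "id": 1,
}

# The matching success response carries the same id and a result...
success = {"jsonrpc": "2.0", "result": [{"id": "c-1", "name": "Ada"}], "id": 1}

# ...while a failure response carries an error object with a standardized code.
failure = {
    "jsonrpc": "2.0",
    "error": {"code": -32601, "message": "Method not found"},
    "id": 1,
}

# A notification omits the id, so no response is expected.
notification = {"jsonrpc": "2.0", "method": "contacts.touch", "params": {"id": "c-1"}}

for message in (request, success, failure, notification):
    print(json.dumps(message))
```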

License: Apache License 2.0 or MIT License

Tags: RPC

Properties: methods, parameters, identifier, results, errors, codes, messages, data

Website: https://www.jsonrpc.org/

Forum: https://groups.google.com/g/json-rpc

9.13 - Model Context Protocol (MCP)

MCP (Model Context Protocol) is an open protocol that standardizes how applications provide context to large language models (LLMs). It offers a consistent way to connect AI models to diverse data sources and tools, enabling agents and complex workflows that link models to the outside world.

Allowing applications to connect to large language models (LLMs).

Introduced by Anthropic as an open-source effort, MCP addresses the challenge of integrating AI models with external tools and data. It aims to serve as a universal “USB port” for AI, allowing models to access real-time information and perform actions.

MCP defines concepts and properties such as hosts, clients, servers, protocol negotiation, lifecycle, transports, authorization, resources, prompts, tools, sampling, roots, elicitation, progress, cancellation, errors, and logging—providing a standardized approach to connecting applications with LLMs.
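Because MCP rides on JSON-RPC 2.0, the wire traffic is easy to sketch. The messages below are illustrative only and assume an MCP server exposing a tools capability; the tool name and arguments are invented.

```python
import json

# A hypothetical MCP client asking a server which tools it offers
# (MCP messages are JSON-RPC 2.0 requests and responses).
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A hypothetical follow-up call invoking one of those tools with arguments.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "lookup_contact",
        "arguments": {"email": "ada@example.com"},
    },
}

for message in (list_tools, call_tool):
    print(json.dumps(message))
```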

The MCP community organizes around a GitHub repository (with issues and discussions), plus a Discord, blog, and RSS feed to track updates and changes to the specification.

MCP is seeing growing adoption among API and tooling providers for agent interactions. Many related API/AI specifications reference, integrate with, or overlap with MCP—despite the project being an open-source protocol currently stewarded by a single company, which has not been contributed to a foundation.

Owner: Anthropic

License: MIT License

Tags: agents, workflows

Properties: hosts, clients, servers, protocols, negotiation, lifecycle, transports, authorization, resources, prompts, tools, sampling, roots, elicitation, progress, cancellation, errors, logging

Website: https://modelcontextprotocol.io/

Standards: JSON-RPC 2.0, JSON Schema

9.14 - Apache Parquet

Apache Parquet is a columnar storage file format designed for efficient data storage and retrieval in big data processing frameworks, optimizing for analytics workloads by storing data column-by-column rather than row-by-row, which enables compression, encoding, and query performance optimizations.

Compact binary data serialization.

Apache Parquet is a columnar storage file format specifically designed for efficient data storage and processing in big data analytics environments, developed as a collaboration between Twitter and Cloudera in 2013 and now part of the Apache Software Foundation. Unlike traditional row-oriented formats (like CSV or Avro) that store data records sequentially, Parquet organizes data by columns, grouping all values from the same column together in storage. This columnar approach provides significant advantages for analytical workloads where queries typically access only a subset of columns from wide tables—instead of reading entire rows and discarding unneeded columns, Parquet allows systems to read only the specific columns required for a query, dramatically reducing I/O operations and improving query performance. The format also enables highly effective compression since values in the same column tend to have similar characteristics and patterns, allowing compression algorithms like Snappy, Gzip, LZO, and Zstandard to achieve much better compression ratios than they would on mixed-type row data. Parquet files are self-describing, containing schema information and metadata that allow any processing system to understand the data structure without external schema definitions.

Parquet has become the de facto standard for analytical data storage in modern data lakes and big data ecosystems, with native support across virtually all major data processing frameworks including Apache Spark, Apache Hive, Apache Impala, Presto, Trino, Apache Drill, and cloud data warehouses like Amazon Athena, Google BigQuery, Azure Synapse, and Snowflake. The format supports rich data types including nested and repeated structures (arrays, maps, and complex records), making it ideal for storing semi-structured data from JSON or Avro sources while maintaining query efficiency. Parquet’s internal structure uses techniques like dictionary encoding for low-cardinality columns, bit-packing for small integers, run-length encoding for repeated values, and delta encoding for sorted data, all of which contribute to both storage efficiency and query speed. The format includes column statistics (min/max values, null counts) stored in file metadata that enable predicate pushdown—allowing query engines to skip entire row groups or files that don’t contain relevant data based on filter conditions. This combination of columnar organization, advanced encoding schemes, efficient compression, predicate pushdown, and schema evolution support makes Parquet the optimal choice for data warehouse tables, analytical datasets, machine learning feature stores, time-series data, and any scenario where fast analytical queries over large datasets are required, often achieving 10-100x improvements in query performance and storage efficiency compared to row-oriented formats.
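A minimal sketch of writing and selectively reading a Parquet file with the pyarrow library (assumed to be installed); the table contents and file name are invented for the example.

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Build a small in-memory table and write it out as a compressed Parquet file.
table = pa.table({
    "order_id": [1, 2, 3],
    "region": ["emea", "amer", "apac"],
    "amount": [120.0, 75.5, 210.25],
})
pq.write_table(table, "orders.parquet", compression="snappy")

# Read back only the columns a query actually needs, which is where the
# columnar layout pays off.
subset = pq.read_table("orders.parquet", columns=["region", "amount"])
print(subset.to_pydict())
```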

License: Apache 2.0

Tags: Data, Serialization, Binary

Properties: Columnar Storage, Row Groups, Column Chunks, Data Pages, Dictionary Encoding, Run-Length Encoding, Bit-Packing, Delta Encoding, Compression (Snappy, Gzip, LZO, Brotli, Zstandard), Column Statistics, Predicate Pushdown, Schema Evolution, Nested and Logical Types, File and Row Group Metadata, Partitioning, Splittable Files

Website: https://parquet.apache.org/

9.15 - Avro

Apache Avro is a data serialization system that provides compact binary encoding of structured data along with schema definitions, enabling efficient data exchange and storage with built-in schema evolution capabilities that allow data structures to change over time while maintaining compatibility between different versions.

Compact binary data serialization.

Apache Avro is a data serialization framework developed within the Apache Hadoop project that provides a compact, fast binary data format along with rich data structures and schema definitions. Created by Doug Cutting (the creator of Hadoop) in 2009, Avro addresses the need for efficient data serialization in big data ecosystems where massive volumes of data must be stored and transmitted efficiently. Unlike JSON or XML which use verbose text-based formats, Avro serializes data into a compact binary representation that significantly reduces storage requirements and network bandwidth while maintaining fast serialization and deserialization performance. Avro schemas are defined using JSON, making them human-readable and language-independent, and these schemas travel with the data (either embedded in files or referenced through a schema registry), ensuring that any system can correctly interpret the serialized data without prior knowledge of its structure. This self-describing nature makes Avro particularly valuable in distributed systems where different services written in different languages need to exchange data reliably.

One of Avro’s most powerful features is its robust support for schema evolution, which allows data schemas to change over time without breaking compatibility between producers and consumers of that data. Avro supports both forward compatibility (new code can read old data) and backward compatibility (old code can read new data) through features like default values for fields, optional fields, and union types. This makes it ideal for long-lived data storage and streaming systems where data structures evolve as business requirements change. Avro has become a cornerstone technology in the big data ecosystem, widely used with Apache Kafka for streaming data pipelines (where the Confluent Schema Registry manages Avro schemas), Apache Spark for data processing, Apache Hive for data warehousing, and as the serialization format for Hadoop’s remote procedure calls. Avro supports rich data types including primitive types (null, boolean, int, long, float, double, bytes, string), complex types (records, enums, arrays, maps, unions, fixed), and logical types (decimals, dates, timestamps), and provides code generation capabilities that create type-safe classes in languages like Java, C++, C#, Python, Ruby, and PHP. Its combination of compact binary encoding, strong schema support, language independence, and schema evolution capabilities makes Avro the preferred serialization format for many data-intensive applications, particularly in streaming architectures and data lakes.
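A minimal sketch of defining an Avro schema and round-tripping a few records with the fastavro package (assumed to be installed); the schema, records, and file name are invented for the example.

```python
from fastavro import writer, reader, parse_schema

# A hypothetical record schema, defined in JSON as Avro expects.
schema = parse_schema({
    "type": "record",
    "name": "Contact",
    "namespace": "com.example",
    "fields": [
        {"name": "id", "type": "string"},
        {"name": "name", "type": "string"},
        {"name": "email", "type": ["null", "string"], "default": None},
    ],
})

records = [
    {"id": "c-1", "name": "Ada Lovelace", "email": "ada@example.com"},
    {"id": "c-2", "name": "Grace Hopper", "email": None},
]

# Write an Avro object container file, then read it back; the schema travels
# with the data, so the reader needs no external definition.
with open("contacts.avro", "wb") as out:
    writer(out, schema, records)

with open("contacts.avro", "rb") as fo:
    for record in reader(fo):
        print(record)
```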

License: Apache 2.0

Tags: Data, Serialization, Binary

Properties: JSON Schema Definitions, Binary Encoding, Object Container Files, Schema Evolution, Forward and Backward Compatibility, Schema Registry Support, Primitive Types, Complex Types (records, enums, arrays, maps, unions, fixed), Logical Types, Code Generation, RPC and IDL Support, Compression Codecs

Website: https://avro.apache.org/

9.16 - Agent2Agent

The Agent2Agent (A2A) Protocol is an open standard for communication and interoperability among independent—often opaque—AI agent systems. Because agents may be built with different frameworks, languages, and vendors, A2A provides a common language and interaction model.

Communicating the interoperability between systems using AI agents.

The Agent2Agent (A2A) Protocol is an open standard for communication and interoperability among independent—often opaque—AI agent systems. Because agents may be built with different frameworks, languages, and vendors, A2A provides a common language and interaction model.

License: Apache 2.0

Tags: agents

Properties: client, servers, cards, messages, tasks, parts, artifacts, streaming, push notifications, context, extensions, transport, negotiation, authentication, authorization, and discovery for agent automation

Website: https://a2a-protocol.org/latest/

Standards: JSON-RPC 2.0, gRPC
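
Since the protocol's JSON-RPC transport rides on plain JSON-RPC 2.0, a request envelope is easy to picture. The sketch below builds one in Python; the method name and params are illustrative placeholders, not field names taken from the A2A specification.

```python
import json

# A generic JSON-RPC 2.0 request envelope of the kind A2A's JSON-RPC
# transport builds on. The method name and params below are illustrative
# placeholders, not taken from the A2A specification.
request = {
    "jsonrpc": "2.0",          # required JSON-RPC version marker
    "id": 1,                   # correlation id echoed back in the response
    "method": "tasks.create",  # hypothetical A2A-style method name
    "params": {
        "message": "Summarize the attached document."
    },
}

print(json.dumps(request, indent=2))
```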

9.17 - JSON Schema

JSON Schema is a vocabulary for annotating and validating JSON documents. It defines the structure, content, and constraints of data—often authored in either JSON or YAML—and can be leveraged by documentation generators, validators, and other tooling.

Annotating and validating JSON artifacts.

JSON Schema is a vocabulary for annotating and validating JSON documents. It defines the structure, content, and constraints of data—often authored in either JSON or YAML—and can be leveraged by documentation generators, validators, and other tooling.

The specification traces back to early proposals by Kris Zyp in 2007 and has evolved through draft-04, draft-06, and draft-07 to the current 2020-12 release.

JSON Schema provides a rich set of keywords—such as title, description, type, properties, required, additionalProperties, minimum, maximum, exclusiveMinimum, exclusiveMaximum, default, enum, pattern, items, allOf, anyOf, oneOf, not, examples, and $ref—to describe and validate data used in business operations.
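
As a minimal sketch of those keywords in action, the example below defines a small schema and validates two instances against it; it assumes the third-party jsonschema Python package is installed (pip install jsonschema).

```python
import jsonschema  # third-party: pip install jsonschema

# A small schema using several of the keywords listed above.
schema = {
    "title": "Contact",
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer", "minimum": 0},
        "email": {"type": "string", "pattern": r"^[^@\s]+@[^@\s]+$"},
    },
    "required": ["name", "email"],
    "additionalProperties": False,
}

# A valid instance passes silently; an invalid one raises ValidationError.
jsonschema.validate({"name": "Ada", "email": "ada@example.com"}, schema)

try:
    jsonschema.validate({"name": "Ada", "age": -1}, schema)
except jsonschema.ValidationError as err:
    print("Validation failed:", err.message)
```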

To get involved with the community, visit the JSON Schema GitHub organization, subscribe to the blog via RSS, join discussions and meetings in the Slack workspace, and follow updates on LinkedIn.

JSON Schema is a foundational standard used by many other specifications, tools, and services. It’s the workhorse for defining and validating the digital data that keeps modern businesses running.

License: Academic Free License version 3.0

Tags: Schema, Validation

Properties: schema, title, description, type, properties, required, additionalProperties, minimum, maximum, exclusiveMinimum, exclusiveMaximum, default, enum, pattern, items, allOf, anyOf, oneOf, not, examples, and $ref

Website: https://json-schema.org

9.18 - Protocol Buffers

Protocol Buffers (protobuf) are Google’s language-neutral, platform-neutral way to define structured data and serialize it efficiently (small, fast). You write a schema in a .proto file, generate code for your language (Go, Java, Python, JS, etc.), and use the generated classes to read/write binary messages.

Fast binary serialized structured data.

Protocol Buffers (protobuf) are Google’s language-neutral, platform-neutral way to define structured data and serialize it efficiently (small, fast). You write a schema in a .proto file, generate code for your language (Go, Java, Python, JS, etc.), and use the generated classes to read/write binary messages.

Protocol Buffers began inside Google in the early 2000s as an internal, compact, schema-driven serialization format; in 2008 Google open-sourced it as proto2. Most recently in 2023, Google introduced “Protobuf Editions” to evolve semantics without fragmenting the language into proto2 vs. proto3, while the project continues to refine tooling, compatibility guidance, and release processes across a broad open-source community.
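
A minimal sketch of that workflow, assuming a hypothetical person.proto with a single Person message compiled to person_pb2.py using protoc; the message and field names are illustrative, not part of any published schema.

```python
# Assumes a hypothetical person.proto defining:
#   message Person { string name = 1; int32 id = 2; }
# compiled with: protoc --python_out=. person.proto
from person_pb2 import Person  # generated module; illustrative name

person = Person(name="Ada Lovelace", id=42)

# Serialize to the compact binary wire format...
payload = person.SerializeToString()

# ...and parse it back into a typed object on the other side.
decoded = Person()
decoded.ParseFromString(payload)

print(decoded.name, decoded.id)
```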

Owner: Google

License: BSD-3-Clause License

Tags: Schema, Data, Binary, Serialization

Properties: messages, types, fields, cardinality, comments, reserved values, scalars, defaults, enumerations, nested types, binary, unknown fields, oneof, maps, packages, and services

Website: https://protobuf.dev/

9.19 - Schema.org

Schema.org is a collaborative, community-driven vocabulary (launched in 2011 by Google, Microsoft, Yahoo!, and Yandex) that defines shared types and properties to describe things on the web—people, places, products, events, and more—so search engines and other consumers can understand page content.

Community-driven schema vocabulary for people, places, and things.

Schema.org is a collaborative, community-driven vocabulary that defines shared types and properties to describe things on the web—people, places, products, events, and more—so search engines and other consumers can understand page content. Publishers annotate pages using formats like JSON-LD (now the common choice), Microdata, or RDFa to express this structured data, which enables features such as rich results, knowledge panels, and better content discovery. The project maintains core and extension vocabularies, evolves through open proposals and discussion, and focuses on practical, interoperable semantics rather than being tied to a single standard body.
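
A minimal sketch of what that markup looks like in practice: a schema.org Product expressed as JSON-LD, with made-up product details, of the kind a publisher would embed in a script tag of type application/ld+json.

```python
import json

# Minimal schema.org Product description, expressed as JSON-LD.
# The product details are made-up illustrative values.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Widget Pro",
    "description": "A sample product used to illustrate schema.org markup.",
    "offers": {
        "@type": "Offer",
        "price": "19.99",
        "priceCurrency": "USD",
    },
}

# On a web page this JSON would be placed inside a
# <script type="application/ld+json"> block in the page markup.
print(json.dumps(product, indent=2))
```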

License: Creative Commons Attribution-ShareAlike License (CC BY-SA 3.0)

Tags: Schema

Properties: Thing, Action, AchieveAction, LoseAction, TieAction, WinAction, AssessAction, ChooseAction, VoteAction, IgnoreAction, ReactAction, AgreeAction, DisagreeAction, DislikeAction, EndorseAction, LikeAction, WantAction, ReviewAction, ConsumeAction, DrinkAction, EatAction, InstallAction, ListenAction, PlayGameAction, ReadAction, UseAction, WearAction, ViewAction, WatchAction, ControlAction, ActivateAction, AuthenticateAction, DeactivateAction, LoginAction, ResetPasswordAction, ResumeAction, SuspendAction, CreateAction, CookAction, DrawAction, FilmAction, PaintAction, PhotographAction, WriteAction, FindAction, CheckAction, DiscoverAction, TrackAction, InteractAction, BefriendAction, CommunicateAction, AskAction, CheckInAction, CheckOutAction, CommentAction, InformAction, ConfirmAction, RsvpAction, InviteAction, ReplyAction, ShareAction, FollowAction, JoinAction, LeaveAction, MarryAction, RegisterAction, SubscribeAction, UnRegisterAction, MoveAction, ArriveAction, DepartAction, TravelAction, OrganizeAction, AllocateAction, AcceptAction, AssignAction, AuthorizeAction, RejectAction, ApplyAction, BookmarkAction, PlanAction, CancelAction, ReserveAction, ScheduleAction, PlayAction, ExerciseAction, PerformAction, SearchAction, SeekToAction, SolveMathAction, TradeAction, BuyAction, OrderAction, PayAction, PreOrderAction, QuoteAction, RentAction, SellAction, TipAction, TransferAction, BorrowAction, DonateAction, DownloadAction, GiveAction, LendAction, MoneyTransfer, ReceiveAction, ReturnAction, SendAction, TakeAction, UpdateAction, AddAction, InsertAction, AppendAction, PrependAction, DeleteAction, ReplaceAction

Website: https://schema.org/

9.20 - JSON-LD

JSON-LD (JavaScript Object Notation for Linking Data) is a W3C standard for expressing linked data in JSON. It adds lightweight semantics to ordinary JSON so machines can understand what the data means, not just its shape—by mapping keys to globally unique identifiers (IRIs) via a @context. Common features include @id (identity), @type (class), and optional graph constructs (@graph).

Introducing semantics into JSON so machines can understand meaning.

JSON-LD (JavaScript Object Notation for Linking Data) is a W3C standard for expressing linked data in JSON. It adds lightweight semantics to ordinary JSON so machines can understand what the data means, not just its shape—by mapping keys to globally unique identifiers (IRIs) via a @context. Common features include @id (identity), @type (class), and optional graph constructs (@graph).
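
As a minimal sketch, the example below maps ordinary JSON keys to schema.org IRIs through @context and identifies the thing being described with @id and @type; the person and URLs are illustrative.

```python
import json

# Ordinary-looking JSON made self-describing by mapping its keys to IRIs.
doc = {
    "@context": {
        "name": "https://schema.org/name",
        "homepage": {"@id": "https://schema.org/url", "@type": "@id"},
    },
    "@id": "https://example.com/people/ada",  # identity of the thing described
    "@type": "https://schema.org/Person",     # its class
    "name": "Ada Lovelace",
    "homepage": "https://example.com/~ada",
}

print(json.dumps(doc, indent=2))
```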

Properties: base, containers, context, direction, graph, imports, included, language, lists, nests, prefixes, propagate, protected, reverse, set, types, values, versions, and vocabulary

Website: https://json-ld.org/

9.21 - Spectral

Spectral is an open-source API linter for enforcing style guides and best practices across JSON Schema, OpenAPI, and AsyncAPI documents. It helps teams ensure consistency, quality, and adherence to organizational standards in API design and development.

Enforcing style guides across JSON artifacts to govern schema.

Spectral is an open-source API linter for enforcing style guides and best practices across JSON Schema, OpenAPI, and AsyncAPI documents. It helps teams ensure consistency, quality, and adherence to organizational standards in API design and development.

While Spectral is a tool, its rules format is increasingly treated as a de facto standard. Spectral traces its roots to Speccy, an API linting engine created by Phil Sturgeon at WeWork. Phil later brought the concept to Stoplight, where Spectral and the next iteration of the rules format were developed; Stoplight was subsequently acquired by SmartBear.

With Spectral, you define rules and rulesets using properties such as given, then, description, message, severity, formats, recommended, and resolved. These can be applied to any JSON or YAML artifact, with primary adoption to date around OpenAPI and AsyncAPI.
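
A minimal ruleset sketch using those properties is shown below, written out as .spectral.json (Spectral also accepts YAML rulesets); the single rule, which requires info.description on an OpenAPI document, is purely illustrative.

```python
import json

# A minimal Spectral ruleset, built as a plain dict and written out as
# .spectral.json. The single rule is illustrative: it requires every
# OpenAPI document to carry info.description.
ruleset = {
    "rules": {
        "info-description-required": {
            "description": "Every API must describe itself.",
            "message": "info.description is missing.",
            "severity": "error",
            "given": "$.info",
            "then": {
                "field": "description",
                "function": "truthy",
            },
        }
    }
}

with open(".spectral.json", "w") as handle:
    json.dump(ruleset, handle, indent=2)

# The CLI would then be run along the lines of:
#   spectral lint openapi.yaml --ruleset .spectral.json
```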

The project’s GitHub repository hosts active issues and discussions, largely focused on the CLI. Development continues under SmartBear, including expanding how rules are applied across API operations and support for Arazzo workflow use cases.

Most commonly, Spectral is used to lint and govern OpenAPI and AsyncAPI specifications during design and development. It is expanding into Arazzo workflows and can be applied to any standardized JSON or YAML artifact validated with JSON Schema—making it a flexible foundation for governance across the API lifecycle.

License: Apache

Tags: Rules, Governance

Properties: rules, rulesets, given, then, description, message, severity, formats, recommended, and resolved properties

GitHub: https://github.com/stoplightio/spectral

Standards: JSON Schema

9.22 - Vacuum

RuleSets are how you configure vacuum, telling it which rules to run against each specification and how those rules should be evaluated. A RuleSet is a style guide, with each rule being an individual requirement within the overall guide.

Enforcing style guides across JSON artifacts to govern schema.

Vacuum rules, in the context of API linting, are configuration definitions that specify quality and style requirements for OpenAPI specifications. RuleSets serve as comprehensive style guides where each individual rule represents a specific requirement that the API specification must meet. These rules are configured using YAML or JSON and follow the Spectral Ruleset model, making them fully compatible with Spectral rulesets while adding vacuum-specific enhancements like an id property for backward compatibility and flexible naming. A RuleSet contains a collection of rules that define what to check, where to check it, and how violations should be handled, allowing organizations to enforce consistent API design standards across their specifications.

Each rule within a RuleSet consists of several key components: a given property that uses JSONPath expressions (supporting both RFC 9535 and JSON Path Plus) to target specific sections of the OpenAPI document, a severity level (such as error, warning, or info) that indicates the importance of the rule, and a then clause that specifies which built-in function to apply and what field to evaluate. For example, a rule might target all tag objects in an API specification using $.tags[*] as the JSONPath expression, then apply the truthy function to verify that each tag has a description field populated. Built-in core functions like casing, truthy, and pattern provide the logic for evaluating whether specifications comply with the defined rules, enabling automated validation of API documentation quality, consistency, and adherence to organizational or industry standards.
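
The rule walked through above looks roughly like the sketch below when written into a Spectral-compatible ruleset, including the vacuum-specific id property; the field values are illustrative rather than copied from the vacuum documentation.

```python
import json

# The rule described above (check every tag object for a description),
# expressed in the Spectral-compatible ruleset format with the
# vacuum-specific `id` property included. Values are illustrative.
ruleset = {
    "rules": {
        "tag-description": {
            "id": "tag-description",  # vacuum-specific identifier
            "description": "Tags must have a description.",
            "severity": "warn",
            "given": "$.tags[*]",     # JSONPath targeting all tag objects
            "then": {
                "field": "description",
                "function": "truthy",  # built-in check: value must be present
            },
        }
    }
}

print(json.dumps(ruleset, indent=2))
```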

Vacuum is a soft fork of Spectral, keeping the base format for interoperability while taking the specification in a new direction to support OpenAPI Doctor and Vacuum's linting-rule functionality in tooling and pipelines.

License: Apache

Tags: Rules, Governance

Properties: rules, rulesets, given, then, description, message, severity, formats, recommended, and resolved properties

Website: https://quobix.com/vacuum/rulesets/understanding/

Standards: JSON Schema, Spectral

9.23 - Open Policy Agent (OPA)

OPA (Open Policy Agent) is a general-purpose policy engine that unifies policy enforcement across your stack—improving developer velocity, security, and auditability. It provides a high-level, declarative language (Rego) for expressing policies across a wide range of use cases.

Unifies policy enforcement for authentication, security, and auditability.

OPA (Open Policy Agent) is a general-purpose policy engine that unifies policy enforcement across your stack—improving developer velocity, security, and auditability. It provides a high-level, declarative language (Rego) for expressing policies across a wide range of use cases.

Originally developed at Styra in 2016, OPA was donated to the Cloud Native Computing Foundation (CNCF) in 2018 and graduated in 2021.

Rego includes rules and rulesets, unit tests, functions and built-ins, reserved keywords, conditionals, comprehensions/iterations, lookups, assignment, and comparison/equality operators—giving you a concise, expressive way to author and validate policy.

You can contribute on GitHub, follow updates via the blog and its RSS feed, and join conversations in the community Slack and on the OPA LinkedIn page.

OPA works across platforms and operational layers, standardizing policy for key infrastructure such as Kubernetes, API gateways, Docker, CI/CD, and more. It also helps normalize policy across diverse data and API integration patterns used in application and agent automation.
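
As a minimal sketch of what that enforcement looks like from an application's point of view, the example below asks a locally running OPA server for a decision over its REST data API; the package path, rule name, and input shape belong to an assumed example policy rather than anything defined here.

```python
import json
import urllib.request

# Assumes OPA is running locally (e.g. `opa run --server`) with an example
# policy loaded under package httpapi.authz that defines an `allow` rule.
# The package path, rule name, and input shape are assumptions.
url = "http://localhost:8181/v1/data/httpapi/authz/allow"
payload = json.dumps({
    "input": {"user": "alice", "method": "GET", "path": ["finance", "salary"]}
}).encode("utf-8")

request = urllib.request.Request(
    url, data=payload, headers={"Content-Type": "application/json"}
)

with urllib.request.urlopen(request) as response:
    decision = json.load(response)

# OPA wraps the policy decision in a `result` field.
print("allowed:", decision.get("result", False))
```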

License: Apache

Tags: Policies, Authentication, Authorization

Properties: rules, language, tests, functions, reserved names, grammar, conditionals, iterations, lookups, assignment, equality

Website: https://www.openpolicyagent.org/

9.24 - CSV

CSV (Comma-Separated Values) is a simple text format for storing tabular data where each line represents a row and values within rows are separated by commas (or other delimiters).

Lighter weight data serialization format for data exchange.

CSV (Comma-Separated Values) is a simple, plain-text file format used to store tabular data in a structured way where each line represents a row and values within each row are separated by commas (or other delimiters like semicolons, tabs, or pipes). This straightforward format makes CSV one of the most universally supported data exchange formats, readable by spreadsheet applications like Microsoft Excel, Google Sheets, and LibreOffice Calc, as well as databases, data analysis tools, and virtually every programming language. CSV files are human-readable when opened in a text editor, showing data in a grid-like structure that closely mirrors how it would appear in a spreadsheet. The format’s simplicity—requiring no special markup, tags, or complex syntax—makes it ideal for representing datasets, lists, reports, and any tabular information where relationships between columns and rows need to be preserved.

Despite its simplicity, CSV has become essential for data import/export operations, data migration between systems, bulk data loading into databases, and sharing datasets for data analysis and machine learning. The format is particularly valuable in business contexts for handling customer lists, financial records, inventory data, sales reports, and scientific datasets. CSV files are compact and efficient, requiring minimal storage space compared to more verbose formats like XML or JSON, which makes them ideal for transferring large datasets over networks or storing historical data archives. However, CSV has limitations: it lacks standardized support for data types (everything is typically treated as text unless parsed), has no built-in schema definition, struggles with representing hierarchical or nested data, and can encounter issues with special characters, line breaks within fields, or commas in data values (typically addressed by enclosing fields in quotes). Despite these constraints, CSV remains the go-to format for flat, rectangular data exchange due to its universal compatibility, ease of use, and the fact that it can be created and edited with the most basic tools, from text editors to sophisticated data processing frameworks.
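
A minimal sketch of those trade-offs using Python's standard csv module: a field containing an embedded comma is quoted automatically on the way out, and every value comes back as plain text on the way in.

```python
import csv
import io

# Round-trip a small table, including a value with an embedded comma,
# which the csv module handles by quoting the field.
rows = [
    {"name": "Acme, Inc.", "city": "Portland", "employees": "42"},
    {"name": "Globex", "city": "Springfield", "employees": "118"},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["name", "city", "employees"])
writer.writeheader()
writer.writerows(rows)

print(buffer.getvalue())
# name,city,employees
# "Acme, Inc.",Portland,42
# Globex,Springfield,118

# Reading it back: every value comes out as text, since CSV carries no types.
for record in csv.DictReader(io.StringIO(buffer.getvalue())):
    print(record["name"], int(record["employees"]))
```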

Tags: Data Format

Properties: Text-Based, Plain Text Format, Tabular Data, Row-Based Structure, Column-Based Structure, Delimiter-Separated, Comma Delimiter, Alternative Delimiters, Tab-Separated Values, Pipe-Separated, Semicolon-Separated, Human-Readable, Machine-Parsable, Flat File Format, Simple Syntax, Minimal Markup, No Tags, No Attributes, Lightweight, Compact, Small File Size, Efficient Storage, Fast Parsing, Universal Support, Cross-Platform, Language-Agnostic, Spreadsheet Compatible, Excel Compatible, Google Sheets Compatible, LibreOffice Compatible, Database Import/Export, SQL Bulk Loading, Data Exchange Format, Data Migration, Line-Based Records, Newline Row Separator, Field Delimiter, Quote Encapsulation, Double-Quote Escaping, Escape Characters, Header Row Support, Column Names, Schema-Less, No Data Types, Text-Only Values, No Type Enforcement, No Metadata, No Validation, No Comments, No Processing Instructions, RFC 4180, MIME Type text/csv, File Extension .csv, UTF-8 Encoding, ASCII Compatible, Character Encoding Support, Special Character Handling, Embedded Commas, Embedded Quotes, Embedded Newlines, Field Quoting, Optional Quoting, Whitespace Handling, Trailing Spaces, Leading Spaces, Empty Fields, Null Values, Missing Data Support, Sparse Data, Dense Data, Rectangular Grid, Fixed Columns, Variable Rows, No Nesting, No Hierarchy, No Relationships, Flat Structure, Single Table, No Joins, No Foreign Keys, Streaming Compatible, Incremental Processing, Line-by-Line Reading, Memory Efficient, Large File Support, Append-Only, Chronological Data, Time Series Data, Log Files, Sequential Access, Random Access, Indexing Support, Sorting Compatible, Filtering Compatible, Aggregation Compatible, Data Analysis, Statistical Analysis, Machine Learning Datasets, Training Data, Feature Vectors, Pandas Compatible, R Compatible, Python CSV Module, Java CSV Libraries, .NET CSV Support, Excel Formula Support, Cell Formatting Loss, No Styling, No Colors, No Fonts, No Borders, No Images, No Charts, Data-Only Format, Export Format, Import Format, Batch Processing, ETL Operations, Data Warehousing, Business Intelligence, Reporting Format, Audit Trails, Transaction Logs, Customer Lists, Contact Lists, Inventory Data, Sales Data, Financial Records, Scientific Data, Sensor Data, Measurement Data, Survey Results, Poll Data, Census Data, Demographic Data, Geographic Data, Coordinate Data, Latitude Longitude, Address Lists, Email Lists, Product Catalogs, Price Lists, Stock Data, Market Data, Historical Data, Archive Format, Backup Format, Version Control Friendly, Diff-Friendly, Merge-Friendly, Git Compatible, Text Editor Compatible, Command Line Tools, Awk Processing, Sed Processing, Grep Searching, Cut Command, Unix Tools, Shell Scripting, Automation Friendly, Cron Job Compatible, Scheduled Exports, API Responses, Web Scraping Output, Data Dumps, Bulk Downloads, FTP Transfer, Email Attachments, Cloud Storage, S3 Compatible, Azure Blob Storage, Google Cloud Storage, Database Export, MySQL Export, PostgreSQL Export, SQLite Export, Oracle Export, SQL Server Export, MongoDB Export, NoSQL Export, Data Conversion, Format Transformation, JSON to CSV, XML to CSV, Excel to CSV, CSV to JSON, Interoperability, Legacy System Support, Backwards Compatible, Universal Standard, Industry Standard, De Facto Standard, Widely Adopted, Mature Format, Production Ready, Battle Tested, Simple Implementation, Easy Generation, Easy Parsing, Minimal Dependencies, No External Libraries Required, Low Overhead, High Performance, Scalable, 
Concatenation Support, Split Support, Chunking Support, Partitioning Support, Compression Compatible, Gzip Compatible, Zip Compatible, Tar Compatible

Wikipedia: https://en.wikipedia.org/wiki/Comma-separated_values

9.25 - HTML

HTML (HyperText Markup Language) is the standard markup language used to create and structure content on web pages, defining elements like headings, paragraphs, links, images, and forms through a system of tags that web browsers interpret and render as visual displays.

The standard markup language powering the Web.

HTML (HyperText Markup Language) is the foundational markup language of the World Wide Web, created by Tim Berners-Lee in 1991, that defines the structure and content of web pages through a system of elements enclosed in angle-bracket tags. HTML provides the semantic framework for organizing information on the web, using tags like <h1> for headings, <p> for paragraphs, <a> for hyperlinks, <img> for images, <table> for tabular data, and <form> for user input. These elements create a hierarchical document structure called the Document Object Model (DOM) that web browsers parse and render into the visual pages users see and interact with. HTML is a declarative language, meaning developers describe what content should appear and how it should be structured rather than specifying how to display it—the actual visual presentation is handled by CSS (Cascading Style Sheets), while interactive behavior is managed by JavaScript. This separation of concerns allows HTML to focus purely on semantic meaning and content structure, making web pages accessible to screen readers, search engines, and various devices.
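
A minimal sketch of that parsing step, using Python's standard html.parser to walk a small fragment and print its tag structure, roughly the way a browser builds the DOM tree before CSS and JavaScript take over; the fragment itself is made up.

```python
from html.parser import HTMLParser

# Walk a small HTML fragment and print its tag outline, roughly the way a
# browser builds the DOM tree from nested elements.
class OutlinePrinter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.depth = 0

    def handle_starttag(self, tag, attrs):
        print("  " * self.depth + f"<{tag}>")
        self.depth += 1

    def handle_endtag(self, tag):
        self.depth -= 1

fragment = """
<article>
  <h1>Hello, web</h1>
  <p>A paragraph with a <a href="https://example.com">link</a>.</p>
</article>
"""

OutlinePrinter().feed(fragment)
```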

Modern HTML (currently HTML5, standardized by the W3C and WHATWG) has evolved far beyond simple text formatting to support rich multimedia content, complex web applications, and interactive experiences without requiring plugins. HTML5 introduced semantic elements like

,