Motion
- 1: Overview
- 2: Stories
- 2.1: API Reusability - Naftiko Blog
- 2.2: Getting On The MCP Bullet Train Without Leaving Governance Waiting At The Platform
- 2.3: Going from being on top of public APIs to feeling the way with MCPs
- 2.4: Pivoting AI-enabled integration to what customers really want
- 2.5: AI Orchestration Use Case - Naftiko Blog
- 2.6: Capabilities - API Evangelist
- 2.7: Cost - Naftiko Blog
- 2.8: Data Sovereignty Use Case - Naftiko Blog
- 2.9: Innovation - Naftiko Blog
- 2.10: Naftiko Signals - API Evangelist
- 2.11: Naftiko Signals White Paper - Naftiko Blog
- 2.12: Risk - Naftiko Blog
- 2.13: SQL Data Access Use Case - Naftiko Blog
- 2.14: Velocity - Naftiko Blog
- 3: Conversations
- 3.1: Naftiko Capabilities Podcast - January 13th, 2025
- 3.2: Naftiko Capabilities Podcast - January 15th, 2025
- 3.3: Naftiko Capabilities Podcast - January 20th, 2025
- 3.4: Naftiko Capabilities Podcast - January 22nd, 2025
- 3.5: Naftiko Capabilities Podcast - January 26th, 2025
- 3.6: Naftiko Capabilities Podcast - January 29th, 2025
- 3.7: Naftiko Capabilities Podcast - January 6th, 2025
- 3.8: Naftiko Capabilities Podcast - January 8th, 2025
- 3.9: Conversation with Sam Newman in November 2025
- 3.10: Conversation with Simon Wardley in November 2025
- 3.11: Conversation with Christian Posta of Solo October 2025
- 3.12: Conversation with David Boyne of EventCatalog October 2025
- 3.13: Conversation with Mike Amundsen of Apiture October 2025
- 3.14: Conversation with Mike Amundsen of Apiture October 2025
- 3.15: Conversation with David Biesack of Apiture October 2025
- 4: Capabilities
- 4.1: API Reusability
- 4.2: Manage Events
- 5: Use Cases
- 6: Services
- 6.1: Anthropic
- 6.2: Atlassian
- 6.3: Avalara
- 6.4: BigCommerce
- 6.5: Cvent
- 6.6: Datadog
- 6.7: Docker
- 6.8: Figma
- 6.9: GitHub
- 6.10: Google
- 6.11: Grafana
- 6.12: HubSpot
- 6.13: Kong
- 6.14: LinkedIn
- 6.15: Mailchimp
- 6.16: Meta
- 6.17: Microsoft Graph
- 6.18: New Relic
- 6.19: Notion
- 6.20: OpenAI
- 6.21: Salesforce
- 6.22: SendGrid
- 6.23: ServiceNow
- 6.24: Shopify
- 6.25: Slack
- 6.26: Snowflake
- 6.27: Stripe
- 6.28: Twilio
- 6.29: Youtube
- 6.30: Zendesk
- 6.31: Zoom
- 7: Conversations Archive
- 8: Stories Archive
- 8.1: AI Context Use Case - Naftiko Blog
- 8.2: Hypermedia Automating Capabilities in this AI Moment
- 8.3: Exploring What Schema Tools Are Available
- 8.4: We've Been Wrong About API Reuse All Along
- 8.5: Avalara Developer Experience Review
- 8.6: Capabilities - The New Stack Blog
- 8.7: Capabilities Podcast - Naftiko Blog
- 8.8: Engine - Naftiko Blog
- 8.9: Fabric - Naftiko Blog
- 8.10: Naftiko Launch - API Evangelist Blog
- 8.11: Naftiko Launch - EIN Presswire
- 8.12: Naftiko Signals - Naftiko Blog
- 8.13: Naftiko Signals White Paper
- 8.14: Capabilities - Naftiko Blog
- 8.15: Naftiko Launch - Naftiko Blog
- 9: Standards
- 9.1: OpenAPI
- 9.2: OpenAPI Overlays
- 9.3: Arazzo
- 9.4: AsyncAPI
- 9.5: APIOps Cycles
- 9.6: Postman Collections
- 9.7: Postman Environments
- 9.8: Bruno Collection
- 9.9: Bruno Environment
- 9.10: Open Collections
- 9.11: gRPC
- 9.12: JSON RPC
- 9.13: Model Context Protocol (MCP)
- 9.14: Apache Parquet
- 9.15: Avro
- 9.16: Agent2Agent
- 9.17: JSON Schema
- 9.18: Protocol Buffers
- 9.19: Schema.org
- 9.20: JSON-LD
- 9.21: Spectral
- 9.22: Vacuum
- 9.23: Open Policy Agent (OPA)
- 9.24: CSV
- 9.25: HTML
- 9.26: Java Database Connectivity (JDBC)
- 9.27: JSON
- 9.28: Markdown
- 9.29: ODBC
- 9.30: YAML
- 9.31: XML
- 9.32: OAuth 2.0
- 9.33: JSON Web Token (JWT)
- 9.34: HTTP
- 9.35: HTTP 2.0
- 9.36: HTTP 3.0
- 9.37: APIs.json
- 9.38: API Commons
- 9.39: Microcks Examples
- 9.40: vCard Ontology
- 10: Organizations
- 10.1: Apache Foundation
- 10.2: Cloud Native Computing Foundation (CNCF)
- 10.3: IANA
- 10.4: Internet Engineering Task Force (IETF)
- 10.5: Linux Foundation
- 10.6: World Wide Web Consortium (W3C)
- 11: Channels
- 11.1: Spotify
- 11.2: Youtube
- 11.3: API Evangelist
- 11.4: Apple Podcasts
- 11.5: Bluesky
- 11.6: Crunchbase
- 11.7: Daily.Dev
- 11.8: Dev.To
- 11.9: Discord
- 11.10: DockerHub
- 11.11: EIN Presswire
- 11.12: G2
- 11.13: Gartner
- 11.14: GitHub
- 11.15: Hacker News
- 11.16: LinkedIn
- 11.17: Medium
- 11.18: Naftiko Blog
- 11.19: Reddit
- 11.20: Slack
- 11.21: Twitch
- 11.22: Visual Studio Marketplace
- 12: Events
- 12.1: AI on the Factory Floor Webinar
- 12.2: Nordic APIs Live Cast
- 12.3: Kubecon
- 12.4: APIDays
- 13: Glossary
- 13.1: Domain
- 13.2: AI Integration
- 13.3: Agentic
- 13.4: AI
- 13.5: AI-Driven
- 13.6: API Discovery
- 13.7: API Integration
- 13.8: Application Integration
- 13.9: Budgets
- 13.10: CCPA
- 13.11: Cloud Integration
- 13.12: Code Generation
- 13.13: Compliance
- 13.14: Costs
- 13.15: Data Integration
- 13.16: Databases
- 13.17: DevOps
- 13.18: EDI
- 13.19: Event-Driven
- 13.20: Gateways
- 13.21: GDPR
- 13.22: GitOps
- 13.23: Governance
- 13.24: GraphQL
- 13.25: Integrations
- 13.26: KYC
- 13.27: MCP
- 13.28: Microservices
- 13.29: Modernization
- 13.30: Observability
- 13.31: Open-Source
- 13.32: OpenAPI
- 13.33: Operations
- 13.34: Platforms
- 13.35: Registries
- 13.36: Risk
- 13.37: SaaS
- 13.38: Schema
- 13.39: Security
- 13.40: Standards
- 13.41: Strategy
- 13.42: Virtualizations
- 13.43: Access Control
- 13.44: Advertising
- 13.45: Agents
1 - Overview
As we work to define our go-to-market motion at Naftiko, we felt there was no reason to keep the process a secret. We have an initial plan in place, but our go-to-market is something that will continue to evolve and change over time based upon the stories we tell and the conversations we have, so we thought it would help increase the velocity of our go-to-market flywheel if we just did it all out in the open.
The majority of the conversations we are having in the ecosystem are public, so it makes sense that we define, develop, publish, and syndicate those stories out in the open. A big part of our go-to-market effort is to openly figure out what go-to-market means for commercial open-source software, which is something we are more than happy to share with the wider open-source and API ecosystems.
This work is managed using Hugo/Docsy, but more importantly using YAML in GitHub. All of the stories, conversations, services, channels, and other information used as part of the go-to-market motion are stored simply as YAML and then synced with other platforms like Google Docs, Notion, and wherever the work for go-to-market activities occurs, with this documentation site serving as the central reference and point of automation for all of this work.
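As a loose illustration of what that looks like, here is a minimal sketch of a story entry as it might live in the repository. The field names are hypothetical, not the actual Naftiko schema; only the syndication targets are drawn from the description above.

```yaml
# Hypothetical story entry; field names are illustrative,
# not the actual Naftiko go-to-market schema.
- name: API Reusability
  type: story
  channel: Naftiko Blog
  tags:
    - API Reuse
    - Developer Experience
  syndication:
    - google-docs      # synced to where go-to-market work occurs
    - notion
```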
2 - Stories
2.1 - API Reusability - Naftiko Blog
Title
- API Reusability
Tagline
- Embracing your legacy and meeting the demands of AI integration using your existing API investments is how you will get the work done.
Description
This use case is concerned with encouraging the discoverability and reuse of existing APIs, leveraging existing API infrastructure to quantify what API, schema, and tooling reuse looks like, and incentivizing reuse of APIs and schema across the software development lifecycle—reducing API sprawl and hardening the APIs that already exist.
Teams need to be able to easily use existing API paths and operations as part of new integrations and automation, ensuring that paths, operations, and schema are available within IDE and copilot tooling—meeting developers where they already work. API reusability enables developers while also informing leadership regarding what API reuse looks like and where opportunities to refine exist.
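To make "quantifying reuse" concrete, here is a minimal sketch, assuming a hypothetical catalog entry format, of the kind of per-operation data that could back these reuse measurements:

```yaml
# Hypothetical catalog entry; fields are illustrative only.
path: /customers/{id}
operation: GET
openapi: openapi/customers.yaml   # existing contract being reused
consumers: 7                      # integrations reusing this operation
schema_reuse:
  - Customer                      # shared schema referenced elsewhere
exposed_via:
  - http
  - mcp                           # surfaced to IDE and copilot tooling
```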
Benefits
- Unlock access to legacy data
- Right-size & unify APIs
- Foundation for AI initiatives
Pain
- Build on existing internal APIs
- Reuse 3rd-party APIs already used
- Need to leverage existing OpenAPIs
- We do not understand what API reuse is
- We aren’t able to communicate API reuse
Gains
- Leverage existing internal API catalog
- Establish API catalog for 3rd-party APIs
- Extend existing OpenAPI for MCP delivery
- We are able to communicate reuse to leadership
- We are able to meet developers where they work
Connects
- Internal APIs
- Infrastructure APIs
- SaaS APIs
- Partner APIs
- Paths
- Schema
Adapters
- HTTP
- MCP
- OpenAPI
Tags
- API Management
- API Reuse
- Developer Tooling
- Developer Enablement
- Developer Experience
- API Governance
- AI Governance
Links
2.2 - Getting On The MCP Bullet Train Without Leaving Governance Waiting At The Platform
Links
2.3 - Going from being on top of public APIs to feeling the way with MCPs
Links
2.4 - Pivoting AI-enabled integration to what customers really want
Links
2.5 - AI Orchestration Use Case - Naftiko Blog
Title
- AI Orchestration
Tagline
- Planning ahead for teams when it comes to discovery, security, and other properties of the Agent-2-Agent specification helps steer the fleet in the same direction.
Description
This use case provides the data, skills, and capabilities that internally deployed artificial intelligence agents can use to automate and orchestrate tasks while discovering and negotiating with other agents to accomplish specific goals. It employs the open-source Agent-2-Agent specification to securely and confidently enable agentic activity across operations.
As teams focus on responding to this AI moment and deploying MCP servers on top of existing APIs and other tooling, they need to begin understanding how to implement agentic automation and orchestration on top of MCP servers. Teams need structure and guidance when it comes to authentication and authorization, discovery, governance, and all the standardization required to deploy agents at scale.
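As a rough sketch of what that structure could look like, here is a hypothetical agent descriptor loosely inspired by the Agent-2-Agent idea of advertising skills for discovery. The field names are illustrative and are not taken from the A2A specification itself.

```yaml
# Hypothetical agent descriptor; loosely inspired by A2A-style
# discovery, not actual Agent-2-Agent specification fields.
agent: order-fulfillment-agent
description: Automates order routing and carrier negotiation.
skills:
  - id: route-order
    description: Selects a fulfillment path for an incoming order.
  - id: negotiate-carrier
    description: Requests quotes from partner shipping agents.
auth:
  scheme: oauth2          # authentication/authorization standardized up front
policy: fulfillment-v1    # policy-driven governance applied to agent activity
```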
Tags
- Automation
- Agentic
- Agent-2-Agent
- Compliance
Benefits
- Discover skills & capabilities
- Internal & external agents
- Implement A2A protocol
- Apply policy-driven governance
Pain
- High complexity in standardizing message formats and handshakes.
- Difficult to cap liability; risk of hallucinated agreements.
- Requires vetting external agents; security risks.
- High debugging difficulty; “Black Box” interactions.
Gains
- Universal connectivity; “Write once, talk to many.”
- Removal of human bottlenecks in approval chains.
- Access to dynamic markets and real-time supply chains.
- Modular system; easy to swap out underperforming agents.
Connects
- Internal APIs
- Infrastructure APIs
- SaaS APIs
- Partner APIs
- MCP Servers
Adapters
- HTTP
- OpenAPI
- MCP
- A2A
Links
2.6 - Capabilities - API Evangelist
Links
2.7 - Cost - Naftiko Blog
Links
2.8 - Data Sovereignty Use Case - Naftiko Blog
Title
- Data Sovereignty
Tagline
- Govern, encrypt, and audit how data moves through your entire stack.
- APIs, SaaS tools, and AI all touch sensitive data, but governance rarely keeps up. Shadow IT, unencrypted transfers, and compliance risk abound.
Description
This use case focuses on empowering companies to take control of their data that resides across the third-party SaaS solutions they use regularly. The data sovereignty movement is centered on establishing more control over the data generated across the different services you depend on, ensuring data is integrated, migrated, and synced to data and object stores where a company has full control and access.
Data sovereignty enables teams to localize data and train local AI models using the data they produce across third-party platforms. This use case may be aligned with country or regional regulations, or it may simply be part of enterprise compliance programs. Data sovereignty investments have increased as part of the growth of AI integrations and the need for context across third-party systems, as well as the increasing value of data itself.
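A minimal sketch, assuming a hypothetical sync-job format, of what moving SaaS data into stores a company fully controls could look like:

```yaml
# Hypothetical sync job; names and fields are illustrative only.
job: crm-to-lakehouse
source:
  type: saas-api
  service: hubspot            # example third-party SaaS source
destination:
  type: object-store
  bucket: company-owned-data  # storage the company fully controls
schedule: hourly
encryption: in-transit-and-at-rest
audit_log: enabled            # supports compliance and regulatory review
```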
Tags
- Data
- Regulation
- Compliance
- Sovereignty
- Control
Benefits
- Aggregate 3rd-party SaaS data
- Increase visibility of SaaS data
- Allow for more SaaS data discovery
- Encourage the reusability of SaaS data
- Enable ETL/ELT access to SaaS data
Pain
- Difficulty in Accessing 3rd-Party Data Sources
- Regulatory Mandate for Control Over All Data
- GraphQL Was Difficult to Adopt Across Teams
- Lack of Data Available for AI Pilot Projects
Gains
- Provide SQL Access Across 3rd-Party Data Sources
- Satisfy Government Regulatory Compliance Requirements
- Speak SQL Across All Data Sources For Any Teams
- Universal Access to Data for Use in AI Projects
Connects
- Infrastructure APIs
- SaaS APIs
- Partner APIs
Adapters
- HTTP
- OpenAPI
Links
2.9 - Innovation - Naftiko Blog
Links
2.10 - Naftiko Signals - API Evangelist
Links
2.11 - Naftiko Signals White Paper - Naftiko Blog
Links
2.12 - Risk - Naftiko Blog
Links
2.13 - SQL Data Access Use Case - Naftiko Blog
Title
- SQL Data Access
Tagline
- Data lives in silos. Teams want insights now, but every new API means another custom connector.
Body
This use case seeks to consistently unlock the data companies currently depend upon across multiple third-party SaaS providers and a variety of existing database connections via JDBC and ODBC to ensure AI integrations have the data they require. Data today is spread across many internal and external systems, and making it consistently available as part of AI integrations has significantly slowed the delivery of new products and features.
Teams benefit from consistent SQL access to data sources via ODBC/JDBC interfaces, and expanding this access to third-party SaaS will help teams provide the context, resources, tooling, and data needed to deliver AI integrations across the enterprise. The capability and resulting engine deployment for this use case provides a unified, consolidated, and simplified approach to providing the data needed to power individual AI integrations within specific business domains.
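As one hedged illustration, assuming a hypothetical connector configuration, federated SQL access across SaaS and internal sources might be wired up along these lines:

```yaml
# Hypothetical federated SQL configuration; illustrative only.
sources:
  - name: billing_db
    driver: jdbc              # existing internal database connection
  - name: crm
    driver: http              # third-party SaaS API exposed as a table
expose:
  - odbc
  - jdbc                      # analytics teams keep their usual drivers
example_query: >
  SELECT c.account, SUM(b.amount)
  FROM crm.accounts c JOIN billing_db.invoices b ON c.id = b.account_id
  GROUP BY c.account
```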
Tags
- SQL
- Data
- SaaS
- Dashboards
- Analytics
- Copilots
Benefits
- Unlock SaaS data
- JDBC / ODBC drivers
- Federated SQL processing
Pain
- Limited or No Access to SaaS Data for Analytics Teams
- No Access to Data Sources Across MCP-Enabled AI Integration
- Demand for 3rd-Party Data for Business Intelligence in Dashboards
- Demand for 3rd-Party Data by Data Science for ML Engineering
Gains
- Access to SaaS Data via SQL
- Access to Internal APIs via SQL
- Easy Connections to Existing Dashboards
- Plug-and-Play Connectors for Data Science
Connects
- Internal APIs
- Infrastructure APIs
- SaaS APIs
- Partner APIs
- Legacy APIs
Adapters
- HTTP
- OpenAPI
- ODBC
- JDBC
- MCP
Links
2.14 - Velocity - Naftiko Blog
Links
3 - Conversations
3.1 - Naftiko Capabilities Podcast - January 13th, 2025
This is a placeholder for an upcoming episode, with work added when ready.
Table of Contents
- Intro - Kin Lane
- Topic - ????
- Segue - Kin Lane
- Topic - ????
- Segue - Kin Lane
- Closing - Kin Lane
Links
3.2 - Naftiko Capabilities Podcast - January 15th, 2025
This is a placeholder for an upcoming episode, with work added when ready.
Table of Contents
- Intro - Kin Lane
- Topic - ????
- Segue - Kin Lane
- Topic - ????
- Segue - Kin Lane
- Closing - Kin Lane
Links
3.3 - Naftiko Capabilities Podcast - January 20th, 2025
This is a placeholder for an upcoming episode, with work added when ready.
Table of Contents
- Intro - Kin Lane
- Topic - ????
- Segue - Kin Lane
- Topic - ????
- Segue - Kin Lane
- Closing - Kin Lane
Links
3.4 - Naftiko Capabilities Podcast - January 22nd, 2025
This is a placeholder for an upcoming episode, with work added when ready.
Table of Contents
- Intro - Kin Lane
- Topic - ????
- Segue - Kin Lane
- Topic - ????
- Segue - Kin Lane
- Closing - Kin Lane
Links
3.5 - Naftiko Capabilities Podcast - January 26th, 2025
This is a placeholder for an upcoming episode, with work added when ready.
Table of Contents
- Intro - Kin Lane
- Topic - ????
- Segue - Kin Lane
- Topic - ????
- Segue - Kin Lane
- Closing - Kin Lane
Links
3.6 - Naftiko Capabilities Podcast - January 29th, 2025
This is a placeholder for an upcoming episode, with work added when ready.
Table of Contents
- Intro - Kin Lane
- Topic - ????
- Segue - Kin Lane
- Topic - ????
- Segue - Kin Lane
- Closing - Kin Lane
Links
3.7 - Naftiko Capabilities Podcast - January 6th, 2025
The first episode focused on capabilities with Mike Amundsen and Christian Posta.
Table of Contents
- Intro - Kin Lane
- What is a capability? - Mike Amundsen
- Segue - Kin Lane
- What is a capability? - Christian Posta
- Segue - Kin Lane
- Closing - Kin Lane
Links
3.8 - Naftiko Capabilities Podcast - January 8th, 2025
This is a placeholder for an upcoming episode, with work added when ready.
Table of Contents
- Intro - Kin Lane
- Topic - ????
- Segue - Kin Lane
- Topic - ????
- Segue - Kin Lane
- Closing - Kin Lane
Links
3.9 - Conversation with Sam Newman in November 2025
Very relevant discussion about microservices and capabilities that provides a good deal of material for Naftiko storytelling.
Questions
- Who are you?
- What is a microservice?
- What percentage of microservices are people vs. technology?
- How are microservices dependent on organizational structures?
- What have you learned from the last ten years of microservices?
- How do you assess if an organization is ready for microservices?
- What is the impact of AI you’ve seen on organizations?
- Is AI supposed to be controlled by us, or control us?
- Do you feel like small language models are the future?
- What role does ownership and accountability play in AI?
- What recommendations do you have to help right-size capabilities?
- How do changes in team structures impact success?
- What is the role that STEM plays in your work?
- What recommendations do you have for people just starting?

3.10 - Conversation with Simon Wardley in November 2025
Very relevant discussion about Wardley mapping and AI that provides a good deal of material for Naftiko storytelling.
Questions
- What is Wardley mapping?
- Is there politics in technology?
- What do you say to people when they say AI will be cheaper?
- How do we make AI visible?
- What are the implications of AI at the geopolitical level?
- Can you talk to how you use LinkedIn?
- What gives you hope right now?

3.11 - Conversation with Christian Posta of Solo October 2025
A view of the capabilities discussion from the network perspective and artificial intelligence.
Questions
- What is a Capability? 2:08
- What does a natural language description of capabilities mean? 4:27
- What is the role of identity and access management?
- What is the role of the gateway when it comes to AI?
- Are Agent Gateways centralized or federated?
- What is the commercial open-source strategy of the Agent Gateway?
- What do you need from the community?

3.12 - Conversation with David Boyne of EventCatalog October 2025
Walk through the different uses of EventCatalog and how it helps with integrations.
Questions
- What is Event Catalog?
- What role does event-driven architecture play in discoverability and visibility?
- Can events help us document our systems in real-time?
- What thinking goes into your support of open-source specifications?
- How does Event Catalog help map your business domain?
- Does Event Catalog help out at the tactical level?
- How do you approach commercial open-source?
- Can event-driven architecture help with ephemeral API discovery?
- Do you feel event-driven will provide the richness and semantics AI agents will need?
- How do we get more business people involved with Event Catalog?
- What is your biggest need right now?

3.13 - Conversation with Mike Amundsen of Apiture October 2025
Very relevant discussion about hypermedia and capabilities that provides a good deal of material for Naftiko storytelling.
Questions
- What is your experience with hypermedia? 2:04
- What has changed from hypermedia to MCP? 5:53
- Do we have the semantics needed for AI agents?
- Is the world messy?
- What keeps you going with your work?

3.14 - Conversation with Mike Amundsen of Apiture October 2025
Very relevant discussion about hypermedia and capabilities that provides a good deal of material for Naftiko storytelling.
Questions
- What has changed since you wrote your first API book?
- What’s the state of the web today?
- What is the state of microservices?
- What is hypermedia? 1:29
- What is the role of information architecture in providing the meaning we need?
- What is the role of ontology and taxonomy in hypermedia? 2:04
- Why was Ted Nelson a literary radical?
- Who is Wendy Hall?
- Who is Leonard Richardson?
- What is a capability? 4:13
- Who should be in the room when crafting capabilities? 1:53
- How do we reconcile traditional automation and orchestration with agentic?
- How do you help people understand and apply semantics?
- How do you recommend folks stay grounded in hype cycles like today?
- How do you help people be successful with AI in their work when they don’t have much control?

3.15 - Conversation with David Biesack of Apiture October 2025
A great discussion of the spectrum of ways Apiture uses JSON Schema to validate and stabilize operations.
Questions
- How are you using JSON Schema?
- Should your average developer have to deal with schema references?
- How do you manage schema discovery?
- How do you generate schema change logs?
- When do you use YAML vs JSON?
- How do you extend schema?
- How do extensions help share how people use schema?
- How do you educate people about schema?

4 - Capabilities
4.1 - API Reusability
This is an exploratory proof of concept to quantify what API reuse is across a catalog of APIs for a domain, report it to the rest of the company using existing services, and then incentivize API reuse in VSCode, encouraging developers to reuse existing patterns across the APIs they are producing and consuming.
API reusability has been identified as a need across multiple conversations Naftiko is having with companies, and this repository is meant to explore what is possible across many different providers, helping to better understand what API reuse means in a way that others can use.
Use Case
This is an implementation of the API reusability use case for multiple pilot customers, leveraging the use case schema being developed to drive use case conversations, as well as how it is applied to each individual capability.
- API Reusability - Defining and driving API reusability across a domain.
Capabilities
This end-to-end use case has five separate capabilities: four individual capabilities that can be applied on their own, plus an aggregate capability that brings them all together to provide the right-size context window for an MCP server incentivizing reuse in VSCode, while also updating leadership and other teams on that reuse.
- API Reusability - An aggregate capability to help manage API reusability for a domain.
- Establish API Catalog - An individual capability to establish an API catalog being assessed.
- Define API Reuse - An individual capability to define the state of API reuse.
- Communicate API Reuse - An individual capability to communicate API reusability to different audiences.
- Incentivize API Reuse - An individual capability to incentivize the reuse of APIs during development.
As many of the steps as possible are executed and validated using Bruno whenever an HTTP adapter is used. This iteration pushed Bruno further, using pre- and post-request scripts to calculate the API reuse definition from the API catalog data gathered.
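As a sketch of what those scripts might compute, assuming hypothetical field names rather than the actual schema, the resulting API reuse definition could be captured as data like this:

```yaml
# Hypothetical output of the reuse calculation; illustrative only.
domain: customer
apis_in_catalog: 12
reuse:
  paths_reused: 34        # paths consumed by more than one integration
  schema_reused: 9        # shared schema referenced across contracts
  reuse_score: 0.62       # hypothetical aggregate metric for leadership
reported_to:
  - leadership-dashboard
  - vscode-mcp            # surfaced where developers already work
```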
Image
This is an image of the aggregate API reusability capability, trying to capture everything going on in the visual language we already use for our deck.

Links
4.2 - Manage Events
This is an exploratory proof of concept for what an end-to-end event management capability could look like, assembling all the existing standards in a single place to help inform what the capability schema might look like to support our AI Context use case, while providing governance along the way.
Use Case
This is an application of our AI Context use case, leveraging the use case schema being developed to drive use case conversations, as well as how it is applied to each individual capability.
- AI Context - Using a capability as the context window for producing MCP servers.
Capabilities
This end-to-end use case has six separate capabilities: five individual capabilities that can be applied on their own, plus an aggregate capability that brings them all together to provide the right-size context window for an MCP server.
- Events (Aggregate) - A single aggregate events capability.
- Events - An individual events capability.
- Attendees - An individual attendees capability.
- Exhibitors - An individual exhibitors capability.
- Sessions - An individual sessions capability.
- Speakers - An individual speakers capability.
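As a rough sketch, assuming a hypothetical capability schema, the aggregate capability might simply compose the individual ones listed above into a single MCP context window:

```yaml
# Hypothetical aggregate capability definition; illustrative only.
capability: events-aggregate
use_case: ai-context
composes:
  - events
  - attendees
  - exhibitors
  - sessions
  - speakers
output:
  mcp_server: events            # right-sized context window for MCP
  governance: applied-per-step  # governance provided along the way
```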
Image
This is an image of the aggregate events AI Context capability, trying to capture everything going on in the visual language we already use for our deck.

Links
5 - Use Cases
6 - Services
6.1 - Anthropic
Claude is an AI assistant created by Anthropic that helps people with a wide variety of tasks through natural conversation. It can assist with writing and editing, answer questions on many topics, help with analysis and research, provide coding support, engage in creative projects, and offer explanations of complex concepts.
Listing: https://contracts.apievangelist.com/store/anthropic/
Repo: https://github.com/api-evangelist/anthropic
APIs
- Anthropic Messages API
- Anthropic Models API
- Anthropic Message Batches API
- Anthropic Files API
- Anthropic Admin API
- Anthropic Prompts API
Properties
6.2 - Atlassian
Atlassian is a software company that develops collaboration, productivity, and project management tools to help teams work more efficiently. Its products are designed to enhance teamwork, streamline workflows, and support project tracking across a wide range of industries.
Listing: https://contracts.apievangelist.com/store/atlassian/
Repo: https://github.com/api-evangelist/atlassian
APIs
- Atlassian Bit Bucket Addon API
- Atlassian Bit Bucket Hook Events API
- Atlassian Bit Bucket Pull Requests API
- Atlassian Bit Bucket Repositories API
- Atlassian Bit Bucket Snippets API
- Atlassian Bit Bucket Teams API
- Atlassian Bit Bucket User API
- Atlassian Bit Bucket Workspaces API
- Atlassian Confluence Analytics API
- Atlassian Confluence Audit API
- Atlassian Confluence Connect App Module API
- Atlassian Confluence Content API
- Atlassian Confluence Content Body API
- Atlassian Confluence Content States API
- Atlassian Confluence Group API
- Atlassian Confluence Inline Tasks API
- Atlassian Confluence Label API
- Atlassian Confluence Longtask API
- Atlassian Confluence Relation API
- Atlassian Confluence Search API
- Atlassian Confluence Settings API
- Atlassian Confluence Space API
- Atlassian Confluence Template API
- Atlassian Confluence User API
- Atlassian Jira Announcement Banner API
- Atlassian Jira App API
- Atlassian Jira Application Properties API
- Atlassian Jira Application Role API
- Atlassian Jira Attachment API
- Atlassian Jira Auditing API
- Atlassian Jira Avatar API
- Atlassian Jira Classification Levels API
- Atlassian Jira Comment API
- Atlassian Jira Component API
- Atlassian Jira Configuration API
- Atlassian Jira Connect Addons API
- Atlassian Jira Connect App API
- Atlassian Jira Connect Migration API
- Atlassian Jira Connect Service Registry API
- Atlassian Jira Custom Field Option API
- Atlassian Jira Dashboard API
- Atlassian Jira Data Policy API
- Atlassian Jira Events API
- Atlassian Jira Expression API
- Atlassian Jira Field API
- Atlassian Jira Field Configuration API
- Atlassian Jira Field Configuration Scheme API
- Atlassian Jira Filter API
- Atlassian Jira forge App API
- Atlassian Jira Group API
- Atlassian Jira Group User Picker API
- Atlassian Jira Groups API
- Atlassian Jira Issue API
- Atlassian Jira Issue Link API
- Atlassian Jira Issue Link Type API
- Atlassian Jira Issue Security Schemes API
- Atlassian Jira Issue Type API
- Atlassian Jira Issue Type Scheme API
- Atlassian Jira Issue Type Screen Scheme API
- Atlassian Jira Issues API
- Atlassian Jira Jql API
- Atlassian Jira Label API
- Atlassian Jira License API
- Atlassian Jira License Metrics API
- Atlassian Jira My Permissions API
- Atlassian Jira My Preferences API
- Atlassian Jira Myself API
- Atlassian Jira Notification Scheme API
- Atlassian Jira Permission Scheme API
- Atlassian Jira Permissions API
- Atlassian Jira Priority API
- Atlassian Jira Project API
- Atlassian Jira Project Category API
- Atlassian Jira Project Validate API
- Atlassian Jira Resolution API
- Atlassian Jira Role API
- Atlassian Jira Screens API
- Atlassian Jira Screens Scheme API
- Atlassian Jira Search API
- Atlassian Jira Security Level API
- Atlassian Jira Server Info API
- Atlassian Jira Settings API
- Atlassian Jira Status API
- Atlassian Jira Status Category API
- Atlassian Jira Statuses API
- Atlassian Jira Task API
- Atlassian Jira Ui Modifications API
- Atlassian Jira Universal Avatar API
- Atlassian Jira User API
- Atlassian Jira Users API
- Atlassian Jira Version API
- Atlassian Jira Webhook API
- Atlassian Jira Workflow API
- Atlassian Jira Workflow Scheme API
- Atlassian Jira Workflows API
- Atlassian Jira Worklog API
6.3 - Avalara
Avalara is a tax compliance software company that automates sales tax, VAT, and other transaction taxes for businesses. It calculates the correct tax rates for each transaction based on location and product type across thousands of jurisdictions, then handles tax return filing and compliance monitoring. Businesses use it because sales tax rules are extremely complex and constantly changing, especially when selling across multiple states or online, and Avalara’s automation saves them from having to manually track and comply with thousands of different tax requirements.
Listing: https://contracts.apievangelist.com/store/avalara/
Repo: https://github.com/api-evangelist/avalara
APIs
Properties
- SDKs
- Community
- Blog
- Support
- Contact
- AskQuestions
- Certifications
- Webinars
- Learning
- Schema
- Portal
- Explorer
- MCPServers
- Trial
- Guide
- Integrations
- TermsOfService
- PrivacyPolicy
- Customers
- Careers
- Partners
- Newsroom
- WhitePapers
- Events
- Training
- Login
- Documentation
- ChangeLog
- Versioning
- SignUp
- Compliance
- YouTube
- PostmanWorkspace
- GitHubOrganization
- Swagger
- Copilot
- Integrations
6.4 - BigCommerce
BigCommerce is a NASDAQ-listed ecommerce platform that provides software-as-a-service offerings to retailers. The company’s platform includes online store creation, search engine optimization, hosting, marketing, and security for small to enterprise-sized businesses.
Listing: https://contracts.apievangelist.com/store/bigcommerce/
Repo: https://github.com/api-evangelist/bigcommerce
APIs
- Big Commerce Abandoned Cart Emails
- Big Commerce Abandoned Carts
- Big Commerce Accepted Payment Methods
- Big Commerce Carts
- Big Commerce Catalog Brands
- Big Commerce Catalog Categories
- Big Commerce Catalog Category Trees
- Big Commerce Catalog Product Modifiers
- Big Commerce Catalog Product Variant Options
- Big Commerce Catalog Product Variants
- Big Commerce Catalog Products
- Big Commerce Channels
- Big Commerce Checkouts
- Big Commerce Content
- Big Commerce Currencies
- Big Commerce Current Customer
- Big Commerce Custom Template Associations
- Big Commerce Customer Login (Sso)
- Big Commerce Customers
- Big Commerce Email Templates
- Big Commerce Geography
- Big Commerce Marketing
- Big Commerce Orders
- Big Commerce Pages
- Big Commerce Payment Access Token
- Big Commerce Payment Methods (Deprecated)
- Big Commerce Payment Processing
- Big Commerce Price Lists
- Big Commerce Pricing
- Big Commerce Redirects
- Big Commerce Scripts
- Big Commerce Settings
- Big Commerce Shipping
- Big Commerce Shipping Providers
- Big Commerce Sites
- Big Commerce Store Information
- Big Commerce Store Logs
- Big Commerce Storefront Carts
- Big Commerce Storefront Checkouts
- Big Commerce Storefront Cookie Consent
- Big Commerce Storefront Customers
- Big Commerce Storefront form Fields (Beta)
- Big Commerce Storefront Orders
- Big Commerce Storefront Subscriptions
- Big Commerce Storefront Token
- Big Commerce Subscribers
- Big Commerce Tax Classes
- Big Commerce Tax Properties
- Big Commerce Tax Provider
- Big Commerce Tax Provider Connection
- Big Commerce Tax Rates & Zones
- Big Commerce Tax Settings
- Big Commerce Tax Zone Check
- Big Commerce Themes
- Big Commerce Webhooks V3
- Big Commerce Widgets
- Big Commerce Wishlist
Properties
6.5 - Cvent
Cvent is a leading event management software company that helps organizations plan, promote, and execute successful events. Their comprehensive platform allows users to easily create event websites, manage registrations, and track attendee engagement. With features such as event budgeting, email marketing, and attendee analytics, Cvent streamlines the event planning process and helps businesses maximize their return on investment. Additionally, their mobile app and on-site check-in tools ensure a seamless experience for both event organizers and attendees. Overall, Cvent empowers organizations to deliver impactful and memorable events that drive business results.
Listing: https://contracts.apievangelist.com/store/cvent/
Repo: https://github.com/api-evangelist/cvent
APIs
Properties
- Documentation
- Tutorials
- Guide
- Standards
- ChangeLog
- Widgets
- WhiteLabel
- SSO
- Webhooks
- Support
- Website
- Integrations
- Blog
- Website
- Pricing
- Careers
- Partners
- Blog
- CaseStudies
- Events
- Webinars
- Community
- Portal
- Documentation
- GettingStarted
- Authentication
- RateLimits
- Pagination
- Filtering
- ChangeLog
- Standards
- Webhooks
- Guide
- SSO
- WhiteLabel
- Security
- Training
- Login
- RequestDemo
6.6 - Datadog
Datadog is a monitoring and analytics platform that helps organizations gain insight into their infrastructure, applications, and services. It allows users to collect, visualize, and analyze real-time data from a variety of sources, including servers, databases, and cloud services. Datadog’s platform enables companies to track performance metrics, troubleshoot issues, and optimize their systems for peak efficiency. With its customizable dashboards and alerting system, Datadog empowers teams to proactively monitor their environments and ensure smooth operations. Ultimately, Datadog helps businesses make data-driven decisions and improve the overall performance of their technology stack.
Listing: https://contracts.apievangelist.com/store/datadog/
Repo: https://github.com/api-evangelist/datadog
APIs
Properties
6.7 - Docker
Docker is a software platform that allows developers to package, distribute, and run applications in containers. Containers are lightweight, standalone, and portable environments that contain everything needed to run an application, including code, runtime, system tools, libraries, and settings. Docker provides a way to streamline the development and deployment process by isolating applications in containers, making it easier to manage dependencies, scale applications, and ensure consistency across different environments. Docker simplifies the process of building, deploying, and managing applications, ultimately leading to increased efficiency and productivity for developers.
Listing: https://contracts.apievangelist.com/store/docker/
Repo: https://github.com/api-evangelist/docker
APIs
6.8 - Figma
Figma’s mission is to make design accessible to everyone. Our two products help people from different backgrounds and roles express their ideas visually and make things together.
Listing: https://contracts.apievangelist.com/store/figma/
Repo: https://github.com/api-evangelist/figma
APIs
- Figma API
- Figma Files API
- Figma Images API
- Figma Teams API
- Figma Projects API
- Figma Me API
- Figma Components API
- Figma Component_sets API
- Figma Styles API
- Figma Webhooks API
- Figma Teams API
- Figma Activity_logs API
- Figma Payments API
- Figma Dev_resources API
- Figma Analytics API
Properties
6.9 - GitHub
GitHub is a cloud-based platform for software development and version control, built on Git. It enables developers to store, manage, and collaborate on code. In addition to Git’s distributed version control, GitHub offers access control, bug tracking, feature requests, task management, continuous integration, and wikis for projects. Headquartered in California, it has operated as a subsidiary of Microsoft since 2018.
Listing: https://contracts.apievangelist.com/store/github/
Repo: https://github.com/api-evangelist/github
APIs
- GitHub App API
- GitHub Authorization API
- GitHub Code of Conduct API
- GitHub Emojis API
- GitHub Events API
- GitHub Feeds API
- GitHub Gists API
- GitHub Gitignore Templates API
- GitHub Installation API
- GitHub Issues API
- GitHub Licenses API
- GitHub Enterprise Management API
- GitHub Markdown API
- GitHub Meta API
- GitHub Networks API
- GitHub Notifications API
- GitHub Octocat API
- GitHub Organization API
- GitHub Projects API
- GitHub Rate Limit API
- GitHub Repos API
- GitHub SCIM API
- GitHub Search API
- GitHub Setup API
- GitHub Teams API
- GitHub Zen API
- GitHub User API
Properties
6.10 - Google
Google Cloud APIs are programmatic interfaces to Google Cloud Platform services. They are a key part of Google Cloud Platform, allowing you to easily add the power of everything from computing to networking to storage to machine-learning-based data analysis to your applications.
Listing: https://contracts.apievangelist.com/store/google/
Repo: https://github.com/api-evangelist/google
APIs
- Google Cloud API Gateway
- Books API
- Google Drive API
- Google Drive Activity API
- Google Drive Labels API
- Google Calendar API
- Google Gmail API
- Google Sheets API
- Google Docs API
- Google Maps API
- Google Places API
- Google Aggregate Places API
- Google Places Insights API
- Google Street View Imagery API
- Google Elevation API
- Google Routes API
- Google Geocoding API
- Google Geolocation API
- Google Address Validation API
- Google Time Zone API
- Google Air Quality API
- Google Pollen API
- Google Solar API
- Google Weather API
- Google Gemini API
Properties
6.11 - Grafana
Grafana is a powerful open-source platform for data visualization and monitoring. It allows users to create interactive, customizable dashboards that display real-time data from multiple sources in a visually appealing way. With Grafana, users can easily connect to databases, cloud services, and other data sources, and then display that data in various chart types, tables, and histograms. Grafana also offers advanced alerting capabilities, enabling users to set up alerts based on specified conditions and thresholds. Overall, Grafana is a versatile tool that helps organizations make sense of their data and monitor the performance of their systems in a centralized, user-friendly interface.
Listing: https://contracts.apievangelist.com/store/grafana/
Repo: https://github.com/api-evangelist/grafana
APIs
6.12 - HubSpot
HubSpot is a leading CRM platform that provides software and support to help businesses grow better. Our platform includes marketing, sales, service, and website management products that start free and scale to meet our customers' needs at any stage of growth. Today, thousands of customers around the world use our powerful and easy-to-use tools and integrations to attract, engage, and delight customers.
Listing: https://contracts.apievangelist.com/store/hubspot/
Repo: https://github.com/api-evangelist/hubspot
APIs
- HubSpot Domains API
- HubSpot Source Code API
- HubSpot Posts API
- HubSpot Authors API
- HubSpot URL Redirects API
Properties
6.13 - Kong
Kong provides the foundation that enables any company to securely adopt AI and become an API-first company speeding up time to market, creating new business opportunities, and delivering superior products and services.
Listing: https://contracts.apievangelist.com/store/kong/
Repo: https://github.com/api-evangelist/kong
APIs
Properties
6.14 - LinkedIn
LinkedIn is a social networking site for professionals to connect with colleagues, employers, and other professionals. It’s a place to share ideas, information, and opportunities, and to find jobs, research companies, and learn about industry news.
Listing: https://contracts.apievangelist.com/store/linkedin/
Repo: https://github.com/api-evangelist/linkedin
APIs
- LinkedIn Consumer API
- LinkedIn Marketing API
- LinkedIn Learning Solutions
- LinkedIn Talent Solutions
- LinkedIn Compliance Solutions
- LinkedIn Sales Navigator API
Properties
6.15 - Mailchimp
Mailchimp’s developer tools provide everything you need to integrate your data with intelligent marketing tools and event-driven transactional email.
Listing: https://contracts.apievangelist.com/store/mailchimp/
Repo: https://github.com/api-evangelist/mailchimp
APIs
Properties
6.16 - Meta
Meta Platforms, Inc., doing business as Meta, and formerly named Facebook, Inc., and TheFacebook, Inc., is an American multinational technology conglomerate based in Menlo Park, California. The company owns and operates Facebook, Instagram, Threads, and WhatsApp, among other products and services.
Listing: https://contracts.apievangelist.com/store/meta/
Repo: https://github.com/api-evangelist/meta
APIs
Properties
6.17 - Microsoft Graph
Microsoft Graph is the gateway to data and intelligence in Microsoft cloud services like Microsoft Entra and Microsoft 365. Use the wealth of data accessible through Microsoft Graph to build apps for organizations and consumers that interact with millions of users.
Listing: https://contracts.apievangelist.com/store/microsoft-graph/
Repo: https://github.com/api-evangelist/microsoft-graph
APIs
- Microsoft Graph Admin
- Microsoft Graph Agreement Acceptances
- Microsoft Graph Agreements
- Microsoft Graph Application Catalogs
- Microsoft Graph Applications
- Microsoft Graph Application Templates
- Microsoft Graph Audit Logs
- Microsoft Graph Authentication Method Configurations
- Microsoft Graph Authentication Methods Policies
- Microsoft Graph Certificate Based Authorization Configuration
- Microsoft Graph Chats
- Microsoft Graph Communications
- Microsoft Graph Compliance
- Microsoft Graph Connections
- Microsoft Graph Contacts
- Microsoft Graph Contracts
- Microsoft Graph Copilot
- Microsoft Graph Data Policy Operations
- Microsoft Graph Device Application Management
- Microsoft Graph Device Management
- Microsoft Graph Devices
- Microsoft Graph Directory
- Microsoft Graph Directory Objects
- Microsoft Graph Directory Roles
- Microsoft Graph Directory Role Templates
- Microsoft Graph Domain DNS Records
- Microsoft Graph Domains
- Microsoft Graph Drives
- Microsoft Graph Education
- Microsoft Graph Employee Experience
- Microsoft Graph External
- Microsoft Graph Filter Operators
- Microsoft Graph Functions
- Microsoft Graph Group Lifecycle Policies
- Microsoft Graph Groups
- Microsoft Graph Group Settings
- Microsoft Graph Group Setting Templates
- Microsoft Graph Identity
- Microsoft Graph Identity Governance
- Microsoft Graph Identity Protection
- Microsoft Graph Identity Providers
- Microsoft Graph Information Protection
- Microsoft Graph Invitations
- Microsoft Graph Me
- Microsoft Graph Oauth2 Permission Grants
- Microsoft Graph Organizations
- Microsoft Graph Permission Grants
- Microsoft Graph Places
- Microsoft Graph Planner
- Microsoft Graph Policies
- Microsoft Graph Print
- Microsoft Graph Privacy
- Microsoft Graph Reports
- Microsoft Graph Role Management
- Microsoft Graph Schema Extensions
- Microsoft Graph Scoped Role Memberships
- Microsoft Graph Search
- Microsoft Graph Security
- Microsoft Graph Service Principals
- Microsoft Graph Shares
- Microsoft Graph Sites
- Microsoft Graph Solutions
- Microsoft Graph Storage
- Microsoft Graph Subscribed SKUs
- Microsoft Graph Subscriptions
- Microsoft Graph Teams
- Microsoft Graph Teams Templates
- Microsoft Graph Teamwork
- Microsoft Graph Tenant Relationships
- Microsoft Graph Users
Properties
6.18 - New Relic
New Relic is a software analytics company that helps businesses monitor and analyze their applications and infrastructure in real-time. By providing detailed insights into the performance and user experience of their systems, New Relic enables organizations to identify and fix issues quickly, optimize performance, and ultimately deliver better digital experiences to their customers. With a range of products and services, including application performance monitoring, infrastructure monitoring, and synthetic monitoring, New Relic empowers businesses to make data-driven decisions and drive digital transformation.
Listing: https://contracts.apievangelist.com/store/new-relic/
Repo: https://github.com/api-evangelist/new-relic
APIs
Properties
6.19 - Notion
Notion is a versatile all-in-one workspace tool that helps individuals and teams organize their tasks, projects, and ideas in a centralized and collaborative platform. With features such as databases, boards, calendars, and documents, Notion allows users to create personalized workflows, track progress, and manage information efficiently. Users can customize their workspace to fit their unique needs, whether it be for project management, note-taking, or knowledge sharing. Notion aims to streamline workflows and enhance productivity by providing a flexible and intuitive platform for organizing and managing projects and information.
Listing: https://contracts.apievangelist.com/store/notion/
Repo: https://github.com/api-evangelist/notion
APIs
Properties
6.20 - OpenAI
OpenAI is a research organization that focuses on artificial intelligence (AI) and machine learning. Their mission is to ensure that AI benefits all of humanity, and they work on developing AI technology in a way that is safe and beneficial for society. OpenAI conducts cutting-edge research in fields such as natural language processing, reinforcement learning, and robotics. They also develop and release tools and models that help advance the field of AI and are open-source and accessible to the public. Additionally, OpenAI engages in outreach and advocacy efforts to promote the responsible development and deployment of AI technologies.
Listing: https://contracts.apievangelist.com/store/openai/
Repo: https://github.com/api-evangelist/openai
APIs
- OpenAI Assistants API
- OpenAI Audio API
- OpenAI Chat API
- OpenAI Chat Completions API
- OpenAI Embeddings API
- OpenAI Files API
- OpenAI Fine Tuning API
- OpenAI Images API
- OpenAI Models API
- OpenAI Threads API
Properties
6.21 - Salesforce
Salesforce is a cloud-based customer relationship management (CRM) platform that helps businesses manage and track their interactions with customers and leads. It provides a range of services including sales automation, marketing automation, customer service and analytics. Salesforce allows businesses to store all customer data in one centralized location, making it easier to collaborate and communicate with team members and provide personalized experiences for customers. With Salesforce, businesses can streamline their processes, increase efficiency, and ultimately drive growth and success.
Listing: https://contracts.apievangelist.com/store/salesforce/
Repo: https://github.com/api-evangelist/salesforce
APIs
6.22 - SendGrid
SendGrid is a cloud-based customer communication platform that provides tools for email marketing and transactional email delivery. It helps businesses of all sizes easily create and send emails to their customers, enabling them to build stronger relationships and drive engagement. SendGrid also offers analytics and reporting tools to track the success of email campaigns, as well as features for managing subscriber lists and personalizing emails for targeted communications. Overall, SendGrid’s platform allows businesses to streamline their email marketing efforts and improve their overall communication strategies.
Listing: https://contracts.apievangelist.com/store/sendgrid/
Repo: https://github.com/api-evangelist/sendgrid
APIs
- Twilio SendGrid Account Provisioning API
- Twilio SendGrid Alerts API
- Twilio SendGrid API Keys API
- Twilio SendGrid Domain Authentication API
- Twilio SendGrid Email Activity API
- Twilio SendGrid Email Address Validation API
- Twilio SendGrid Enforced TLS API
- Twilio SendGrid Integrations API
- Twilio SendGrid IP Access Management API
- Twilio SendGrid IP Address Management API
- Twilio SendGrid IP Warmup API
- Twilio SendGrid IP Address API
- Twilio SendGrid Link Branding API
- Twilio SendGrid Legacy Marketing Campaigns Campaigns API
- Twilio SendGrid Legacy Marketing Campaigns Contacts API
- Twilio SendGrid Legacy Marketing Campaigns Sender Identities API
- Twilio SendGrid Mail Settings API
- Twilio SendGrid Mail API
- Twilio SendGrid Marketing Campaigns Contacts API
- Twilio SendGrid Marketing Campaigns Custom Fields API
- Twilio SendGrid Marketing Campaigns Designs
- Twilio SendGrid Marketing Campaigns Lists API
- Twilio SendGrid Marketing Campaigns Segments 2.0 API
- Twilio SendGrid Marketing Campaigns Segments API
- Twilio SendGrid Marketing Campaigns Senders API
- Twilio SendGrid Marketing Campaigns Single Sends API
- Twilio SendGrid Marketing Campaigns Statistics API
- Twilio SendGrid Marketing Campaigns Send Test Email API
- Twilio SendGrid Partner API
- Twilio SendGrid Recipients’ Data Erasure API
- Twilio SendGrid Reverse DNS API
- Twilio SendGrid Scheduled Sends API
- Twilio SendGrid Scopes API
- Twilio SendGrid Engagement Quality API
- Twilio SendGrid Single Sign-On API
- Twilio SendGrid Statistics API
- Twilio SendGrid Subusers
- Twilio SendGrid Suppressions API
- Twilio SendGrid Teammates API
- Twilio SendGrid Templates API
- Twilio SendGrid Tracking Settings API
- Twilio SendGrid User API
- Twilio SendGrid Verified Senders API
- Twilio SendGrid Webhook Configuration API
Properties
6.23 - ServiceNow
ServiceNow is a cloud-based platform that provides a wide range of services for businesses to manage their IT operations, customer service, human resources, and other functions. The platform allows organizations to automate and streamline their workflows, improving efficiency and productivity. ServiceNow offers various applications and modules that help companies track and resolve issues, manage projects, and enhance collaboration among employees. Additionally, ServiceNow provides tools for data analytics, reporting, and monitoring to help businesses make informed decisions and optimize their operations. Overall, ServiceNow helps organizations simplify and improve their processes, leading to better customer satisfaction and business outcomes.
Listing: https://contracts.apievangelist.com/store/servicenow/
Repo: https://github.com/api-evangelist/servicenow
APIs
Properties
6.24 - Shopify
Shopify is an e-commerce platform that enables businesses to create and operate their online stores. It provides a wide range of tools and features that help merchants manage their inventory, process payments, track shipments, and create customized storefronts. With Shopify, businesses can easily set up their online presence, sell products, and reach customers all over the world. The platform also offers various marketing and analytics tools to help businesses grow and succeed in the competitive online marketplace. Overall, Shopify simplifies the process of building and running an online store, making it a popular choice for businesses of all sizes.
Listing: https://contracts.apievangelist.com/store/shopify/
Repo: https://github.com/api-evangelist/shopify
APIs
Properties
6.25 - Slack
Slack is a cloud-based collaboration tool that brings teams together to work more efficiently and effectively. It allows team members to communicate in real-time through instant messaging, group chats, and video calls. Users can share files, collaborate on projects, and stay organized with task management features. Slack also integrates seamlessly with other tools and services, making it easy for teams to streamline their workflow and stay connected, no matter where they are located, all through a user-friendly interface and robust features.
Listing: https://contracts.apievangelist.com/store/slack/
Repo: https://github.com/api-evangelist/slack
APIs
- Slack Admin API
- Slack Tests API
- Slack Apps API
- Slack Auth API
- Slack Bots API
- Slack Calls API
- Slack Chat API
- Slack Conversations API
- Slack Dialog API
- Slack DND API
- Slack Emoji API
- Slack Files API
- Slack Migration API
- Slack OAuth API
- Slack Pins API
- Slack Reactions API
- Slack Reminders API
- Slack RTM API
- Slack Search API
- Slack Stars API
- Slack Team API
- Slack User Groups API
- Slack Users API
- Slack Views API
- Slack Workflows API
Properties
6.26 - Snowflake
Snowflake is a cloud-based data platform that provides data warehousing, data lake, and data sharing capabilities. It enables organizations to store, process, and analyze large volumes of structured and semi-structured data using SQL, while offering scalability, concurrency, and performance across multiple cloud providers. Snowflake is widely used for analytics, business intelligence, and data collaboration.
Listing: https://contracts.apievangelist.com/store/snowflake/
Repo: https://github.com/api-evangelist/snowflake
APIs
- Snowflake Account API
- Snowflake Alert API
- Snowflake API Integration API
- Snowflake Catalog Integration API
- Snowflake Compute Pools API
- Cortex Analyst API
- Cortex Inference API
- Cortex Search REST API
- Snowflake Database Role API
- Snowflake Database API
- Snowflake Dynamic Table API
- Snowflake Event Table API
- Snowflake External Volume API
- Snowflake Function API
- Snowflake Grant API
- Snowflake Iceberg Table API
- Snowflake Image Repository API
- Snowflake Managed Account API
- Snowflake Network Policy API
- Snowflake Notebook API
- Snowflake Notification Integration API
- Snowflake Pipe API
- Snowflake Procedure API
- Snowflake Result API
- Snowflake Role API
- Snowflake Schema API
- Snowflake Services API
- Snowflake SQL API
- Snowflake Stage API
- Snowflake Stream API
- Snowflake Table API
- Snowflake Task API
- Snowflake User Defined Function API
- Snowflake User API
- Snowflake View API
- Snowflake Warehouse API
Properties
6.27 - Stripe
Stripe is a technology company that provides a platform for online payment processing. They offer a secure and seamless way for businesses to accept payments from customers, handling transactions in multiple currencies and payment methods. Stripe’s software and APIs make it easy for businesses of all sizes to manage their online payments, track transactions, and analyze their revenue streams. With features such as fraud prevention, subscription billing, and mobile payment options, Stripe is a valuable tool for e-commerce businesses looking to streamline their payment processes and provide a better user experience for their customers.
Listing: https://contracts.apievangelist.com/store/stripe/
Repo: https://github.com/api-evangelist/stripe
APIs
- Stripe Accounts API
- Stripe Apple Pay API
- Stripe Application Fees API
- Stripe Application Secrets API
- Stripe Balance API
- Stripe Billing API
- Stripe Charges API
- Stripe Checkout API
- Stripe Climate API
- Stripe Country API
- Stripe Coupons API
- Stripe Credit Notes API
- Stripe Customers API
- Stripe Disputes API
- Stripe Ephemeral Keys API
- Stripe Events API
- Stripe Exchange Rates API
- Stripe Files API
- Stripe Financial Connections API
- Stripe Identity API
- Stripe Invoice API
- Stripe Issuing API
- Stripe Link API
- Stripe Payment Intents API
- Stripe Payment Links API
- Stripe Payment Method API
- Stripe Payouts API
- Stripe Plans API
- Stripe Prices API
- Stripe Products API
- Stripe Promotion Codes API
- Stripe Quotes API
- Stripe Radar API
- Stripe Refunds API
- Stripe Reporting API
- Stripe Reviews API
- Stripe Setup API
- Stripe Shipping Rates API
- Stripe Sigma API
- Stripe Sources API
- Stripe Subscription API
- Stripe Tax API
- Stripe Terminal API
- Stripe Test Helpers API
- Stripe Tokens API
- Stripe Topups API
- Stripe Transfers API
- Stripe Treasury API
- Stripe Webhook API
Properties
6.28 - Twilio
Twilio is a cloud communications platform that enables developers to integrate voice, messaging, and video capabilities into their applications. Through its APIs, Twilio allows businesses to easily build and scale communication solutions, such as customer support helplines, appointment reminders, and two-factor authentication services. By partnering with Twilio, organizations can enhance their customer engagement strategies and streamline their communication channels, ultimately driving greater efficiency and customer satisfaction. In essence, Twilio empowers developers to create innovative and personalized communication experiences that connect people in new and meaningful ways.
Listing: https://contracts.apievangelist.com/store/twilio/
Repo: https://github.com/api-evangelist/twilio
APIs
- Twilio Accounts API
- Twilio Assistant API
- Twilio Autopilot API
- Twilio Bulk Exports API
- Twilio Content API
- Twilio Conversations API
- Twilio Events API
- Twilio Frontline API
- Twilio Insights API
- Twilio Intelligence API
- Twilio IP Messaging API
- Twilio Marketplace API
- Twilio Media API
- Twilio Messaging API
- Twilio Microvisor API
- Twilio Monitor API
- Twilio Notify API
- Twilio Numbers API
- Twilio Pricing API
- Twilio Proxy API
- Twilio Routes API
- Twilio Serverless API
- Twilio Studio API
- Twilio Super SIM API
- Twilio Sync API
- Twilio Task Router API
- Twilio Elastic SIP Trunking API
- Twilio Trust Hub API
- Twilio Verify API
- Twilio Video API
- Twilio Voice API
- Twilio Wireless API
Properties
6.29 - Youtube
The YouTube API provides the ability to retrieve feeds related to videos, users, and playlists. It also provides the ability to manipulate these feeds, such as creating new playlists, adding videos as favorites, and sending messages. The API can also upload videos.
Listing: https://contracts.apievangelist.com/store/youtube/
Repo: https://github.com/api-evangelist/youtube
APIs
- Youtube Activities API
- Youtube Channels API
- Youtube Comments API
- Youtube Playlists API
- Youtube Search API
- Youtube Subscriptions API
- Youtube Videos API
Properties
6.30 - Zendesk
Zendesk provides customer service and engagement software that helps businesses manage support tickets, automate workflows, and offer multi-channel support, including email, chat, social media, and phone, through a unified platform.
Listing: https://contracts.apievangelist.com/store/zendesk/
Repo: https://github.com/api-evangelist/zendesk
APIs
- Zendesk Assignables API
- Zendesk Target Type API
- Zendesk Account API
- Zendesk Accounts API
- Zendesk Activities API
- Zendesk Any Channel API
- Zendesk Approval Workflow Instances API
- Zendesk Attachments API
- Zendesk Audit Logs API
- Zendesk Autocomplete API
- Zendesk Automations API
- Zendesk Bookmarks API
- Zendesk Brand Agents API
- Zendesk Brands API
- Zendesk Channels API
- Zendesk Chat File Redactions API
- Zendesk Chat Redactions API
- Zendesk Comment Redactions API
- Zendesk Custom Objects API
- Zendesk Custom Roles API
- Zendesk Custom Status API
- Zendesk Custom Statuses API
- Zendesk Deleted Tickets API
- Zendesk Deleted Users API
- Zendesk Deletion Schedules API
- Zendesk Dynamic Content API
- Zendesk Email Notifications API
- Zendesk Group Memberships API
- Zendesk Group SLAs API
- Zendesk Groups API
- Zendesk Imports API
- Zendesk Incremental API
- Zendesk Job Statuses API
- Zendesk Locales API
- Zendesk Macros API
- Zendesk OAuth API
- Zendesk Object Layouts API
- Zendesk Organization Fields API
- Zendesk Organization Memberships API
- Zendesk Organization Merges API
- Zendesk Organization Subscriptions API
- Zendesk Organizations API
- Zendesk Problems API
- Zendesk Push Notification Devices API
- Zendesk Queues API
- Zendesk Recipient Addresses API
- Zendesk Relationships API
- Zendesk Requests API
- Zendesk Resource Collections API
- Zendesk Routing API
- Zendesk Satisfaction Ratings API
- Zendesk Satisfaction Reasons API
- Zendesk Search API
- Zendesk Sessions API
- Zendesk Sharing Agreements API
- Zendesk Skips API
- Zendesk SLAs API
- Zendesk Suspended Tickets API
- Zendesk Tags API
- Zendesk Target Failures API
- Zendesk Targets API
- Zendesk Ticket Audits API
- Zendesk Ticket Content Pins API
- Zendesk Ticket Fields API
- Zendesk Ticket Form Statuses API
- Zendesk Ticket Forms API
- Zendesk Ticket Metrics API
- Zendesk Tickets API
- Zendesk Trigger Categories API
- Zendesk Triggers API
- Zendesk Uploads API
- Zendesk User Fields API
- Zendesk Users API
- Zendesk Views API
- Zendesk Workspaces API
Properties
6.31 - Zoom
Zoom is a video conferencing platform that allows users to connect with others through virtual meetings, webinars, and chat features. It enables individuals and businesses to communicate and collaborate remotely, making it easier to work together from different locations. With its user-friendly interface and high-quality audio and video capabilities, Zoom has become a popular tool for businesses, schools, and other organizations to stay connected and productive. Whether it’s hosting a team meeting, conducting a virtual workshop, or catching up with friends and family, Zoom provides a seamless and reliable way to communicate in real-time.
Listing: https://contracts.apievangelist.com/store/zoom/
Repo: https://github.com/api-evangelist/zoom
APIs
7 - Conversations Archive
8 - Stories Archive
8.1 - AI Context Use Case - Naftiko Blog
Title
- AI Context
Tagline
- AI without context is guesswork. Valuable data lives in SaaS tools, files, and systems your models can’t reach safely.
Description
This use case focuses on providing Model Context Protocol (MCP) servers on top of common private, public/first-party, and third-party APIs, as well as local SQL databases, employing a domain-driven, declarative, and governed approach to right-sizing context windows via MCP while providing integrations for use across AI copilots and agents.
Teams need a reliable way to deliver MCP servers from internal and third-party APIs without having to discover and learn about each API and the technical details of integration. This use case provides the fundamentals for safely integrating existing data and systems into artificial intelligence copilots and agents.
Benefits
- Add data and tools to agents
- Compose MCP servers
- Aggregate & curate context
Pain
- Copilot Leadership Mandate
- MCP Leadership Mandate
- Unmanaged Encryption
- Unmanaged Discovery
- Unmanaged Authentication
- Unmanaged Usage
- Unmanaged Cost
Gains
- 3rd-Party Data in Copilot
- 3rd-Party MCP Available
- Manage Budget Across
- Managed Risk Involved
- Optimize SaaS Usage
- Create More Visibility
- Create More Discovery
- Create More Reusability
Connects
- Internal APIs
- Infrastructure APIs
- SaaS APIs
- Partner APIs
Adapters
- HTTP
- MCP
- OpenAPI
Links
8.2 - Hypermedia Automating Capabilities in this AI Moment
Links
8.3 - Exploring What Schema Tools Are Available
Links
8.4 - We've Been Wrong About API Reuse All Along
Links
8.5 - Avalara Developer Experience Review
Links
8.6 - Capabilities - The New Stack Blog
Links
8.7 - Capabilities Podcast - Naftiko Blog
Links
8.8 - Engine - Naftiko Blog
Links
8.9 - Fabric - Naftiko Blog
Links
8.10 - Naftiko Launch - API Evangelist Blog
Links
8.11 - Naftiko Launch - EIN Presswire
Links
8.12 - Naftiko Signals - Naftiko Blog
Links
8.13 - Naftiko Signals White Paper
Links
8.14 - Capabilities - Naftiko Blog
Links
8.15 - Naftiko Launch - Naftiko Blog
Links
9 - Standards
9.1 - OpenAPI
Describing the surface area of HTTP APIs and Webhooks.
The OpenAPI Specification (OAS) is a formal standard for describing HTTP APIs. It enables teams to understand how an API works and how multiple APIs interoperate, generate client code, create tests, apply design standards, and more.
OpenAPI was formerly known as Swagger. In 2015, SmartBear donated the specification to the Linux Foundation, establishing the OpenAPI Initiative (OAI) and a formal, community-driven governance model that anyone can participate in.
An OpenAPI document can be written in JSON or YAML and typically defines elements such as: Info, Contact, License, Servers, Components, Paths and Operations, Parameters, Request Bodies, Media Types and Encoding, Responses, Callbacks, Examples, Links, Headers, Tags, Schemas, and Security.
OpenAPI has an active GitHub organization, blog, LinkedIn page, and Slack channel to encourage community participation. In addition, OAI membership helps fund projects and events that drive awareness and adoption.
The OpenAPI Specification can be used alongside two other OAI specifications: (1) the Arazzo specification for defining API-driven workflows, and (2) OpenAPI Overlays, which allow additional information to be overlaid onto an OpenAPI document.
License: Apache
Tags: HTTP APIs, Webhooks
Properties: Info, Contact, License, Servers, Components, Paths and Operations, Parameters, Request Bodies, Media Types and Encoding, Responses, Callbacks, Examples, Links, Headers, Tags, Schemas, and Security
Website: https://www.openapis.org
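To make these elements concrete, here is a minimal sketch in Python that assembles a small OpenAPI 3.1 description as a dictionary and prints it as JSON; the /pets path, Pet schema, and server URL are illustrative assumptions rather than any real API.

import json

# A minimal OpenAPI 3.1 description expressed as a Python dict; the
# path, operation, and schema names below are illustrative only.
openapi_doc = {
    "openapi": "3.1.0",
    "info": {"title": "Example API", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com/v1"}],
    "paths": {
        "/pets": {
            "get": {
                "operationId": "listPets",
                "summary": "List all pets",
                "responses": {
                    "200": {
                        "description": "A list of pets",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "array",
                                    "items": {"$ref": "#/components/schemas/Pet"},
                                }
                            }
                        },
                    }
                },
            }
        }
    },
    "components": {
        "schemas": {
            "Pet": {
                "type": "object",
                "required": ["id", "name"],
                "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
            }
        }
    },
}

print(json.dumps(openapi_doc, indent=2))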
9.2 - OpenAPI Overlays
Define metadata, operations, and data structures for overlaying on top of OpenAPI.
The Overlay Specification is an auxiliary standard that complements the OpenAPI Specification. An OpenAPI description defines API operations, data structures, and metadata—the overall shape of an API. An Overlay lists a series of repeatable changes to apply to a given OpenAPI description, enabling transformations as part of your API workflows.
OpenAPI Overlays emerged from the need to adapt APIs for varied use cases, from improving developer experience to localizing documentation. The first version was recently released, and the roadmap is being developed within the OpenAPI Initiative.
The specification provides three constructs for augmenting an OpenAPI description: Info, Overlays, and Actions. How these are applied is being worked out across different tools and industries to accommodate the diversity of APIs being delivered.
To get involved, participate via the GitHub repository, where you’ll find discussions, meeting notes, and related topics. There’s also a dedicated channel within the broader OpenAPI Initiative Slack.
OpenAPI Overlays offer a robust way to manage the complexity of producing and consuming APIs across industries, regions, and domains. As the specification matures, it presents a strong opportunity to ensure documentation, mocks, examples, code generation, tests, and other artifacts carry the right context for different situations.
License: Apache License
Tags: Overlays
Properties: info, overlays, and actions
Website: https://spec.openapis.org/overlay/v1.0.0.html
Standards: JSON Schema
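As a rough illustration of the info and actions constructs described above, the sketch below builds an Overlay document as a Python dictionary; the target expressions and replacement text are assumptions for illustration, not taken from any published overlay.

import json

# A minimal OpenAPI Overlay document as a Python dict. Each action
# pairs a JSONPath-style target with an update or a removal.
overlay_doc = {
    "overlay": "1.0.0",
    "info": {"title": "Internal edition tweaks", "version": "1.0.0"},
    "actions": [
        {
            # Rewrite the API description for an internal audience.
            "target": "$.info",
            "update": {"description": "Internal edition of the Example API."},
        },
        {
            # Drop a tag that should not appear in this edition.
            "target": "$.tags[?(@.name == 'internal')]",
            "remove": True,
        },
    ],
}

print(json.dumps(overlay_doc, indent=2))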
9.3 - Arazzo
Describing your business processes and workflows using OpenAPI.
The Arazzo Specification is a community-driven, open standard within the OpenAPI Initiative (a Linux Foundation Collaborative Project). It defines a programming-language-agnostic way to express sequences of calls and the dependencies between them to achieve a specific outcome.
Arazzo emerged from a need identified in the OpenAPI community for orchestration and automation across APIs described with OpenAPI. Version 1 of the specification is available, and work on future iterations is guided by a public roadmap.
With Arazzo, you can define elements such as: Info, Sources, Workflows, Steps, Parameters, Success Actions, Failure Actions, Components, Reusables, Criteria, Request Bodies, and Payload Replacements—providing a consistent approach to delivering a wide range of automation outcomes.
You can engage with the Arazzo community via the GitHub repository for each version and participate in GitHub Discussions to stay current on meetings and interact with the specification’s stewards and the broader community.
Arazzo is the logical layer on top of OpenAPI: it goes beyond documentation, mocking, and SDKs to focus on defining real business workflows that use APIs. Together, Arazzo and OpenAPI help align API operations with the rest of the business.
License: Apache 2.0
Tags: Workflows, Automation
Properties: Info, Source, Workflows, Steps, Parameters, Success Actions, Failure Actions, Components, Reusable, Criterion, Request Bodies, and Payload Replacements
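To ground those elements, here is a minimal two-step workflow sketched as a Python dictionary; the source URL, operation IDs, and status-code criteria are illustrative assumptions.

import json

# A minimal Arazzo workflow: find a product, then order it. Field
# names follow the 1.0 specification; the values are made up.
arazzo_doc = {
    "arazzo": "1.0.0",
    "info": {"title": "Place an order", "version": "1.0.0"},
    "sourceDescriptions": [
        {"name": "storeApi", "url": "https://example.com/openapi.yaml", "type": "openapi"}
    ],
    "workflows": [
        {
            "workflowId": "placeOrder",
            "summary": "Look up a product, then create an order for it.",
            "steps": [
                {
                    "stepId": "findProduct",
                    "operationId": "listProducts",
                    "successCriteria": [{"condition": "$statusCode == 200"}],
                },
                {
                    "stepId": "createOrder",
                    "operationId": "createOrder",
                    "successCriteria": [{"condition": "$statusCode == 201"}],
                },
            ],
        }
    ],
}

print(json.dumps(arazzo_doc, indent=2))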
9.4 - AsyncAPI
Describing the surface area of your event-driven infrastructure.
AsyncAPI is an open-source, protocol-agnostic specification for describing event-driven APIs and message-driven applications. It serves as the OpenAPI of the asynchronous, event-driven world—overlapping with, and often going beyond, what OpenAPI covers.
The specification began as an open-source side project and was later donated to the Linux Foundation after the team joined Postman, establishing it as a standard with formal governance.
AsyncAPI lets you define servers, producers and consumers, channels, protocols, and messages used in event-driven API operations—providing a common, tool-friendly way to describe the surface area of event-driven APIs.
To get involved, visit the AsyncAPI GitHub repository and blog, follow the LinkedIn page, tune into the YouTube or Twitch channels, and join the conversation in the community Slack.
AsyncAPI can be used to define HTTP APIs much like OpenAPI, and it further supports multiple protocols such as Pub/Sub, Kafka, MQTT, NATS, Redis, SNS, Solace, AMQP, JMS, and WebSockets—making it useful across many approaches to delivering APIs.
License: Apache
Tags: Event-Driven
Properties: Servers, Producers, Consumers, Channels, Protocols, and Messages
Website: https://www.asyncapi.com
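For a feel of how servers, channels, and messages fit together, here is a minimal AsyncAPI 3.0 sketch as a Python dictionary; the Kafka broker host, channel address, and payload fields are illustrative assumptions.

import json

# A minimal AsyncAPI 3.0 description: one Kafka channel carrying an
# order-created event, and one operation that publishes to it.
asyncapi_doc = {
    "asyncapi": "3.0.0",
    "info": {"title": "Order Events", "version": "1.0.0"},
    "servers": {"production": {"host": "broker.example.com:9092", "protocol": "kafka"}},
    "channels": {
        "orderCreated": {
            "address": "orders.created",
            "messages": {
                "orderCreated": {
                    "payload": {
                        "type": "object",
                        "properties": {
                            "orderId": {"type": "string"},
                            "total": {"type": "number"},
                        },
                    }
                }
            },
        }
    },
    "operations": {
        "publishOrderCreated": {
            "action": "send",
            "channel": {"$ref": "#/channels/orderCreated"},
        }
    },
}

print(json.dumps(asyncapi_doc, indent=2))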
9.5 - APIOps Cycles
Aligning engineering with products when it comes to APIs.
The APIOps Cycles method is built around a collection of strategic canvas templates that help teams systematically address everything from customer journey mapping and value proposition definition to domain modeling, capacity planning, and risk assessment. Released as an open-source framework under the Creative Commons Attribution–ShareAlike 4.0 license, APIOps Cycles is freely available for anyone to use, adapt, and share; the complete method consists of localized JSON and Markdown files that power both the official website and open tooling available as an npm package. Whether you are a developer integrating the method into your products and services, or an organization seeking to establish API product strategy and best practices, APIOps Cycles offers a proven, community-backed approach supported by a network of partners who can provide guidance and expertise in implementing the methodology.
License: Creative Commons Attribution–ShareAlike 4.0
Tags: Products, Operations
Website: https://www.apiopscycles.com/
APIOps Cycles Canvases Outline
- Customer Journey Canvas
- Persona
- Customer Discovers Need
- Customer Need Is Resolved
- Journey Steps
- Pains
- Gains
- Inputs & Outputs
- Interaction & Processing Rules
- API Value Proposition Canvas
- Tasks
- Gain Enabling Features
- Pain Relieving Features
- API Products
- API Business Model Canvas
- API Value Proposition
- API Consumer Segments
- Developer Relations
- Channels
- Key Resources
- Key Activities
- Key Partners
- Benefits
- Costs
- Domain Canvas
- Selected Customer Journey Steps
- Core Entities & Business Meaning
- Attributes & Business Importance
- Relationships Between Entities
- Business, Compliance & Integrity Rules
- Security & Privacy Considerations
- Interaction Canvas
- CRUD Interactions
- CRUD Input & Output Models
- CRUD Processing & Validation
- Query-Driven Interactions
- Query-Driven Input & Output Models
- Query-Driven Processing & Validation
- Command-Driven Interactions
- Command-Driven Input & Output Models
- Command-Driven Processing & Validation
- Event-Driven Interactions
- Event-Driven Input & Output Models
- Event-Driven Processing & Validation
- REST Canvas
- API Resources
- API Resource Model
- API Verbs
- API Verb Example
- GraphQL Canvas
- API Name
- Consumer Goals
- Key Types
- Relationships
- Queries
- Mutations
- Subscriptions
- Authorization Rules
- Consumer Constraints
- Notes / Open Questions
- Event Canvas
- User Task / Trigger
- Input / Event Payload
- Processing / Logic
- Output / Event Result
- Capacity Canvas
- Current Business Volumes
- Future Consumption Trends
- Peak Load and Availability Requirements
- Caching Strategies
- Rate Limiting Strategies
- Scaling Strategies
- Business Impact Canvas
- Availability Risks
- Mitigate Availability Risks
- Security Risks
- Mitigate Security Risks
- Data Risks
- Mitigate Data Risks
- Locations Canvas
- Location Groups
- Location Group Characteristics
- Locations
- Location Characteristics
- Location Distances
- Location Distance Characteristics
- Location Endpoints
- Location Endpoint Characteristics
9.6 - Postman Collections
Executable artifact for automating API requests and responses for testing.
A Postman Collection is a portable JSON artifact that organizes one or more API requests, plus their parameters, headers, auth, scripts, and examples, so you can run, share, and automate them in the Postman desktop or web client. Collections can include folders, collection- and environment-level variables, pre-request and test scripts, examples, mock server definitions, and documentation.
Postman Collections started as a simple way to save and share API requests in the early Postman client (2013), then grew into a formal JSON format with the v1 schema published in 2015. The format then stabilized as v2.0.0 and shortly after as v2.1.0 in 2017, which remains the common export/import version today.
Owner: Postman
License: Apache 2.0
Properties: Metadata, Requests, Scripts, Variables, Authentication, Methods, Headers, URLs, Bodies, Events, Responses
Website: https://postman.com
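A minimal sketch of the v2.1.0 shape, built as a Python dictionary; the request, header, and baseUrl variable are illustrative assumptions.

import json

# A minimal Postman Collection v2.1.0 with one request that uses a
# collection variable; the values here are made up for illustration.
collection = {
    "info": {
        "name": "Example Collection",
        "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
    },
    "variable": [{"key": "baseUrl", "value": "https://api.example.com"}],
    "item": [
        {
            "name": "List pets",
            "request": {
                "method": "GET",
                "header": [{"key": "Accept", "value": "application/json"}],
                "url": "{{baseUrl}}/pets",
            },
        }
    ],
}

print(json.dumps(collection, indent=2))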
9.7 - Postman Environments
Storing variables for running along with Postman Collections.
Postman environments are a powerful feature that allows you to manage different sets of variables for your API testing and development workflow. An environment is essentially a named collection of key-value pairs (variables) that you can switch between depending on your context, such as development, staging, or production. For example, you might have different base URLs, authentication tokens, or API keys for each environment. Instead of manually updating these values in every request when you switch from testing locally to hitting a production server, you select a different environment from a dropdown menu, and all your requests automatically use the appropriate variables. This makes it much easier to maintain consistency, avoid errors, and streamline your workflow when working across multiple environments or sharing collections with team members.
Owner: Postman
License: Apache 2.0
Properties: Variables, Variable name, Initial value, Current value, Type, Environment Name, Environment ID, Scope, State
Website: https://learning.postman.com/docs/sending-requests/variables/managing-environments/
9.8 - Bruno Collection
Open source client specification.
Bruno collections are structured groups of API requests, variables, and environments used within the Bruno API client to help developers organize and manage their API workflows. Each collection acts as a self-contained workspace where you can store requests, define authentication, set environment values, document behaviors, and run tests. Designed with a filesystem-first approach, Bruno collections are easy to version-control and share, making them especially useful for teams collaborating on API development or maintaining consistent testing practices across environments.
License: MIT license
Tags: Clients, Executable
Properties: Name, Type, Version, Description, Variables, Environment, Folders, Requests, Auth, Headers, Scripts, Settings
Website: https://www.usebruno.com/
9.9 - Bruno Environment
An open-source client environment.
A Bruno environment is a configurable set of key–value variables that allows you to run the same API requests across different deployment contexts, such as local development, staging, and production. Environments typically store values like base URLs, authentication tokens, headers, or other parameters that may change depending on where an API is being tested. By separating these values from the requests themselves, Bruno makes it easy to switch contexts, maintain cleaner collections, and ensure consistency when collaborating with others or automating API workflows.
License: MIT license
Tags: Clients, Executable
Properties: Name, Variables, Enabled, Secret, Ephemeral, Persisted Value
Website: https://www.usebruno.com/
9.10 - Open Collections
Open-source collection format.
The OpenCollection Specification is a format for describing API collections, including requests, authentication, variables, and scripts. This specification enables tools to understand and work with API collections in a standardized way.
License: Apache License
Tags: Collections
Website: https://www.opencollection.com/
9.11 - gRPC
High-performance, open-source remote procedure call framework.
gRPC is a high-performance, open-source remote procedure call (RPC) framework originally developed at Google and now hosted by the Cloud Native Computing Foundation (CNCF). Services and their messages are defined with Protocol Buffers, and calls travel over HTTP/2, giving gRPC unary and streaming methods, multiplexed connections, deadlines, and cancellation across many programming languages.
License: Apache 2.0
Tags: RPC
Properties: services, methods, messages, unary and streaming calls, channels, stubs, metadata, deadlines, cancellation, status codes, and interceptors
Website: https://grpc.io
Standards: Protocol Buffers, HTTP/2
9.12 - JSON RPC
Lightweight transport-agnostic remote procedure call protocol.
JSON-RPC is a lightweight, transport-agnostic remote procedure call (RPC) protocol that uses JSON to encode requests and responses: a client sends an object with jsonrpc: "2.0", a method name, optional params (positional or named), and an id; the server replies with either a result or an error (including standardized error codes), and it also supports notifications (no id, no response) and request batching.
JSON-RPC emerged in the mid-2000s as a community-driven, lightweight RPC protocol using JSON, with an informal 1.0 spec (c. 2005) that defined simple request/response messaging and "notifications" (no reply). A 1.1 working draft (around 2008) tried to broaden and formalize features but never became canonical. The widely adopted JSON-RPC 2.0 specification (2010) simplified and standardized the model, introducing the mandatory "jsonrpc": "2.0" version tag, clearer error objects, support for both positional and named parameters, and request batching, while remaining transport-agnostic (HTTP, WebSocket, pipes, etc.).
License: Apache License 2.0 or MIT License
Tags: RPC
Properties: methods, parameters, identifier, results, errors, codes, messages, data
Website: https://www.jsonrpc.org/
Forum: https://groups.google.com/g/json-rpc
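The envelope is small enough to show inline; here is a minimal sketch using Python's standard library, borrowing the subtract example from the 2.0 specification.

import json

# A JSON-RPC 2.0 request/response pair plus a notification, shown as
# Python dicts and serialized with the standard library.
request = {
    "jsonrpc": "2.0",
    "method": "subtract",
    "params": {"minuend": 42, "subtrahend": 23},  # named params
    "id": 1,
}

response = {"jsonrpc": "2.0", "result": 19, "id": 1}

# A notification carries no id, so the server sends no response.
notification = {"jsonrpc": "2.0", "method": "heartbeat"}

for message in (request, response, notification):
    print(json.dumps(message))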
9.13 - Model Context Protocol (MCP)
Allowing applications to connect to large language models (LLMs).
MCP (Model Context Protocol) is an open protocol that standardizes how applications provide context to large language models (LLMs). It offers a consistent way to connect AI models to diverse data sources and tools, enabling agents and complex workflows that link models to the outside world.
Introduced by Anthropic as an open-source effort, MCP addresses the challenge of integrating AI models with external tools and data. It aims to serve as a universal “USB port” for AI, allowing models to access real-time information and perform actions.
MCP defines concepts and properties such as hosts, clients, servers, protocol negotiation, lifecycle, transports, authorization, resources, prompts, tools, sampling, roots, elicitation, progress, cancellation, errors, and logging—providing a standardized approach to connecting applications with LLMs.
The MCP community organizes around a GitHub repository (with issues and discussions), plus a Discord, blog, and RSS feed to track updates and changes to the specification.
MCP is seeing growing adoption among API and tooling providers for agent interactions. Many related API/AI specifications reference, integrate with, or overlap with MCP—despite the project being an open-source protocol currently stewarded by a single company, which has not been contributed to a foundation.
Owner: Anthropic
License: MIT License
Tags: agents, workflows
Properties: hosts, clients, servers, protocols, negotiation, lifecycle, transports, authorization, resources, prompts, tools, sampling, roots, elicitation, progress, cancellation, errors, logging
Website: https://modelcontextprotocol.io/
Standards: JSON-RPC 2.0, JSON Schema
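Since MCP messages ride on JSON-RPC 2.0, a client/server exchange can be sketched as plain dictionaries in Python; the tool name and arguments below are hypothetical, not taken from any real MCP server.

import json

# Ask an MCP server which tools it exposes, then invoke one.
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_tickets",          # hypothetical tool name
        "arguments": {"query": "refund"},  # hypothetical arguments
    },
}

print(json.dumps(list_tools))
print(json.dumps(call_tool))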
9.14 - Apache Parquet
Compact binary data serialization.
Apache Parquet is a columnar storage file format specifically designed for efficient data storage and processing in big data analytics environments, developed as a collaboration between Twitter and Cloudera in 2013 and now part of the Apache Software Foundation. Unlike traditional row-oriented formats (like CSV or Avro) that store data records sequentially, Parquet organizes data by columns, grouping all values from the same column together in storage. This columnar approach provides significant advantages for analytical workloads where queries typically access only a subset of columns from wide tables—instead of reading entire rows and discarding unneeded columns, Parquet allows systems to read only the specific columns required for a query, dramatically reducing I/O operations and improving query performance. The format also enables highly effective compression since values in the same column tend to have similar characteristics and patterns, allowing compression algorithms like Snappy, Gzip, LZO, and Zstandard to achieve much better compression ratios than they would on mixed-type row data. Parquet files are self-describing, containing schema information and metadata that allow any processing system to understand the data structure without external schema definitions.
Parquet has become the de facto standard for analytical data storage in modern data lakes and big data ecosystems, with native support across virtually all major data processing frameworks including Apache Spark, Apache Hive, Apache Impala, Presto, Trino, Apache Drill, and cloud data warehouses like Amazon Athena, Google BigQuery, Azure Synapse, and Snowflake. The format supports rich data types including nested and repeated structures (arrays, maps, and complex records), making it ideal for storing semi-structured data from JSON or Avro sources while maintaining query efficiency. Parquet’s internal structure uses techniques like dictionary encoding for low-cardinality columns, bit-packing for small integers, run-length encoding for repeated values, and delta encoding for sorted data, all of which contribute to both storage efficiency and query speed. The format includes column statistics (min/max values, null counts) stored in file metadata that enable predicate pushdown—allowing query engines to skip entire row groups or files that don’t contain relevant data based on filter conditions. This combination of columnar organization, advanced encoding schemes, efficient compression, predicate pushdown, and schema evolution support makes Parquet the optimal choice for data warehouse tables, analytical datasets, machine learning feature stores, time-series data, and any scenario where fast analytical queries over large datasets are required, often achieving 10-100x improvements in query performance and storage efficiency compared to row-oriented formats.
License: Apache 2.0
Tags: Data, Serialization, Binary
Properties: Row Groups, Column Chunks, Data Pages, Dictionary Pages, File and Column Metadata, Embedded Schema, Column Statistics (min/max values, null counts), Bloom Filters, Compression Codecs (Snappy, Gzip, LZO, Brotli, Zstandard), Encodings (Plain, Dictionary, Run-Length, Bit-Packing, Delta), Logical Types, Nested Types (structs, lists, maps), Partitioning, Predicate Pushdown, and Schema Evolution
Website: https://parquet.apache.org/
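A minimal write-then-read sketch, assuming the third-party pyarrow package is installed (pip install pyarrow); the table contents are made up.

# Columnar payoff in miniature: write a table, then read back only
# the columns a query needs.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "user_id": [1, 2, 3],
    "country": ["US", "DE", "US"],
    "spend": [19.99, 5.00, 42.50],
})

pq.write_table(table, "example.parquet", compression="snappy")

subset = pq.read_table("example.parquet", columns=["country", "spend"])
print(subset.to_pydict())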
9.15 - Avro
Compact binary data serialization.
Apache Avro is a data serialization framework developed within the Apache Hadoop project that provides a compact, fast binary data format along with rich data structures and schema definitions. Created by Doug Cutting (the creator of Hadoop) in 2009, Avro addresses the need for efficient data serialization in big data ecosystems where massive volumes of data must be stored and transmitted efficiently. Unlike JSON or XML which use verbose text-based formats, Avro serializes data into a compact binary representation that significantly reduces storage requirements and network bandwidth while maintaining fast serialization and deserialization performance. Avro schemas are defined using JSON, making them human-readable and language-independent, and these schemas travel with the data (either embedded in files or referenced through a schema registry), ensuring that any system can correctly interpret the serialized data without prior knowledge of its structure. This self-describing nature makes Avro particularly valuable in distributed systems where different services written in different languages need to exchange data reliably.
One of Avro’s most powerful features is its robust support for schema evolution, which allows data schemas to change over time without breaking compatibility between producers and consumers of that data. Avro supports both forward compatibility (new code can read old data) and backward compatibility (old code can read new data) through features like default values for fields, optional fields, and union types. This makes it ideal for long-lived data storage and streaming systems where data structures evolve as business requirements change. Avro has become a cornerstone technology in the big data ecosystem, widely used with Apache Kafka for streaming data pipelines (where the Confluent Schema Registry manages Avro schemas), Apache Spark for data processing, Apache Hive for data warehousing, and as the serialization format for Hadoop’s remote procedure calls. Avro supports rich data types including primitive types (null, boolean, int, long, float, double, bytes, string), complex types (records, enums, arrays, maps, unions, fixed), and logical types (decimals, dates, timestamps), and provides code generation capabilities that create type-safe classes in languages like Java, C++, C#, Python, Ruby, and PHP. Its combination of compact binary encoding, strong schema support, language independence, and schema evolution capabilities makes Avro the preferred serialization format for many data-intensive applications, particularly in streaming architectures and data lakes.
License: Apache 2.0
Tags: Data, Serialization, Binary
Properties: JSON-Defined Schemas, Records, Enums, Arrays, Maps, Unions, Fixed Types, Primitive Types (null, boolean, int, long, float, double, bytes, string), Logical Types (decimal, date, time, timestamp, duration, UUID), Default Values, Aliases, Namespaces, Documentation Fields, Object Container Files, Sync Markers, Compression Codecs (Deflate, Snappy, Bzip2, XZ, Zstandard), Schema Evolution, Schema Resolution, Schema Registry Support, RPC Protocol Definitions, and Code Generation
Website: https://avro.apache.org/
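Because Avro schemas are plain JSON, the evolution story is easy to sketch in Python; the record and field names are illustrative, and libraries such as fastavro or the official avro package would use a schema like this to read and write the binary format.

import json

# A minimal Avro record schema. The default on the later-added field
# is what keeps old data readable by new code (and vice versa).
schema = {
    "type": "record",
    "name": "User",
    "namespace": "com.example",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "email", "type": "string"},
        # Added in a later schema version; the default preserves
        # compatibility with records written before it existed.
        {"name": "plan", "type": "string", "default": "free"},
    ],
}

print(json.dumps(schema, indent=2))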
9.16 - Agent2Agent
Communicating the interoperability between systems using AI agents.
The Agent2Agent (A2A) Protocol is an open standard for communication and interoperability among independent—often opaque—AI agent systems. Because agents may be built with different frameworks, languages, and vendors, A2A provides a common language and interaction model.
License: Apache 2.0
Tags: agents
Properties: clients, servers, cards, messages, tasks, parts, artifacts, streaming, push notifications, context, extensions, transport negotiation, authentication, authorization, and discovery for agent automation
Website: https://a2a-protocol.org/latest/
Standards: JSON-RPC 2.0, gRPC
9.17 - JSON Schema
Annotating and validating JSON artifacts.
JSON Schema is a vocabulary for annotating and validating JSON documents. It defines the structure, content, and constraints of data—often authored in either JSON or YAML—and can be leveraged by documentation generators, validators, and other tooling.
The specification traces back to early proposals by Kris Zyp in 2007 and has evolved through draft-04, draft-06, and draft-07 to the current 2020-12 release.
JSON Schema provides a rich set of keywords—such as title, description, type, properties, required, additionalProperties, minimum, maximum, exclusiveMinimum, exclusiveMaximum, default, enum, pattern, items, allOf, anyOf, oneOf, not, examples, and $ref—to describe and validate data used in business operations.
To get involved with the community, visit the JSON Schema GitHub organization, subscribe to the blog via RSS, join discussions and meetings in the Slack workspace, and follow updates on LinkedIn.
JSON Schema is a foundational standard used by many other specifications, tools, and services. It’s the workhorse for defining and validating the digital data that keeps modern businesses running.
License: Academic Free License version 3.0
Tags: Schema, Validation
Properties: schema, title, description, type, properties, required, additionalProperties, minimum, maximum, exclusiveMinimum, exclusiveMaximum, default, enum, pattern, items, allOf, anyOf, oneOf, not, examples, and $ref
Website: https://json-schema.org
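A minimal validation sketch, assuming the third-party jsonschema package is installed (pip install jsonschema); the product schema and instance are made up.

from jsonschema import ValidationError, validate

# Describe the expected shape, then check a document against it.
schema = {
    "type": "object",
    "required": ["name", "price"],
    "properties": {
        "name": {"type": "string"},
        "price": {"type": "number", "minimum": 0},
    },
    "additionalProperties": False,
}

try:
    validate(instance={"name": "Widget", "price": 9.99}, schema=schema)
    print("document is valid")
except ValidationError as err:
    print(f"invalid: {err.message}")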
9.18 - Protocol Buffers
Fast binary serialized structured data.
Protocol Buffers (protobuf) are Google’s language-neutral, platform-neutral way to define structured data and serialize it efficiently (small, fast). You write a schema in a .proto file, generate code for your language (Go, Java, Python, JS, etc.), and use the generated classes to read/write binary messages.
Protocol Buffers began inside Google in the early 2000s as an internal, compact, schema-driven serialization format; in 2008 Google open-sourced it as proto2. Most recently in 2023, Google introduced “Protobuf Editions” to evolve semantics without fragmenting the language into proto2 vs. proto3, while the project continues to refine tooling, compatibility guidance, and release processes across a broad open-source community.
Owner: Google
License: BSD-3-Clause License
Tags: Schema, Data, Binary, Serialization
Properties: messages, types, fields, cardinality, comments, reserved values, scalars, defaults, enumerations, nested types, binary encoding, unknown fields, oneof, maps, packages, and services
Website: https://protobuf.dev/
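The schema language itself is compact; here is a sketch of a .proto definition held in a Python string for illustration (the message and service names are assumptions). In practice it lives in its own file and is compiled with protoc into per-language classes, and a service block like this one is what gRPC builds on.

# Illustrative proto3 schema; normally stored as example.proto and
# compiled with protoc rather than embedded in a string.
PROTO_SCHEMA = """
syntax = "proto3";

package example;

message Pet {
  int64 id = 1;          // field numbers identify fields on the wire
  string name = 2;
  repeated string tags = 3;
}

message PetRequest {
  int64 id = 1;
}

service PetStore {
  rpc GetPet (PetRequest) returns (Pet);
}
"""

print(PROTO_SCHEMA)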
9.19 - Schema.org
Community-driven schema vocabulary for people, places, and things.
Schema.org is a collaborative, community-driven vocabulary that defines shared types and properties to describe things on the web—people, places, products, events, and more—so search engines and other consumers can understand page content. Publishers annotate pages using formats like JSON-LD (now the common choice), Microdata, or RDFa to express this structured data, which enables features such as rich results, knowledge panels, and better content discovery. The project maintains core and extension vocabularies, evolves through open proposals and discussion, and focuses on practical, interoperable semantics rather than being tied to a single standard body.
License: Creative Commons Attribution-ShareAlike License (CC BY-SA 3.0)
Tags: Schema
Properties: Thing, Action, AchieveAction, LoseAction, TieAction, WinAction, AssessAction, ChooseAction, VoteAction, IgnoreAction, ReactAction, AgreeAction, DisagreeAction, DislikeAction, EndorseAction, LikeAction, WantAction, ReviewAction, ConsumeAction, DrinkAction, EatAction, InstallAction, ListenAction, PlayGameAction, ReadAction, UseAction, WearAction, ViewAction, WatchAction, ControlAction, ActivateAction, AuthenticateAction, DeactivateAction, LoginAction, ResetPasswordAction, ResumeAction, SuspendAction, CreateAction, CookAction, DrawAction, FilmAction, PaintAction, PhotographAction, WriteAction, FindAction, CheckAction, DiscoverAction, TrackAction, InteractAction, BefriendAction, CommunicateAction, AskAction, CheckInAction, CheckOutAction, CommentAction, InformAction, ConfirmAction, RsvpAction, InviteAction, ReplyAction, ShareAction, FollowAction, JoinAction, LeaveAction, MarryAction, RegisterAction, SubscribeAction, UnRegisterAction, MoveAction, ArriveAction, DepartAction, TravelAction, OrganizeAction, AllocateAction, AcceptAction, AssignAction, AuthorizeAction, RejectAction, ApplyAction, BookmarkAction, PlanAction, CancelAction, ReserveAction, ScheduleAction, PlayAction, ExerciseAction, PerformAction, SearchAction, SeekToAction, SolveMathAction, TradeAction, BuyAction, OrderAction, PayAction, PreOrderAction, QuoteAction, RentAction, SellAction, TipAction, TransferAction, BorrowAction, DonateAction, DownloadAction, GiveAction, LendAction, MoneyTransfer, ReceiveAction, ReturnAction, SendAction, TakeAction, UpdateAction, AddAction, InsertAction, AppendAction, PrependAction, DeleteAction, ReplaceAction
Website: https://schema.org/
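A minimal sketch of how a publisher annotates a page, building the JSON-LD as a Python dictionary; the product and price are made up.

import json

# Schema.org markup for a product offer, in the JSON-LD format most
# publishers embed via a script tag of type application/ld+json.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Acme Widget",
    "offers": {"@type": "Offer", "price": "9.99", "priceCurrency": "USD"},
}

print(json.dumps(product, indent=2))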
9.20 - JSON-LD
Introducing semantics into JSON so machines can understand meaning.
JSON-LD (JavaScript Object Notation for Linking Data) is a W3C standard for expressing linked data in JSON. It adds lightweight semantics to ordinary JSON so machines can understand what the data means, not just its shape—by mapping keys to globally unique identifiers (IRIs) via a @context. Common features include @id (identity), @type (class), and optional graph constructs (@graph).
Properties: base, containers, context, direction, graph, imports, included, language, lists, nests, prefixes, propagate, protected, reverse, set, types, values, versions, and vocabulary
Website: https://json-ld.org/
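To show what the @context actually does, here is a small document as a Python dictionary in which plain keys are mapped to IRIs; the person and URLs are illustrative.

import json

# The @context maps ordinary JSON keys to globally unique IRIs, so a
# consumer knows that "name" means schema.org's name property.
doc = {
    "@context": {
        "name": "https://schema.org/name",
        "homepage": {"@id": "https://schema.org/url", "@type": "@id"},
    },
    "@id": "https://example.com/people/42",
    "@type": "https://schema.org/Person",
    "name": "Ada Lovelace",
    "homepage": "https://example.com/~ada",
}

print(json.dumps(doc, indent=2))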
9.21 - Spectral
Enforcing style guides across JSON artifacts to govern schema.
Spectral is an open-source API linter for enforcing style guides and best practices across JSON Schema, OpenAPI, and AsyncAPI documents. It helps teams ensure consistency, quality, and adherence to organizational standards in API design and development.
While Spectral is a tool, its rules format is increasingly treated as a de facto standard. Spectral traces its roots to Speccy, an API linting engine created by Phil Sturgeon at WeWork. Phil later brought the concept to Stoplight, where Spectral and the next iteration of the rules format were developed; Stoplight was subsequently acquired by SmartBear.
With Spectral, you define rules and rulesets using properties such as given, then, description, message, severity, formats, recommended, and resolved. These can be applied to any JSON or YAML artifact, with primary adoption to date around OpenAPI and AsyncAPI.
The project’s GitHub repository hosts active issues and discussions, largely focused on the CLI. Development continues under SmartBear, including expanding how rules are applied across API operations and support for Arazzo workflow use cases.
Most commonly, Spectral is used to lint and govern OpenAPI and AsyncAPI specifications during design and development. It is expanding into Arazzo workflows and can be applied to any standardized JSON or YAML artifact validated with JSON Schema—making it a flexible foundation for governance across the API lifecycle.
License: Apache
Tags: Rules, Governance
Properties: rules, rulesets, given, then, description, message, severity, formats, recommended, and resolved properties
GitHub: https://github.com/stoplightio/spectral
Standards: JSON Schema
9.22 - Vacuum
Enforcing style guides across JSON artifacts to govern schema.
Vacuum rules, in the context of API linting, are configuration definitions that specify quality and style requirements for OpenAPI specifications. RuleSets serve as comprehensive style guides where each individual rule represents a specific requirement that the API specification must meet. These rules are configured using YAML or JSON and follow the Spectral ruleset model, making them fully compatible with Spectral rulesets while adding Vacuum-specific enhancements such as an id property for backward compatibility and flexible naming. A RuleSet contains a collection of rules that define what to check, where to check it, and how violations should be handled, allowing organizations to enforce consistent API design standards across their specifications.
Each rule within a RuleSet consists of several key components: a given property that uses JSONPath expressions (supporting both RFC 9535 and JSON Path Plus) to target specific sections of the OpenAPI document, a severity level (such as error, warning, or info) that indicates the importance of the rule, and a then clause that specifies which built-in function to apply and what field to evaluate. For example, a rule might target all tag objects in an API specification using $.tags[*] as the JSONPath expression, then apply the truthy function to verify that each tag has a description field populated. Built-in core functions like casing, truthy, and pattern provide the logic for evaluating whether specifications comply with the defined rules, enabling automated validation of API documentation quality, consistency, and adherence to organizational or industry standards.
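The tag rule described above is small enough to write out; here is a minimal sketch as a Python dictionary, serialized as JSON since Vacuum accepts the same ruleset shape as Spectral in either JSON or YAML.

import json

# One rule: every entry under $.tags[*] must have a truthy
# description field, reported at error severity.
ruleset = {
    "rules": {
        "tags-must-have-description": {
            "description": "Every tag should explain what it groups.",
            "given": "$.tags[*]",
            "severity": "error",
            "then": {"field": "description", "function": "truthy"},
        }
    }
}

print(json.dumps(ruleset, indent=2))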
Vacuum is a soft fork of Spectral: it keeps the base ruleset format for interoperability while taking the specification in a new direction to support OpenAPI Doctor and Vacuum linting-rule functionality in tooling and pipelines.
License: Apache
Tags: Rules, Governance
Properties: rules, rulesets, given, then, description, message, severity, formats, recommended, and resolved properties
Website: https://quobix.com/vacuum/rulesets/understanding/
Standards: JSON Schema, Spectral
9.23 - Open Policy Agent (OPA)
Unifies policy enforcement for authentication, security, and auditability.
OPA (Open Policy Agent) is a general-purpose policy engine that unifies policy enforcement across your stack—improving developer velocity, security, and auditability. It provides a high-level, declarative language (Rego) for expressing policies across a wide range of use cases.
Originally developed at Styra in 2016, OPA was donated to the Cloud Native Computing Foundation (CNCF) in 2018 and graduated in 2021.
Rego includes rules and rulesets, unit tests, functions and built-ins, reserved keywords, conditionals, comprehensions/iterations, lookups, assignment, and comparison/equality operators—giving you a concise, expressive way to author and validate policy.
You can contribute on GitHub, follow updates via the blog and its RSS feed, and join conversations in the community Slack and on the OPA LinkedIn page.
OPA works across platforms and operational layers, standardizing policy for key infrastructure such as Kubernetes, API gateways, Docker, CI/CD, and more. It also helps normalize policy across diverse data and API integration patterns used in application and agent automation.
License: Apache
Tags: Policies, Authentication, Authorization
Properties: rules, language, tests, functions, reserved names, grammar, conditionals, iterations, lookups, assignment, equality
Website: https://www.openpolicyagent.org/
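To make Rego less abstract, here is a small policy held in a Python string for illustration; the package name, paths, and role check are assumptions. OPA would evaluate a policy like this against JSON input via opa eval, its REST API, or a library embedding.

# Illustrative Rego policy; normally stored as authz.rego and loaded
# into OPA rather than embedded in a Python string.
REGO_POLICY = """
package httpapi.authz

import rego.v1

default allow := false

# Anyone may GET public paths.
allow if {
    input.method == "GET"
    startswith(input.path, "/public/")
}

# Admins may do anything.
allow if input.user.role == "admin"
"""

print(REGO_POLICY)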
9.24 - CSV
Lighter weight data serialization format for data exchange.
CSV (Comma-Separated Values) is a simple, plain-text file format used to store tabular data in a structured way where each line represents a row and values within each row are separated by commas (or other delimiters like semicolons, tabs, or pipes). This straightforward format makes CSV one of the most universally supported data exchange formats, readable by spreadsheet applications like Microsoft Excel, Google Sheets, and LibreOffice Calc, as well as databases, data analysis tools, and virtually every programming language. CSV files are human-readable when opened in a text editor, showing data in a grid-like structure that closely mirrors how it would appear in a spreadsheet. The format’s simplicity—requiring no special markup, tags, or complex syntax—makes it ideal for representing datasets, lists, reports, and any tabular information where relationships between columns and rows need to be preserved.
Despite its simplicity, CSV has become essential for data import/export operations, data migration between systems, bulk data loading into databases, and sharing datasets for data analysis and machine learning. The format is particularly valuable in business contexts for handling customer lists, financial records, inventory data, sales reports, and scientific datasets. CSV files are compact and efficient, requiring minimal storage space compared to more verbose formats like XML or JSON, which makes them ideal for transferring large datasets over networks or storing historical data archives. However, CSV has limitations: it lacks standardized support for data types (everything is typically treated as text unless parsed), has no built-in schema definition, struggles with representing hierarchical or nested data, and can encounter issues with special characters, line breaks within fields, or commas in data values (typically addressed by enclosing fields in quotes). Despite these constraints, CSV remains the go-to format for flat, rectangular data exchange due to its universal compatibility, ease of use, and the fact that it can be created and edited with the most basic tools, from text editors to sophisticated data processing frameworks.
Tags: Data Format
Properties: Plain Text, Tabular Rows and Columns, Delimiters (comma, semicolon, tab, pipe), Optional Header Row, Field Quoting and Double-Quote Escaping, Embedded Commas/Quotes/Newlines via Quoting, Empty Fields, Schema-Less Text Values, RFC 4180, MIME Type text/csv, .csv File Extension, Character Encodings (UTF-8, ASCII), Line-by-Line Streaming, and Universal Spreadsheet, Database, and Language Support
Wikipedia: https://en.wikipedia.org/wiki/Comma-separated_values
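The quoting rules are easiest to see in practice; here is a minimal round-trip sketch using Python's standard csv module, with made-up values that exercise embedded commas, quotes, and newlines.

import csv
import io

# Fields containing the delimiter, quotes, or line breaks get quoted
# (and quotes doubled) automatically on write.
rows = [
    ["name", "city", "note"],
    ["Acme, Inc.", "Berlin", 'Said "hello"\non two lines'],
]

buffer = io.StringIO()
csv.writer(buffer).writerows(rows)
print(buffer.getvalue())

# Reading it back recovers the original fields intact.
for record in csv.reader(io.StringIO(buffer.getvalue())):
    print(record)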
9.25 - HTML
The standard markup language powering the Web.
HTML (HyperText Markup Language) is the foundational markup language of the World Wide Web, created by Tim Berners-Lee in 1991, that defines the structure and content of web pages through a system of elements enclosed in angle-bracket tags. HTML provides the semantic framework for organizing information on the web, using tags like <h1> through <h6> for headings, <p> for paragraphs, <a> for hyperlinks, and <img> for images.