Standards

These are the standards we come across as part of this market research: the specifications and formats we see companies using and talking about, documented here so we can understand where they fit into the landscape before they graduate to being part of the technology stack.

1 - APIOps Cycles

APIOps Cycles is a Lean and service design–inspired methodology for designing, improving, and scaling APIs throughout their entire lifecycle. Developed since 2017 and continuously refined through community contributions and real-world projects across industries, APIOps Cycles provides a structured approach to API strategy using a distinctive metro map visualization where stations and lines represent critical aspects of the API lifecycle.

Aligning engineering with products when it comes to APIs.

The method is built around a collection of strategic canvas templates that help teams systematically address everything from customer journey mapping and value proposition definition to domain modeling, capacity planning, and risk assessment. As an open-source framework released under the Creative Commons Attribution–ShareAlike 4.0 license, APIOps Cycles is freely available for anyone to use, adapt, and share, with the complete method consisting of localized JSON and markdown files that power both the official website and open tooling available as an npm package. Whether you’re a developer integrating the method into your products and services, or an organization seeking to establish API product strategy and best practices, APIOps Cycles offers a proven, community-backed approach supported by a network of partners who can provide guidance and expertise in implementing the methodology effectively.

License: Creative Commons Attribution–ShareAlike 4.0

Tags: Products, Operations

Website: https://www.apiopscycles.com/

APIOps Cycles Canvases Outline

  1. Customer Journey Canvas
  • Persona
  • Customer Discovers Need
  • Customer Need Is Resolved
  • Journey Steps
  • Pains
  • Gains
  • Inputs & Outputs
  • Interaction & Processing Rules
  2. API Value Proposition Canvas
  • Tasks
  • Gain Enabling Features
  • Pain Relieving Features
  • API Products
  3. API Business Model Canvas
  • API Value Proposition
  • API Consumer Segments
  • Developer Relations
  • Channels
  • Key Resources
  • Key Activities
  • Key Partners
  • Benefits
  • Costs
  4. Domain Canvas
  • Selected Customer Journey Steps
  • Core Entities & Business Meaning
  • Attributes & Business Importance
  • Relationships Between Entities
  • Business, Compliance & Integrity Rules
  • Security & Privacy Considerations
  5. Interaction Canvas
  • CRUD Interactions
  • CRUD Input & Output Models
  • CRUD Processing & Validation
  • Query-Driven Interactions
  • Query-Driven Input & Output Models
  • Query-Driven Processing & Validation
  • Command-Driven Interactions
  • Command-Driven Input & Output Models
  • Command-Driven Processing & Validation
  • Event-Driven Interactions
  • Event-Driven Input & Output Models
  • Event-Driven Processing & Validation
  6. REST Canvas
  • API Resources
  • API Resource Model
  • API Verbs
  • API Verb Example
  7. GraphQL Canvas
  • API Name
  • Consumer Goals
  • Key Types
  • Relationships
  • Queries
  • Mutations
  • Subscriptions
  • Authorization Rules
  • Consumer Constraints
  • Notes / Open Questions
  8. Event Canvas
  • User Task / Trigger
  • Input / Event Payload
  • Processing / Logic
  • Output / Event Result
  9. Capacity Canvas
  • Current Business Volumes
  • Future Consumption Trends
  • Peak Load and Availability Requirements
  • Caching Strategies
  • Rate Limiting Strategies
  • Scaling Strategies
  10. Business Impact Canvas
  • Availability Risks
  • Mitigate Availability Risks
  • Security Risks
  • Mitigate Security Risks
  • Data Risks
  • Mitigate Data Risks
  11. Locations Canvas
  • Location Groups
  • Location Group Characteristics
  • Locations
  • Location Characteristics
  • Location Distances
  • Location Distance Characteristics
  • Location Endpoints
  • Location Endpoint Characteristics

2 - Open Context Protocol (OCP)

Open Context Protocol (OCP) is an open standard that automatically transforms APIs into intelligent agent tools through HTTP headers and OpenAPI specifications, enabling seamless tool discovery and context-aware interactions without requiring API modifications.

Open standard for transforming APIs into agent tooling.

Open Context Protocol (OCP) is a web-native standard designed specifically for transforming existing web APIs into intelligent agent tools without requiring infrastructure changes or new servers. Unlike approaches that treat web APIs like desktop applications, OCP embraces the web’s existing architecture—using OpenAPI specifications for automatic tool discovery and HTTP headers for maintaining conversational context across requests. APIs work with agents immediately in their current form, with optional enhancements available for those that want deeper agent integration.

OCP is a complete, production-ready ecosystem that includes a formal specification, a public registry of indexed APIs, client libraries, a VS Code extension, and OpenAPI schema extensions. It operates on a compatibility model where existing APIs function as agent tools out of the box (Level 1), while APIs can optionally implement context-aware features for richer interactions (Level 2). Positioned as complementary rather than competitive to desktop-focused solutions like MCP, OCP provides the thin, standards-based layer that web APIs need to participate in agent workflows without abandoning the HTTP, REST, and OpenAPI foundations that already power the modern web.

License: MIT License

Tags: Tools, APIs, Discovery

Website: https://opencontextprotocol.io/

GitHub: https://github.com/opencontextprotocol

3 - JSON Structure

JSON Structure is a schema language that can describe data types and structures whose definitions map cleanly to programming language types and database constructs as well as to the popular JSON data encoding. The type model reflects the needs of modern applications and allows for rich annotations with semantic information that can be evaluated and understood by developers and by large language models (LLMs).

Describing data types and structures that map cleanly to programming language types and database constructs.

4 - TypeSpec

TypeSpec is a language for defining cloud service APIs and shapes. TypeSpec is a highly extensible language with primitives that can describe API shapes common among REST, OpenAPI, gRPC, and other protocols.

An extensible language for defining cloud service APIs and shapes.

TypeSpec is a language for defining cloud service APIs and shapes. TypeSpec is a highly extensible language with primitives that can describe API shapes common among REST, OpenAPI, gRPC, and other protocols.

TypeSpec is excellent for generating many different API description formats, client and service code, documentation, and other assets while keeping your TypeSpec definition as a single source of truth.

Using TypeSpec, you can create reusable patterns for all aspects of an API and package those reusable patterns into libraries. These patterns establish “guardrails” for API designers and make it easier to follow best practices than to deviate from them. TypeSpec also has a rich linter framework with the ability to flag anti-patterns as well as an emitter framework that lets you control the output to ensure it follows the patterns you want.

TypeSpec is a Microsoft-built, community-supported project. Your ideas, feedback, and code make all the difference, and we deeply appreciate the support from the community.

License: Apache License

Tags: Collections

Website: https://typespec.io/

5 - Agent Skills

Agent Skills are folders of instructions, scripts, and resources that agents can discover and use to do things more accurately and efficiently.

A simple, open format for giving agents new capabilities and expertise.

Agents are increasingly capable, but often don’t have the context they need to do real work reliably. Skills solve this by giving agents access to procedural knowledge and company-, team-, and user-specific context they can load on demand. Agents with access to a set of skills can extend their capabilities based on the task they’re working on.

License: Apache 2.0

Tags: Skills

Website: https://agentskills.io/home

6 - Agents.md

AGENTS.md complements README.md by containing the extra, sometimes detailed context coding agents need, with build steps, tests, and conventions that might clutter a README or aren’t relevant to human contributors.

A simple, open format for guiding coding agents, used by over 60k open-source projects.

README.md files are for humans: quick starts, project descriptions, and contribution guidelines. AGENTS.md complements this by containing the extra, sometimes detailed context coding agents need: build steps, tests, and conventions that might clutter a README or aren’t relevant to human contributors.

License: MIT License

Tags: Discovery


Website: https://agents.md/

7 - Airbyte

Airbyte is an open-source data integration platform that enables organizations to move data from over 600 sources into data warehouses, data lakes, and other destinations.

An open-source data integration platform for moving data from hundreds of sources.

Airbyte is an open-source data integration platform that enables organizations to move data from over 600 sources into data warehouses, data lakes, and other destinations. The platform addresses complexities in data movement, transformation, and synchronization through a flexible and user-friendly interface. It’s designed to help teams move data reliably and provide AI agents with real-time access to context, whether replicating databases into warehouses for analytics or building applications requiring live data from SaaS APIs. Founded in 2020, the company follows a product-led growth model and has raised $181 million from investors including Benchmark, Accel, Altimeter, and Y Combinator. The platform’s open-source nature allows users to modify and extend it for specific needs, while its Connector Development Kit enables efficient creation of custom connectors. Airbyte operates on an Extract-Load-Transform (ELT) paradigm and is used by over 25,000 companies for centralizing data across their technology stack.

License: MIT License

Tags: Data

Website: https://airbyte.com/

JSON Schema: https://github.com/airbytehq/airbyte-python-cdk/blob/main/airbyte_cdk/sources/declarative/declarative_component_schema.yaml

8 - FinOps Focus

FinOps Open Cost & Usage Specification is an open specification that normalizes billing datasets across cloud, SaaS, data center, and other technology vendors to reduce complexity for FinOps Practitioners.

The Unifying Language for Technology Value

FinOps Open Cost & Usage Specification is an open specification that normalizes billing datasets across cloud, SaaS, data center, and other technology vendors to reduce complexity for FinOps Practitioners.

License: MIT License

Tags: Budgets, Plans, Pricing, Costs

Website: https://focus.finops.org/

9 - Goose

An open source, extensible AI agent that goes beyond code suggestions - install, execute, edit, and test with any LLM.

An on-machine AI agent for automating development tasks with any LLM.

Goose is your on-machine AI agent, capable of automating complex development tasks from start to finish. More than just code suggestions, goose can build entire projects from scratch, write and execute code, debug failures, orchestrate workflows, and interact with external APIs - autonomously.

Whether you’re prototyping an idea, refining existing code, or managing intricate engineering pipelines, goose adapts to your workflow and executes tasks with precision.

Designed for maximum flexibility, goose works with any LLM and supports multi-model configuration to optimize performance and cost, seamlessly integrates with MCP servers, and is available as both a desktop app and a CLI - making it the ultimate AI assistant for developers who want to move faster and focus on innovation.

License: Apache 2.0

Tags: Agents

Website: https://block.github.io/goose/

10 - Microcks Examples

APIExamples format is Microcks’ own specification format for defining examples intended to be used by Microcks mocks.

APIExamples documents are intended to be imported as secondary artifacts only, thanks to the multi-artifacts support.

The APIExamples format is Microcks’ own specification format for defining examples intended to be used by Microcks mocks. It can be seen as a lightweight, general-purpose specification that solely serves the need to provide mock datasets. The goal of this specification is to keep the Microcks adoption curve smooth for development teams as well as for non-developers.

APIExamples files are simple YAML and aim to be very easy to understand and edit. Moreover, the description is independent of the API protocol; examples are instead described according to the API interaction style: request/response or event-driven/asynchronous.

For ease of use, Microcks provides a JSON Schema for the format that you can integrate into your code editor to benefit from code completion and validation.

License: Apache 2.0

Tags: Examples

Website: https://microcks.io/documentation/references/examples/

11 - HTTP SEARCH Method

This specification updates the HTTP SEARCH method originally defined in [RFC5323].

Search baked into HTTP.

This specification updates the HTTP SEARCH method originally defined in [RFC5323]. Many existing HTTP-based applications use the HTTP GET and POST methods in various ways to implement the functionality provided by SEARCH. Using a GET request with some combination of query parameters included within the request URI (as illustrated in the example below) is arguably the most common mechanism for implementing search in web applications. With this approach, implementations are required to parse the request URI into distinct path (everything before the ‘?’) and query elements (everything after the ‘?’). The path identifies the resource processing the query (in this case ‘http://example.org/feed’) while the query identifies the specific parameters of the search operation.
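
To make that path/query split concrete, here is a minimal sketch in Python (standard library only); the query parameters are illustrative, and the URL is the one from the paragraph above:

```python
from urllib.parse import urlsplit, parse_qs

# The GET-with-query-parameters style of search described above: everything
# before the '?' identifies the resource, everything after it carries the
# search parameters. The q/limit parameters are placeholders.
url = "http://example.org/feed?q=foo&limit=10"

parts = urlsplit(url)
print(parts.path)             # '/feed' -> the resource processing the query
print(parse_qs(parts.query))  # {'q': ['foo'], 'limit': ['10']} -> the search parameters
```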

Tags: HTTP, Search, Discovery

Website: https://www.ietf.org/archive/id/draft-ietf-httpbis-safe-method-w-body-00.html

12 - Bruno Collection

Bruno collections are organized sets of API requests and environments within the Bruno API client, allowing developers to structure, test, and share their API workflows efficiently.

Open source client specification.

Bruno collections are structured groups of API requests, variables, and environments used within the Bruno API client to help developers organize and manage their API workflows. Each collection acts as a self-contained workspace where you can store requests, define authentication, set environment values, document behaviors, and run tests. Designed with a filesystem-first approach, Bruno collections are easy to version-control and share, making them especially useful for teams collaborating on API development or maintaining consistent testing practices across environments.

License: MIT license

Tags: Clients, Executable

Properties: Name, Type, Version, Description, Variables, Environment, Folders, Requests, Auth, Headers, Scripts, Settings

Website: https://www.usebruno.com/

13 - Bruno Environment

A Bruno environment is a set of key–value variables that let you switch configurations—such as URLs, tokens, or credentials—so you can run the same API requests across different contexts like development, staging, or production.

An open-source client environment.

A Bruno environment is a configurable set of key–value variables that allows you to run the same API requests across different deployment contexts, such as local development, staging, and production. Environments typically store values like base URLs, authentication tokens, headers, or other parameters that may change depending on where an API is being tested. By separating these values from the requests themselves, Bruno makes it easy to switch contexts, maintain cleaner collections, and ensure consistency when collaborating with others or automating API workflows.

License: MIT license

Tags: Clients, Environments

Properties: Name, Variables, Enabled, Secret, Ephemeral, Persisted Value

Website: https://www.usebruno.com/

14 - vCard Ontology

A vCard is a digital file format used to store and exchange contact information, such as names, phone numbers, email addresses, and addresses, in a standardized, portable way.

Portable contact format.

A vCard is a standardized digital format for storing and exchanging contact information across different applications and devices. It can contain details such as names, phone numbers, email addresses, physical addresses, organization information, URLs, and even photos or custom fields. Commonly shared as .vcf files, vCards allow contacts to be imported or exported easily between email clients, mobile phones, CRM systems, and other address book tools. This makes them a convenient, portable way to transfer personal or business contact details in a consistent, machine-readable format.
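
As a quick illustration, here is a minimal vCard 4.0 record written to a .vcf file from Python; the contact details are placeholders, and only a handful of the properties listed below are used:

```python
# Build a minimal vCard 4.0 record; vCard lines are CRLF-terminated.
vcard = "\r\n".join([
    "BEGIN:VCARD",
    "VERSION:4.0",
    "FN:Jane Example",
    "N:Example;Jane;;;",
    "ORG:Example Corp",
    "TEL;TYPE=work:+1-555-0100",
    "EMAIL:jane@example.com",
    "URL:https://example.com",
    "END:VCARD",
])

# newline="" keeps the CRLF line endings intact when writing the file.
with open("jane-example.vcf", "w", newline="") as f:
    f.write(vcard + "\r\n")
```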

License: Simplified BSD License

Tags: Contacts

Properties: BEGIN, END, SOURCE, KIND, XML, FN, N, NICKNAME, PHOTO, BDAY, ANNIVERSARY, GENDER, ADR, TEL, EMAIL, IMPP, LANG, TZ, GEO, TITLE, ROLE, LOGO, ORG, MEMBER, RELATED, CATEGORIES, NOTE, PRODID, REV, SOUND, UID, CLIENTPIDMAP, URL, VERSION, KEY, FBURL, CALADRURI, CALURI, BIRTHPLACE, DEATHPLACE, DEATHDATE, EXPERTISE, HOBBY, INTEREST, ORG-DIRECTORY, CONTACT-URI, CREATED, GRAMGENDER, LANGUAGE, PRONOUNS, SOCIALPROFILE, JSPROP

Website: https://www.w3.org/TR/vcard-rdf/

15 - Open Digital Rights Language (ODRL)

The Open Digital Rights Language (ODRL) is a proposed language for the Digital Rights Management (DRM) community for the standardisation of expressing rights information over content.

Language for expressing information rights over content.

The Open Digital Rights Language (ODRL) is a proposed language for the Digital Rights Management (DRM) community for the standardisation of expressing rights information over content. The ODRL is intended to provide flexible and interoperable mechanisms to support transparent and innovative use of digital resources in publishing, distributing and consuming of electronic publications, digital images, audio and movies, learning objects, computer software and other creations in digital form. The ODRL has no license requirements and is available in the spirit of “open source” software.

License: Open access

Tags: Digital Rights, Content, Information

Website: https://www.w3.org/TR/odrl/

16 - Wardley Maps

Wardley Mapping offers a wide range of benefits for organizations seeking to make better strategic, operational, and investment decisions.

Mapping value and commodities on an x/y axis.

Wardley Maps are visual tools that help organizations understand their strategic landscape by mapping the components of a business, system, or ecosystem according to their value to the user and their stage of evolution—from innovation to commodity. By showing how technologies, capabilities, and practices mature over time, Wardley Maps reveal where to focus innovation, where to standardize, and where to optimize costs. They provide situational awareness—a clear view of what exists, what is changing, and why—to guide better strategic decisions. This approach helps teams align around a shared understanding of their environment, anticipate market shifts, and prioritize investments, ultimately enabling organizations to act with greater clarity, agility, and purpose.

  1. Situational Awareness

Wardley Maps visualize the landscape in which an organization operates — showing components (activities, data, practices, technologies) and how they evolve from genesis → custom-built → product → commodity. This clarity helps teams understand where they are and what’s changing, rather than relying on abstract strategy slides.

  2. Informed Strategic Decision-Making

By showing the position and evolution of components, leaders can:

Identify where to innovate versus where to standardize.

Spot opportunities for outsourcing, automation, or partnerships.

Avoid wasting resources on commoditized areas.

  3. Alignment Across Teams

Wardley Maps act as a shared language between business, product, and technical teams. They help align everyone’s understanding of the environment and justify why specific actions (e.g., building vs buying) make sense.

  4. Anticipation of Change

Because the framework is grounded in the idea of evolution, it helps anticipate how technologies, markets, or practices are likely to shift — preparing organizations to adapt ahead of competitors.

  5. Improved Communication and Governance

Wardley Maps make complex systems visual and explicit, which supports better governance, clearer rationale for investments, and improved storytelling to stakeholders or boards.

  6. Competitive Advantage

By understanding where the organization sits in the broader ecosystem and how components are evolving, teams can exploit weak signals, disrupt incumbents, and avoid being disrupted.

  7. Resource Optimization

Mapping helps direct attention and funding toward high-value or differentiating areas, reducing waste and redundancy across commoditized layers.

  8. Continuous Learning

Because maps evolve over time, they provide a feedback loop that builds organizational learning — enabling teams to track how strategies play out and refine future moves.

Tags: Strategy, Awareness, Change, Communication, Governance

Properties: value, commodities, investments, services

Wikipedia: https://en.wikipedia.org/wiki/Wardley_map

17 - OAuth Client ID Metadata Document

This specification defines a mechanism through which an OAuth client can identify itself to authorization servers, without prior dynamic client registration or other existing registration. This is through the usage of a URL as a client_id in an OAuth flow, where the URL refers to a document containing the necessary client metadata, enabling the authorization server to fetch the metadata about the client as needed.

An OAuth client identifying itself to authorization servers.

In order for an OAuth 2.0 [RFC6749] client to utilize an OAuth 2.0 authorization server, the client needs to establish a unique identifier, and needs to provide the server with metadata about the application, such as the application name, icon and redirect URIs. In cases where a client is interacting with authorization servers that it has no relationship with, manual registration is impossible.

While Dynamic Client Registration [RFC7591] can provide a method for a previously unknown client to establish itself at an authorization server and obtain a client identifier, this is not always practical in some deployments and can create additional challenges around management of the registration data and cleanup of inactive clients.

This specification describes how an OAuth 2.0 client can publish its own registration information and avoid the need for pre-registering at each authorization server.
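
As a rough sketch, assuming the field names listed in the properties below, a published client metadata document might look like the following; the URLs and values are placeholders, and the draft itself defines which fields are required:

```python
import json

# Hypothetical client metadata document, published at the URL the client
# uses as its client_id. Field names follow the properties listed below;
# all values are placeholders.
client_metadata = {
    "client_id": "https://app.example.com/oauth/client-metadata.json",
    "client_name": "Example App",
    "client_uri": "https://app.example.com",
    "logo_uri": "https://app.example.com/logo.png",
    "redirect_uris": ["https://app.example.com/callback"],
    "grant_types": ["authorization_code"],
    "response_types": ["code"],
    "token_endpoint_auth_method": "none",
    "scope": "profile email",
}

print(json.dumps(client_metadata, indent=2))
```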

License: BSD License

Tags: Authentication, OAuth, Security

Properties: client_id, client_name, client_uri, logo_uri, redirect_uris, token_endpoint_auth_method, grant_types, response_types, scope, jwks_uri, jwks, contacts, software_id, software_version, client_id_metadata_document_supported

Website: https://www.ietf.org/archive/id/draft-parecki-oauth-client-id-metadata-document-00.html

Standards: OAuth

18 - KCL

KCL is an open-source configuration and policy language hosted by the Cloud Native Computing Foundation (CNCF) as a Sandbox Project.

Simplifies logic writing, offers easy-to-use automation APIs, and seamlessly integrates with existing systems.

KCL is an open-source configuration and policy language hosted by the Cloud Native Computing Foundation (CNCF) as a Sandbox Project. Built on a foundation of constraints and functional programming principles, KCL enhances the process of writing complex configurations, particularly in cloud-native environments. By leveraging advanced programming language techniques, KCL promotes improved modularity, scalability, and stability in configuration management. It simplifies logic writing, offers easy-to-use automation APIs, and seamlessly integrates with existing systems.

License: Apache 2.0

Tags: Configuration, Policies

Properties: Declarative, Compiled, Statically Typed, Schema-Centric, Functional, Constraint-Based, Immutable, API-Aware, Modular, Extensible, High-Performance, Secure, Predictable, Automation-Friendly, Stable, Scalable, Portable, IDE-Integrated, Toolchain-Supported, Multi-Language SDK, Cloud-Native, Configuration-Oriented, Policy-Driven, Validation-Enabled, Integration-Ready, Production-Tested

Website: https://www.kcl-lang.io/

Standards: JSON-RPC 2.0, gRPC

19 - Agent2Agent

The Agent2Agent (A2A) Protocol is an open standard for communication and interoperability among independent—often opaque—AI agent systems. Because agents may be built with different frameworks, languages, and vendors, A2A provides a common language and interaction model.

Communicating the interoperability between systems using AI agents.

The Agent2Agent (A2A) Protocol is an open standard for communication and interoperability among independent—often opaque—AI agent systems. Because agents may be built with different frameworks, languages, and vendors, A2A provides a common language and interaction model.

License: Apache 2.0

Tags: agents

Properties: clients, servers, cards, messages, tasks, parts, artifacts, streaming, push notifications, context, extensions, transport, negotiation, authentication, authorization, and discovery

Website: https://a2a-protocol.org/latest/

Standards: JSON-RPC 2.0, gRPC

20 - Agents.json

Agents.json is an open-source JSON specification that formally describes contracts for API and AI agent interactions, built on top of the OpenAPI standard.

Describing the surface area of API operations to make discoverable.

Agents.json is an open-source JSON specification that formally describes contracts for API and AI agent interactions, built on top of the OpenAPI standard.

License: MIT License

Tags: Agents

Website: https://github.com/wild-card-ai/agents-json

Standards: JSON Schema

21 - API Commons

API Commons is a collection of open-source building blocks for API operations. It began as a machine-readable way to define the parts of an API, and works in concert with APIs.json to translate human-readable aspects of your API program into machine-readable artifacts that can standardize and automate your ecosystem.

Common schema that can be used to describe API operations.

API Commons is a collection of open-source building blocks for API operations. It began as a machine-readable way to define the parts of an API, and works in concert with APIs.json to translate human-readable aspects of your API program into machine-readable artifacts that can standardize and automate your ecosystem.

License: Creative Commons Attribution 3.0 Unported

Tags: Schema, Change, Onboarding, Licensing, Pricing, Support, SDKs

Properties: Change Log, Road Map, Getting Started, Interface Licenses, Pricing, Rate Limits, Tiers, SDKs, Support, Use Cases, and Versioning

Website: https://apicommons.org/

Standards: JSON Schema

22 - APIs.json

APIs.json is a machine-readable specification that API providers use to describe their API operations—much like sitemap.xml describes a website. It offers an index of internal, partner, and public APIs that includes not only machine-readable artifacts (OpenAPI, JSON Schema, etc.) but also traditionally human-readable assets such as documentation, pricing, and terms of service.

Describing the surface area of API operations to make discoverable.

APIs.json is a machine-readable specification that API providers use to describe their API operations—much like sitemap.xml describes a website. It offers an index of internal, partner, and public APIs that includes not only machine-readable artifacts (OpenAPI, JSON Schema, etc.) but also traditionally human-readable assets such as documentation, pricing, and terms of service.
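
For a feel of the format, here is a rough sketch of an APIs.json index expressed as a Python dictionary; the API entry and URLs are placeholders, and apisjson.org documents the authoritative set of fields:

```python
# Illustrative APIs.json-style index; all names and URLs are placeholders.
apis_json = {
    "name": "Example, Inc.",
    "description": "Index of the APIs behind example.com.",
    "url": "https://example.com/apis.json",
    "apis": [
        {
            "name": "Widget API",
            "description": "Read and manage widgets.",
            "humanURL": "https://example.com/docs/widgets",
            "baseURL": "https://api.example.com/widgets",
            "properties": [
                {"type": "OpenAPI", "url": "https://example.com/openapi.yaml"},
                {"type": "Documentation", "url": "https://example.com/docs/widgets"},
            ],
        }
    ],
    "maintainers": [{"FN": "API Team", "email": "api@example.com"}],
}
```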

License: MIT License

Tags: Discovery

Properties: name, description, images, created, modified, URLs, versions, tags, properties, media types, data, common properties, overlays, includes, networks, maintainers

Website: https://apisjson.org

Standards: JSON Schema

23 - Arazzo

The Arazzo Specification is a community-driven, open standard within the OpenAPI Initiative (a Linux Foundation Collaborative Project). It defines a programming-language-agnostic way to express sequences of calls and the dependencies between them to achieve a specific outcome.

Describing your business processes and workflows using OpenAPI.

The Arazzo Specification is a community-driven, open standard within the OpenAPI Initiative (a Linux Foundation Collaborative Project). It defines a programming-language-agnostic way to express sequences of calls and the dependencies between them to achieve a specific outcome.

Arazzo emerged from a need identified in the OpenAPI community for orchestration and automation across APIs described with OpenAPI. Version 1 of the specification is available, and work on future iterations is guided by a public roadmap.

With Arazzo, you can define elements such as: Info, Sources, Workflows, Steps, Parameters, Success Actions, Failure Actions, Components, Reusables, Criteria, Request Bodies, and Payload Replacements—providing a consistent approach to delivering a wide range of automation outcomes.
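
As a hedged illustration of how those elements fit together, here is a small Arazzo-style document expressed as a Python dictionary; the source, workflow, and criteria are placeholders, and the specification itself is the authority on field names and structure:

```python
# Illustrative Arazzo-style workflow; identifiers and URLs are placeholders.
arazzo_doc = {
    "arazzo": "1.0.0",
    "info": {"title": "Pet adoption workflow", "version": "1.0.0"},
    "sourceDescriptions": [
        {"name": "petstore", "url": "https://example.com/openapi.yaml", "type": "openapi"}
    ],
    "workflows": [
        {
            "workflowId": "adopt-a-pet",
            "steps": [
                {
                    "stepId": "find-pet",
                    "operationId": "listPets",
                    "successCriteria": [{"condition": "$statusCode == 200"}],
                }
            ],
        }
    ],
}
```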

You can engage with the Arazzo community via the GitHub repository for each version and participate in GitHub Discussions to stay current on meetings and interact with the specification’s stewards and the broader community.

Arazzo is the logical layer on top of OpenAPI: it goes beyond documentation, mocking, and SDKs to focus on defining real business workflows that use APIs. Together, Arazzo and OpenAPI help align API operations with the rest of the business.

License: Apache 2.0

Tags: Workflows, Automation

Properties: Info, Source, Workflows, Steps, Parameters, Success Actions, Failure Actions, Components, Reusable, Criterion, Request Bodies, and Payload Replacements

Website: https://spec.openapis.org/arazzo/latest.html

24 - AsyncAPI

AsyncAPI is an open-source, protocol-agnostic specification for describing event-driven APIs and message-driven applications. It serves as the OpenAPI of the asynchronous, event-driven world—overlapping with, and often going beyond, what OpenAPI covers.

Describing the surface area of your event-driven infrastructure.

AsyncAPI is an open-source, protocol-agnostic specification for describing event-driven APIs and message-driven applications. It serves as the OpenAPI of the asynchronous, event-driven world—overlapping with, and often going beyond, what OpenAPI covers.

The specification began as an open-source side project and was later donated to the Linux Foundation after the team joined Postman, establishing it as a standard with formal governance.

AsyncAPI lets you define servers, producers and consumers, channels, protocols, and messages used in event-driven API operations—providing a common, tool-friendly way to describe the surface area of event-driven APIs.
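
For illustration, here is a small AsyncAPI 2.x description sketched as a Python dictionary; the broker, channel, and payload are placeholders, and the specification defines the full document structure:

```python
# Illustrative AsyncAPI 2.x description; server, channel, and payload
# details are placeholders.
asyncapi_doc = {
    "asyncapi": "2.6.0",
    "info": {"title": "Order Events", "version": "1.0.0"},
    "servers": {
        "production": {"url": "broker.example.com:9092", "protocol": "kafka"}
    },
    "channels": {
        "orders.created": {
            "subscribe": {
                "message": {
                    "name": "OrderCreated",
                    "payload": {
                        "type": "object",
                        "properties": {
                            "orderId": {"type": "string"},
                            "total": {"type": "number"},
                        },
                    },
                }
            }
        }
    },
}
```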

To get involved, visit the AsyncAPI GitHub repository and blog, follow the LinkedIn page, tune into the YouTube or Twitch channels, and join the conversation in the community Slack.

AsyncAPI can be used to define HTTP APIs much like OpenAPI, and it further supports multiple protocols such as Pub/Sub, Kafka, MQTT, NATS, Redis, SNS, Solace, AMQP, JMS, and WebSockets—making it useful across many approaches to delivering APIs.

License: Apache

Tags: Event-Driven

Properties: Servers, Producers, Consumers, Channels, Protocols, and Messages

Website: https://www.asyncapi.com

25 - Cedar

Cedar is a simple yet expressive policy language purpose-built for authorization, supporting common models such as role-based (RBAC) and attribute-based (ABAC) access control. It is fast, scalable, and designed for automated reasoning, enabling analysis tools that optimize policies and formally verify that your security model behaves as intended.

Expressing policies for authorization and role-based access control.

Cedar is a simple yet expressive policy language purpose-built for authorization, supporting common models such as role-based (RBAC) and attribute-based (ABAC) access control. It is fast, scalable, and designed for automated reasoning, enabling analysis tools that optimize policies and formally verify that your security model behaves as intended.

License: Apache v2.0 License

Tags: Policies, Governance

Properties: policies, evaluation, templates, entities, authorization, RBAC, ABAC, namespaces, groups, hierarchies, schema, validation, grammar, examples, levels

Website: https://www.cedarpolicy.com/en

26 - gRPC

gRPC is an open-source, high-performance remote procedure call (RPC) framework originally developed at Google. It uses Protocol Buffers as its interface definition language and message format, and runs over HTTP/2 to support streaming, multiplexing, and cross-language client and server code generation.

High-performance RPC built on HTTP/2 and Protocol Buffers.

gRPC is an open-source, high-performance remote procedure call (RPC) framework originally developed at Google. Services and their methods are defined in .proto files, and tooling generates strongly typed clients and servers in many languages. Running over HTTP/2, gRPC provides unary and streaming calls, deadlines and cancellation, metadata, and pluggable authentication, making it a common choice for service-to-service communication.

License: Apache 2.0

Tags: RPC

Properties: services, methods, messages, unary and streaming calls, deadlines, cancellation, metadata, status codes, channels, interceptors

Website: https://grpc.io/

Standards: HTTP/2, Protocol Buffers

27 - HTTP

HTTP (Hypertext Transfer Protocol) is a stateless application-layer protocol that defines how web clients and servers format and exchange requests and responses over the internet.

The foundation of the World Wide Web.

HTTP (Hypertext Transfer Protocol) is a stateless application-layer protocol that defines how web clients and servers format and exchange requests and responses over the internet.

HTTP (Hypertext Transfer Protocol) is the stateless application-layer protocol that underpins the web, defining how clients (like browsers) and servers exchange requests and responses to retrieve or modify resources identified by URLs. It specifies methods (e.g., GET, POST, PUT, DELETE), headers for metadata and content negotiation, and status codes that indicate outcomes. Although HTTP is stateless, features like cookies and authentication headers enable sessions and identity. HTTP traditionally runs over TCP (HTTP/1.1 and HTTP/2 with multiplexing and header compression) and, in its latest version HTTP/3, over QUIC to reduce latency and improve reliability, almost always secured with TLS.
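
As a minimal sketch of that request/response exchange, using only the Python standard library (it requires network access and uses example.com as a placeholder server):

```python
from urllib.request import Request, urlopen

# Send a GET request with an Accept header, then inspect the status code,
# response headers, and body that come back.
req = Request("https://example.com/", method="GET", headers={"Accept": "text/html"})
with urlopen(req) as resp:
    print(resp.status)                       # e.g. 200
    print(resp.headers.get("Content-Type"))  # e.g. text/html; charset=UTF-8
    body = resp.read()
```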

28 - HTTP 2.0

HTTP/2 is a binary, multiplexed version of HTTP that uses streams, header compression (HPACK), and optional server push to reduce latency and improve performance over a single TCP connection.

The last version of HTTP that was quickly iterated upon.

HTTP/2 is a binary, multiplexed version of HTTP that uses streams, header compression (HPACK), and optional server push to reduce latency and improve performance over a single TCP connection.

HTTP/2 speeds up web communication by sending many HTTP requests and responses concurrently over a single TCP connection using lightweight streams, eliminating the HTTP/1.1 head-of-line blocking at the application layer. It encodes headers efficiently with HPACK to cut bandwidth, supports stream prioritization and flow control so more important resources arrive sooner, and frames messages in a binary format that’s faster and less error-prone to parse. While TLS isn’t required by the spec, it’s almost always used in practice, and HTTP/2 is backward-compatible with existing HTTP semantics (methods, status codes, URIs). It also introduced server push for proactively sending assets—though many platforms now discourage or disable push in favor of alternatives like 103 Early Hints or preload hints.

29 - HTTP 3.0

HTTP/3 is the latest HTTP version that runs over QUIC (on UDP), providing multiplexed streams with built-in TLS 1.3 and connection migration to avoid TCP head-of-line blocking and improve performance.

The latest high performance version of HTTP.

HTTP/3 is the latest HTTP version that runs over QUIC (on UDP), providing multiplexed streams with built-in TLS 1.3 and connection migration to avoid TCP head-of-line blocking and improve performance.

HTTP/3 is the newest version of the Hypertext Transfer Protocol that runs over QUIC, a transport built on UDP, to deliver faster and more reliable web transfers—especially on mobile and lossy networks. By multiplexing many streams within one connection, it avoids TCP’s head-of-line blocking, integrates TLS 1.3 by design (with 0-RTT/1-RTT handshakes), and supports connection migration so downloads continue smoothly when a device changes networks. HTTP/3 keeps the same HTTP semantics (methods, status codes, headers) while using binary framing and QPACK header compression for efficiency, and it’s now widely supported by major browsers and CDNs.

30 - Hypertext Application Language

HAL (Hypertext Application Language) is a hypermedia format for representing links in JSON or XML. Introduced in 2012 for JSON, it now supports both JSON and XML via the application/hal+json and application/hal+xml media types. The specification remains an Internet-Draft; the latest edition is version 11 from October 10, 2023.

Expressing hypermedia controls using link relations.

HAL (Hypertext Application Language) is a hypermedia format for representing links in JSON or XML. Introduced in 2012 for JSON, it now supports both JSON and XML via the application/hal+json and application/hal+xml media types. The specification remains an Internet-Draft; the latest edition is version 11 from October 10, 2023.

The JSON Hypertext Application Language (HAL) is a standard which establishes conventions for expressing hypermedia controls, such as links, with JSON [RFC4627]. HAL is a generic media type with which Web APIs can be developed and exposed as series of links. Clients of these APIs can select links by their link relation type and traverse them in order to progress through the application.

HAL’s conventions result in a uniform interface for serving and consuming hypermedia, enabling the creation of general-purpose libraries that can be re-used on any API utilising HAL. The primary design goals of HAL are generality and simplicity. HAL can be applied to many different domains, and imposes the minimal amount of structure necessary to cover the key requirements of a hypermedia Web API.
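
A small example of what that looks like in practice, sketched in Python; the order resource and its data are placeholders:

```python
import json

# A HAL (application/hal+json) resource: "_links" carries hypermedia controls
# keyed by link relation, and "_embedded" carries related resources.
order = json.loads("""
{
  "_links": {
    "self": {"href": "/orders/123"},
    "next": {"href": "/orders/124"}
  },
  "_embedded": {
    "customer": {
      "_links": {"self": {"href": "/customers/7"}},
      "name": "Jane Example"
    }
  },
  "total": 30.0,
  "currency": "USD"
}
""")

# Clients select links by their relation type and traverse them.
print(order["_links"]["next"]["href"])         # /orders/124
print(order["_embedded"]["customer"]["name"])  # Jane Example
```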

License: BSD License

Tags: Hypermedia

Properties: Resources, Links, Link Relations, Discovery, and Curies

Website: https://stateless.group/hal_specification.html

Internet Draft: https://www.ietf.org/archive/id/draft-kelly-json-hal-11.html

31 - JSON

JSON (JavaScript Object Notation) is a lightweight, text-based format for representing structured data, built on two universal structures: ordered lists (arrays) and name/value collections (objects). It is easy for humans to read and write and easy for machines to parse and generate, making it the dominant format for exchanging data over APIs.

A lightweight, text-based data interchange format.

JSON (JavaScript Object Notation) is a lightweight, text-based data interchange format standardized as ECMA-404 and RFC 8259. A JSON document is composed of objects, arrays, strings, numbers, booleans, and null, which map naturally to the data structures of most programming languages. Because of this simplicity, JSON underpins many of the other specifications in this landscape, from OpenAPI and JSON Schema to JSON-RPC and JSON-LD.

Tags: Data, Serialization

Properties: objects, arrays, strings, numbers, booleans, null

Website: https://www.json.org/

Standards: ECMA-404, RFC 8259

32 - JSON RPC

JSON-RPC is a lightweight, transport-agnostic remote procedure call (RPC) protocol that uses JSON to encode requests and responses. A client sends an object with jsonrpc “2.0”, a method name, optional params (positional or named), and an id; the server replies with either a result or an error (including standardized error codes), and it also supports notifications (no id, no response) and request batching.

Lightweight transport-agnostic remote procedure call protocol.

JSON-RPC is a lightweight, transport-agnostic remote procedure call (RPC) protocol that uses JSON to encode requests and responses: a client sends an object with jsonrpc:“2.0”, a method name, optional params (positional or named), and an id; the server replies with either a result or an error (including standardized error codes), and it also supports notifications (no id, no response) and request batching.

JSON-RPC emerged in the mid-2000s as a community-driven, lightweight RPC protocol using JSON, with an informal 1.0 spec (c. 2005) that defined simple request/response messaging and “notifications” (no reply). A 1.1 working draft (around 2008) tried to broaden and formalize features but never became canonical. The widely adopted JSON-RPC 2.0 specification (2010) simplified and standardized the model—introducing the mandatory “jsonrpc”:“2.0” version tag, clearer error objects, support for both positional and named parameters, and request batching—while remaining transport-agnostic (HTTP, WebSocket, pipes, etc.).
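
To make the message shapes concrete, here is a minimal sketch in Python (standard library only), using the subtract example familiar from the specification:

```python
import json

# A JSON-RPC 2.0 request: version tag, method name, named params, and an id.
request = {
    "jsonrpc": "2.0",
    "method": "subtract",
    "params": {"minuend": 42, "subtrahend": 23},
    "id": 1,
}

# The matching success response carries the same id and a result...
response = {"jsonrpc": "2.0", "result": 19, "id": 1}

# ...while a failure carries an error object with a standardized code.
error_response = {
    "jsonrpc": "2.0",
    "error": {"code": -32601, "message": "Method not found"},
    "id": 1,
}

print(json.dumps(request))
```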

License: Apache License 2.0 or MIT License

Tags: RPC

Properties: methods, parameters, identifier, results, errors, codes, messages, data

Website: https://www.jsonrpc.org/

Forum: https://groups.google.com/g/json-rpc

33 - JSON Schema

JSON Schema is a vocabulary for annotating and validating JSON documents. It defines the structure, content, and constraints of data—often authored in either JSON or YAML—and can be leveraged by documentation generators, validators, and other tooling.

Annotating and validating JSON artifacts.

JSON Schema is a vocabulary for annotating and validating JSON documents. It defines the structure, content, and constraints of data—often authored in either JSON or YAML—and can be leveraged by documentation generators, validators, and other tooling.

The specification traces back to early proposals by Kris Zyp in 2007 and has evolved through draft-04, draft-06, and draft-07 to the current 2020-12 release.

JSON Schema provides a rich set of keywords—such as title, description, type, properties, required, additionalProperties, minimum, maximum, exclusiveMinimum, exclusiveMaximum, default, enum, pattern, items, allOf, anyOf, oneOf, not, examples, and $ref—to describe and validate data used in business operations.
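
Here is a small schema using several of those keywords, validated with the third-party jsonschema package (assumed installed via pip install jsonschema); the product data is a placeholder:

```python
from jsonschema import validate, ValidationError

# A schema describing a product object with required fields and constraints.
schema = {
    "title": "Product",
    "type": "object",
    "required": ["id", "name"],
    "properties": {
        "id": {"type": "integer", "minimum": 1},
        "name": {"type": "string"},
        "price": {"type": "number", "exclusiveMinimum": 0},
        "tags": {"type": "array", "items": {"type": "string"}},
    },
    "additionalProperties": False,
}

try:
    validate(instance={"id": 1, "name": "Widget", "price": 9.99}, schema=schema)
    print("valid")
except ValidationError as exc:
    print("invalid:", exc.message)
```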

To get involved with the community, visit the JSON Schema GitHub organization, subscribe to the blog via RSS, join discussions and meetings in the Slack workspace, and follow updates on LinkedIn.

JSON Schema is a foundational standard used by many other specifications, tools, and services. It’s the workhorse for defining and validating the digital data that keeps modern businesses running.

License: Academic Free License version 3.0

Tags: Schema, Validation

Properties: schema, title, description, type, properties, required, additionalProperties, minimum, maximum, exclusiveMinimum, exclusiveMaximum, default, enum, pattern, items, allOf, anyOf, oneOf, not, examples, and $ref

Website: https://json-schema.org

34 - JSON-LD

JSON-LD (JavaScript Object Notation for Linking Data) is a W3C standard for expressing linked data in JSON. It adds lightweight semantics to ordinary JSON so machines can understand what the data means, not just its shape—by mapping keys to globally unique identifiers (IRIs) via a @context. Common features include @id (identity), @type (class), and optional graph constructs (@graph).

Introducing semantics into JSON so machines can understand meaning.

JSON-LD (JavaScript Object Notation for Linking Data) is a W3C standard for expressing linked data in JSON. It adds lightweight semantics to ordinary JSON so machines can understand what the data means, not just its shape—by mapping keys to globally unique identifiers (IRIs) via a @context. Common features include @id (identity), @type (class), and optional graph constructs (@graph).
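
A quick sketch of those features in Python, mapping ordinary JSON keys to schema.org terms via a @context; the person described is a placeholder:

```python
import json

# Ordinary-looking JSON with a @context that maps each key to a globally
# unique IRI, plus @id and @type for identity and class.
doc = {
    "@context": {
        "name": "http://schema.org/name",
        "homepage": {"@id": "http://schema.org/url", "@type": "@id"},
    },
    "@id": "https://example.com/people/jane",
    "@type": "http://schema.org/Person",
    "name": "Jane Example",
    "homepage": "https://example.com/",
}

print(json.dumps(doc, indent=2))
```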

Properties: base, containers, context, direction, graph, imports, included, language, lists, nests, prefixes, propagate, protected, reverse, set, types, values, versions, and vocabulary

Website: https://json-ld.org/

35 - Model Context Protocol (MCP)

MCP (Model Context Protocol) is an open protocol that standardizes how applications provide context to large language models (LLMs). It offers a consistent way to connect AI models to diverse data sources and tools, enabling agents and complex workflows that link models to the outside world.

Allowing applications to connect to large language models (LLMs).

MCP (Model Context Protocol) is an open protocol that standardizes how applications provide context to large language models (LLMs). It offers a consistent way to connect AI models to diverse data sources and tools, enabling agents and complex workflows that link models to the outside world.

Introduced by Anthropic as an open-source effort, MCP addresses the challenge of integrating AI models with external tools and data. It aims to serve as a universal “USB port” for AI, allowing models to access real-time information and perform actions.

MCP defines concepts and properties such as hosts, clients, servers, protocol negotiation, lifecycle, transports, authorization, resources, prompts, tools, sampling, roots, elicitation, progress, cancellation, errors, and logging—providing a standardized approach to connecting applications with LLMs.
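
Because MCP messages travel over JSON-RPC 2.0 (see the standards listed below), a tool interaction can be sketched roughly as follows in Python; the tool name and arguments are placeholders, and the specification is the authority on exact method names and fields:

```python
import json

# Rough sketch of MCP-style tool discovery and invocation as JSON-RPC 2.0
# messages; "get_weather" and its arguments are placeholders.
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Lisbon"},
    },
}

print(json.dumps(call_tool, indent=2))
```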

The MCP community organizes around a GitHub repository (with issues and discussions), plus a Discord, blog, and RSS feed to track updates and changes to the specification.

MCP is seeing growing adoption among API and tooling providers for agent interactions. Many related API/AI specifications reference, integrate with, or overlap with MCP—despite the project being an open-source protocol that is currently stewarded by a single company and has not been contributed to a foundation.

Owner: Anthropic

License: MIT License

Tags: agents, workflows

Properties: hosts, clients, servers, protocols, negotiation, lifecycle, transports, authorization, resources, prompts, tools, sampling, roots, elicitation, progress, cancellation, errors, logging

Website: https://modelcontextprotocol.io/

Standards: JSON-RPC 2.0, JSON Schema

36 - NLWeb

NLWeb simplifies building conversational interfaces for websites. It natively supports the Model Context Protocol (MCP), allowing the same natural-language APIs to serve both humans and AI agents.

Conversational interfaces with semantics for websites.

NLWeb simplifies building conversational interfaces for websites. It natively supports the Model Context Protocol (MCP), allowing the same natural-language APIs to serve both humans and AI agents.

Schema.org and related semi-structured formats like RSS—used by over 100 million websites—have become not only de facto syndication mechanisms but also a semantic layer for the web. NLWeb leverages these standards to make natural-language interfaces easier to implement.

NLWeb provides features such as queries, site scoping, previous-query context, decontextualized queries, streaming, modes, scoring, schemas, summarization, generation, prompts, and authentication—streamlining conversational experiences for both people and agents.

A public GitHub repository with issues and discussions supports the specification’s evolution and keeps the community engaged.

NLWeb has announced launch partners and has received press coverage. Providers including Cloudflare and Snowflake are beginning to adopt and advocate for the specification, highlighting its potential as the web evolves alongside AI.

GitHub Repository: https://github.com/nlweb-ai/NLWeb

37 - OAuth 2.0

OAuth 2.0 allows users to grant applications secure, limited access to their data without sharing their passwords.

Allow users to grant access to their applications.

OAuth 2.0 is an industry-standard protocol that enables secure, delegated access to APIs without requiring users to share their passwords with applications. Instead of handing over credentials, a user authorizes a trusted identity provider—such as Google, Microsoft, or an enterprise login system—to issue short-lived access tokens to a client application. These tokens define who is allowed to access what and for how long. By separating authentication (verifying identity) from authorization (granting specific permissions), OAuth 2.0 provides a flexible, scalable way for web, mobile, and server applications to safely interact with protected resources while maintaining strong security and user control.
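
A minimal sketch of the authorization code flow in Python (standard library only); the endpoints, client identifiers, and scopes are placeholders for a real authorization server:

```python
from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = "https://auth.example.com/authorize"  # placeholder endpoint

# 1. Send the user to the authorization server to grant access.
authorize_url = AUTHORIZE_ENDPOINT + "?" + urlencode({
    "response_type": "code",
    "client_id": "my-client-id",
    "redirect_uri": "https://app.example.com/callback",
    "scope": "profile email",
    "state": "xyz123",
})
print(authorize_url)

# 2. After the user approves, the server redirects back with ?code=...;
#    the client then POSTs these parameters to the token endpoint and
#    receives a short-lived access token in return.
token_request = {
    "grant_type": "authorization_code",
    "code": "AUTHORIZATION_CODE_FROM_REDIRECT",
    "redirect_uri": "https://app.example.com/callback",
    "client_id": "my-client-id",
    "client_secret": "my-client-secret",
}
```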

License: Simplified BSD License

Tags: Authentication, Authorization, Security

Properties: Client Id, Client Secret, Redirect Uri, Scope, Response Type, Grant Type, Code, State, Access Token, Refresh Token, Token Type, Expires In

Website: https://oauth.net/2/

38 - Open Collections

A modern, developer-first specification pioneered by Bruno for defining and sharing API collections. Designed for simplicity and collaboration.

Open-source collection format.

The OpenCollection Specification is a format for describing API collections, including requests, authentication, variables, and scripts. This specification enables tools to understand and work with API collections in a standardized way.

License: Apache License

Tags: Collections

Website: https://www.opencollection.com/

39 - Open Policy Agent (OPA)

OPA (Open Policy Agent) is a general-purpose policy engine that unifies policy enforcement across your stack—improving developer velocity, security, and auditability. It provides a high-level, declarative language (Rego) for expressing policies across a wide range of use cases.

Unifies policy enforcement for authentication, security, and auditability.

OPA (Open Policy Agent) is a general-purpose policy engine that unifies policy enforcement across your stack—improving developer velocity, security, and auditability. It provides a high-level, declarative language (Rego) for expressing policies across a wide range of use cases.

Originally developed at Styra in 2016, OPA was donated to the Cloud Native Computing Foundation (CNCF) in 2018 and graduated in 2021.

Rego includes rules and rulesets, unit tests, functions and built-ins, reserved keywords, conditionals, comprehensions/iterations, lookups, assignment, and comparison/equality operators—giving you a concise, expressive way to author and validate policy.

You can contribute on GitHub, follow updates via the blog and its RSS feed, and join conversations in the community Slack and on the OPA LinkedIn page.

OPA works across platforms and operational layers, standardizing policy for key infrastructure such as Kubernetes, API gateways, Docker, CI/CD, and more. It also helps normalize policy across diverse data and API integration patterns used in application and agent automation.

License: Apache

Tags: Policies, Authentication, Authorization

Properties: rules, language, tests, functions, reserved names, grammar, conditionals, iterations, lookups, assignment, equality

Website: https://www.openpolicyagent.org/

40 - OpenAI Model Spec

The Model Spec outlines the intended behavior for the models that power OpenAI’s products, including the API platform. Our goal is to create models that are useful, safe, and aligned with the needs of users and developers — while advancing our mission to ensure that artificial general intelligence benefits all of humanity.

To realize this vision, we need to:

  • Iteratively deploy models that empower developers and users.
  • Prevent our models from causing serious harm to users or others.
  • Maintain OpenAI’s license to operate by protecting it from legal and reputational harm.

These goals can sometimes conflict, and the Model Spec helps navigate these trade-offs by instructing the model to adhere to a clearly defined chain of command.

We are training our models to align to the principles in the Model Spec. While the public version of the Model Spec may not include every detail, it is fully consistent with our intended model behavior. Our production models do not yet fully reflect the Model Spec, but we are continually refining and updating our systems to bring them into closer alignment with these guidelines.

The Model Spec is just one part of our broader strategy for building and deploying AI responsibly. It is complemented by our usage policies, which outline our expectations for how people should use the API and ChatGPT, as well as our safety protocols, which include testing, monitoring, and mitigating potential safety issues.

By publishing the Model Spec, we aim to increase transparency around how we shape model behavior and invite public discussion on ways to improve it. Like our models, the spec will be continuously updated based on feedback and lessons from serving users across the world. To encourage wide use and collaboration, the Model Spec is dedicated to the public domain and marked with the Creative Commons CC0 1.0 deed.

License: Creative Commons CC0 1.0

Tags: Large Language Models, Artificial Intelligence

Website: https://model-spec.openai.com/2025-10-27.html

41 - OpenAPI

The OpenAPI Specification (OAS) is a formal standard for describing HTTP APIs. It enables teams to understand how an API works and how multiple APIs interoperate, generate client code, create tests, apply design standards, and more.

Describing the surface area of HTTP APIs and Webhooks.

The OpenAPI Specification (OAS) is a formal standard for describing HTTP APIs. It enables teams to understand how an API works and how multiple APIs interoperate, generate client code, create tests, apply design standards, and more.

OpenAPI was formerly known as Swagger. In 2015, SmartBear donated the specification to the Linux Foundation, establishing the OpenAPI Initiative (OAI) and a formal, community-driven governance model that anyone can participate in.

An OpenAPI document can be written in JSON or YAML and typically defines elements such as: Info, Contact, License, Servers, Components, Paths and Operations, Parameters, Request Bodies, Media Types and Encoding, Responses, Callbacks, Examples, Links, Headers, Tags, Schemas, and Security.
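
For illustration, here is a minimal OpenAPI 3.0 description covering a few of those elements, sketched as a Python dictionary; the API itself is a placeholder:

```python
import json

# Minimal OpenAPI 3.0 description: info, one path/operation, a response,
# and a reusable schema under components.
openapi_doc = {
    "openapi": "3.0.3",
    "info": {"title": "Widget API", "version": "1.0.0"},
    "paths": {
        "/widgets": {
            "get": {
                "summary": "List widgets",
                "responses": {
                    "200": {
                        "description": "A list of widgets",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "array",
                                    "items": {"$ref": "#/components/schemas/Widget"},
                                }
                            }
                        },
                    }
                },
            }
        }
    },
    "components": {
        "schemas": {
            "Widget": {
                "type": "object",
                "properties": {"id": {"type": "string"}, "name": {"type": "string"}},
            }
        }
    },
}

print(json.dumps(openapi_doc, indent=2))
```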

OpenAPI has an active GitHub organization, blog, LinkedIn page, and Slack channel to encourage community participation. In addition, OAI membership helps fund projects and events that drive awareness and adoption.

The OpenAPI Specification can be used alongside two other OAI specifications: (1) the Arazzo specification for defining API-driven workflows, and (2) OpenAPI Overlays, which allow additional information to be overlaid onto an OpenAPI document.

License: Apache

Tags: HTTP APIs, Webhooks

Properties: Info, Contact, License, Servers, Components, Paths and Operations, Parameters, Request Bodies, Media Types and Encoding, Responses, Callbacks, Examples, Links, Headers, Tags, Schemas, and Security

Website: https://www.openapis.org

42 - OpenAPI Overlays

The Overlay Specification is an auxiliary standard that complements the OpenAPI Specification. An OpenAPI description defines API operations, data structures, and metadata—the overall shape of an API. An Overlay lists a series of repeatable changes to apply to a given OpenAPI description, enabling transformations as part of your API workflows.

Define metadata, operations, and data structures for overlaying on top of OpenAPI.

The Overlay Specification is an auxiliary standard that complements the OpenAPI Specification. An OpenAPI description defines API operations, data structures, and metadata—the overall shape of an API. An Overlay lists a series of repeatable changes to apply to a given OpenAPI description, enabling transformations as part of your API workflows.

OpenAPI Overlays emerged from the need to adapt APIs for varied use cases, from improving developer experience to localizing documentation. The first version was recently released, and the roadmap is being developed within the OpenAPI Initiative.

The specification provides three constructs for augmenting an OpenAPI description: Info, Overlays, and Actions. How these are applied is being worked out across different tools and industries to accommodate the diversity of APIs being delivered.
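
As an illustration of how these constructs fit together, a small hypothetical Overlay document might look like the following; the targets and values are invented for the example and assume the OpenAPI description being modified has a /books path.

```yaml
overlay: 1.0.0
info:
  title: Add partner-portal descriptions
  version: 1.0.0
actions:
  - target: "$.info"
    update:
      description: Extended description published to the partner developer portal.
  - target: "$.paths['/books'].get"
    update:
      tags:
        - Books
```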

To get involved, participate via the GitHub repository, where you’ll find discussions, meeting notes, and related topics. There’s also a dedicated channel within the broader OpenAPI Initiative Slack.

OpenAPI Overlays offer a robust way to manage the complexity of producing and consuming APIs across industries, regions, and domains. As the specification matures, it presents a strong opportunity to ensure documentation, mocks, examples, code generation, tests, and other artifacts carry the right context for different situations.

License: Apache License

Tags: Overlays

Properties: info, overlays, and actions

Website: https://spec.openapis.org/overlay/v1.0.0.html

Standards: JSON Schema

43 - Postman Collections

A Postman Collection is a portable JSON artifact that organizes one or more API requests—plus their params, headers, auth, scripts, and examples—so you can run, share, and automate them in the Postman desktop or web client. Collections can include folders, collection- and environment-level variables, pre-request and test scripts, examples, mock server definitions, and documentation.

Executable artifact for automating API requests and responses for testing.

A Postman Collection is a portable JSON artifact that organizes one or more API requests—plus their params, headers, auth, scripts, and examples—so you can run, share, and automate them in the Postman desktop or web client. Collections can include folders, collection- and environment-level variables, pre-request and test scripts, examples, mock server definitions, and documentation.
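
For illustration, a minimal collection in the v2.1.0 format might look like the following sketch; the request, variable, and test names are invented for the example.

```json
{
  "info": {
    "name": "Bookstore API smoke tests",
    "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
  },
  "variable": [
    { "key": "baseUrl", "value": "https://api.example.com/v1" }
  ],
  "item": [
    {
      "name": "List books",
      "request": {
        "method": "GET",
        "header": [
          { "key": "Accept", "value": "application/json" }
        ],
        "url": "{{baseUrl}}/books"
      },
      "event": [
        {
          "listen": "test",
          "script": {
            "type": "text/javascript",
            "exec": [
              "pm.test('status is 200', () => pm.response.to.have.status(200));"
            ]
          }
        }
      ]
    }
  ]
}
```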

Postman Collections started as a simple way to save and share API requests in the early Postman client (2013), then grew into a formal JSON format with the v1 schema published in 2015. The format then stabilized as v2.0.0 and shortly after as v2.1.0 in 2017, which remains the common export/import version today.

Owner: Postman

License: Apache 2.0

Properties: Metadata, Requests, Scripts, Variables, Authentication, Methods, Headers, URLs, Bodies, Events, Responses

Website: https://postman.com

44 - Protocol Buffers

Protocol Buffers (protobuf) are Google’s language-neutral, platform-neutral way to define structured data and serialize it efficiently (small, fast). You write a schema in a .proto file, generate code for your language (Go, Java, Python, JS, etc.), and use the generated classes to read/write binary messages.

Fast binary serialized structured data.

Protocol Buffers (protobuf) are Google’s language-neutral, platform-neutral way to define structured data and serialize it efficiently (small, fast). You write a schema in a .proto file, generate code for your language (Go, Java, Python, JS, etc.), and use the generated classes to read/write binary messages.
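
As a small sketch of what that looks like in practice, the following hypothetical .proto file defines a message, an enum, and a service; the names are illustrative only.

```proto
// books.proto -- an illustrative schema, not from any real service
syntax = "proto3";

package example.bookstore;

// A single book record, serialized as a compact binary message.
message Book {
  string id = 1;
  string title = 2;
  repeated string authors = 3;  // zero or more authors
  int32 page_count = 4;

  enum Format {
    FORMAT_UNSPECIFIED = 0;
    HARDCOVER = 1;
    PAPERBACK = 2;
    EBOOK = 3;
  }
  Format format = 5;
}

message GetBookRequest {
  string id = 1;
}

// A service definition that code generators can turn into client and server stubs.
service BookService {
  rpc GetBook (GetBookRequest) returns (Book);
}
```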

Protocol Buffers began inside Google in the early 2000s as an internal, compact, schema-driven serialization format; in 2008 Google open-sourced it as proto2. Most recently in 2023, Google introduced “Protobuf Editions” to evolve semantics without fragmenting the language into proto2 vs. proto3, while the project continues to refine tooling, compatibility guidance, and release processes across a broad open-source community.

Owner: Google

License: BSD-3-Clause License

Tags: Schema, Data, Binary, Serialization

Properties: messages, types, fields, cardinality, comments, reserved values, scalars, defaults, enumerations, nested types, binary, unknown fields, oneof, maps, packages, and services

Website: https://protobuf.dev/

45 - Resource Description Framework (RDF)

RDF is a standard model for data interchange on the Web. RDF has features that facilitate data merging even if the underlying schemas differ, and it specifically supports the evolution of schemas over time without requiring all the data consumers to be changed.

Facilitate the interchange of data on the web using a standardized model.

RDF is a standard model for data interchange on the Web. RDF has features that facilitate data merging even if the underlying schemas differ, and it specifically supports the evolution of schemas over time without requiring all the data consumers to be changed.

RDF extends the linking structure of the Web to use URIs to name the relationship between things as well as the two ends of the link (this is usually referred to as a “triple”). Using this simple model, it allows structured and semi-structured data to be mixed, exposed, and shared across different applications.

This linking structure forms a directed, labeled graph, where the edges represent the named link between two resources, represented by the graph nodes. This graph view is the easiest possible mental model for RDF and is often used in easy-to-understand visual explanations.
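
For example, a few triples written in Turtle (one of several RDF serializations) might look like this; the example.org identifiers are invented, while foaf refers to the widely used FOAF vocabulary.

```turtle
@prefix ex:   <https://example.org/ns#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# Each statement is a triple: subject, predicate, object.
ex:alice  foaf:name   "Alice Example" .
ex:alice  foaf:knows  ex:bob .
ex:bob    foaf:name   "Bob Example" .
```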

The Resource Description Framework (RDF) emerged in the late 1990s from W3C efforts to describe web resources with machine-readable metadata. RDF became a W3C Recommendation in 1999, followed by RDF Schema (RDFS) in 2000 to provide basic vocabularies and typing, with a major revision in 2004 that clarified the abstract data model and semantics, stabilizing the notion of triples and graphs.

Owner: W3C

License: W3C Document License

Tags: Interchange, Schema, Semantics

Properties: Triples, Graphs, Identifiers, Schema, Semantics, Serializations, Queries, Inferencing, Reification, Annotations, Comments, Domains

Website: https://www.w3.org/RDF/

46 - Robots Exclusion Protocol

The Robots Exclusion Protocol (REP) is a web standard that lets site owners tell automated crawlers which parts of a site should or shouldn’t be accessed by publishing a plain-text robots.txt file at the site root (e.g., /robots.txt). It uses directives like User-agent, Disallow, and Allow (plus nonstandard ones such as Crawl-delay) to set per-crawler rules; compliance is voluntary rather than legally enforceable.

Allows website owners to tell automated crawlers what they can crawl.

The Robots Exclusion Protocol (REP) is a web standard that lets site owners tell automated crawlers which parts of a site should or shouldn’t be accessed by publishing a plain-text robots.txt file at the site root (e.g., /robots.txt). It uses directives like User-agent, Disallow, and Allow (plus nonstandard ones such as Crawl-delay) to set per-crawler rules; compliance is voluntary rather than legally enforceable. REP is separate from page-level controls like the robots meta tag or X-Robots-Tag header, which govern indexing/serving behavior rather than crawl access.
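
A hypothetical robots.txt illustrating these directives might look like the following; the crawler name and paths are invented for the example.

```text
# Served from https://www.example.com/robots.txt
User-agent: *
Disallow: /admin/
Allow: /admin/public/

# Rules for a specific (fictional) crawler
User-agent: ExampleBot
Disallow: /

Sitemap: https://www.example.com/sitemap.xml
```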

The Robots Exclusion Protocol began in early 1994 when Martijn Koster proposed a simple “robots.txt” convention on the www-talk list to stop ill-behaved crawlers from overloading sites, leading to a community consensus document on June 30, 1994 and rapid adoption by early search engines. Over the years it remained a de facto standard (documented at robotstxt.org) and was interpreted similarly by major engines, with Microsoft, Yahoo, and Google coordinating on consistent behavior by 2008. In July 2019 Google—working with Koster and others—pushed to formalize REP at the IETF, resulting in an Internet-Draft and, ultimately, the official specification as RFC 9309 published on September 12, 2022.

License: IETF Trust Legal Provisions (TLP)

Tags: Robots, Crawling

Properties: User-agent, Disallow, Allow, Sitemap, Crawl-delay, Host, Clean-param, Request-rate, Visit-time, noindex, nofollow.

Website: https://www.rfc-editor.org/rfc/rfc9309.html

47 - Schema.org

Schema.org is a collaborative, community-driven vocabulary (launched in 2011 by Google, Microsoft, Yahoo!, and Yandex) that defines shared types and properties to describe things on the web—people, places, products, events, and more—so search engines and other consumers can understand page content.

Community-driven schema vocabulary for people, places, and things.

Schema.org is a collaborative, community-driven vocabulary that defines shared types and properties to describe things on the web—people, places, products, events, and more—so search engines and other consumers can understand page content. Publishers annotate pages using formats like JSON-LD (now the common choice), Microdata, or RDFa to express this structured data, which enables features such as rich results, knowledge panels, and better content discovery. The project maintains core and extension vocabularies, evolves through open proposals and discussion, and focuses on practical, interoperable semantics rather than being tied to a single standard body.
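
For illustration, a page describing a book might embed a JSON-LD block like the following; the book, author, and publisher values are invented, while the types and properties (Book, Person, Organization, name, author, datePublished, publisher) come from the Schema.org vocabulary.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Book",
  "name": "An Example Field Guide to APIs",
  "author": { "@type": "Person", "name": "Alice Example" },
  "datePublished": "2024-05-01",
  "publisher": { "@type": "Organization", "name": "Example Press" }
}
</script>
```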

License: Creative Commons Attribution-ShareAlike License (CC BY-SA 3.0)

Tags: Schema

Properties: schema

Website: https://schema.org/g/latest/

48 - Smithy

Modeling a service should be easy, no matter the interface. Smithy is extensible, typesafe, protocol agnostic, and powers services at AWS.

Open-source interface definition language for modeling services.

Build APIs your customers will love using the Smithy Interface Definition Language (IDL).

The Smithy IDL provides an intuitive syntax that codifies best practices learned from years of experience building services and SDKs in over a dozen programming languages.
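
As a rough sketch of the IDL, the following hypothetical model defines a service with one HTTP operation; the namespace, shapes, and URI are invented for the example.

```smithy
$version: "2.0"

namespace example.bookstore

/// A small illustrative service.
service Bookstore {
    version: "2024-01-01"
    operations: [GetBook]
}

/// Returns a single book by its identifier.
@readonly
@http(method: "GET", uri: "/books/{bookId}")
operation GetBook {
    input := {
        @required
        @httpLabel
        bookId: String
    }
    output := {
        title: String
        pageCount: Integer
    }
}
```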

Use Smithy’s extensible model validation tools to ensure the quality and consistency of your APIs. Customizable linting, validation, and backwards-compatibility checks integrate with your IDE and CI/CD pipelines so you catch API quality issues before your customers do.

Smithy’s build tool integrations and plugin system make it easy to get started generating code from a Smithy model. Use one of the many open-source plugins for Smithy or create your own to make everything from model diagrams to SDKs.

Write your API model once and generate clients, servers, and documentation for multiple programming languages with Smithy’s CLI.

License: Apache License

Tags: Collections

Website: https://smithy.io/

49 - Spectral

Spectral is an open-source API linter for enforcing style guides and best practices across JSON Schema, OpenAPI, and AsyncAPI documents. It helps teams ensure consistency, quality, and adherence to organizational standards in API design and development.

Enforcing style guides across JSON artifacts to govern schema.

Spectral is an open-source API linter for enforcing style guides and best practices across JSON Schema, OpenAPI, and AsyncAPI documents. It helps teams ensure consistency, quality, and adherence to organizational standards in API design and development.

While Spectral is a tool, its rules format is increasingly treated as a de facto standard. Spectral traces its roots to Speccy, an API linting engine created by Phil Sturgeon at WeWork. Phil later brought the concept to Stoplight, where Spectral and the next iteration of the rules format were developed; Stoplight was subsequently acquired by SmartBear.

With Spectral, you define rules and rulesets using properties such as given, then, description, message, severity, formats, recommended, and resolved. These can be applied to any JSON or YAML artifact, with primary adoption to date around OpenAPI and AsyncAPI.
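
For example, a small hypothetical ruleset that extends Spectral's built-in OpenAPI rules might look like this; the rule name and message are invented for illustration.

```yaml
# .spectral.yaml
extends: ["spectral:oas"]
rules:
  operation-must-have-description:
    description: Every operation should explain what it does.
    message: "{{path}} is missing a description."
    severity: warn
    given: "$.paths[*][get,put,post,delete,options,head,patch,trace]"
    then:
      field: description
      function: truthy
```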

The project’s GitHub repository hosts active issues and discussions, largely focused on the CLI. Development continues under SmartBear, including expanding how rules are applied across API operations and support for Arazzo workflow use cases.

Most commonly, Spectral is used to lint and govern OpenAPI and AsyncAPI specifications during design and development. It is expanding into Arazzo workflows and can be applied to any standardized JSON or YAML artifact validated with JSON Schema—making it a flexible foundation for governance across the API lifecycle.

License: Apache

Tags: Rules, Governance

Properties: rules, rulesets, given, then, description, message, severity, formats, recommended, and resolved properties

GitHub: https://github.com/stoplightio/spectral

Standards: JSON Schema

50 - Universal Tool Calling Protocol (UTCP)

UTCP is a lightweight, secure, and scalable standard that enables AI agents and applications to discover and call tools directly using their native protocols - no wrapper servers required.

Enabling AI agents to discover and call tools directly over their native protocols.

UTCP (Universal Tool Calling Protocol) is a lightweight standard that enables AI agents to discover and directly call tools, APIs, and services using their native protocols, without requiring intermediary wrapper servers or additional infrastructure. It works by providing a standardized “manual” that describes how to interact with your tools, similar to how OpenAPI documents APIs for human developers, but enhanced with agent-focused features like categorization tags and multi-protocol support (HTTP, CLI, gRPC, MCP). The key innovation is that agents can read these manuals to understand how to call your existing APIs directly, with their original authentication and security mechanisms intact. This eliminates the latency, complexity, and maintenance burden of traditional middleware approaches while making any human-callable API accessible to AI systems.
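
As a rough conceptual sketch only (the field names below are illustrative and are not the normative UTCP manual schema), a provider might publish a manual that tells an agent how to call an existing HTTP endpoint directly:

```json
{
  "tools": [
    {
      "name": "get_book",
      "description": "Retrieve a book record from the existing bookstore API.",
      "tags": ["catalog", "read-only"],
      "inputs": {
        "type": "object",
        "properties": { "bookId": { "type": "string" } },
        "required": ["bookId"]
      },
      "call": {
        "protocol": "http",
        "method": "GET",
        "url": "https://api.example.com/v1/books/{bookId}",
        "auth": "existing API key header, passed through unchanged"
      }
    }
  ]
}
```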

License: Apache 2.0

Tags: Agents

Properties: Auth, Data, Exceptions, Implementations, Interfaces, Plugins, Clients

Website: https://www.utcp.io/

51 - XML

XML (eXtensible Markup Language) is a text-based, Unicode-friendly format for representing structured data using nested elements (tags) and attributes, making documents both human- and machine-readable. It’s “extensible” because you define your own vocabulary (element and attribute names), organize data hierarchically, and use namespaces to avoid naming collisions.

Text-based unicode-friendly format for representing structured data.

XML (eXtensible Markup Language) is a text-based, Unicode-friendly format for representing structured data using nested elements (tags) and attributes, making documents both human- and machine-readable. It’s “extensible” because you define your own vocabulary (element and attribute names), organize data hierarchically, and use namespaces to avoid naming collisions. XML supports validation with DTDs or XML Schema (XSD), and a rich toolset—XPath for querying, XSLT for transformation, DOM/SAX for parsing—though it’s more verbose than alternatives like JSON. Common uses include configuration files, document publishing, and system-to-system data exchange.
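
For illustration, a small XML document using an invented vocabulary and a namespace prefix might look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- An illustrative catalog; the element names are invented for the example. -->
<catalog xmlns:bk="https://example.org/books">
  <bk:book id="b1">
    <bk:title>An Example Field Guide to APIs</bk:title>
    <bk:author>Alice Example</bk:author>
    <bk:published>2024-05-01</bk:published>
  </bk:book>
</catalog>
```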

XML grew out of SGML (ISO 8879:1986) in the mid-1990s as a simpler, web-friendly subset led at the W3C by Jon Bosak and others; the XML 1.0 spec became a W3C Recommendation in February 1998 (with a minor 1.1 revision in 2004). Around it, a family of standards emerged—Namespaces (1999), XPath and XSLT 1.0 (1999), DOM levels (late 1990s–2000s), XML Schema 1.0 (2001), and related formats like SVG (2001), SOAP (2000/2003), RSS/Atom (early 2000s), and office document formats (ODF, OOXML). XML powered early web services, configuration, publishing, and data interchange across enterprises; while JSON later became dominant for browser-centric APIs, XML remains entrenched in many protocols, document workflows, and enterprise systems.

License: W3C Document License

Tags: Data Formats

Properties: Namespaces, Infosets, Tree Models, Character Data Forms, Instructions, Links, Declarations

Website: https://www.w3.org/TR/xml/

52 - YAML

YAML (“YAML Ain’t Markup Language”) is a human-friendly data serialization format used for configuration and data exchange, built around indentation to express structure (mappings/objects, sequences/arrays, and scalars). It supports comments (#), multi-document streams (---), anchors/aliases for reuse (&id, *id), and optional type tags.

Human-friendly data serialization format for data exchange.

YAML (“YAML Ain’t Markup Language”) is a human-friendly data serialization format used for configuration and data exchange, built around indentation to express structure (mappings/objects, sequences/arrays, and scalars). It supports comments (#), multi-document streams (---), anchors/aliases for reuse (&id, *id), and optional type tags. YAML 1.2 aligns closely with JSON (JSON is a subset of YAML 1.2), but YAML is more concise and readable for humans—hence its popularity in app configs, CI/CD pipelines, Kubernetes manifests, and Docker Compose—while also being sensitive to whitespace and, historically, to schema ambiguities (e.g., 1.1 vs 1.2 boolean parsing).
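
A short illustrative example showing mappings, sequences, comments, anchors/aliases, and a multi-document stream (the service names and URLs are invented):

```yaml
defaults: &service_defaults   # anchor (&) marks a reusable node
  retries: 3
  timeout_seconds: 30

services:
  - name: catalog-api
    url: https://api.example.com/catalog
    settings: *service_defaults   # alias (*) reuses the anchored node
  - name: checkout-api
    url: https://api.example.com/checkout
    settings: *service_defaults
---
# A second document in the same stream, separated by ---
environment: staging
```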

YAML was introduced in 2001 by Clark Evans (with Oren Ben-Kiki and Ingy döt Net) as a human-friendly data serialization format inspired by scripting languages and intended to be simpler than XML for configs and data exchange. The 1.1 spec (mid-2000s) broadened typing via schemas but also introduced notorious ambiguities (e.g., unquoted strings like “on”/“off” parsing as booleans). YAML 1.2 (2009) realigned the language with JSON—making JSON a subset of YAML—and clarified many edge cases, after which YAML gained wide adoption across developer tooling and ops: Rails configs, Ansible playbooks, Docker Compose files, CI/CD pipelines, and, most visibly, Kubernetes manifests in the 2010s. The spec continues to be maintained under the 1.2 line with incremental clarifications and errata.

License: MIT License

Tags: Data Formats

Properties: Scalars, Comments, Documents, Streams, Anchors, Aliases, Tags, Types

Website: https://yaml.org/