HTTP Standards

The HTTP (Hypertext Transfer Protocol) standard is an application-layer protocol that defines how messages are formatted and transmitted between web browsers and servers, forming the foundation of data communication on the World Wide Web.

HTTP 1.1

HTTP (Hypertext Transfer Protocol) is a stateless application-layer protocol that defines how web clients and servers format and exchange requests and responses over the internet.
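
As an illustration, a minimal HTTP/1.1 exchange looks like this on the wire (host, path, and payload are placeholders):

```http
GET /users/1 HTTP/1.1
Host: api.example.com
Accept: application/json

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 26

{"id": 1, "name": "Alice"}
```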

Read more

HTTP/2

HTTP/2 is a binary, multiplexed version of HTTP that uses streams, header compression (HPACK), and optional server push to reduce latency and improve performance over a single TCP connection.

Read more

HTTP/3

HTTP/3 is the latest HTTP version that runs over QUIC (on UDP), providing multiplexed streams with built-in TLS 1.3 and connection migration to avoid TCP head-of-line blocking and improve performance.

Read more

Schema Formats

JSON Schema is central to everything here, playing a key role not only in defining schemas but also in validating the other specifications listed on this page.

JSON Schema

JSON Schema is a vocabulary for annotating and validating JSON documents. It defines the structure, content, and constraints of data—often authored in either JSON or YAML—and can be leveraged by documentation generators, validators, and other tooling.
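
A minimal sketch of a JSON Schema constraining a hypothetical user object (the property names are illustrative):

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "id": { "type": "integer" },
    "email": { "type": "string", "format": "email" }
  },
  "required": ["id", "email"]
}
```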

Read more

Data Formats

Data formats are standardized ways of encoding and structuring information as text (CSV, JSON, XML, YAML) so that software can store, transmit, and interpret it consistently, with schemas used for validation.

CSV

CSV (Comma-Separated Values) is a simple text format for storing tabular data where each line represents a row and values within rows are separated by commas (or other delimiters). CSV is a common import and export format for spreadsheets, making it a ubiquitous data format.
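
For example, a small CSV file with a header row and two records (columns are illustrative):

```csv
id,name,email
1,Alice,alice@example.com
2,Bob,bob@example.com
```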

Read more

JSON

JSON (JavaScript Object Notation) is a lightweight, text-based data interchange format that uses human-readable syntax with key-value pairs, arrays, and primitive data types (strings, numbers, booleans, null) to represent structured data. JSON first gained traction as a way to reduce the size of data sent over the wire to mobile applications, and has since achieved much wider adoption.
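
A small illustrative JSON document showing an object with primitive values, an array, and a null:

```json
{
  "id": 1,
  "name": "Alice",
  "active": true,
  "roles": ["admin", "editor"],
  "manager": null
}
```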

Read more

XML

XML (eXtensible Markup Language) is a text-based, Unicode-friendly format for representing structured data using nested elements (tags) and attributes, making documents both human- and machine-readable. It’s “extensible” because you define your own vocabulary (element and attribute names), organize data hierarchically, and use namespaces to avoid naming collisions.
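
A comparable record expressed as illustrative XML, using nested elements and an attribute:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<user id="1">
  <name>Alice</name>
  <roles>
    <role>admin</role>
    <role>editor</role>
  </roles>
</user>
```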

Read more

YAML

YAML (“YAML Ain’t Markup Language”) is a human-friendly data serialization format used for configuration and data exchange, built around indentation to express structure (mappings/objects, sequences/arrays, and scalars). It supports comments (#), multi-document streams (---), anchors/aliases for reuse (&id, *id), and optional type tags.
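
An illustrative YAML document showing a comment, mappings, a sequence, and an anchor reused through aliases:

```yaml
# illustrative service configuration
defaults: &defaults
  timeout: 30
  retries: 3

environments:
  - name: staging
    settings: *defaults
  - name: production
    settings: *defaults
```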

Read more

HTML

HTML (HyperText Markup Language) is the standard markup language used to create and structure content on web pages, defining elements like headings, paragraphs, links, images, and forms through a system of tags that web browsers interpret and render as visual displays.
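
A minimal illustrative HTML page with a heading, a paragraph, and a link:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Example</title>
  </head>
  <body>
    <h1>Hello</h1>
    <p>A paragraph with a <a href="https://example.com">link</a>.</p>
  </body>
</html>
```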

Read more

Markdown

Markdown is a lightweight markup language that uses plain text formatting syntax (such as asterisks for emphasis, hashes for headings, and brackets for links) to create formatted documents that are easy to read in raw form and can be converted to HTML and other formats.
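
A short illustrative Markdown snippet using a heading, emphasis, a link, and a list:

```markdown
# Heading

Some *emphasized* and **bold** text, plus a [link](https://example.com).

- first item
- second item
```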

Read more

Binary Data Formats

Binary data formats encode information as compact, machine-readable byte sequences—often with schemas for type safety and fast serialization—to reduce size and speed I/O (e.g., Protocol Buffers, Avro, MessagePack, CBOR, Parquet, ORC).

Avro

Apache Avro is a data serialization system that provides compact binary encoding of structured data along with schema definitions, enabling efficient data exchange and storage with built-in schema evolution capabilities that allow data structures to change over time while maintaining compatibility between different versions.
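
Avro schemas are themselves written in JSON; a minimal sketch of a record schema (names are illustrative):

```json
{
  "type": "record",
  "name": "User",
  "namespace": "com.example",
  "fields": [
    { "name": "id", "type": "long" },
    { "name": "email", "type": ["null", "string"], "default": null }
  ]
}
```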

Read more

Parquet

Apache Parquet is a columnar storage file format designed for efficient data storage and retrieval in big data processing frameworks, optimizing for analytics workloads by storing data column-by-column rather than row-by-row, which enables compression, encoding, and query performance optimizations.
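
As a rough sketch of how Parquet is commonly used from code, the following writes and reads a small table with pandas, assuming pandas and pyarrow are installed (the file and column names are illustrative):

```python
import pandas as pd

# Build a small table and write it as a columnar Parquet file.
df = pd.DataFrame({"user_id": [1, 2, 3], "amount": [9.99, 14.50, 3.25]})
df.to_parquet("events.parquet")  # uses pyarrow (or fastparquet) under the hood

# Read back only the columns needed, a typical analytics access pattern.
amounts = pd.read_parquet("events.parquet", columns=["amount"])
print(amounts)
```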

Read more

Protocol Buffers

Protocol Buffers (protobuf) are Google’s language-neutral, platform-neutral way to define structured data and serialize it efficiently (small, fast). You write a schema in a .proto file, generate code for your language (Go, Java, Python, JS, etc.), and use the generated classes to read/write binary messages.
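
A minimal illustrative .proto schema; running protoc against it generates the language-specific classes mentioned above:

```proto
syntax = "proto3";

package example;

message User {
  int64 id = 1;
  string email = 2;
  repeated string roles = 3;
}
```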

Read more

Data Connectors

Database connectors are drivers/adapters that let applications talk to databases by implementing the database’s wire protocol or a standardized API, handling auth, connections, type mapping, queries, and results.

JDBC

JDBC (Java Database Connectivity) is a Java API that provides a standard interface for Java applications to connect to and interact with relational databases, allowing developers to execute SQL queries, retrieve results, and manage database transactions in a database-agnostic way.

Read more

ODBC

ODBC (Open Database Connectivity) is a standard API specification that provides a database-agnostic interface for applications to connect to and interact with various relational database management systems through database-specific drivers, enabling cross-platform database access without requiring application code changes.
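
A minimal sketch of using ODBC from Python through the pyodbc bindings, assuming an ODBC data source named MyDataSource has been configured (credentials and query are illustrative):

```python
import pyodbc

# Connect through a configured ODBC data source name (DSN); the driver
# behind the DSN determines which database is actually reached.
conn = pyodbc.connect("DSN=MyDataSource;UID=app_user;PWD=secret")
cursor = conn.cursor()
cursor.execute("SELECT id, email FROM users WHERE active = ?", True)
for row in cursor.fetchall():
    print(row.id, row.email)
conn.close()
```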

Read more

OpenAPI Standards

OpenAPI is the leading standard for describing the surface area of HTTP APIs, and it is expanding to support overlays that modify existing artifacts, as well as workflows using the Arazzo Specification.

OpenAPI

The OpenAPI Specification (OAS) is a formal standard for describing HTTP APIs. It enables teams to understand how an API works and how multiple APIs interoperate, generate client code, create tests, apply design standards, and more.
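
A minimal sketch of an OpenAPI 3.1 description covering a single hypothetical operation:

```yaml
openapi: 3.1.0
info:
  title: Users API
  version: 1.0.0
paths:
  /users/{userId}:
    get:
      operationId: getUser
      parameters:
        - name: userId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: A single user.
```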

Read more

Overlays

The Overlay Specification is an auxiliary standard that complements the OpenAPI Specification. An OpenAPI description defines API operations, data structures, and metadata—the overall shape of an API. An overlay describes a set of targeted changes to apply on top of that description, so updates can be layered in without modifying the original document.
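
A minimal sketch of an Overlay document that layers an update onto an existing OpenAPI description (the target expression and values are illustrative):

```yaml
overlay: 1.0.0
info:
  title: Add descriptions overlay
  version: 1.0.0
actions:
  - target: $.info
    update:
      description: A friendlier description applied on top of the original.
```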

Read more

Arazzo

The Arazzo Specification is a community-driven standard for defining a programming-language-agnostic way to express sequences of calls and the dependencies between them to achieve a specific outcome.
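
A rough sketch of an Arazzo document defining a two-step workflow against a hypothetical store API (source, operation IDs, and criteria are illustrative):

```yaml
arazzo: 1.0.0
info:
  title: Place an order
  version: 1.0.0
sourceDescriptions:
  - name: storeApi
    url: https://example.com/openapi.yaml
    type: openapi
workflows:
  - workflowId: placeOrder
    steps:
      - stepId: createOrder
        operationId: createOrder
        successCriteria:
          - condition: $statusCode == 201
      - stepId: getOrder
        operationId: getOrder
        successCriteria:
          - condition: $statusCode == 200
```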

Read more

AsyncAPI Standard

AsyncAPI emerged as a sister specification to OpenAPI, focusing on event-driven APIs and providing the ability to describe the surface area of Kafka, WebSockets, and other event-driven approaches to delivering APIs.

AsyncAPI

AsyncAPI is an open-source, protocol-agnostic specification for describing event-driven APIs and message-driven applications. It serves as the OpenAPI of the asynchronous, event-driven world—overlapping with, and often going beyond, what OpenAPI covers.
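
A minimal sketch of an AsyncAPI 2.6 document describing one channel and the message it carries (channel name and payload are illustrative):

```yaml
asyncapi: "2.6.0"
info:
  title: Account Service
  version: 1.0.0
channels:
  user/signedup:
    subscribe:
      message:
        payload:
          type: object
          properties:
            email:
              type: string
```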

Read more

Governance Policies

We use Spectral, Vacuum, and Open Policy Agent (OPA) to govern all of the schemas and APIs in use as part of delivering capabilities reliably via Naftiko Engines.

Spectral

Spectral is an open-source API linter for enforcing style guides and best practices across JSON Schema, OpenAPI, and AsyncAPI documents. It helps teams ensure consistency, quality, and adherence to organizational standards in API design and development.
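
A minimal sketch of a Spectral ruleset (.spectral.yaml) that extends the built-in OpenAPI rules and adds one illustrative requirement:

```yaml
extends: ["spectral:oas"]
rules:
  operation-must-have-description:
    description: Every operation needs a description.
    severity: error
    given: "$.paths[*][*]"
    then:
      field: description
      function: truthy
```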

Read more

Vacuum

Vacuum rules tell vacuum which checks to run against each specification and how to evaluate them, while a RuleSet is a style guide in which each rule is an individual requirement within the overall guide.

Read more

Open Policy Agent (OPA)

OPA (Open Policy Agent) is a general-purpose policy engine that unifies policy enforcement across your stack—improving developer velocity, security, and auditability. It provides a high-level, declarative language (Rego) for expressing policies across a wide range of use cases.
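
A minimal sketch of a Rego policy, assuming a recent OPA release, that denies by default and allows only GET requests to a hypothetical public path:

```rego
package httpapi.authz

import rego.v1

# Deny by default; allow only GET requests to the public path.
default allow := false

allow if {
    input.method == "GET"
    input.path == "/public"
}
```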

Read more

Artificial Intelligence Standards

There are many new standards emerging to support AI integration. These are the formats we currently support, with additional ones being evaluated as part of market research and added regularly.

Model Context Protocol (MCP)

MCP (Model Context Protocol) is an open protocol that standardizes how applications provide context to large language models (LLMs). It offers a consistent way to connect AI models to diverse data sources and tools, enabling agents and complex workflows that link models to the outside world.
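
MCP messages travel as JSON-RPC 2.0; a rough sketch of a client request invoking a hypothetical get_weather tool exposed by a server:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "city": "Paris" }
  }
}
```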

Read more

Agent2Agent (A2A)

The Agent2Agent (A2A) Protocol is an open standard for communication and interoperability among independent—often opaque—AI agent systems. Because agents may be built with different frameworks, languages, and vendors, A2A provides a common language and interaction model.

Read more

API Client Standards

Multiple derivatives of API standards have evolved over the years to support API clients and automation, providing competing formats that augment and complement existing API standards.

Postman Collections

A Postman Collection is a portable JSON artifact that organizes one or more API requests—plus their params, headers, auth, scripts, and examples—so you can run, share, and automate them in the Postman desktop or web client application. Collections can include folders, collection- and environment-level variables, pre-request and test scripts, examples, mock server definitions, and documentation.
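
A minimal sketch of a Postman Collection in the v2.1 format, containing a single request (names and URL are illustrative):

```json
{
  "info": {
    "name": "Users API",
    "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
  },
  "item": [
    {
      "name": "Get user",
      "request": {
        "method": "GET",
        "url": "https://api.example.com/users/1"
      }
    }
  ]
}
```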

Read more

Bruno Collections

Bruno collections are structured groups of API requests, variables, and environments used within the Bruno API client to help developers organize and manage their API workflows. Each collection acts as a self-contained workspace where you can store requests, define authentication, set environment values, document behaviors, and run tests.
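
Bruno stores each request as a plain-text .bru file inside the collection folder; a rough sketch of what one can look like (fields, variables, and URL are illustrative):

```text
meta {
  name: Get User
  type: http
  seq: 1
}

get {
  url: {{baseUrl}}/users/1
}
```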

Read more

Open Collections

Open Collections is an open, vendor-neutral format for organizing API requests, variables, and environments into portable collections, offering a community alternative to the client-specific formats maintained by Postman and Bruno.

Read more

API Client Environment Standards

Augmenting the API client standards above, each provider has its own environment standard for storing key/value pairs that can be used during testing and automation.

Postman Environments

Postman environments are a powerful feature that allows you to manage different sets of variables for your API testing and development workflow. An environment is essentially a named collection of key-value pairs (variables) that you can switch between depending on your context—such as development, staging, or production.
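
A minimal sketch of an exported Postman environment holding two variables (names and values are illustrative):

```json
{
  "name": "Staging",
  "values": [
    { "key": "baseUrl", "value": "https://staging.api.example.com", "enabled": true },
    { "key": "apiKey", "value": "replace-me", "enabled": true }
  ]
}
```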

Read more

Bruno Environments

A Bruno environment is a configurable set of key–value variables that allows you to run the same API requests across different deployment contexts, such as local development, staging, and production, storing values like base URLs, authentication tokens, headers, or other parameters that may change depending on where an API is being tested.

Read more

Authentication Standards

While API keys are the dominant pattern, there are two authentication standards at the top of the list when building Naftiko, which help secure the data and resources being made available.

OAuth 2.0

OAuth 2.0 is an industry-standard protocol that enables secure, delegated access to APIs without requiring users to share their passwords with applications. Instead of handing over credentials, a user authorizes a trusted identity provider—such as Google, Microsoft, or an enterprise login system—to issue short-lived access tokens to a client application.
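
As an illustration, the final leg of the authorization code flow, where the client exchanges a code for a short-lived access token at the provider's token endpoint (endpoint, identifiers, and values are placeholders):

```http
POST /oauth/token HTTP/1.1
Host: auth.example.com
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&code=AUTH_CODE_FROM_CALLBACK&redirect_uri=https%3A%2F%2Fapp.example.com%2Fcallback&client_id=example-client&client_secret=example-secret

HTTP/1.1 200 OK
Content-Type: application/json

{"access_token": "SHORT_LIVED_TOKEN", "token_type": "Bearer", "expires_in": 3600}
```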

Read more

JWT

A JSON Web Token (JWT) enables secure transmission of information between a client and server by encoding user identity and claims into a digitally signed token. When a user logs in, the server generates a JWT containing information like the user’s ID and permissions, then sends it to the client. The client stores this token (typically in local storage or a cookie) and includes it in subsequent requests to prove their identity without needing to send credentials each time.
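
A minimal sketch of issuing and verifying a token with the PyJWT library, assuming an HMAC shared secret (the secret and claims are illustrative):

```python
import jwt  # PyJWT

SECRET = "replace-with-a-real-secret"

# Server side: issue a signed token carrying the user's identity and claims.
token = jwt.encode({"sub": "user-123", "role": "admin"}, SECRET, algorithm="HS256")

# Later, on each request: verify the signature and read the claims back.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["sub"], claims["role"])
```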

Read more

Semantic Standards

Semantics are needed to enrich data and the interfaces used to access that data, with Schema.org leading the way in defining the vocabulary and JSON-LD used to augment existing artifacts with that vocabulary.

Schema.org

Schema.org is a collaborative, community-driven vocabulary (launched in 2011 by Google, Microsoft, Yahoo!, and Yandex) that defines shared types and properties to describe things on the web—people, places, products, events, and more—so search engines and other consumers can understand page content.

Read more

JSON-LD

JSON-LD (JavaScript Object Notation for Linking Data) is a W3C standard for expressing linked data in JSON. It adds lightweight semantics to ordinary JSON so machines can understand what the data means, not just its shape—by mapping keys to globally unique identifiers (IRIs) via a @context. Common features include @id (identity), @type (class), and optional graph constructs (@graph).
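
A small illustrative JSON-LD document using the Schema.org vocabulary as its @context:

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://example.com/people/alice",
  "name": "Alice Example",
  "jobTitle": "Engineer",
  "worksFor": {
    "@type": "Organization",
    "name": "Example Co"
  }
}
```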

Read more

Discovery Standards

We are using APIs.json and API Commons to describe the technical and business details of the services we are making available as part of the development of capabilities.

APIs.json

APIs.json is a machine-readable specification that API providers use to describe their API operations—much like sitemap.xml describes a website. It offers an index of internal, partner, and public APIs that includes not only machine-readable artifacts (OpenAPI, JSON Schema, etc.) but also traditionally human-readable assets such as documentation, pricing, and terms of service.
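
A minimal sketch of an apis.json index pointing at a single API and its machine-readable OpenAPI artifact (names and URLs are illustrative):

```json
{
  "name": "Example API Operations",
  "url": "https://example.com/apis.json",
  "apis": [
    {
      "name": "Users API",
      "humanURL": "https://example.com/docs/users",
      "baseURL": "https://api.example.com",
      "properties": [
        { "type": "OpenAPI", "url": "https://example.com/users/openapi.yaml" }
      ]
    }
  ]
}
```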

Read more

API Commons

API Commons is a collection of open-source building blocks for API operations. It began as a machine-readable way to define the parts of an API, and works in concert with APIs.json to translate human-readable aspects of your API program into machine-readable artifacts that can standardize and automate your ecosystem.

Read more

Examples Standard

OpenAPI and AsyncAPI both provide the ability to include examples of API requests, responses, and messages, but Microcks has begun to develop a standard dedicated to examples for mocking and testing.

Microcks Examples

The APIExamples format is Microcks’ own specification for defining examples intended to be used by Microcks mocks. It can be seen as a lightweight, general-purpose specification that solely serves the need to provide mock datasets. The goal of this specification is to keep the Microcks adoption curve smooth for development teams as well as non-developers.

Read more