
Documentation

Naftiko is developing capabilities, framework, and engine out in the open, using this documentation as a guide.

1 - Overview

Documenting the vision, research, standards, and tooling behind Naftiko, aligned with the Naftiko Manifest, and helping shape the road map for the open-source and commercial software that will power Naftiko integrations.

Naftiko has just begun building, and we have decided to build out in the open. The Naftiko Manifest and this documentation will guide what we are building. Within this documentation you will find the open-source standards and tooling that form the foundation of Naftiko, and as we do the work to develop capabilities, the framework, engine, and fabric, we encourage you to share, participate, and let us know where else we can get involved across complementary communities.

The framework, engine, and fabric sections of this documentation are currently requirements placeholders for what is under development. As the road map comes into focus and releases emerge, we will update these sections. The standards and tooling sections reflect the open-source standards and tooling Naftiko is building on top of, ensuring that we tap into the standards and tooling our customers are already using and invest in the open-source ecosystem as we work to contribute our own commercial open-source standards and tooling.

2 - Capabilities

Capabilities are open-source, declarative, standards-based integrations that are aligned with business outcomes within specific domains, providing what is needed to deliver and automate integrations across a variety of use cases.

Problem

Modern software delivery is buckling under the scale, complexity, and sprawl of the systems we’ve stitched together. We have more APIs, services, data pipelines, and integrations than ever before—but building and connecting systems feels harder, not easier. As organizations push to adopt AI, automate with agents, and leverage years of API investments, our traditional interface designs are hitting their limits. The result is a growing sense of strain and fragmentation.

Solution

This is where capabilities enter the conversation. Capabilities are open-source, declarative, standards-based integrations aligned with business outcomes within specific domains. They provide the building blocks needed to deliver and automate integrations across countless internal, partner, and third-party systems. Capabilities help rebalance how we think about and execute integrations across the sprawling ecosystems we depend on.

Thinking

Capabilities start with a mindset shift. Capability thinking moves our focus from APIs, endpoints, and resources to the higher-level business functions that actually matter. Instead of designing around tables or CRUD operations, we design around outcomes. Capabilities become the primary building blocks of a platform—not low-level resources, but clear expressions of what the business can do.

A capability is both human-readable and machine-executable. It explains itself. It carries semantic meaning. It can be composed, automated, reused, governed, and observed. In a world of AI agents, event-driven systems, and interconnected ecosystems, capabilities become the logical unit of work and interaction.

Characteristics

A capability is not just a label. It is a strongly typed, governed, standardized set of artifacts. Mature capabilities have qualities that make them predictable, reusable, and understandable across teams, tools, and domains.

  • Business-aligned, with intuitive metadata, clear boundaries, and domain language
  • Human- and machine-readable, discoverable, and semantically meaningful
  • Composable and reusable, interoperable with other capabilities
  • Declarative, event-driven, automated, and predictable
  • Governed, policy-driven, secure, role-aware, and monitored
  • Executable, shareable, versioned, and lifecycle-aware
  • Integrated across APIs, data connections, file systems, and protocols
  • Collaborative, bringing product, engineering, and domain experts together
  • Insightful, observable, and traceable everywhere they run

Properties

These are some of the properties currently being explored to help shape what a capability might be, informed by conversations with different companies as well as by the capabilities we will need to operate Naftiko itself. We are taking a schema-first, declarative approach to defining capabilities while developing the framework and engine to run them, and the fabric to bring it all together. A sketch of what a capability definition might look like follows below.

  • naftiko - A place to track administrative items required.
    • schema - A reference to the schema for a capability.
  • info - A place to put all the metadata and info required.
    • name - A plain language name for a capability.
    • description - A paragraph description of what a capability does.
    • icon - A URL for an image to represent what a capability does.
    • tags - Keywords and phrases that describe what a capability does.
    • jsonLd - A JSON-LD reference for capability semantics.
    • pinned - Determines whether a capability is pinned or not.
    • featured - Determines whether a capability is featured or not.
  • useCase - A reference to a use case object for a capability.
  • stakeholders - All of the people involved with a capability.
  • capabilities - References to other child capabilities.
  • events - References to CloudEvents objects for a capability.
  • change - The place where all changes are being logged.
    • state - The current state of a capability.
    • schemaVersion - The version of the capability schema used.
    • capabilityVersion - The version of the capability itself.
    • updated - When the capability was last updated.
    • created - When the capability was created.
  • source - Information about capability source control.
    • httpUrl - The HTTP URL of the capability Git repository.
    • sshUrl - The SSH URL of the capability Git repository.
    • dockerHub - The URL of the capability Docker image on Docker Hub.
  • support - Any information regarding the support of a capability.
    • issues - A URL where issues can be found for a capability.
    • discussions - A URL where discussions can be found for a capability.
    • slack - A URL to a Slack workspace or thread for a capability.
  • license - The licensing applied to a capability.
    • data - The license for the data.
    • code - The license for the code.
    • api - The license for the API.
  • standards - References to any standards being used to support a capability.
  • services - References to any services being used to support a capability.
  • observability - Any resources regarding the observability of a capability.
    • visibility - Whether a capability is public or private.

These are all just proposed properties for a capability based upon the handful of capabilities we have mocked up from the variety of conversations we are having with companies, and will continue to change and evolve based upon feedback and road map priorities.
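
To make this schema-first approach concrete, below is a minimal sketch of what a capability definition might look like, expressed here in YAML against the draft properties above. The format, property values, and URLs are illustrative assumptions only, not a finalized schema or a real capability.

    # Hypothetical capability definition using the draft properties above (illustrative only)
    naftiko:
      schema: https://example.com/schemas/capability.json   # assumed schema location
    info:
      name: Order Status Lookup
      description: Returns the current status of a customer order across commerce systems.
      tags: [orders, commerce, status]
      pinned: false
      featured: true
    useCase: order-support
    change:
      state: draft
      schemaVersion: 0.1.0
      capabilityVersion: 0.0.1
    source:
      httpUrl: https://github.com/example-org/order-status-capability
    license:
      code: Apache-2.0
    observability:
      visibility: private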

Examples

These are some of the current capabilities being iterated upon, driven by conversations with different companies and used to inform the schema and this documentation for what a capability is.

Some other references regarding what a capability is and can be, based upon ecosystem conversations.

  • Naftiko Discussions - A dedicated conversation on the discussion forum about what a capability can be, engaging with the community.
  • Capability Schema - A current draft JSON Schema and examples to define what a capability can be in a governed way.
  • What is an API Capability - This is a story in an ongoing series to drive a conversation about what a capability is.

3 - Framework

Open-source, industrial-grade development framework for managing integrations across multiple patterns and protocols, providing you with what you need to execute capabilities as part of your existing infrastructure.

Problem

Doing business today requires a diverse data and API integration toolbox, and teams need an expanding set of skills to perform even simple integrations and automation across the growing sprawl of internal and 3rd-party APIs that power applications. Organizations face:

  • Integration Complexity - Each API and data source requires custom code, specialized knowledge, and ongoing maintenance
  • Tool Fragmentation - Teams juggle multiple frameworks, libraries, and platforms to handle different integration patterns
  • Deployment Overhead - Moving from development to production requires significant containerization and orchestration effort
  • Testing Challenges - Mocking dependencies and testing integrations remains time-consuming and error-prone
  • Skill Gaps - Finding developers proficient across diverse integration technologies becomes increasingly difficult

Solution

The Naftiko framework brings an open-source, industrial-grade approach to managing a diverse range of integrations, simplifying and abstracting away unnecessary complexity while meeting developers where they already work through a GitOps approach to development. By defining integrations through declarative models rather than imperative code, teams can:

  • Accelerate Development - Generate fully-functional integration components from models, eliminating boilerplate code
  • Standardize Patterns - Apply consistent integration patterns across REST, GraphQL, messaging, and data pipelines
  • Streamline Deployment - Leverage pre-built container packaging that works seamlessly with existing CI/CD workflows
  • Improve Quality - Utilize built-in mocking and testing capabilities powered by the same models that drive production code
  • Reduce Maintenance - Update integrations by modifying models rather than refactoring code across multiple repositories

Features

These are the current proposed features of the Naftiko framework, providing what developers need to develop, execute, and iterate upon capabilities using their existing development environment, along with the basic features required to consistently deliver capabilities at scale. A hedged sketch of what a declarative integration model might look like follows this list.

  • Model-Driven Development - Define integrations using declarative models that describe what you want to accomplish rather than how to accomplish it. The framework automatically generates optimized integration code, API clients, data transformations, and testing artifacts from these models. This approach reduces development time by up to 70% while ensuring consistency across your integration landscape. Models serve as living documentation that stays synchronized with implementation, making onboarding and maintenance significantly easier.
  • Container Packaging - Every integration component is automatically packaged as a container-ready artifact with optimized resource consumption and startup times. Naftiko generates Dockerfiles, health check endpoints, and configuration management scaffolding that follows container best practices. The framework supports both JVM-based and GraalVM native image compilation, allowing you to choose between fast iteration cycles during development and minimal resource footprint in production.
  • VSCode Integration - First-class support for Visual Studio Code includes syntax highlighting, model validation, code completion, and inline documentation for Naftiko models. The extension provides real-time feedback as you design integrations, catching errors before generation. One-click generation and local testing capabilities let you validate integrations without leaving your editor. The extension integrates with VSCode’s debugging tools, allowing you to step through generated integration code seamlessly.
  • IntelliJ Integration - Comprehensive IntelliJ IDEA plugin offering the same model-driven development experience for JetBrains users. The plugin provides advanced refactoring support, visual model designers, and deep integration with IntelliJ’s testing and debugging infrastructure. Navigate seamlessly between models and generated code, with the plugin maintaining traceability between your high-level integration definitions and the underlying implementation.
  • DockerHub Integration - Pre-built base images and runtime components are available on DockerHub, providing a foundation for your integration containers. These images are regularly updated with security patches and performance optimizations. The framework can push your generated integration containers directly to DockerHub or any OCI-compliant registry as part of your CI/CD pipeline, with automatic tagging and versioning based on your GitOps workflow.
  • Java Libraries - Extensive collection of Java libraries providing reusable integration components, protocol handlers, data transformers, and utility functions. These libraries handle common integration patterns—retry logic, circuit breakers, rate limiting, caching, and error handling—allowing your models to reference battle-tested implementations rather than requiring custom code. All libraries are designed for high performance and low latency, suitable for demanding production environments.
  • API Client Generation - Automatically generate type-safe, fully-featured API clients from OpenAPI specifications, GraphQL schemas, or Naftiko models. Generated clients include connection pooling, automatic retry with exponential backoff, request/response logging, and comprehensive error handling. Clients support both synchronous and asynchronous invocation patterns, allowing you to choose the right approach for your use case. Client code is optimized for the underlying HTTP client implementation, ensuring minimal overhead.
  • API Mocking - Built-in mocking capabilities allow you to generate mock servers directly from your integration models or API specifications. Mock servers simulate realistic API behavior including latency, error conditions, and stateful interactions, enabling comprehensive testing without dependencies on external services. Mocks can be deployed as standalone containers for integration testing or embedded in unit tests for rapid feedback. The mocking engine uses the same Apache Calcite processing layer as production integrations, ensuring behavioral consistency.
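
To illustrate the model-driven approach described above, here is a rough sketch of what a declarative integration model could look like. The YAML layout, field names, and referenced operations are assumptions made for discussion; the actual model format will emerge as the framework is formalized.

    # Hypothetical declarative integration model (illustrative only)
    integration:
      name: sync-new-orders
      trigger:
        type: schedule                # assumed trigger type
        every: 15m
      source:
        kind: rest-api
        operation: listOrders         # assumed operation from an OpenAPI specification
        since: "{{ lastRun }}"
      transform:
        - map: order.id -> ticket.externalId
        - map: order.customer.email -> ticket.requesterEmail
      destination:
        kind: rest-api
        operation: createTicket
      policies:
        retry: { maxAttempts: 3, backoff: exponential }
        rateLimit: { requestsPerSecond: 5 }

From a model like this, the framework would be expected to generate the integration code, API clients, container packaging, and mock servers described in the features above.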

As the road map for the Naftiko framework comes into focus for 2026, this list will evolve. It is meant to drive discussion around what is needed in the ecosystem and to communicate what we are planning as we do the work to formalize the framework.

  • Naftiko Discussion - A dedicated conversation on the discussion forum about what the Naftiko Framework is and can be, engaging with the community along the way.

4 - Engine

Performant and scalable engine for executing capabilities while controlling inbound and outbound traffic, managing encryption, key and token issuance and rotation, rate limiting, and other runtime needs.

Problem

Modern enterprises run on an ever-expanding landscape of APIs, data sources, 3rd party services, and cloud platforms. As this environment grows more distributed and more dynamic, teams need a reliable, unified way to execute business capabilities while handling the operational complexity that comes with integrating internal and external systems.

Solution

The Naftiko Engine is built for exactly this moment—a performant, scalable runtime designed to execute, automate, and govern business capabilities with precision, consistency, and security. Whether you’re powering copilots, agentic automation, internal microservices, or integrations with 3rd-party SaaS solutions, the Naftiko Engine provides the foundation you need.

Features

At its core, the Naftiko Engine is responsible for executing capabilities—business operations exposed through a mix of APIs, event-driven architecture, and data connections. But unlike a traditional API gateway or API consumption engine, it is designed specifically for capability-driven integration, where business semantics, intent, and governance carry as much weight as the API calls being brokered.

  • Executes internal and external capabilities at scale
  • Controls inbound and outbound traffic
  • Manages encryption, key and token issuance, and rotation
  • Enforces rate limits and quotas
  • Applies dynamic, personalized data protections
  • Observes, audits, and reports activity for compliance
  • Mediates protocols and versions to maintain long-term stability

This is a runtime built for the next decade of automation—not just to move data, but to move business value with the context and governance needed to confidently do business across hundreds of 3rd-party platforms.

Capability-Driven

What makes the Naftiko Engine different isn’t just its runtime—it’s the methodology behind it. Capability-driven integration ensures that teams think in terms of business operations, not raw APIs, data connectors, and other tooling.

The Engine supports this lifecycle through five key phases:

  • Discover & Learn - Identify your first use case and select the source capabilities you will consume.Understand what exists, what’s missing, and where the value is.
  • Explore & Create - Explore the catalog of existing capabilities.Create new ones as needed—always aligned to business domains.
  • Deliver & Govern - Define the external and internal consumption policies.Apply guardrails and governance without blocking innovation.
  • Mediate & Broker - Version capabilities, add ports and adapters, and mediate protocols.The Engine handles the translation—teams focus on the business need.
  • Compose & Orchestrate - Design composite capabilities that bring it all together. Consumers (humans, copilots, or agents) define what they need, and the Engine makes it possible.

This flow enables teams to move quickly, safely, and consistently—without reinventing the basics every time a version changes or a new integration or automation is required.

Outbound Policies

These are the outbound API consumption policies currently proposed for the Naftiko Engine, standardizing the layer between your applications, internal systems, and 3rd-party external APIs, and getting a handle on all of your outbound traffic.

  • API Authentication
  • API Token Refresh
  • API Rate Limiting
  • API Budget Management
  • API Discovery
  • API Mocking
  • API Failover
  • API Scaling
  • API Consumption Observability
  • API Consumption Auditing
  • API Consumption Reporting

Inbound Policies

These are the inbound API consumption policies currently proposed for the Naftiko Engine, standardizing the layer between the server, desktop, web, mobile, and artificial intelligence applications you build and the 3rd-party or internal APIs they consume. A sketch of what an engine policy configuration might look like follows the list below.

  • Internal Client Authentication
  • Internal Client Authorization
  • Internal Rate Limitation
  • Internal Quota Management
  • Personalized Data Masking
  • Personalized Encryption
  • Semantic Data Validation
  • Semantic Data Formatting
  • Network Management
  • Conditional Caching
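
As a rough illustration of how these policies might be declared, here is a hedged sketch of an engine configuration combining a few of the outbound and inbound policies listed above. The YAML layout and property names are assumptions for discussion and do not reflect the draft engine schema referenced below.

    # Hypothetical Naftiko Engine policy configuration (illustrative only)
    engine:
      name: commerce-domain-engine
      outbound:
        - target: third-party-payments-api        # assumed alias for an upstream API
          authentication: { type: oauth2, tokenRefresh: true }
          rateLimit: { requestsPerMinute: 600 }
          budget: { monthlyCostLimitUsd: 500 }
          failover: { strategy: secondary-region }
      inbound:
        - client: mobile-app                      # assumed alias for an internal client
          authentication: { type: jwt }
          quota: { requestsPerDay: 100000 }
          dataMasking:
            fields: [customer.email, customer.phone]
          caching: { conditional: true, ttlSeconds: 60 }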

These are the proposed features for the open-source Naftiko Engine and will change and evolve as work begins on the engine and the road map comes into focus for 2026, developing this tooling out in the open based upon what the ecosystem needs today.

  • Naftiko Discussion - A dedicated conversation on the discussion forum about what the Naftiko Engine is and can be, engaging with the community along the way.
  • Engine Schema - The draft schema being used to define the configuration of each Naftiko engine.

5 - Fabric

Bringing together the capabilities across your domains into a single fabric that allows you to observe, control, and understand internal and 3rd-party API consumption across different applications.

Problem

For more than a decade, enterprises have invested heavily in APIs, middleware, and integration platforms—yet AI is revealing the limits of these systems faster than anyone expected. While predictions about AI’s transformative potential grow louder, most organizations still struggle to operationalize even their most promising proofs-of-concept. The gap between AI’s promise and enterprise reality is no longer technical; it’s architectural.

Challenges

AI agents cannot thrive in a world built for static API contracts, brittle scripts, legacy connectors, and manually orchestrated integrations. These tools were designed for a different era—an era of predictable workflows and human-driven interactions—not dynamic, autonomous systems capable of reasoning, planning, and acting. If enterprises want AI to produce real, scalable value, the foundation must evolve.

Failure

Most AI integration efforts rely on wrapping existing APIs with multi-modal prompts, auto-generating MCP endpoints, or leaning on aging iPaaS pipelines and connector SDKs. These approaches may work for demos, but break under real operational pressure:

  • Connectors are brittle, inconsistent, and rarely secure enough for autonomous execution.
  • API gateways enforce traffic rules but know nothing about business meaning.
  • ESBs and iPaaS tools create centralized bottlenecks and mapping spaghetti.
  • Agent frameworks generate wrappers, not reliable, governed business behaviors.

Enterprises are discovering the painful truth: AI can call APIs, but it cannot operate an enterprise made of APIs. Not without a better substrate.

Capabilities

A Capability is more than an API, function, or connector. It is a governed, discoverable, composable, observable unit of business function designed for humans and agents. Capabilities elevate traditional APIs by embedding:

  • Business meaning: semantics, expected outcomes, constraints
  • Operational guardrails: authentication, masking, cost controls, rate limits
  • Governed autonomy: safe execution patterns for AI-specific workloads
  • Polyglot access: REST, events, functions, agent protocols, and MCP
  • Composability: the ability to build higher-order capabilities from lower-level ones

This shift answers the central question enterprises are grappling with: What is the smallest, safest, most reusable unit of business function an AI agent should be allowed to operate? The answer is Capabilities.

Fabric

When dozens or hundreds of capabilities exist across teams and systems, enterprises need a cohesive way to govern, observe, and compose them. This is where the Capability Fabric emerges—a new architectural pattern purpose-built for AI-era integration.

A Capability Fabric is not a gateway. It is not a service mesh. It is not an ESB. It is not an iPaaS. It is a distributed, domain-aligned, policy-driven coordination layer for autonomous interactions between humans, agents, and systems. Where a gateway handles traffic, a mesh handles service connectivity, and an ESB handles workflows, the Capability Fabric handles meaning and safe execution.

  • Domain-level composition aligned with strategic business functions
  • Layered governance across source, domain, and experience contexts
  • Operational observability with metrics tied to cost, security, and quality
  • Policy controls at capability boundaries
  • AI-ready execution for autonomous agents and multi-modal experiences

It becomes the connective tissue that lets AI actually do work—securely, consistently, and with business alignment.

Web of Capabilities

Once capabilities proliferate within and between organizations, something larger starts to emerge—a new layer of the web. A Web of Capabilities. This is the network the Semantic Web aspired to create but couldn’t achieve, because it lacked:

  • Executable semantics
  • Distributed governance
  • Cross-domain policy
  • Practical adoption inside enterprises

Capabilities supply the missing ingredients. The Capability Fabric supplies the execution environment. Agents supply the demand. Together, they form an Agentic Web: a global fabric of purposeful, governed, inter-operable business capabilities. This is where Naftiko is focused—not on short-lived AI demos, but on building the architecture that will empower autonomous systems to act safely and meaningfully at scale.

  • Naftiko Discussion - A dedicated conversation on the discussion forum about what the Naftiko Fabric is and can be, engaging with the community along the way.

6 - Services

These are all of the services we currently support when it comes to building capabilities, offering a variety of them to support conversations with different companies and adding new ones as they are requested and needed as part of our ongoing Naftiko Signals work.

6.1 - Anthropic

Claude is an AI assistant created by Anthropic that helps people with a wide variety of tasks through natural conversation. It can assist with writing and editing, answer questions on many topics, help with analysis and research, provide coding support, engage in creative projects, and offer explanations of complex concepts.

Listing: https://contracts.apievangelist.com/store/anthropic/

Repo: https://github.com/api-evangelist/anthropic

APIs

Properties

6.2 - Atlassian

Atlassian is a software company that develops collaboration, productivity, and project management tools to help teams work more efficiently. Its products are designed to enhance teamwork, streamline workflows, and support project tracking across a wide range of industries.

Listing: https://contracts.apievangelist.com/store/atlassian/

Repo: https://github.com/api-evangelist/atlassian

APIs

6.3 - Avalara

Avalara is a tax compliance software company that automates sales tax, VAT, and other transaction taxes for businesses. It calculates the correct tax rates for each transaction based on location and product type across thousands of jurisdictions, then handles tax return filing and compliance monitoring. Businesses use it because sales tax rules are extremely complex and constantly changing, especially when selling across multiple states or online, and Avalara’s automation saves them from having to manually track and comply with thousands of different tax requirements.

Listing: https://contracts.apievangelist.com/store/avalara/

Repo: https://github.com/api-evangelist/avalara

APIs

Properties

6.4 - BigCommerce

BigCommerce is a NASDAQ-listed ecommerce platform that provides software-as-a-service offerings to retailers. The company’s platform includes online store creation, search engine optimization, hosting, marketing, and security for small to enterprise-sized businesses.

Listing: https://contracts.apievangelist.com/store/bigcommerce/

Repo: https://github.com/api-evangelist/bigcommerce

APIs

Properties

6.5 - Cvent

Cvent is a leading event management software company that helps organizations plan, promote, and execute successful events. Their comprehensive platform allows users to easily create event websites, manage registrations, and track attendee engagement. With features such as event budgeting, email marketing, and attendee analytics, Cvent streamlines the event planning process and helps businesses maximize their return on investment. Additionally, their mobile app and on-site check-in tools ensure a seamless experience for both event organizers and attendees. Overall, Cvent empowers organizations to deliver impactful and memorable events that drive business results.

Listing: https://contracts.apievangelist.com/store/cvent/

Repo: https://github.com/api-evangelist/cvent

APIs

Properties

6.6 - Datadog

Datadog is a monitoring and analytics platform that helps organizations gain insight into their infrastructure, applications, and services. It allows users to collect, visualize, and analyze real-time data from a variety of sources, including servers, databases, and cloud services. Datadog’s platform enables companies to track performance metrics, troubleshoot issues, and optimize their systems for peak efficiency. With its customizable dashboards and alerting system, Datadog empowers teams to proactively monitor their environments and ensure smooth operations. Ultimately, Datadog helps businesses make data-driven decisions and improve the overall performance of their technology stack.

Listing: https://contracts.apievangelist.com/store/datadog/

Repo: https://github.com/api-evangelist/datadog

APIs

Properties

6.7 - Docker

Docker is a software platform that allows developers to package, distribute, and run applications in containers. Containers are lightweight, standalone, and portable environments that contain everything needed to run an application, including code, runtime, system tools, libraries, and settings. Docker provides a way to streamline the development and deployment process by isolating applications in containers, making it easier to manage dependencies, scale applications, and ensure consistency across different environments. Docker simplifies the process of building, deploying, and managing applications, ultimately leading to increased efficiency and productivity for developers.

Listing: https://contracts.apievangelist.com/store/docker/

Repo: https://github.com/api-evangelist/docker

APIs

6.8 - Figma

Figma’s mission is to make design accessible to everyone. Its two products help people from different backgrounds and roles express their ideas visually and make things together.

Listing: https://contracts.apievangelist.com/store/figma/

Repo: https://github.com/api-evangelist/figma

APIs

Properties

6.9 - GitHub

GitHub is a cloud-based platform for software development and version control, built on Git. It enables developers to store, manage, and collaborate on code. In addition to Git’s distributed version control, GitHub offers access control, bug tracking, feature requests, task management, continuous integration, and wikis for projects. Headquartered in California, it has operated as a subsidiary of Microsoft since 2018.

Listing: https://contracts.apievangelist.com/store/github/

Repo: https://github.com/api-evangelist/github

APIs

Properties

6.10 - Google

Google Cloud APIs are programmatic interfaces to Google Cloud Platform services. They are a key part of Google Cloud Platform, allowing you to easily add the power of everything from computing to networking to storage to machine-learning-based data analysis to your applications.

Listing: https://contracts.apievangelist.com/store/google/

Repo: https://github.com/api-evangelist/google

APIs

Properties

6.11 - Grafana

Grafana is a powerful open-source platform for data visualization and monitoring. It allows users to create interactive, customizable dashboards that display real-time data from multiple sources in a visually appealing way. With Grafana, users can easily connect to databases, cloud services, and other data sources, and then display that data in various chart types, tables, and histograms. Grafana also offers advanced alerting capabilities, enabling users to set up alerts based on specified conditions and thresholds. Overall, Grafana is a versatile tool that helps organizations make sense of their data and monitor the performance of their systems in a centralized, user-friendly interface.

Listing: https://contracts.apievangelist.com/store/grafana/

Repo: https://github.com/api-evangelist/grafana

APIs

6.12 - HubSpot

HubSpot is a leading CRM platform that provides software and support to help businesses grow better. Its platform includes marketing, sales, service, and website management products that start free and scale to meet customers' needs at any stage of growth. Today, thousands of customers around the world use its powerful and easy-to-use tools and integrations to attract, engage, and delight customers.

Listing: https://contracts.apievangelist.com/store/hubspot/

Repo: https://github.com/api-evangelist/hubspot

APIs

Properties

6.13 - Kong

Kong provides the foundation that enables any company to securely adopt AI and become an API-first company, speeding up time to market, creating new business opportunities, and delivering superior products and services.

Listing: https://contracts.apievangelist.com/store/kong/

Repo: https://github.com/api-evangelist/kong

APIs

Properties

6.14 - LinkedIn

LinkedIn is a social networking site for professionals to connect with colleagues, employers, and other professionals. It’s a place to share ideas, information, and opportunities, and to find jobs, research companies, and learn about industry news.

Listing: https://contracts.apievangelist.com/store/linkedin/

Repo: https://github.com/api-evangelist/linkedin

APIs

Properties

6.15 - Mailchimp

Mailchimp’s developer tools provide everything you need to integrate your data with intelligent marketing tools and event-driven transactional email.

Listing: https://contracts.apievangelist.com/store/mailchimp/

Repo: https://github.com/api-evangelist/mailchimp

APIs

Properties

6.16 - Meta

Meta Platforms, Inc., doing business as Meta, and formerly named Facebook, Inc., and TheFacebook, Inc., is an American multinational technology conglomerate based in Menlo Park, California. The company owns and operates Facebook, Instagram, Threads, and WhatsApp, among other products and services.

Listing: https://contracts.apievangelist.com/store/meta/

Repo: https://github.com/api-evangelist/meta

APIs

Properties

6.17 - Microsoft Graph

Microsoft Graph is the gateway to data and intelligence in Microsoft cloud services like Microsoft Entra and Microsoft 365. Use the wealth of data accessible through Microsoft Graph to build apps for organizations and consumers that interact with millions of users.

Listing: https://contracts.apievangelist.com/store/microsoft-graph/

Repo: https://github.com/api-evangelist/microsoft-graph

APIs

Properties

6.18 - New Relic

New Relic is a software analytics company that helps businesses monitor and analyze their applications and infrastructure in real-time. By providing detailed insights into the performance and user experience of their systems, New Relic enables organizations to identify and fix issues quickly, optimize performance, and ultimately deliver better digital experiences to their customers. With a range of products and services, including application performance monitoring, infrastructure monitoring, and synthetic monitoring, New Relic empowers businesses to make data-driven decisions and drive digital transformation.

Listing: https://contracts.apievangelist.com/store/new-relic/

Repo: https://github.com/api-evangelist/new-relic

APIs

Properties

6.19 - Notion

Notion is a versatile all-in-one workspace tool that helps individuals and teams organize their tasks, projects, and ideas in a centralized and collaborative platform. With features such as databases, boards, calendars, and documents, Notion allows users to create personalized workflows, track progress, and manage information efficiently. Users can customize their workspace to fit their unique needs, whether it be for project management, note-taking, or knowledge sharing. Notion aims to streamline workflows and enhance productivity by providing a flexible and intuitive platform for organizing and managing projects and information.

Listing: https://contracts.apievangelist.com/store/notion/

Repo: https://github.com/api-evangelist/notion

APIs

Properties

6.20 - OpenAI

OpenAI is a research organization that focuses on artificial intelligence (AI) and machine learning. Their mission is to ensure that AI benefits all of humanity, and they work on developing AI technology in a way that is safe and beneficial for society. OpenAI conducts cutting-edge research in fields such as natural language processing, reinforcement learning, and robotics. They also develop and release tools and models that help advance the field of AI and are open-source and accessible to the public. Additionally, OpenAI engages in outreach and advocacy efforts to promote the responsible development and deployment of AI technologies.

Listing: https://contracts.apievangelist.com/store/openai/

Repo: https://github.com/api-evangelist/openai

APIs

Properties

6.21 - Salesforce

Salesforce is a cloud-based customer relationship management (CRM) platform that helps businesses manage and track their interactions with customers and leads. It provides a range of services including sales automation, marketing automation, customer service and analytics. Salesforce allows businesses to store all customer data in one centralized location, making it easier to collaborate and communicate with team members and provide personalized experiences for customers. With Salesforce, businesses can streamline their processes, increase efficiency, and ultimately drive growth and success.

Listing: https://contracts.apievangelist.com/store/salesforce/

Repo: https://github.com/api-evangelist/salesforce

APIs

6.22 - SendGrid

SendGrid is a cloud-based customer communication platform that provides tools for email marketing and transactional email delivery. It helps businesses of all sizes easily create and send emails to their customers, enabling them to build stronger relationships and drive engagement. SendGrid also offers analytics and reporting tools to track the success of email campaigns, as well as features for managing subscriber lists and personalizing emails for targeted communications. Overall, SendGrid’s platform allows businesses to streamline their email marketing efforts and improve their overall communication strategies.

Listing: https://contracts.apievangelist.com/store/sendgrid/

Repo: https://github.com/api-evangelist/sendgrid

APIs

Properties

6.23 - ServiceNow

ServiceNow is a cloud-based platform that provides a wide range of services for businesses to manage their IT operations, customer service, human resources, and other functions. The platform allows organizations to automate and streamline their workflows, improving efficiency and productivity. ServiceNow offers various applications and modules that help companies track and resolve issues, manage projects, and enhance collaboration among employees. Additionally, ServiceNow provides tools for data analytics, reporting, and monitoring to help businesses make informed decisions and optimize their operations. Overall, ServiceNow helps organizations simplify and improve their processes, leading to better customer satisfaction and business outcomes.

Listing: https://contracts.apievangelist.com/store/servicenow/

Repo: https://github.com/api-evangelist/servicenow

APIs

Properties

6.24 - Shopify

Shopify is an e-commerce platform that enables businesses to create and operate their online stores. It provides a wide range of tools and features that help merchants manage their inventory, process payments, track shipments, and create customized storefronts. With Shopify, businesses can easily set up their online presence, sell products, and reach customers all over the world. The platform also offers various marketing and analytics tools to help businesses grow and succeed in the competitive online marketplace. Overall, Shopify simplifies the process of building and running an online store, making it a popular choice for businesses of all sizes.

Listing: https://contracts.apievangelist.com/store/shopify/

Repo: https://github.com/api-evangelist/shopify

APIs

Properties

6.25 - Slack

Slack is a cloud-based collaboration tool that brings teams together to work more efficiently and effectively. It allows team members to communicate in real-time through instant messaging, group chats, and video calls. Users can share files, collaborate on projects, and stay organized with task management features. Slack also integrates seamlessly with other tools and services, making it easy for teams to streamline their workflow and stay connected, no matter where they are located, thanks to its user-friendly interface and robust features.

Listing: https://contracts.apievangelist.com/store/slack/

Repo: https://github.com/api-evangelist/slack

APIs

Properties

6.26 - Snowflake

Snowflake is a cloud-based data platform that provides data warehousing, data lake, and data sharing capabilities. It enables organizations to store, process, and analyze large volumes of structured and semi-structured data using SQL, while offering scalability, concurrency, and performance across multiple cloud providers. Snowflake is widely used for analytics, business intelligence, and data collaboration.

Listing: https://contracts.apievangelist.com/store/snowflake/

Repo: https://github.com/api-evangelist/snowflake

APIs

Properties

6.27 - Stripe

Stripe is a technology company that provides a platform for online payment processing. They offer a secure and seamless way for businesses to accept payments from customers, handling transactions in multiple currencies and payment methods. Stripe’s software and APIs make it easy for businesses of all sizes to manage their online payments, track transactions, and analyze their revenue streams. With features such as fraud prevention, subscription billing, and mobile payment options, Stripe is a valuable tool for e-commerce businesses looking to streamline their payment processes and provide a better user experience for their customers.

Listing: https://contracts.apievangelist.com/store/stripe/

Repo: https://github.com/api-evangelist/stripe

APIs

Properties

6.28 - Twilio

Twilio is a cloud communications platform that enables developers to integrate voice, messaging, and video capabilities into their applications. Through its APIs, Twilio allows businesses to easily build and scale communication solutions, such as customer support helplines, appointment reminders, and two-factor authentication services. By partnering with Twilio, organizations can enhance their customer engagement strategies and streamline their communication channels, ultimately driving greater efficiency and customer satisfaction. In essence, Twilio empowers developers to create innovative and personalized communication experiences that connect people in new and meaningful ways.

Listing: https://contracts.apievangelist.com/store/twilio/

Repo: https://github.com/api-evangelist/twilio

APIs

Properties

6.29 - YouTube

The YouTube API provides the ability to retrieve feeds related to videos, users, and playlists. It also provides the ability to manipulate these feeds, such as creating new playlists, adding videos as favorites, and sending messages. The API is also able to upload videos.

Listing: https://contracts.apievangelist.com/store/youtube/

Repo: https://github.com/api-evangelist/youtube

APIs

Properties

6.30 - Zendesk

Zendesk provides customer service and engagement software that helps businesses manage support tickets, automate workflows, and offer multi-channel support, including email, chat, social media, and phone, through a unified platform.

Listing: https://contracts.apievangelist.com/store/zendesk/

Repo: https://github.com/api-evangelist/zendesk

APIs

Properties

6.31 - Zoom

Zoom is a video conferencing platform that allows users to connect with others through virtual meetings, webinars, and chat features. It enables individuals and businesses to communicate and collaborate remotely, making it easier to work together from different locations. With its user-friendly interface and high-quality audio and video capabilities, Zoom has become a popular tool for businesses, schools, and other organizations to stay connected and productive. Whether it’s hosting a team meeting, conducting a virtual workshop, or catching up with friends and family, Zoom provides a seamless and reliable way to communicate in real-time.

Listing: https://contracts.apievangelist.com/store/zoom/

Repo: https://github.com/api-evangelist/zoom

APIs

7 - Organizations

Naftiko depends on the work of these organizations for its open-source core, making the standards they produce, along with their broader efforts to move the open-source ecosystem forward, central to the Naftiko vision and road map.

7.1 - Cloud Native Computing Foundation (CNCF)

The Cloud Native Computing Foundation (CNCF) is a subsidiary of the Linux Foundation founded in 2015 that serves as a vendor-neutral hub for supporting and hosting fast-growing cloud-native open-source projects like Kubernetes, Prometheus, and Envoy, bringing together developers, end users, and vendors to advance cloud-native computing.

The Cloud Native Computing Foundation (CNCF) is a subsidiary of the Linux Foundation founded in 2015 that serves as a vendor-neutral home for cloud-native open-source projects. The foundation hosts and provides support, oversight, and direction for critical components of modern cloud infrastructure, including flagship projects like Kubernetes, Prometheus, Envoy, and dozens of other tools focused on container orchestration, microservices, observability, service meshes, and application delivery. CNCF brings together the world’s top developers, end users, and vendors—including major public cloud providers, enterprise software companies, and innovative startups—to collaborate on advancing cloud-native technologies. The foundation organizes these projects into three maturity levels: Sandbox (early-stage projects being evaluated), Incubating (growing projects gaining adoption), and Graduated (mature, widely-adopted projects that have demonstrated sustained development and production use).

CNCF’s mission is to make cloud-native computing ubiquitous by fostering a robust ecosystem of tools that help organizations build, scale, and secure modern, containerized applications. Beyond hosting projects, the foundation plays a pivotal role in shaping industry standards and best practices through working groups and special interest groups (SIGs) that develop guidelines, policy frameworks, and hardening standards. CNCF also enhances workforce readiness through certifications such as the Certified Kubernetes Administrator (CKA) and Certified Kubernetes Security Specialist (CKS), ensuring consistent skills development across the industry. The foundation runs major conferences like KubeCon + CloudNativeCon, which serve as gathering points for the cloud-native community to exchange ideas, share innovations, and drive the industry forward.

Website - https://www.cncf.io/

7.2 - IANA

IANA (Internet Assigned Numbers Authority) is the organization responsible for coordinating the global allocation and management of internet protocol resources, including IP addresses, domain name system (DNS) root zones, and protocol parameter registries.

The Internet Assigned Numbers Authority (IANA) is a standards organization that oversees the global coordination of critical Internet resources to ensure the network functions smoothly and reliably. IANA’s primary responsibilities fall into three main categories: managing the DNS root zone (including the delegation of top-level domains like .com, .org, and country code domains), coordinating the global allocation of IP addresses and Autonomous System Numbers to Regional Internet Registries, and maintaining protocol parameter registries that define the unique codes and numbering systems used in Internet protocols published as RFCs (Request for Comments). These registries ensure that protocols have globally unique meanings so that computers and networks around the world can communicate effectively with each other.

While the Internet is renowned for being decentralized without central coordination, IANA provides the necessary technical coordination for key elements that must be globally managed to keep the Internet running. IANA works closely with the Internet Engineering Task Force (IETF) to allocate and maintain the unique codes and numbering systems used in technical standards, ensuring that billions of devices can communicate effectively across the Internet. Operating as part of ICANN (Internet Corporation for Assigned Names and Numbers) since 1999, IANA’s work happens behind the scenes every time someone accesses a website, sends an email, or uses any online service—making it possible for devices worldwide to find each other and exchange information reliably and securely.

Website - https://www.iana.org/

7.3 - Internet Engineering Task Force (IETF)

The Internet Engineering Task Force (IETF) is an open international community of network designers, operators, vendors, and researchers who develop and promote the voluntary standards that make the internet work, including core protocols such as HTTP, TLS, and DNS.

IETF is an open international community of network designers, operators, vendors, and researchers who develop and promote voluntary internet standards, particularly the standards that comprise the Internet Protocol Suite (TCP/IP). The IETF is responsible for creating many of the core technical standards that make the internet work, including protocols like HTTP, TLS/SSL, DNS, and many others.

The IETF operates through working groups that focus on specific areas of internet technology, and its standards are published as RFCs (Requests for Comments). It is a volunteer-driven organization with no formal membership requirements; anyone interested in internet standards can participate in its work.

Website - https://www.ietf.org/

7.4 - Linux Foundation

The Linux Foundation is a nonprofit organization founded in 2000 that provides a neutral, trusted hub for developers and organizations to code, manage, and scale open source technology projects, supporting approximately 1,000 projects across various industries and delivering tools, training, events, and infrastructure that create an economic impact not achievable by any single company.

The Linux Foundation is a nonprofit organization founded in 2000 that serves as a neutral, trusted hub for developers and organizations to collaborate on open source technology projects. While originally focused on promoting the Linux operating system, the Foundation has evolved into what it calls a “foundation of foundations,” now supporting approximately 1,000 open source projects across software, hardware, standards, and data initiatives in diverse industries including cloud computing, networking, embedded systems, automotive, energy, and more. The Foundation provides essential infrastructure and services that enable these projects to thrive, including project governance frameworks, legal support, trademark and domain management, marketing, event hosting, and financial administration. With over 1,800 company members, the Linux Foundation brings together developers, vendors, and end users to collaborate on solving complex technology problems through shared investment in open source.

Beyond hosting projects, the Linux Foundation provides comprehensive training and certification programs to equip developers with essential skills in open source technologies, holds over 250 events worldwide annually (including major conferences like KubeCon and Open Source Summit), and offers free foundational courses to make technology education more accessible. The Foundation operates on principles of organizational neutrality—ensuring that no single company can control or take away community assets—and maintains a clear separation between financial support and technical participation, meaning that funding doesn’t grant companies the ability to steer technical direction without contributing code. As Executive Director Jim Zemlin describes it, the Linux Foundation acts as “the supporting cast or janitors of open source,” handling all the necessary infrastructure, legal, financial, and administrative work so that developers can focus on writing code and building innovative solutions that deliver economic impact impossible for any single organization to achieve alone.

Website - https://www.linuxfoundation.org/

7.5 - World Wide Web Consortium (W3C)

The World Wide Web Consortium (W3C) is an international public-interest nonprofit organization founded in 1994 by web inventor Tim Berners-Lee that develops open standards and guidelines to help build a web based on the principles of accessibility, internationalization, privacy, and security, ensuring the long-term growth and interoperability of the World Wide Web.

The World Wide Web Consortium (W3C) develops technical standards and guidelines for web technologies that ensure the web remains open, accessible, secure, and interoperable for all users worldwide. W3C creates the standards that define how websites and web applications function, including foundational technologies like HTML (Hypertext Markup Language), CSS (Cascading Style Sheets), XML (Extensible Markup Language), and protocols such as HTTP. These standards provide a framework that ensures consistency across browsers, devices, and operating systems, enabling websites and applications to function seamlessly regardless of platform. Through a transparent, consensus-driven process involving member organizations, full-time staff, and public participation, W3C works to foster compatibility and agreement among industry members in adopting new standards, preventing the fragmentation that could occur if different vendors offered incompatible versions of web technologies.

Beyond core web development standards, W3C addresses critical aspects of the modern web including accessibility (through guidelines like WCAG that make the web usable for people with disabilities), internationalization (ensuring the web works in every language and writing system), privacy, and security (developing authentication technologies and standards to enhance user privacy and secure communications). The organization operates through various working groups where external experts collaborate to develop standards that go through rigorous stages of development—from working drafts to candidate recommendations to final W3C Recommendations—ensuring that standards undergo extensive review and testing under both theoretical and practical conditions. W3C’s work ensures a fair and accessible web where developers can have confidence in the tools they’re using, knowing they’ve been vetted by experts, while users experience consistent, high-quality web applications that work across all platforms and devices.

Website - https://www.w3.org/

8 - Standards

Naftiko is standards-first, working with the standards you already use—and those your SaaS providers use—while giving you a framework to maximize common standards and connect the dots across your operational domains.

8.1 - OpenAPI

The OpenAPI Specification (OAS) is a formal standard for describing HTTP APIs. It enables teams to understand how an API works and how multiple APIs interoperate, generate client code, create tests, apply design standards, and more.

Describing the surface area of HTTP APIs and Webhooks.

The OpenAPI Specification (OAS) is a formal standard for describing HTTP APIs. It enables teams to understand how an API works and how multiple APIs interoperate, generate client code, create tests, apply design standards, and more.

OpenAPI was formerly known as Swagger. In 2015, SmartBear donated the specification to the Linux Foundation, establishing the OpenAPI Initiative (OAI) and a formal, community-driven governance model that anyone can participate in.

An OpenAPI document can be written in JSON or YAML and typically defines elements such as: Info, Contact, License, Servers, Components, Paths and Operations, Parameters, Request Bodies, Media Types and Encoding, Responses, Callbacks, Examples, Links, Headers, Tags, Schemas, and Security.
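
To make that structure concrete, here is a minimal sketch of an OpenAPI 3.1 description built as a Python dictionary and emitted as YAML; the API title, path, and schema names are hypothetical placeholders rather than anything defined by the specification, and real descriptions are usually authored directly in YAML or JSON.

  # A minimal, hypothetical OpenAPI 3.1 description sketched as a Python dict.
  # Requires the PyYAML package (pip install pyyaml).
  import yaml

  openapi_doc = {
      "openapi": "3.1.0",
      "info": {"title": "Orders API", "version": "1.0.0"},
      "servers": [{"url": "https://api.example.com/v1"}],
      "paths": {
          "/orders": {
              "get": {
                  "summary": "List orders",
                  "responses": {
                      "200": {
                          "description": "A list of orders",
                          "content": {
                              "application/json": {
                                  "schema": {
                                      "type": "array",
                                      "items": {"$ref": "#/components/schemas/Order"},
                                  }
                              }
                          },
                      }
                  },
              }
          }
      },
      "components": {
          "schemas": {
              "Order": {
                  "type": "object",
                  "properties": {"id": {"type": "string"}, "total": {"type": "number"}},
              }
          }
      },
  }

  print(yaml.safe_dump(openapi_doc, sort_keys=False))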

OpenAPI has an active GitHub organization, blog, LinkedIn page, and Slack channel to encourage community participation. In addition, OAI membership helps fund projects and events that drive awareness and adoption.

The OpenAPI Specification can be used alongside two other OAI specifications: (1) the Arazzo specification for defining API-driven workflows, and (2) OpenAPI Overlays, which allow additional information to be overlaid onto an OpenAPI document.

License: Apache

Tags: HTTP APIs, Webhooks

Properties: Info, Contact, License, Servers, Components, Paths and Operations, Parameters, Request Bodies, Media Types and Encoding, Responses, Callbacks, Examples, Links, Headers, Tags, Schemas, and Security

Website: https://www.openapis.org

8.2 - OpenAPI Overlays

The Overlay Specification is an auxiliary standard that complements the OpenAPI Specification. An OpenAPI description defines API operations, data structures, and metadata—the overall shape of an API. An Overlay lists a series of repeatable changes to apply to a given OpenAPI description, enabling transformations as part of your API workflows.

Define metadata, operations, and data structures for overlaying on top of OpenAPI.

The Overlay Specification is an auxiliary standard that complements the OpenAPI Specification. An OpenAPI description defines API operations, data structures, and metadata—the overall shape of an API. An Overlay lists a series of repeatable changes to apply to a given OpenAPI description, enabling transformations as part of your API workflows.

OpenAPI Overlays emerged from the need to adapt APIs for varied use cases, from improving developer experience to localizing documentation. The first version was recently released, and the roadmap is being developed within the OpenAPI Initiative.

The specification provides three constructs for augmenting an OpenAPI description: Info, Overlays, and Actions. How these are applied is being worked out across different tools and industries to accommodate the diversity of APIs being delivered.
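
As a rough illustration of those constructs, the sketch below expresses a small Overlay document as a Python dictionary; the JSONPath targets and description text are hypothetical examples rather than values taken from the specification.

  # A hypothetical Overlay document that updates one operation and removes another
  # from the OpenAPI description it is applied against.
  overlay_doc = {
      "overlay": "1.0.0",
      "info": {"title": "Order API adjustments", "version": "1.0.0"},
      "actions": [
          {
              "target": "$.paths['/orders'].get",
              "update": {
                  "description": "Returns the orders for the authenticated account."
              },
          },
          {
              "target": "$.paths['/internal-orders']",
              "remove": True,
          },
      ],
  }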

To get involved, participate via the GitHub repository, where you’ll find discussions, meeting notes, and related topics. There’s also a dedicated channel within the broader OpenAPI Initiative Slack.

OpenAPI Overlays offer a robust way to manage the complexity of producing and consuming APIs across industries, regions, and domains. As the specification matures, it presents a strong opportunity to ensure documentation, mocks, examples, code generation, tests, and other artifacts carry the right context for different situations.

License: Apache License

Tags: Overlays

Properties: info, overlays, and actions

Website: https://spec.openapis.org/overlay/v1.0.0.html

Standards: JSON Schema

8.3 - Arazzo

The Arazzo Specification is a community-driven, open standard within the OpenAPI Initiative (a Linux Foundation Collaborative Project). It defines a programming-language-agnostic way to express sequences of calls and the dependencies between them to achieve a specific outcome.

Describing your business processes and workflows using OpenAPI.

The Arazzo Specification is a community-driven, open standard within the OpenAPI Initiative (a Linux Foundation Collaborative Project). It defines a programming-language-agnostic way to express sequences of calls and the dependencies between them to achieve a specific outcome.

Arazzo emerged from a need identified in the OpenAPI community for orchestration and automation across APIs described with OpenAPI. Version 1 of the specification is available, and work on future iterations is guided by a public roadmap.

With Arazzo, you can define elements such as: Info, Sources, Workflows, Steps, Parameters, Success Actions, Failure Actions, Components, Reusables, Criteria, Request Bodies, and Payload Replacements—providing a consistent approach to delivering a wide range of automation outcomes.
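
For illustration, here is a minimal Arazzo-style workflow sketched as a Python dictionary; the source description URL, operation IDs, and success criteria are hypothetical and would normally reference a real OpenAPI description.

  # A hypothetical Arazzo workflow chaining two operations from one OpenAPI source.
  arazzo_doc = {
      "arazzo": "1.0.0",
      "info": {"title": "Place an order", "version": "1.0.0"},
      "sourceDescriptions": [
          {"name": "ordersApi", "url": "https://example.com/openapi.yaml", "type": "openapi"}
      ],
      "workflows": [
          {
              "workflowId": "placeOrder",
              "steps": [
                  {
                      "stepId": "createOrder",
                      "operationId": "createOrder",
                      "successCriteria": [{"condition": "$statusCode == 201"}],
                      "outputs": {"orderId": "$response.body#/id"},
                  },
                  {
                      "stepId": "getOrder",
                      "operationId": "getOrder",
                      "parameters": [
                          {
                              "name": "orderId",
                              "in": "path",
                              "value": "$steps.createOrder.outputs.orderId",
                          }
                      ],
                  },
              ],
          }
      ],
  }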

You can engage with the Arazzo community via the GitHub repository for each version and participate in GitHub Discussions to stay current on meetings and interact with the specification’s stewards and the broader community.

Arazzo is the logical layer on top of OpenAPI: it goes beyond documentation, mocking, and SDKs to focus on defining real business workflows that use APIs. Together, Arazzo and OpenAPI help align API operations with the rest of the business.

License: Apache 2.0

Tags: Workflows, Automation

Properties: Info, Source, Workflows, Steps, Parameters, Success Actions, Failure Actions, Components, Reusable, Criterion, Request Bodies, and Payload Replacements

Website: https://spec.openapis.org/arazzo/latest.html

8.4 - AsyncAPI

AsyncAPI is an open-source, protocol-agnostic specification for describing event-driven APIs and message-driven applications. It serves as the OpenAPI of the asynchronous, event-driven world—overlapping with, and often going beyond, what OpenAPI covers.

Describing the surface area of your event-driven infrastructure.

AsyncAPI is an open-source, protocol-agnostic specification for describing event-driven APIs and message-driven applications. It serves as the OpenAPI of the asynchronous, event-driven world—overlapping with, and often going beyond, what OpenAPI covers.

The specification began as an open-source side project and was later donated to the Linux Foundation after the team joined Postman, establishing it as a standard with formal governance.

AsyncAPI lets you define servers, producers and consumers, channels, protocols, and messages used in event-driven API operations—providing a common, tool-friendly way to describe the surface area of event-driven APIs.
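
Here is a rough sketch of that surface area, written as a Python dictionary in the shape of an AsyncAPI 3.0 document; the Kafka server, channel address, and message payload are hypothetical.

  # A hypothetical AsyncAPI 3.0 description for a Kafka-backed order events channel.
  asyncapi_doc = {
      "asyncapi": "3.0.0",
      "info": {"title": "Order Events", "version": "1.0.0"},
      "servers": {
          "production": {"host": "kafka.example.com:9092", "protocol": "kafka"}
      },
      "channels": {
          "orderCreated": {
              "address": "orders.created",
              "messages": {
                  "OrderCreated": {
                      "payload": {
                          "type": "object",
                          "properties": {"orderId": {"type": "string"}},
                      }
                  }
              },
          }
      },
      "operations": {
          "publishOrderCreated": {
              "action": "send",
              "channel": {"$ref": "#/channels/orderCreated"},
          }
      },
  }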

To get involved, visit the AsyncAPI GitHub repository and blog, follow the LinkedIn page, tune into the YouTube or Twitch channels, and join the conversation in the community Slack.

AsyncAPI can be used to define HTTP APIs much like OpenAPI, and it further supports multiple protocols such as Pub/Sub, Kafka, MQTT, NATS, Redis, SNS, Solace, AMQP, JMS, and WebSockets—making it useful across many approaches to delivering APIs.

License: Apache

Tags: Event-Driven

Properties: Servers, Producers, Consumers, Channels, Protocols, and Messages

Website: https://www.asyncapi.com

8.5 - APIOps Cycles

APIOps Cycles is a Lean and service design–inspired methodology for designing, improving, and scaling APIs throughout their entire lifecycle. Developed since 2017 and continuously refined through community contributions and real-world projects across industries, APIOps Cycles provides a structured approach to API strategy using a distinctive metro map visualization where stations and lines represent critical aspects of the API lifecycle.

Aligning engineering with products when it comes to APIs.

The method is built around a collection of strategic canvas templates that help teams systematically address everything from customer journey mapping and value proposition definition to domain modeling, capacity planning, and risk assessment. As an open-source framework released under the Creative Commons Attribution–ShareAlike 4.0 license, APIOps Cycles is freely available for anyone to use, adapt, and share, with the complete method consisting of localized JSON and markdown files that power both the official website and open tooling available as an npm package. Whether you’re a developer integrating the method into your products and services, or an organization seeking to establish API product strategy and best practices, APIOps Cycles offers a proven, community-backed approach supported by a network of partners who can provide guidance and expertise in implementing the methodology effectively.

License: Creative Commons Attribution–ShareAlike 4.0

Tags: Products, Operations

Website: https://www.apiopscycles.com/

APIOps Cycles Canvases Outline

  1. Customer Journey Canvas
  • Persona
  • Customer Discovers Need
  • Customer Need Is Resolved
  • Journey Steps
  • Pains
  • Gains
  • Inputs & Outputs
  • Interaction & Processing Rules
  2. API Value Proposition Canvas
  • Tasks
  • Gain Enabling Features
  • Pain Relieving Features
  • API Products
  3. API Business Model Canvas
  • API Value Proposition
  • API Consumer Segments
  • Developer Relations
  • Channels
  • Key Resources
  • Key Activities
  • Key Partners
  • Benefits
  • Costs
  4. Domain Canvas
  • Selected Customer Journey Steps
  • Core Entities & Business Meaning
  • Attributes & Business Importance
  • Relationships Between Entities
  • Business, Compliance & Integrity Rules
  • Security & Privacy Considerations
  5. Interaction Canvas
  • CRUD Interactions
  • CRUD Input & Output Models
  • CRUD Processing & Validation
  • Query-Driven Interactions
  • Query-Driven Input & Output Models
  • Query-Driven Processing & Validation
  • Command-Driven Interactions
  • Command-Driven Input & Output Models
  • Command-Driven Processing & Validation
  • Event-Driven Interactions
  • Event-Driven Input & Output Models
  • Event-Driven Processing & Validation
  6. REST Canvas
  • API Resources
  • API Resource Model
  • API Verbs
  • API Verb Example
  7. GraphQL Canvas
  • API Name
  • Consumer Goals
  • Key Types
  • Relationships
  • Queries
  • Mutations
  • Subscriptions
  • Authorization Rules
  • Consumer Constraints
  • Notes / Open Questions
  8. Event Canvas
  • User Task / Trigger
  • Input / Event Payload
  • Processing / Logic
  • Output / Event Result
  9. Capacity Canvas
  • Current Business Volumes
  • Future Consumption Trends
  • Peak Load and Availability Requirements
  • Caching Strategies
  • Rate Limiting Strategies
  • Scaling Strategies
  10. Business Impact Canvas
  • Availability Risks
  • Mitigate Availability Risks
  • Security Risks
  • Mitigate Security Risks
  • Data Risks
  • Mitigate Data Risks
  11. Locations Canvas
  • Location Groups
  • Location Group Characteristics
  • Locations
  • Location Characteristics
  • Location Distances
  • Location Distance Characteristics
  • Location Endpoints
  • Location Endpoint Characteristics

8.6 - Postman Collections

A Postman Collection is a portable JSON artifact that organizes one or more API requests—plus their params, headers, auth, scripts, and examples—so you can run, share, and automate them in the Postman desktop or web client application. Collections can include folders, collection- and environment-level variables, pre-request and test scripts, examples, mock server definitions, and documentation.

Executable artifact for automating API requests and responses for testing.

A Postman Collection is a portable JSON artifact that organizes one or more API requests—plus their params, headers, auth, scripts, and examples—so you can run, share, and automate them in the Postman desktop or web client application. Collections can include folders, collection- and environment-level variables, pre-request and test scripts, examples, mock server definitions, and documentation.

Postman Collections started as a simple way to save and share API requests in the early Postman client (2013), then grew into a formal JSON format with the v1 schema published in 2015. The format then stabilized as v2.0.0 and shortly after as v2.1.0 in 2017, which remains the common export/import version today.
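
Here is a minimal sketch of that v2.1 shape, built as a Python dictionary and printed as JSON; the collection name, request, and variable are hypothetical.

  # A hypothetical Postman Collection (v2.1 format) containing a single GET request.
  import json

  collection = {
      "info": {
          "name": "Orders API",
          "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
      },
      "item": [
          {
              "name": "List orders",
              "request": {
                  "method": "GET",
                  "header": [{"key": "Accept", "value": "application/json"}],
                  "url": "{{baseUrl}}/orders",
              },
          }
      ],
      "variable": [{"key": "baseUrl", "value": "https://api.example.com/v1"}],
  }

  print(json.dumps(collection, indent=2))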

Owner: Postman

License: Apache 2.0

Properties: Metadata, Requests, Scripts, Variables, Authentication, Methods, Headers, URLs, Bodies, Events, Responses

Website: https://postman.com

8.7 - Postman Environments

Postman environments are collections of variables that let you easily switch between different configurations (like development, staging, and production server URLs) without manually changing values throughout your API requests.

Storing variables for running along with Postman Collections.

Postman environments are a powerful feature that allow you to manage different sets of variables for your API testing and development workflow. An environment is essentially a named collection of key-value pairs (variables) that you can switch between depending on your context—such as development, staging, or production. For example, you might have different base URLs, authentication tokens, or API keys for each environment. Instead of manually updating these values in every request when you switch from testing locally to hitting a production server, you can simply select a different environment from a dropdown menu, and all your requests will automatically use the appropriate variables. This makes it much easier to maintain consistency, avoid errors, and streamline your workflow when working across multiple environments or sharing collections with team members.
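
Here is a minimal sketch of what an exported environment can look like, shown as a Python dictionary; the variable names and values are hypothetical, and a real export includes additional identifiers.

  # A hypothetical Postman environment with one plain and one secret variable.
  environment = {
      "name": "Staging",
      "values": [
          {"key": "baseUrl", "value": "https://staging.example.com/v1", "enabled": True},
          {"key": "apiKey", "value": "REPLACE_ME", "type": "secret", "enabled": True},
      ],
  }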

Owner: Postman

License: Apache 2.0

Properties: Variables, Variable name, Initial value, Current value, Type, Environment Name, Environment ID, Scope, State

Website: https://learning.postman.com/docs/sending-requests/variables/managing-environments/

8.8 - Backstage Software Catalog Format

Backstage’s Software Catalog format is a structured, YAML-based specification that describes software components, services, APIs, resources, and their relationships, enabling teams to discover, document, and manage ownership and lifecycle information in a centralized developer portal.

The core catalog format for the Backstage environment.

The Backstage Software Catalog provides a centralized system for registering, organizing, and discovering software components across an organization. By using a standardized, YAML-based format, it captures metadata such as ownership, dependencies, lifecycle stage, documentation links, and operational context for services, APIs, libraries, and infrastructure resources. This allows teams to understand how systems fit together, who is responsible for them, and how they should be operated, while enabling automation, governance, and consistent developer experiences through a single, searchable developer portal.
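
Here is a minimal sketch of a catalog entry, expressed as a Python dictionary mirroring the YAML descriptor format; the component name, owner, and API reference are hypothetical.

  # A hypothetical Backstage catalog entry describing a service and the API it provides.
  catalog_entry = {
      "apiVersion": "backstage.io/v1alpha1",
      "kind": "Component",
      "metadata": {
          "name": "orders-service",
          "description": "Handles order creation and fulfillment.",
      },
      "spec": {
          "type": "service",
          "lifecycle": "production",
          "owner": "team-commerce",
          "providesApis": ["orders-api"],
      },
  }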

Website: https://backstage.io/docs/features/software-catalog/descriptor-format/

8.9 - Bruno Collection

Bruno collections are organized sets of API requests and environments within the Bruno API client, allowing developers to structure, test, and share their API workflows efficiently.

Open source client specification.

Bruno collections are structured groups of API requests, variables, and environments used within the Bruno API client to help developers organize and manage their API workflows. Each collection acts as a self-contained workspace where you can store requests, define authentication, set environment values, document behaviors, and run tests. Designed with a filesystem-first approach, Bruno collections are easy to version-control and share, making them especially useful for teams collaborating on API development or maintaining consistent testing practices across environments.

License: MIT license

Tags: Clients, Executable

Properties: Name, Type, Version, Description, Variables, Environment, Folders, Requests, Auth, Headers, Scripts, Settings

Website: https://www.usebruno.com/

8.10 - Bruno Environment

A Bruno environment is a set of key–value variables that let you switch configurations—such as URLs, tokens, or credentials—so you can run the same API requests across different contexts like development, staging, or production.

An open-source client environment.

A Bruno environment is a configurable set of key–value variables that allows you to run the same API requests across different deployment contexts, such as local development, staging, and production. Environments typically store values like base URLs, authentication tokens, headers, or other parameters that may change depending on where an API is being tested. By separating these values from the requests themselves, Bruno makes it easy to switch contexts, maintain cleaner collections, and ensure consistency when collaborating with others or automating API workflows.

License: MIT license

Tags: Clients, Environments

Properties: Name, Variables, Enabled, Secret, Ephemeral, Persisted Value

Website: https://www.usebruno.com/

8.11 - Open Collections

A modern, developer-first specification pioneered by Bruno for defining and sharing API collections. Designed for simplicity and collaboration.

Open-source collection format.

The OpenCollection Specification is a format for describing API collections, including requests, authentication, variables, and scripts. This specification enables tools to understand and work with API collections in a standardized way.

License: Apache License

Tags: Collections

Website: https://www.opencollection.com/

8.12 - gRPC

gRPC is an open-source, high-performance remote procedure call (RPC) framework originally developed at Google and now hosted by the Cloud Native Computing Foundation. Services and their strongly typed messages are defined in Protocol Buffers, and calls are carried over HTTP/2, supporting unary requests as well as client-, server-, and bidirectional streaming.

High-performance RPC framework built on HTTP/2 and Protocol Buffers.

gRPC is an open-source, high-performance remote procedure call (RPC) framework originally developed at Google and now hosted by the Cloud Native Computing Foundation. Teams define services and messages in .proto files, then generate client and server code for languages such as Go, Java, Python, C++, and many others. Built-in support for deadlines, cancellation, metadata, and pluggable authentication makes gRPC a common choice for service-to-service communication in microservice architectures.

License: Apache 2.0

Tags: RPC

Properties: services, methods, messages, streaming, metadata, deadlines, cancellation, status codes, channels, and interceptors

Website: https://grpc.io/

Standards: HTTP/2, Protocol Buffers

8.13 - JSON RPC

JSON-RPC is a lightweight, transport-agnostic remote procedure call (RPC) protocol that uses JSON to encode requests and responses. A client sends an object with jsonrpc “2.0”, a method name, optional params (positional or named), and an id; the server replies with either a result or an error (including standardized error codes), and it also supports notifications (no id, no response) and request batching.

Lightweight transport-agnostic remote procedure call protocol.

JSON-RPC is a lightweight, transport-agnostic remote procedure call (RPC) protocol that uses JSON to encode requests and responses: a client sends an object with jsonrpc:“2.0”, a method name, optional params (positional or named), and an id; the server replies with either a result or an error (including standardized error codes), and it also supports notifications (no id, no response) and request batching.
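
The request and response shapes are small enough to sketch directly; the example below sends a hypothetical method call over HTTP using the Python requests library, with the endpoint URL and method name made up for illustration.

  # A hypothetical JSON-RPC 2.0 call over HTTP; the endpoint and method are illustrative only.
  import requests

  request_body = {
      "jsonrpc": "2.0",
      "method": "orders.get",
      "params": {"orderId": "1234"},
      "id": 1,
  }

  response = requests.post("https://api.example.com/rpc", json=request_body, timeout=10)
  reply = response.json()

  if "result" in reply:
      print("result:", reply["result"])
  else:
      print("error:", reply["error"]["code"], reply["error"]["message"])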

JSON-RPC emerged in the mid-2000s as a community-driven, lightweight RPC protocol using JSON, with an informal 1.0 spec (c. 2005) that defined simple request/response messaging and “notifications” (no reply). A 1.1 working draft (around 2008) tried to broaden and formalize features but never became canonical. The widely adopted JSON-RPC 2.0 specification (2010) simplified and standardized the model—introducing the mandatory “jsonrpc”:“2.0” version tag, clearer error objects, support for both positional and named parameters, and request batching—while remaining transport-agnostic (HTTP, WebSocket, pipes, etc.).

License: Apache License 2.0 or MIT License

Tags: RPC

Properties: methods, parameters, identifier, results, errors, codes, messages, data

Website: https://www.jsonrpc.org/

Forum: https://groups.google.com/g/json-rpc

8.14 - Model Context Protocol (MCP)

MCP (Model Context Protocol) is an open protocol that standardizes how applications provide context to large language models (LLMs). It offers a consistent way to connect AI models to diverse data sources and tools, enabling agents and complex workflows that link models to the outside world.

Allowing applications to connect to large language models (LLMs).

MCP (Model Context Protocol) is an open protocol that standardizes how applications provide context to large language models (LLMs). It offers a consistent way to connect AI models to diverse data sources and tools, enabling agents and complex workflows that link models to the outside world.

Introduced by Anthropic as an open-source effort, MCP addresses the challenge of integrating AI models with external tools and data. It aims to serve as a universal “USB port” for AI, allowing models to access real-time information and perform actions.

MCP defines concepts and properties such as hosts, clients, servers, protocol negotiation, lifecycle, transports, authorization, resources, prompts, tools, sampling, roots, elicitation, progress, cancellation, errors, and logging—providing a standardized approach to connecting applications with LLMs.
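
Because MCP messages ride on JSON-RPC 2.0, the basic exchange can be sketched as plain dictionaries; the tool name and arguments below are hypothetical, and a real client would first negotiate capabilities with the server during initialization.

  # A hypothetical MCP-style exchange, shown as raw JSON-RPC 2.0 messages.
  list_tools_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

  call_tool_request = {
      "jsonrpc": "2.0",
      "id": 2,
      "method": "tools/call",
      "params": {
          "name": "search_orders",  # hypothetical tool exposed by an MCP server
          "arguments": {"customerId": "42"},
      },
  }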

The MCP community organizes around a GitHub repository (with issues and discussions), plus a Discord, blog, and RSS feed to track updates and changes to the specification.

MCP is seeing growing adoption among API and tooling providers for agent interactions. Many related API and AI specifications reference, integrate with, or overlap with MCP, even though the project remains an open-source protocol stewarded by a single company and has not yet been contributed to a foundation.

Owner: Anthropic

License: MIT License

Tags: agents, workflows

Properties: hosts, clients, servers, protocols, negotiation, lifecycle, transports, authorization, resources, prompts, tools, sampling, roots, elicitation, progress, cancellation, errors, logging

Website: https://modelcontextprotocol.io/

Standards: JSON-RPC 2.0, JSON Schema

8.15 - Apache Parquet

Apache Parquet is a columnar storage file format designed for efficient data storage and retrieval in big data processing frameworks, optimizing for analytics workloads by storing data column-by-column rather than row-by-row, which enables compression, encoding, and query performance optimizations.

Compact binary data serialization.

Apache Parquet is a columnar storage file format specifically designed for efficient data storage and processing in big data analytics environments, developed as a collaboration between Twitter and Cloudera in 2013 and now part of the Apache Software Foundation. Unlike traditional row-oriented formats (like CSV or Avro) that store data records sequentially, Parquet organizes data by columns, grouping all values from the same column together in storage. This columnar approach provides significant advantages for analytical workloads where queries typically access only a subset of columns from wide tables—instead of reading entire rows and discarding unneeded columns, Parquet allows systems to read only the specific columns required for a query, dramatically reducing I/O operations and improving query performance. The format also enables highly effective compression since values in the same column tend to have similar characteristics and patterns, allowing compression algorithms like Snappy, Gzip, LZO, and Zstandard to achieve much better compression ratios than they would on mixed-type row data. Parquet files are self-describing, containing schema information and metadata that allow any processing system to understand the data structure without external schema definitions.
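
A small sketch using the pyarrow library shows this column-oriented behavior: a file is written once, and then only a single column is read back from it.

  # A minimal pyarrow example: write a Parquet file, then read back one column.
  import pyarrow as pa
  import pyarrow.parquet as pq

  table = pa.table({
      "order_id": ["a-1", "a-2", "a-3"],
      "total": [19.99, 5.00, 42.50],
      "country": ["US", "DE", "JP"],
  })

  pq.write_table(table, "orders.parquet", compression="snappy")

  # Only the 'total' column is read; the other columns are skipped.
  totals = pq.read_table("orders.parquet", columns=["total"])
  print(totals.to_pydict())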

Parquet has become the de facto standard for analytical data storage in modern data lakes and big data ecosystems, with native support across virtually all major data processing frameworks including Apache Spark, Apache Hive, Apache Impala, Presto, Trino, Apache Drill, and cloud data warehouses like Amazon Athena, Google BigQuery, Azure Synapse, and Snowflake. The format supports rich data types including nested and repeated structures (arrays, maps, and complex records), making it ideal for storing semi-structured data from JSON or Avro sources while maintaining query efficiency. Parquet’s internal structure uses techniques like dictionary encoding for low-cardinality columns, bit-packing for small integers, run-length encoding for repeated values, and delta encoding for sorted data, all of which contribute to both storage efficiency and query speed. The format includes column statistics (min/max values, null counts) stored in file metadata that enable predicate pushdown—allowing query engines to skip entire row groups or files that don’t contain relevant data based on filter conditions. This combination of columnar organization, advanced encoding schemes, efficient compression, predicate pushdown, and schema evolution support makes Parquet the optimal choice for data warehouse tables, analytical datasets, machine learning feature stores, time-series data, and any scenario where fast analytical queries over large datasets are required, often achieving 10-100x improvements in query performance and storage efficiency compared to row-oriented formats.

License: Apache 2.0

Tags: Data, Serialization, Binary

Properties: Row Groups, Column Chunks, Data Pages, Dictionary Pages, File Metadata, Schemas, Logical Types, Column Statistics, Encodings, Compression Codecs, Predicate Pushdown, and Nested Types

Website: https://parquet.apache.org/

8.16 - Avro

Apache Avro is a data serialization system that provides compact binary encoding of structured data along with schema definitions, enabling efficient data exchange and storage with built-in schema evolution capabilities that allow data structures to change over time while maintaining compatibility between different versions.

Compact binary data serialization.

Apache Avro is a data serialization framework developed within the Apache Hadoop project that provides a compact, fast binary data format along with rich data structures and schema definitions. Created by Doug Cutting (the creator of Hadoop) in 2009, Avro addresses the need for efficient data serialization in big data ecosystems where massive volumes of data must be stored and transmitted efficiently. Unlike JSON or XML which use verbose text-based formats, Avro serializes data into a compact binary representation that significantly reduces storage requirements and network bandwidth while maintaining fast serialization and deserialization performance. Avro schemas are defined using JSON, making them human-readable and language-independent, and these schemas travel with the data (either embedded in files or referenced through a schema registry), ensuring that any system can correctly interpret the serialized data without prior knowledge of its structure. This self-describing nature makes Avro particularly valuable in distributed systems where different services written in different languages need to exchange data reliably.

One of Avro’s most powerful features is its robust support for schema evolution, which allows data schemas to change over time without breaking compatibility between producers and consumers of that data. Avro supports both forward compatibility (new code can read old data) and backward compatibility (old code can read new data) through features like default values for fields, optional fields, and union types. This makes it ideal for long-lived data storage and streaming systems where data structures evolve as business requirements change. Avro has become a cornerstone technology in the big data ecosystem, widely used with Apache Kafka for streaming data pipelines (where the Confluent Schema Registry manages Avro schemas), Apache Spark for data processing, Apache Hive for data warehousing, and as the serialization format for Hadoop’s remote procedure calls. Avro supports rich data types including primitive types (null, boolean, int, long, float, double, bytes, string), complex types (records, enums, arrays, maps, unions, fixed), and logical types (decimals, dates, timestamps), and provides code generation capabilities that create type-safe classes in languages like Java, C++, C#, Python, Ruby, and PHP. Its combination of compact binary encoding, strong schema support, language independence, and schema evolution capabilities makes Avro the preferred serialization format for many data-intensive applications, particularly in streaming architectures and data lakes.
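
A minimal sketch using the fastavro library illustrates the basic write-and-read cycle; the record fields are hypothetical, and the defaulted currency field shows how new fields can be added without breaking older data.

  # A minimal fastavro example: define a schema, write records, read them back.
  from fastavro import writer, reader, parse_schema

  schema = parse_schema({
      "type": "record",
      "name": "Order",
      "namespace": "example",
      "fields": [
          {"name": "order_id", "type": "string"},
          {"name": "total", "type": "double"},
          # A default value keeps older files readable after this field is added.
          {"name": "currency", "type": "string", "default": "USD"},
      ],
  })

  records = [
      {"order_id": "a-1", "total": 19.99, "currency": "USD"},
      {"order_id": "a-2", "total": 5.00, "currency": "EUR"},
  ]

  with open("orders.avro", "wb") as out:
      writer(out, schema, records)

  with open("orders.avro", "rb") as avro_file:
      for record in reader(avro_file):
          print(record)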

License: Apache 2.0

Tags: Data, Serialization, Binary

Properties: Schemas, Records, Enums, Arrays, Maps, Unions, Fixed Types, Logical Types, Default Values, Object Container Files, Compression Codecs, and RPC Protocols

Website: https://avro.apache.org/

8.17 - Agent2Agent

The Agent2Agent (A2A) Protocol is an open standard for communication and interoperability among independent—often opaque—AI agent systems. Because agents may be built with different frameworks, languages, and vendors, A2A provides a common language and interaction model.

Communicating the interoperability between systems using AI agents.

The Agent2Agent (A2A) Protocol is an open standard for communication and interoperability among independent—often opaque—AI agent systems. Because agents may be built with different frameworks, languages, and vendors, A2A provides a common language and interaction model.

License: Apache 2.0

Tags: agents

Properties: client, servers, cards, messages, tasks, parts, artifacts, streaming, push notifications, context, extensions, transport, negotiation, authentication, authorization, and discovery

Website: https://a2a-protocol.org/latest/

Standards: JSON-RPC 2.0, gRPC

8.18 - JSON Schema

JSON Schema is a vocabulary for annotating and validating JSON documents. It defines the structure, content, and constraints of data—often authored in either JSON or YAML—and can be leveraged by documentation generators, validators, and other tooling.

Annotating and validating JSON artifacts.

JSON Schema is a vocabulary for annotating and validating JSON documents. It defines the structure, content, and constraints of data—often authored in either JSON or YAML—and can be leveraged by documentation generators, validators, and other tooling.

The specification traces back to early proposals by Kris Zyp in 2007 and has evolved through draft-04, draft-06, and draft-07 to the current 2020-12 release.

JSON Schema provides a rich set of keywords—such as title, description, type, properties, required, additionalProperties, minimum, maximum, exclusiveMinimum, exclusiveMaximum, default, enum, pattern, items, allOf, anyOf, oneOf, not, examples, and $ref—to describe and validate data used in business operations.
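
A short sketch using the jsonschema Python package shows several of those keywords in action; the order schema and instance below are hypothetical.

  # A minimal example validating a hypothetical order object with the jsonschema package.
  from jsonschema import validate, ValidationError

  order_schema = {
      "type": "object",
      "properties": {
          "orderId": {"type": "string"},
          "total": {"type": "number", "minimum": 0},
          "status": {"enum": ["pending", "paid", "shipped"]},
      },
      "required": ["orderId", "total"],
      "additionalProperties": False,
  }

  order = {"orderId": "a-1", "total": 19.99, "status": "paid"}

  try:
      validate(instance=order, schema=order_schema)
      print("order is valid")
  except ValidationError as err:
      print("validation failed:", err.message)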

To get involved with the community, visit the JSON Schema GitHub organization, subscribe to the blog via RSS, join discussions and meetings in the Slack workspace, and follow updates on LinkedIn.

JSON Schema is a foundational standard used by many other specifications, tools, and services. It’s the workhorse for defining and validating the digital data that keeps modern businesses running.

License: Academic Free License version 3.0

Tags: Schema, Validation

Properties: schema, title, description, type, properties, required, additionalProperties, minimum, maximum, exclusiveMinimum, exclusiveMaximum, default, enum, pattern, items, allOf, anyOf, oneOf, not, examples, and $ref

Website: https://json-schema.org

8.19 - Protocol Buffers

Protocol Buffers (protobuf) are Google’s language-neutral, platform-neutral way to define structured data and serialize it efficiently (small, fast). You write a schema in a .proto file, generate code for your language (Go, Java, Python, JS, etc.), and use the generated classes to read/write binary messages.

Fast binary serialized structured data.

Protocol Buffers (protobuf) are Google’s language-neutral, platform-neutral way to define structured data and serialize it efficiently (small, fast). You write a schema in a .proto file, generate code for your language (Go, Java, Python, JS, etc.), and use the generated classes to read/write binary messages.
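
A rough sketch of that workflow, assuming a hypothetical person.proto compiled with protoc into a Python module named person_pb2; the message and field names are illustrative.

```python
# person.proto (illustrative):
#
#   syntax = "proto3";
#   message Person {
#     string name  = 1;
#     int32  id    = 2;
#     string email = 3;
#   }
#
# Compile with: protoc --python_out=. person.proto
# which generates person_pb2.py containing the Person class used below.

import person_pb2  # generated code; assumes the compile step above has been run

person = person_pb2.Person(name="Ada Lovelace", id=7, email="ada@example.com")

payload = person.SerializeToString()   # compact binary encoding
print(len(payload), "bytes on the wire")

decoded = person_pb2.Person()
decoded.ParseFromString(payload)       # read the binary message back
print(decoded.name, decoded.id)
```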

Protocol Buffers began inside Google in the early 2000s as an internal, compact, schema-driven serialization format; in 2008 Google open-sourced it as proto2. Most recently in 2023, Google introduced “Protobuf Editions” to evolve semantics without fragmenting the language into proto2 vs. proto3, while the project continues to refine tooling, compatibility guidance, and release processes across a broad open-source community.

Owner: Google

License: BSD-3-Clause License

Tags: Schema, Data, Binary, Serialization

Properties: messages, types, fields, cardinality, comments, reserved values, scalars, defaults, enumerations, nested types, binary, unknown fields, oneof, maps, packages, and services

Website: https://protobuf.dev/

8.20 - Schema.org

Schema.org is a collaborative, community-driven vocabulary (launched in 2011 by Google, Microsoft, Yahoo!, and Yandex) that defines shared types and properties to describe things on the web—people, places, products, events, and more—so search engines and other consumers can understand page content.

Community-driven schema vocabulary for people, places, and things.

Schema.org is a collaborative, community-driven vocabulary that defines shared types and properties to describe things on the web—people, places, products, events, and more—so search engines and other consumers can understand page content. Publishers annotate pages using formats like JSON-LD (now the common choice), Microdata, or RDFa to express this structured data, which enables features such as rich results, knowledge panels, and better content discovery. The project maintains core and extension vocabularies, evolves through open proposals and discussion, and focuses on practical, interoperable semantics rather than being tied to a single standard body.
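
As a small illustration of that annotation style, the Python sketch below assembles a Schema.org Event description as JSON-LD. The event details are invented; in practice the resulting JSON would be embedded in a page inside a script tag of type application/ld+json.

```python
import json

# Build a Schema.org description of an event as JSON-LD (values are invented).
event = {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "API Governance Meetup",
    "startDate": "2025-03-12T18:00:00-05:00",
    "location": {
        "@type": "Place",
        "name": "Community Hall",
        "address": "123 Main Street",
    },
    "organizer": {"@type": "Organization", "name": "Example Org"},
}

# On a web page this would typically appear as:
# <script type="application/ld+json"> ...the JSON printed below... </script>
print(json.dumps(event, indent=2))
```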

License: Creative Commons Attribution-ShareAlike License (CC BY-SA 3.0)

Tags: Schema

Properties: Thing, Action, AchieveAction, LoseAction, TieAction, WinAction, AssessAction, ChooseAction, VoteAction, IgnoreAction, ReactAction, AgreeAction, DisagreeAction, DislikeAction, EndorseAction, LikeAction, WantAction, ReviewAction, ConsumeAction, DrinkAction, EatAction, InstallAction, ListenAction, PlayGameAction, ReadAction, UseAction, WearAction, ViewAction, WatchAction, ControlAction, ActivateAction, AuthenticateAction, DeactivateAction, LoginAction, ResetPasswordAction, ResumeAction, SuspendAction, CreateAction, CookAction, DrawAction, FilmAction, PaintAction, PhotographAction, WriteAction, FindAction, CheckAction, DiscoverAction, TrackAction, InteractAction, BefriendAction, CommunicateAction, AskAction, CheckInAction, CheckOutAction, CommentAction, InformAction, ConfirmAction, RsvpAction, InviteAction, ReplyAction, ShareAction, FollowAction, JoinAction, LeaveAction, MarryAction, RegisterAction, SubscribeAction, UnRegisterAction, MoveAction, ArriveAction, DepartAction, TravelAction, OrganizeAction, AllocateAction, AcceptAction, AssignAction, AuthorizeAction, RejectAction, ApplyAction, BookmarkAction, PlanAction, CancelAction, ReserveAction, ScheduleAction, PlayAction, ExerciseAction, PerformAction, SearchAction, SeekToAction, SolveMathAction, TradeAction, BuyAction, OrderAction, PayAction, PreOrderAction, QuoteAction, RentAction, SellAction, TipAction, TransferAction, BorrowAction, DonateAction, DownloadAction, GiveAction, LendAction, MoneyTransfer, ReceiveAction, ReturnAction, SendAction, TakeAction, UpdateAction, AddAction, InsertAction, AppendAction, PrependAction, DeleteAction, ReplaceAction

Website: https://schema.org/g/latest/

8.21 - JSON-LD

JSON-LD (JavaScript Object Notation for Linking Data) is a W3C standard for expressing linked data in JSON. It adds lightweight semantics to ordinary JSON so machines can understand what the data means, not just its shape—by mapping keys to globally unique identifiers (IRIs) via a @context. Common features include @id (identity), @type (class), and optional graph constructs (@graph).

Introducing semantics into JSON so machines can understand meaning.

JSON-LD (JavaScript Object Notation for Linking Data) is a W3C standard for expressing linked data in JSON. It adds lightweight semantics to ordinary JSON so machines can understand what the data means, not just its shape—by mapping keys to globally unique identifiers (IRIs) via a @context. Common features include @id (identity), @type (class), and optional graph constructs (@graph).
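
The sketch below shows how a @context maps ordinary JSON keys to IRIs, and how @id and @type add identity and classification. The Schema.org IRIs are real; the document itself is invented for illustration.

```python
import json

# An ordinary-looking JSON document made self-describing with JSON-LD keywords.
doc = {
    "@context": {
        "name": "http://schema.org/name",          # map the key "name" to an IRI
        "homepage": {"@id": "http://schema.org/url", "@type": "@id"},
    },
    "@id": "https://example.com/people/jane",      # globally unique identity for this node
    "@type": "http://schema.org/Person",           # class of the thing being described
    "name": "Jane Doe",
    "homepage": "https://example.com/jane",
}

print(json.dumps(doc, indent=2))
```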

Properties: base, containers, context, direction, graph, imports, included, language, lists, nests, prefixes, propagate, protected, reverse, set, types, values, versions, and vocabulary

Website: https://json-ld.org/

8.22 - Spectral

Spectral is an open-source API linter for enforcing style guides and best practices across JSON Schema, OpenAPI, and AsyncAPI documents. It helps teams ensure consistency, quality, and adherence to organizational standards in API design and development.

Enforcing style guides across JSON artifacts to govern schema.

Spectral is an open-source API linter for enforcing style guides and best practices across JSON Schema, OpenAPI, and AsyncAPI documents. It helps teams ensure consistency, quality, and adherence to organizational standards in API design and development.

While Spectral is a tool, its rules format is increasingly treated as a de facto standard. Spectral traces its roots to Speccy, an API linting engine created by Phil Sturgeon at WeWork. Phil later brought the concept to Stoplight, where Spectral and the next iteration of the rules format were developed; Stoplight was subsequently acquired by SmartBear.

With Spectral, you define rules and rulesets using properties such as given, then, description, message, severity, formats, recommended, and resolved. These can be applied to any JSON or YAML artifact, with primary adoption to date around OpenAPI and AsyncAPI.
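
A minimal sketch of such a ruleset, emitted from Python so the examples in this document share one language. The YAML would normally live in a .spectral.yaml file and be run with the Spectral CLI (for example, spectral lint openapi.yaml --ruleset .spectral.yaml); the rule name and message are our own invention.

```python
from pathlib import Path

# A small Spectral ruleset: extend the built-in OpenAPI rules and warn when
# an operation has no description. The rule name below is illustrative.
ruleset_yaml = """\
extends: ["spectral:oas"]
rules:
  operation-description-required:
    description: Every operation should explain what it does.
    message: "{{path}} is missing a description"
    severity: warn
    given: "$.paths[*][*]"
    resolved: true
    then:
      field: description
      function: truthy
"""

Path(".spectral.yaml").write_text(ruleset_yaml)
# Then, from a shell: spectral lint openapi.yaml --ruleset .spectral.yaml
```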

The project’s GitHub repository hosts active issues and discussions, largely focused on the CLI. Development continues under SmartBear, including expanding how rules are applied across API operations and support for Arazzo workflow use cases.

Most commonly, Spectral is used to lint and govern OpenAPI and AsyncAPI specifications during design and development. It is expanding into Arazzo workflows and can be applied to any standardized JSON or YAML artifact validated with JSON Schema—making it a flexible foundation for governance across the API lifecycle.

License: Apache

Tags: Rules, Governance

Properties: rules, rulesets, given, then, description, message, severity, formats, recommended, and resolved properties

GitHub: https://github.com/stoplightio/spectral

Standards: JSON Schema

8.23 - Vacuum

RuleSets tell vacuum which rules to run against each specification and how those rules should be evaluated. A RuleSet is a style guide, with each rule representing an individual requirement within the overall guide.

Enforcing style guides across JSON artifacts to govern schema.

Vacuum rules, in the context of API linting, are configuration definitions that specify quality and style requirements for OpenAPI specifications. RuleSets serve as comprehensive style guides where each individual rule represents a specific requirement that the API specification must meet. These rules are configured using YAML or JSON and follow the Spectral Ruleset model, making them fully compatible with Spectral rulesets while adding vacuum-specific enhancements like an id property for backward compatibility and flexible naming. A RuleSet contains a collection of rules that define what to check, where to check it, and how violations should be handled, allowing organizations to enforce consistent API design standards across their specifications.

Each rule within a RuleSet consists of several key components: a given property that uses JSONPath expressions (supporting both RFC 9535 and JSON Path Plus) to target specific sections of the OpenAPI document, a severity level (such as error, warning, or info) that indicates the importance of the rule, and a then clause that specifies which built-in function to apply and what field to evaluate. For example, a rule might target all tag objects in an API specification using $.tags[*] as the JSONPath expression, then apply the truthy function to verify that each tag has a description field populated. Built-in core functions like casing, truthy, and pattern provide the logic for evaluating whether specifications comply with the defined rules, enabling automated validation of API documentation quality, consistency, and adherence to organizational or industry standards.
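
To ground that walkthrough, here is a sketch of the tag-description rule described above, again emitted from Python. Vacuum reads the same Spectral-style YAML; the CLI invocation shown in the comment is an assumption about flag names, so check the vacuum documentation before relying on it.

```python
from pathlib import Path

# The rule walked through above: target every tag object in the specification
# and verify that its description field is populated, using the truthy function.
ruleset_yaml = """\
rules:
  tags-must-have-description:
    description: Every tag should have a description.
    severity: error
    given: "$.tags[*]"
    then:
      field: description
      function: truthy
"""

Path("tags-ruleset.yaml").write_text(ruleset_yaml)
# Then, from a shell (flag names are an assumption; see the vacuum docs):
#   vacuum lint -r tags-ruleset.yaml openapi.yaml
```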

Vacuum is a soft fork of Spectral: it keeps the base rules format for interoperability while taking the specification in a new direction to support OpenAPI Doctor and vacuum linting functionality in tooling and pipelines.

License: Apache

Tags: Rules, Governance

Properties: rules, rulesets, given, then, description, message, severity, formats, recommended, and resolved properties

Website: https://quobix.com/vacuum/rulesets/understanding/

Standards: JSON Schema, Spectral

8.24 - Open Policy Agent (OPA)

OPA (Open Policy Agent) is a general-purpose policy engine that unifies policy enforcement across your stack—improving developer velocity, security, and auditability. It provides a high-level, declarative language (Rego) for expressing policies across a wide range of use cases.

Unifies policy enforcement for authentication, security, and auditability.

OPA (Open Policy Agent) is a general-purpose policy engine that unifies policy enforcement across your stack—improving developer velocity, security, and auditability. It provides a high-level, declarative language (Rego) for expressing policies across a wide range of use cases.

Originally developed at Styra in 2016, OPA was donated to the Cloud Native Computing Foundation (CNCF) in 2018 and graduated in 2021.

Rego includes rules and rulesets, unit tests, functions and built-ins, reserved keywords, conditionals, comprehensions/iterations, lookups, assignment, and comparison/equality operators—giving you a concise, expressive way to author and validate policy.
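
As a hedged sketch of what a Rego policy and its evaluation look like, the Python below pushes a tiny policy to a locally running OPA server over OPA's REST API and asks for a decision. It assumes OPA is running at its default address (for example via opa run --server); the package name, rule, and input are invented.

```python
import requests  # third-party: pip install requests

OPA = "http://localhost:8181"  # assumes a local OPA server (default port)

# A tiny Rego policy: allow a request only when the caller has the "editor" role
# and the method is POST. Package and rule names are illustrative.
policy = """
package httpapi.authz

import rego.v1

default allow := false

allow if {
    input.user.role == "editor"
    input.method == "POST"
}
"""

# Load the policy, then ask OPA for a decision on a sample input document.
requests.put(
    f"{OPA}/v1/policies/httpapi-authz",
    data=policy,
    headers={"Content-Type": "text/plain"},
)

decision = requests.post(
    f"{OPA}/v1/data/httpapi/authz/allow",
    json={"input": {"user": {"role": "editor"}, "method": "POST"}},
)
print(decision.json())  # expected shape: {"result": true}
```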

You can contribute on GitHub, follow updates via the blog and its RSS feed, and join conversations in the community Slack and on the OPA LinkedIn page.

OPA works across platforms and operational layers, standardizing policy for key infrastructure such as Kubernetes, API gateways, Docker, CI/CD, and more. It also helps normalize policy across diverse data and API integration patterns used in application and agent automation.

License: Apache

Tags: Policies, Authentication, Authorization

Properties: rules, language, tests, functions, reserved names, grammar, conditionals, iterations, lookups, assignment, equality

Website: https://www.openpolicyagent.org/

8.25 - CSV

CSV (Comma-Separated Values) is a simple text format for storing tabular data where each line represents a row and values within rows are separated by commas (or other delimiters).

Lighter weight data serialization format for data exchange.

CSV (Comma-Separated Values) is a simple, plain-text file format used to store tabular data in a structured way where each line represents a row and values within each row are separated by commas (or other delimiters like semicolons, tabs, or pipes). This straightforward format makes CSV one of the most universally supported data exchange formats, readable by spreadsheet applications like Microsoft Excel, Google Sheets, and LibreOffice Calc, as well as databases, data analysis tools, and virtually every programming language. CSV files are human-readable when opened in a text editor, showing data in a grid-like structure that closely mirrors how it would appear in a spreadsheet. The format’s simplicity—requiring no special markup, tags, or complex syntax—makes it ideal for representing datasets, lists, reports, and any tabular information where relationships between columns and rows need to be preserved.

Despite its simplicity, CSV has become essential for data import/export operations, data migration between systems, bulk data loading into databases, and sharing datasets for data analysis and machine learning. The format is particularly valuable in business contexts for handling customer lists, financial records, inventory data, sales reports, and scientific datasets. CSV files are compact and efficient, requiring minimal storage space compared to more verbose formats like XML or JSON, which makes them ideal for transferring large datasets over networks or storing historical data archives. However, CSV has limitations: it lacks standardized support for data types (everything is typically treated as text unless parsed), has no built-in schema definition, struggles with representing hierarchical or nested data, and can encounter issues with special characters, line breaks within fields, or commas in data values (typically addressed by enclosing fields in quotes). Despite these constraints, CSV remains the go-to format for flat, rectangular data exchange due to its universal compatibility, ease of use, and the fact that it can be created and edited with the most basic tools, from text editors to sophisticated data processing frameworks.
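
A brief sketch using Python's standard csv module shows the quoting behavior mentioned above: a value containing a comma is written inside quotes and read back intact, and everything comes back as text until the consumer parses it. The file name and data are invented.

```python
import csv

rows = [
    {"name": "Acme, Inc.", "city": "Portland", "revenue": "125000"},  # note the embedded comma
    {"name": "Globex", "city": "Springfield", "revenue": "98000"},
]

# Writing: the csv module automatically quotes the field containing a comma.
with open("companies.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "city", "revenue"])
    writer.writeheader()
    writer.writerows(rows)

# Reading: values come back as plain text; any typing is up to the consumer.
with open("companies.csv", newline="") as f:
    for record in csv.DictReader(f):
        print(record["name"], int(record["revenue"]))
```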

Tags: Data Format

Properties: Text-Based, Plain Text Format, Tabular Data, Row-Based Structure, Column-Based Structure, Delimiter-Separated, Comma Delimiter, Alternative Delimiters, Tab-Separated Values, Pipe-Separated, Semicolon-Separated, Human-Readable, Machine-Parsable, Flat File Format, Simple Syntax, Minimal Markup, No Tags, No Attributes, Lightweight, Compact, Small File Size, Efficient Storage, Fast Parsing, Universal Support, Cross-Platform, Language-Agnostic, Spreadsheet Compatible, Excel Compatible, Google Sheets Compatible, LibreOffice Compatible, Database Import/Export, SQL Bulk Loading, Data Exchange Format, Data Migration, Line-Based Records, Newline Row Separator, Field Delimiter, Quote Encapsulation, Double-Quote Escaping, Escape Characters, Header Row Support, Column Names, Schema-Less, No Data Types, Text-Only Values, No Type Enforcement, No Metadata, No Validation, No Comments, No Processing Instructions, RFC 4180, MIME Type text/csv, File Extension .csv, UTF-8 Encoding, ASCII Compatible, Character Encoding Support, Special Character Handling, Embedded Commas, Embedded Quotes, Embedded Newlines, Field Quoting, Optional Quoting, Whitespace Handling, Trailing Spaces, Leading Spaces, Empty Fields, Null Values, Missing Data Support, Sparse Data, Dense Data, Rectangular Grid, Fixed Columns, Variable Rows, No Nesting, No Hierarchy, No Relationships, Flat Structure, Single Table, No Joins, No Foreign Keys, Streaming Compatible, Incremental Processing, Line-by-Line Reading, Memory Efficient, Large File Support, Append-Only, Chronological Data, Time Series Data, Log Files, Sequential Access, Random Access, Indexing Support, Sorting Compatible, Filtering Compatible, Aggregation Compatible, Data Analysis, Statistical Analysis, Machine Learning Datasets, Training Data, Feature Vectors, Pandas Compatible, R Compatible, Python CSV Module, Java CSV Libraries, .NET CSV Support, Excel Formula Support, Cell Formatting Loss, No Styling, No Colors, No Fonts, No Borders, No Images, No Charts, Data-Only Format, Export Format, Import Format, Batch Processing, ETL Operations, Data Warehousing, Business Intelligence, Reporting Format, Audit Trails, Transaction Logs, Customer Lists, Contact Lists, Inventory Data, Sales Data, Financial Records, Scientific Data, Sensor Data, Measurement Data, Survey Results, Poll Data, Census Data, Demographic Data, Geographic Data, Coordinate Data, Latitude Longitude, Address Lists, Email Lists, Product Catalogs, Price Lists, Stock Data, Market Data, Historical Data, Archive Format, Backup Format, Version Control Friendly, Diff-Friendly, Merge-Friendly, Git Compatible, Text Editor Compatible, Command Line Tools, Awk Processing, Sed Processing, Grep Searching, Cut Command, Unix Tools, Shell Scripting, Automation Friendly, Cron Job Compatible, Scheduled Exports, API Responses, Web Scraping Output, Data Dumps, Bulk Downloads, FTP Transfer, Email Attachments, Cloud Storage, S3 Compatible, Azure Blob Storage, Google Cloud Storage, Database Export, MySQL Export, PostgreSQL Export, SQLite Export, Oracle Export, SQL Server Export, MongoDB Export, NoSQL Export, Data Conversion, Format Transformation, JSON to CSV, XML to CSV, Excel to CSV, CSV to JSON, Interoperability, Legacy System Support, Backwards Compatible, Universal Standard, Industry Standard, De Facto Standard, Widely Adopted, Mature Format, Production Ready, Battle Tested, Simple Implementation, Easy Generation, Easy Parsing, Minimal Dependencies, No External Libraries Required, Low Overhead, High Performance, Scalable, 
Concatenation Support, Split Support, Chunking Support, Partitioning Support, Compression Compatible, Gzip Compatible, Zip Compatible, Tar Compatible

Wikipedia: https://en.wikipedia.org/wiki/Comma-separated_values

8.26 - HTML

HTML (HyperText Markup Language) is the standard markup language used to create and structure content on web pages, defining elements like headings, paragraphs, links, images, and forms through a system of tags that web browsers interpret and render as visual displays.

The standard markup language powering the Web.

HTML (HyperText Markup Language) is the foundational markup language of the World Wide Web, created by Tim Berners-Lee in 1991, that defines the structure and content of web pages through a system of elements enclosed in angle-bracket tags. HTML provides the semantic framework for organizing information on the web, using tags like <h1> for headings, <p> for paragraphs, <a> for hyperlinks, <img> for images, <table> for tabular data, and <form> for user input. These elements create a hierarchical document structure called the Document Object Model (DOM) that web browsers parse and render into the visual pages users see and interact with. HTML is a declarative language, meaning developers describe what content should appear and how it should be structured rather than specifying how to display it—the actual visual presentation is handled by CSS (Cascading Style Sheets), while interactive behavior is managed by JavaScript. This separation of concerns allows HTML to focus purely on semantic meaning and content structure, making web pages accessible to screen readers, search engines, and various devices.
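
As a small illustration of that parsing step, the sketch below feeds a fragment using the elements mentioned above through Python's standard html.parser, printing an indented outline of the tag structure a browser would build into the DOM. The markup is invented.

```python
from html.parser import HTMLParser

page = """
<html>
  <body>
    <h1>Quarterly Report</h1>
    <p>Read the <a href="/summary">summary</a> or view the
       <img src="chart.png" alt="revenue chart"> below.</p>
    <table><tr><td>Q1</td><td>Q2</td></tr></table>
    <form action="/subscribe"><input name="email"></form>
  </body>
</html>
"""

VOID = {"img", "input", "br", "hr", "meta", "link"}  # elements with no closing tag

class OutlinePrinter(HTMLParser):
    """Print each opening tag, indented to show how nesting forms the DOM tree."""
    def __init__(self):
        super().__init__()
        self.depth = 0

    def handle_starttag(self, tag, attrs):
        print("  " * self.depth + tag)
        if tag not in VOID:          # void elements never receive a closing tag
            self.depth += 1

    def handle_endtag(self, tag):
        self.depth = max(0, self.depth - 1)

OutlinePrinter().feed(page)
```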

Modern HTML (currently HTML5, standardized by the W3C and WHATWG) has evolved far beyond simple text formatting to support rich multimedia content, complex web applications, and interactive experiences without requiring plugins. HTML5 introduced semantic elements like

,