Knowledge Base

Demystifying AI Concepts

A curated collection of explained concepts, protocols and tools we use when building AI agent integrations. Looking for a specific term? Check out the tech dictionary.

What is the A2A Protocol?

A2A (Agent-to-Agent) is an open communication protocol developed by Google. It defines a set of rules and schemas for information exchange that lets programs (AI agents) communicate with each other in a standardized way.

Thanks to A2A, developers can write software that sends messages to other A2A-compliant applications — regardless of the programming language or infrastructure they were built on.

In plain language

Think of A2A as a shared diplomatic language. Two countries (AI agents) may not speak the same native language, but if both know Esperanto (the A2A protocol), they can communicate. Agent ANITA can delegate a task to agent ANTEK, and ANTEK will respond in a way ANITA understands — all automatically, without human mediation.

Technically — how does it work?

  • Agents communicate over HTTP/HTTPS using JSON format.
  • Each agent publishes an Agent Card — a JSON document describing its capabilities, inputs and outputs (conventionally served at /.well-known/agent.json).
  • Communication follows a Task model: one agent sends a task, the other processes it and returns a result.
  • The protocol supports both synchronous responses and asynchronous result streaming (Server-Sent Events).
  • Authentication uses standard web mechanisms (e.g. Bearer Token, OAuth 2.0).
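The Task model above can be sketched as a JSON-RPC 2.0 request built in Python. This is an illustrative sketch, not the normative schema — the method name `tasks/send` and the `message`/`parts` fields follow the published A2A draft, but check the spec before relying on them.

```python
import json
import uuid

def build_task_request(text: str) -> dict:
    """Build an illustrative A2A task request as a JSON-RPC 2.0 envelope."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),           # request id
        "method": "tasks/send",            # name taken from the A2A draft
        "params": {
            "id": str(uuid.uuid4()),       # task id
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": text}],
            },
        },
    }

request = build_task_request("Prepare a quote for project X")
payload = json.dumps(request)  # body to POST to the remote agent's A2A endpoint
```

The receiving agent would reply with a matching JSON-RPC response containing the task result, or stream status updates over SSE for long-running tasks.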

Tools & Technologies

Below you will find descriptions of the tools we use (or recommend) when building AI agent integrations.

Google Cloud — Vertex AI

Google Cloud ML/AI platform

Vertex AI is Google Cloud's managed machine learning platform. It enables training, deploying and monitoring AI models on scalable cloud infrastructure. In the context of AI agents, the key feature is Agent Engine (Reasoning Engine) — a managed runtime for LangChain/LangGraph agents that handles state management, logging and security without requiring you to manage your own infrastructure.

Gemini

Google's family of language models

Gemini is a family of multimodal large language models (LLMs) developed by Google DeepMind. Gemini models (Flash, Pro, Ultra) can understand and generate text, code, images and audio. In the ANITA & ANTEK ecosystem, Gemini serves as the reasoning engine — processing task context and generating responses or next actions for the agent to execute.

LiteLLM

Unified API for 100+ LLM providers

LiteLLM is an open-source Python library and proxy server that provides a unified OpenAI-compatible API for over 100 different LLM providers: OpenAI, Anthropic, Google Gemini, Azure, Cohere and many more. It enables easy switching between providers, API key management, cost logging and load balancing — without changing application code.
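The "without changing application code" point is easiest to see in the request shape: LiteLLM normalizes every provider to the OpenAI chat-completion format, so switching providers is just a different model string passed to `litellm.completion(...)`. A minimal sketch — the model ids below are illustrative, so check LiteLLM's provider docs for the exact names:

```python
# LiteLLM accepts OpenAI-style arguments for every provider; only the
# "model" string changes. The real call is litellm.completion(**chat_args(...)).
def chat_args(model: str, prompt: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

prompt = "Summarize this ticket in one sentence."
openai_args = chat_args("gpt-4o-mini", prompt)
gemini_args = chat_args("gemini/gemini-1.5-flash", prompt)
claude_args = chat_args("anthropic/claude-3-5-sonnet-20240620", prompt)
# The messages are identical across providers; only the model id differs.
```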

LibreChat

Open-source AI chat interface

LibreChat is an open-source, self-hosted chat platform and an advanced alternative to ChatGPT. It supports multiple AI providers simultaneously (OpenAI, Anthropic, Gemini, local models via Ollama and others). It offers conversation management, plugins, RAG and multi-user management. Great for internal enterprise deployments.

Open WebUI

Web interface for Ollama and OpenAI models

Open WebUI (formerly Ollama WebUI) is an extensible, self-hosted web interface for AI models that works completely offline. It supports local models via Ollama and remote APIs (OpenAI, Gemini, etc.). It enables creating custom agents, data processing pipelines, RAG on your own documents and multi-user access management.

Vercel AI SDK

TypeScript SDK for building AI applications

The Vercel AI SDK is a TypeScript/JavaScript library that simplifies building AI-powered user interfaces in Next.js and other frameworks. It provides React hooks (e.g. useChat, useCompletion), supports response streaming, agent tools and structured data generation, and integrates with many LLM providers.

LangGraph

Framework for building graph-based AI agents

LangGraph is a library (Python and JavaScript) from the LangChain ecosystem, used to build complex, stateful AI agents as directed graphs. Each graph node is a processing step (model call, tool call, conditional logic), and edges define the control flow. This enables agents with loops, branching and complex orchestration logic — used in Vertex AI Agent Engine, among other platforms.
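The nodes-and-edges model can be sketched in a few lines of plain Python. This is a conceptual illustration of the graph idea, not the real LangGraph API — the node functions below stand in for model and tool calls:

```python
# Conceptual sketch of a stateful agent as a directed graph.
# Nodes transform state; edges decide which node runs next.

def draft(state):
    state["answer"] = state["question"].upper()    # stand-in for a model call
    return state

def review(state):
    state["approved"] = len(state["answer"]) > 3   # stand-in for a check/tool
    return state

def route_after_review(state):
    # Conditional edge: finish, or loop back to draft for another attempt.
    return "END" if state["approved"] else "draft"

nodes = {"draft": draft, "review": review}
edges = {"draft": lambda s: "review", "review": route_after_review}

def run(state, entry="draft"):
    node = entry
    while node != "END":
        state = nodes[node](state)
        node = edges[node](state)
    return state

result = run({"question": "what is a2a?"})
```

The conditional edge is what enables the loops and branching mentioned above: the next step depends on the current state, not on a fixed sequence.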

A2C Protocol (Agent-to-Client)

Agent–user interface communication protocol

A2C (Agent-to-Client) is a protocol describing communication between an AI agent and the user interface (frontend / client). While A2A governs agent-to-agent communication in the background, A2C defines how an agent's results and actions are presented to the end user — e.g. text streaming, progress events, UI elements dynamically generated by the agent.
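Streaming of this kind typically rides on Server-Sent Events, the same transport A2A uses for asynchronous results. A sketch of what an agent-to-client event stream might look like — the event names (`progress`, `text-delta`, `done`) are illustrative, not defined by any spec quoted here:

```python
import json

def sse_event(event: str, data: dict) -> str:
    """Format one Server-Sent Events frame: an event name line,
    a data line with a JSON payload, and a blank-line terminator."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

# A hypothetical sequence an agent might stream to its client UI:
frames = (
    sse_event("progress", {"step": "searching archive", "pct": 40})
    + sse_event("text-delta", {"delta": "Based on 12 similar projects, "})
    + sse_event("done", {"status": "completed"})
)
```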

RAG — Retrieval-Augmented Generation

RAG (Retrieval-Augmented Generation) is a technique that enables language models to answer questions based on real, up-to-date data from a knowledge base, rather than relying solely on knowledge "learned" during training.

In practice: before the model generates a response, the system searches a document database (vector store) for fragments most relevant to the question and attaches them as context to the prompt. This allows the agent to answer based on company documents, FAQs, project history or other internal data — without retraining the model.
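The retrieve-then-prompt flow can be shown end to end with a toy "vector store". This sketch substitutes bag-of-words cosine similarity for real embeddings, and prompt assembly for the actual model call — the documents are invented for illustration:

```python
import math
import re
from collections import Counter

docs = [
    "Project Alpha: warehouse integration, 120 hours, delivered 2023.",
    "Project Beta: chatbot for HR, 60 hours, delivered 2024.",
    "Office party budget approved for December.",
]

def vec(text):
    # Bag-of-words term counts; a real system would use embeddings.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, k=1):
    # Rank documents by similarity to the question and keep the top k.
    q = vec(question)
    return sorted(docs, key=lambda d: cosine(q, vec(d)), reverse=True)[:k]

question = "How many hours did the chatbot project take?"
context = retrieve(question)
# The retrieved fragment is attached to the prompt sent to the model:
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: {question}"
```

The model then answers from the attached context rather than from its training data alone, which is why the knowledge base can be updated without retraining.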

In the ANITA & ANTEK ecosystem, ANTEK uses RAG to search the client project archive and generate precise quotes or analyses based on historical data.
