    AI & Technology · PE Operations

    25 AI Terms Every CEO Must Know in 2026

    Dr. Oliver Gausmann · April 3, 2026 · 12 min read

    Five conversations with five different people. 25 terms. One goal: that you can hold your own when your CTO, your accountant, or your board asks about AI. 36% of German companies now use AI, nearly double the figure from a year earlier1. The AI terms that matter for business leaders have shifted completely. What sounded like a research lab in 2024 sits on every board agenda in 2026.

    Each of these terms came up in a specific conversation. No dictionary, no alphabet. Five situations from mid-market companies, as I encounter them regularly.

    "My CTO wants €80,000 for a knowledge platform. I can't understand a word in the proposal."

    A mechanical engineering CEO, 280 employees, slid a proposal across the table a few weeks ago. Three pages of acronyms. "RAG-based knowledge platform built on a foundation model with reduced hallucination risk." He tapped the cost section and asked: "Should I sign this? My CTO says the competition already has one."

    The answer depends on whether you understand what's written. We went through the proposal line by line.

    A foundation model is the umbrella term for large AI systems developed by companies like OpenAI, Google, Anthropic, or Meta. GPT-4, Gemini, Claude, Llama. They're trained on vast amounts of text and can then write, translate, generate code, and answer questions. Training costs run into hundreds of millions2. Using them costs a fraction of that.

    LLM (Large Language Model) is the technical label for the subset of these models that processes language. When your CTO says "LLM," they mean the technology behind the chat window. "And what exactly am I buying for €80,000?" the CEO asked. Fair question.

    An LLM knows the world. It doesn't know your company. It knows what torque is. It doesn't know the maintenance history of your milling machine. That's where RAG (Retrieval-Augmented Generation) comes in. Before the LLM answers, an intermediate step searches your internal documents, manuals, protocols, ticket systems. The results get fed to the LLM as context. Think of a new hire who reads the files first, then answers.
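That intermediate step can be sketched in a few lines. A minimal illustration, with a naive keyword-overlap search over an in-memory document list standing in for what production systems do with embeddings and a vector database; the documents and the final LLM call are hypothetical:

```python
# Minimal RAG sketch: retrieve relevant internal documents first,
# then hand them to the language model as context.
# Illustrative only -- real systems use embeddings and a vector store.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_words = set(query.lower().split())
    scored = [(len(query_words & set(doc.lower().split())), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble the context-augmented prompt that is sent to the LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Milling machine M-340: maintenance every 2,000 operating hours.",
    "Vacation policy: requests must be submitted four weeks in advance.",
    "Milling machine M-340: spindle warranty covers 24 months.",
]
prompt = build_prompt("What is the maintenance interval for the milling machine?", docs)
print(prompt)
```

The point of the sketch: the model never sees the vacation policy, only the two maintenance documents that match the question. Grounding the answer in retrieved text is what reduces, but does not eliminate, hallucination.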

    It works. About 80% of the time. The remaining 20% is what the industry calls a hallucination. The LLM invents things. Confidently, grammatically correct, factually wrong. Not a bug to fix, a property of the technology. The CEO sat up: "What if the system quotes a warranty period that doesn't exist?" Exactly. Rates sit between 5 and 20% depending on the task3. For a chatbot quoting maintenance intervals to customers, that needs active management.

    Costs run on tokens. One token is roughly three-quarters of a word. Every query costs input tokens (your question plus the documents loaded via RAG), every answer costs output tokens. The context window determines how much text the model can process at once. GPT-4o handles up to 128,000 tokens, roughly a 300-page book. Larger windows mean better answers, higher per-query costs.
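The arithmetic behind a token bill fits on a napkin. A back-of-envelope sketch, assuming hypothetical list prices per million tokens (not a vendor quote) and an assumed usage pattern of 40 users:

```python
# Back-of-envelope token cost per query.
# Prices per million tokens are assumed for illustration, not vendor figures.
PRICE_PER_M_INPUT = 2.50    # EUR per 1M input tokens (assumed)
PRICE_PER_M_OUTPUT = 10.00  # EUR per 1M output tokens (assumed)

def query_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single query in EUR."""
    return (input_tokens * PRICE_PER_M_INPUT
            + output_tokens * PRICE_PER_M_OUTPUT) / 1_000_000

# A RAG query: the question itself is short, but the documents loaded
# as context dominate the input side.
cost = query_cost(input_tokens=20_000, output_tokens=500)
print(f"{cost:.4f} EUR per query")            # -> 0.0550 EUR per query
annual = cost * 40 * 50 * 220                 # 40 users x 50 queries/day x 220 days
print(f"{annual:,.0f} EUR per year")          # -> 24,200 EUR per year
```

Cents per query, five figures per year. This is why the context window cuts both ways: every extra document loaded via RAG improves the answer and inflates the input side of every single query.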

    The CEO signed the proposal. After understanding what it said. And after having the clause with the automatic three-year renewal removed.

    Terms from this conversation

    Foundation Model · LLM · RAG · Hallucination · Token · Context Window

    "Our ChatGPT bill has more than tripled. Can someone explain why?"

    The finance director of an automotive supplier, 120 employees, called me on a Tuesday morning. She had the monthly ChatGPT Enterprise statement in front of her. January: €2,400. March: €7,800. Same user count, 40 licenses. "I've checked the numbers three times," she said. "They're correct. I just don't understand what they're for."

    The reason is inference cost. Training is expensive but happens once. Inference, the actual use of a trained model, runs continuously. Stanford HAI measured a 280-fold drop in inference costs for GPT-3.5-level performance between November 2022 and October 20242. From $20 per million tokens to $0.07. Sounds cheap. The finance director disagreed.

    Looking at the usage data together, the picture became clear. Two departments had started using reasoning models: a newer generation (OpenAI's o1, o3, DeepSeek R1) that "thinks before answering." They decompose complex problems into intermediate steps, check assumptions, self-correct. Procurement used them for contract analysis, engineering for technical calculations. Results were better. Costs ran 10 to 74 times higher per query4 because the model generates vastly more tokens internally. The finance director didn't know this because nobody told her different model classes exist.
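Where the multiple comes from is easiest to see in numbers. A sketch with illustrative token counts and prices (not vendor figures): reasoning models combine a higher price per token with thousands of hidden "thinking" tokens that are billed as output before the visible answer appears.

```python
# Why a reasoning model's answer costs a multiple of a standard model's.
# All token counts and prices below are illustrative assumptions.
def cost(input_tok: int, output_tok: int, in_price: float, out_price: float) -> float:
    """Query cost in EUR, prices per million tokens."""
    return (input_tok * in_price + output_tok * out_price) / 1_000_000

# Same question to both model classes.
standard = cost(1_000, 400, in_price=2.5, out_price=10.0)
# Reasoning model: higher per-token price AND ~6,000 hidden reasoning
# tokens generated (and billed) before the 400-token visible answer.
reasoning = cost(1_000, 400 + 6_000, in_price=15.0, out_price=60.0)

print(f"standard:  {standard:.4f} EUR")
print(f"reasoning: {reasoning:.4f} EUR ({reasoning / standard:.0f}x)")
```

The visible answer is the same length in both cases; the cost difference lives almost entirely in tokens the user never sees.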

    Her next question: "Are there cheaper alternatives?" Yes. The choice between open-weight and closed models is one of the most important decisions. Open-weight models like Meta's Llama 4 or DeepSeek R1 can be downloaded and run on your own infrastructure. Closed models like GPT-5 or Claude only run via the vendor's API. The performance gap shrank from 8% to 1.7% in a single year2.

    Small Language Models (SLMs) offer a third path. Compact models with 1 to 10 billion parameters running on a single office workstation. For specialized tasks, classifying sales emails, summarizing maintenance logs, they often match large models at 75 to 95% lower cost.

    The finance director asked one final question: "Do we need to retrain the model on our data?" Depends. Fine-tuning means retraining the model on your own data. Prompting means teaching it what you want through clever instructions. The rule of thumb I gave her: prompting covers 90% of mid-market use cases. Fine-tuning pays off only when you automate thousands of identical tasks.

    She chose a pilot project with an SLM. On their own server. The OpenAI bill is coming down.

    Terms from this conversation

    Inference Cost · Reasoning Models · Open-Weight vs. Closed Models · Small Language Models · Fine-Tuning vs. Prompting

    "An employee built a customer database with ChatGPT. What do we do now?"

    A head of sales at a trading company, 90 employees, told me this over lunch. One of her team members, Ms. Berger, had taken the initiative to copy customer feedback into ChatGPT, had it analyze patterns, and exported results into a self-built Google Sheet. "The results are impressive," the sales director said. "But our data protection officer is losing his mind."

    Ms. Berger had been running Shadow AI. In 42% of German companies, employees use private AI tools at work5. Private ChatGPT accounts receiving customer data, draft contracts, financial figures. Without IT approval, without privacy review.

    Ms. Berger had done something else too. She'd used an AI tool called Cursor to build a small dashboard that updates the analysis automatically. Without writing a single line of code herself. Vibe coding is the term: employees without programming skills building working software using AI assistance. The sales director was impressed and concerned at the same time. "The tool works. But where does the data go? Who maintains the code when Ms. Berger is on sick leave?"

    We discussed what the official path looks like. The most common AI assistant type is the AI copilot: AI embedded directly in existing software. Microsoft 365 Copilot in Word and Outlook, GitHub Copilot for developers. At around €30 per user per month, copilots are often the fastest path to measurable productivity. And the data protection officer can sleep at night.

    The next level is agentic AI. While a copilot waits for your input, an agent acts autonomously. It receives a goal, breaks it into steps, accesses multiple systems, checks interim results. What Ms. Berger did manually, reading feedback, spotting patterns, exporting results, an agent could do automatically. Within the company's IT infrastructure. The Salesforce Mittelstand AI Index 2026 reports that agent adoption nearly doubled year-over-year6.

    Agents rarely work alone. Most production systems are compound AI systems, assemblies of multiple specialized components. The system that could replace Ms. Berger's work might consist of a RAG module for product knowledge, an agent for feedback analysis, and a rule engine determining which results go to whom.

    The sales director didn't reprimand Ms. Berger. She made her the AI lead for the sales team. With an official tool, a budget, and clear rules.

    Terms from this conversation

    Shadow AI · Vibe Coding · AI Copilot · Agentic AI · Compound AI Systems

    "My board wants to know why we're investing more in AI when AI is getting cheaper."

    A board member of a mid-sized industrial company, 450 employees, was preparing for an advisory board meeting. He'd secured an AI budget of €180,000 the previous year. Now he needed €320,000. His problem: "Everything I read says AI is getting cheaper. My board will ask why the bill is going up."

    We spent the evening working through two concepts that explain this.

    The Red Queen Hypothesis comes from evolutionary biology. Named after the Red Queen in Through the Looking-Glass, who tells Alice: "It takes all the running you can do, to keep in the same place." Applied to AI: when all your competitors use AI, your own adoption doesn't make you better. It prevents you from falling behind7. The investment produces no visible lead. It preserves your competitive position.

    He nodded. "I get that. But why does it cost more?"

    The Jevons Paradox answers this. English economist William Stanley Jevons observed in 1865 that more efficient steam engines didn't reduce coal consumption. They increased it, because efficiency made coal economical for far more applications8. Zhang and Zhang formally proved in January 2026 that the same mechanism operates in AI: falling inference prices cause companies to redesign architectures, add deeper processing chains, and ultimately consume more compute7. His company had done exactly that. Started with a customer service chatbot. Then AI-driven quality control. Then automated quote generation. Each individual application cheaper than last year. Total: 78% more consumption.
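The mechanism reduces to two lines of arithmetic. A miniature worked example with illustrative figures (the prices and query volumes are assumed; only the 78% outcome mirrors the case above):

```python
# Jevons Paradox in miniature: the per-query price falls, total spend rises,
# because cheaper queries make new applications economical.
# All figures are illustrative assumptions.
price_last_year = 0.020        # EUR per query (assumed)
price_this_year = 0.005        # 4x cheaper per query

queries_last_year = 500_000    # one application: customer service chatbot
queries_this_year = 3_560_000  # chatbot + quality control + quote generation

spend_last_year = price_last_year * queries_last_year
spend_this_year = price_this_year * queries_this_year

print(f"last year: {spend_last_year:>8,.0f} EUR")
print(f"this year: {spend_this_year:>8,.0f} EUR")
print(f"change:    {spend_this_year / spend_last_year - 1:+.0%}")
```

A 4x price drop met a 7x volume increase. Every individual application got cheaper; the total bill rose 78%.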

    Companies generating proprietary data benefit from the data flywheel7. More users produce better data, which improves the model, which attracts more users. Early movers build a data advantage that latecomers struggle to close.

    When the cycle inverts, model collapse occurs: AI models increasingly trained on AI-generated content lose quality9. The company's original, human-created data grows more valuable as synthetic content floods the market.

    "Our marketing team wrote four blog posts with AI last week," the board member said. "They all sounded... the same." That's AI slop: mass-produced AI content without substance10. When your sales team sends AI slop, the recipient notices.

    The marginal cost of intelligence approaches zero2. What used to cost an hour of consulting time now costs 7 cents in token fees. When generating knowledge becomes cheap, evaluating knowledge becomes expensive. Ethan Mollick of Wharton coined the term centaur worker for someone who knows when to let the AI run and when to take over11. The board member used that in his presentation. "We're not investing in AI. We're investing in centaur workers." The board approved the budget.

    Terms from this conversation

    Red Queen Hypothesis · Jevons Paradox · Data Flywheel · Model Collapse · AI Slop · Marginal Cost of Intelligence · Centaur Worker

    "Our lawyer says we need an AI registry by August. Is that true?"

    The CEO of a medical technology company, 160 employees, had received a letter from his external counsel. Four pages, with highlighted deadlines. He called and said: "It says here 'AI Literacy obligation since February 2025.' We haven't done anything. How bad is this?"

    The answer: serious, but solvable. Since February 2, 2025, the AI literacy obligation applies to every company using AI12. Article 4 of the EU AI Act requires providers and deployers to ensure their staff have a "sufficient level of AI literacy." This applies regardless of the system's risk classification. Even a company that only uses Microsoft Copilot falls under it. Enforcement begins August 2, 2026 through national authorities12. In Germany, the Bundesnetzagentur takes this role under the KI-MIG, approved by the cabinet on February 11, 202613.

    "And what exactly do we need to do by August?"

    The Act classifies AI into four risk tiers: prohibited (social scoring, subliminal manipulation), high-risk (HR screening, credit scoring, critical infrastructure), limited risk (transparency obligations such as labeling deepfakes), and minimal risk (general chatbots, most mid-market AI applications). Medical technology is a special case: AI in medical devices almost always falls under high-risk.

    EU AI Act: Key Deadlines

    Feb 2025 ✓

    AI Literacy obligation (Art. 4) + Prohibited practices (Art. 5)

    Aug 2025 ✓

    GPAI obligations (transparency, copyright) + Governance rules

    Aug 2026 ◄ You are here

    High-risk obligations: conformity assessment, CE marking, AI inventory, XAI

    Aug 2027

    High-risk in regulated products (medical devices, machinery, elevators)

    The first step his lawyer's letter called for is an AI inventory: a complete catalog of all AI systems in the organization, including AI features embedded in purchased software. The CRM has AI-powered lead scoring? Goes in the inventory. The accounting software uses AI for invoice recognition? Goes in the inventory. The CEO was surprised: "We use AI?" Yes. In at least seven systems, as it turned out.
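An AI inventory can start as a simple structured list. A sketch of the fields such a catalog typically needs; the field names, example systems, and vendors are assumptions for illustration, not a legal template:

```python
# A minimal AI inventory entry. Field names are an assumption,
# not a legally vetted template -- adapt with counsel.
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    name: str            # system or feature, incl. AI embedded in bought software
    vendor: str
    purpose: str
    risk_tier: str       # "prohibited" | "high" | "limited" | "minimal"
    data_processed: str  # e.g. customer data, patient data, none
    owner: str           # internal person responsible

inventory = [
    AISystemEntry("CRM lead scoring", "ExampleCRM", "rank sales leads",
                  "minimal", "customer data", "Head of Sales"),
    AISystemEntry("Invoice recognition", "ExampleERP", "extract invoice fields",
                  "minimal", "supplier data", "Finance"),
    AISystemEntry("Diagnostic assistant", "In-house", "support device diagnosis",
                  "high", "patient data", "CTO"),
]
high_risk = [e.name for e in inventory if e.risk_tier == "high"]
print(f"{len(inventory)} systems, high-risk: {high_risk}")
```

Even this toy version answers the questions the conformity-assessment work starts from: what do we run, what data does it touch, and which entries sit in the high-risk tier.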

    High-risk systems require a conformity assessment from August 202614. CE marking applies to software for the first time. The process covers risk management, technical documentation, data quality checks, and human oversight mechanisms. It takes three to six months. For a medical technology company with existing MDR experience, much of this transfers.

    For high-risk applications, the Act mandates Explainable AI (XAI): AI decisions must be comprehensible to the person overseeing the system14. When an AI diagnostic system issues a recommendation, the basis for that recommendation must be documented.

    The umbrella term is AI governance: your internal rulebook for AI deployment. Who may use which tools? How are outputs reviewed? Who is liable when things go wrong?

    Before hanging up, the CEO had one practical question: "Our new AI vendor wants to lock us into their system. Can I avoid that?" Yes. MCP (Model Context Protocol) is an open standard, now under the Linux Foundation, for connecting AI systems to data sources and tools15. OpenAI, Google, Microsoft, and Amazon all support it. Ask every AI vendor: "Do you support MCP?" If not, expect lock-in. A2A (Agent-to-Agent Protocol) complements MCP by connecting agents to each other rather than to tools16. Developed by Google, backed by 150+ organizations including SAP.
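What MCP support looks like on the wire is plainer than the acronym suggests: MCP is JSON-RPC 2.0 underneath. A simplified sketch of the message shapes, omitting initialization and transport details; the tool name and arguments are hypothetical:

```python
# Simplified shape of an MCP exchange -- MCP is JSON-RPC 2.0 under the hood.
# Illustrative messages only; initialization and transport are omitted.
import json

# Ask the server which tools it offers.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Invoke one of them. "lookup_maintenance_history" is a hypothetical tool.
call_request = {
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {
        "name": "lookup_maintenance_history",
        "arguments": {"machine_id": "M-340"},
    },
}
print(json.dumps(call_request, indent=2))
```

Because any compliant client can send these same messages to any compliant server, a vendor that supports MCP cannot trap your data sources behind a proprietary integration.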

    The CEO did three things: started an AI inventory, commissioned AI literacy training, added MCP compatibility to vendor requirements. Four months before the deadline.

    Terms from this conversation

    AI Literacy · KI-MIG · Risk Tiers · AI Inventory · Conformity Assessment · Explainable AI · AI Governance · MCP · A2A

    All 25 Terms at a Glance

    Term What it means Conv.
    Foundation Model Large AI systems by OpenAI, Google, Meta, Anthropic. Basis for all AI applications. 1
    LLM Language model. The technology behind ChatGPT, Claude, Gemini. 1
    RAG AI searches your documents first, then answers. Company knowledge + language understanding. 1
    Hallucination AI invents things. Confident, grammatically correct, factually wrong. 1
    Token / Context Window Billing unit and capacity limit. Determines cost and quality. 1
    Inference Cost Running cost per AI query. Adds up even as per-query prices fall. 2
    Reasoning Models AI that thinks before answering. Better at analysis, 10-74x more expensive. 2
    Open-Weight vs. Closed Run on your server vs. vendor API. Cost structure, privacy, expertise trade-offs. 2
    Small Language Models Compact models on a single workstation. 75-95% cheaper, often equally good. 2
    Fine-Tuning vs. Prompting Retrain the model vs. instruct it cleverly. Prompting covers 90% of cases. 2
    Shadow AI Employees using private AI tools without IT approval. In 42% of companies. 3
    Vibe Coding Non-programmers building software with AI. Opportunity and governance risk. 3
    AI Copilot AI in existing software (Word, Outlook, Salesforce). ~€30/user/month. 3
    Agentic AI AI that acts autonomously: accepts goals, plans steps, uses systems. 3
    Compound AI Systems Assemblies of specialized AI components. How production systems actually work. 3
    Red Queen Hypothesis AI investment maintains position, doesn't create advantage. Stopping = falling behind. 4
    Jevons Paradox AI gets cheaper per query, total costs rise. Efficiency creates demand. 4
    Data Flywheel More users = better data = better model = more users. Early movers win. 4
    Model Collapse AI trained on AI content degrades. Original human data becomes more valuable. 4
    AI Slop Mass-produced AI content without substance. Recipients notice the difference. 4
    Marginal Cost of Intelligence Cost of a smart answer approaches zero. Evaluating knowledge beats generating it. 4
    Centaur Worker Employee who knows when to let AI work and when to take over. 4
    AI Literacy Legal obligation since Feb 2025. Training all AI-using employees. 5
    Risk Tiers / AI Inventory Four levels in the EU AI Act. AI inventory is the mandatory first step. 5
    AI Governance / XAI / MCP / A2A Rulebook, explainability, open standards. Prevents vendor lock-in. 5

    My Take

    In all five conversations, people apologized for knowledge gaps. The finance director who was embarrassed she didn't know "inference cost." The board member who had to look up "Jevons Paradox." Ms. Berger, who thought she'd done something forbidden. "I should really know this." No, you shouldn't. The vocabulary shifts faster than any training program can keep up.

    What you need is the ability to ask the right questions. And to recognize when someone uses jargon to sell a bad investment.

    Of these 25 terms, five are genuinely urgent. AI literacy, because the obligation has applied since February 2025 and most companies have done nothing. AI inventory, because it's the first step for everything else. Shadow AI, because it's already happening in your organization. Inference cost, because it determines your next budget discussion. And the Red Queen, because she explains why waiting is not an option.

    Sources

    1Bitkom Research, Künstliche Intelligenz 2025 (604 companies, calendar weeks 27-32, 2025)

    2Stanford HAI, AI Index Report 2025 (8th edition, April 2025)

    3Vectara Hallucination Evaluation Model (HHEM), 2024/2025

    4OpenAI, Reasoning Model Pricing o1/o3, 2025

    5Bitkom, Beschäftigte nutzen vermehrt Schatten-KI (604 companies, October 2025)

    6Salesforce / DMB, KI-Index Mittelstand 2026 (526 companies)

    7Zhang & Zhang, The Economics of Digital Intelligence Capital (January 2026)

    8Jevons, W.S., The Coal Question, 1865 (historical reference)

    9Shumailov et al., The Curse of Recursion (Nature, 2024)

    10Perdigão, The AI Glossary Update 2026

    11Mollick et al., Centaurs and Cyborgs on the Jagged Frontier (BCG/Wharton, 2023)

    12EU AI Act, Article 4, Regulation (EU) 2024/1689

    13ADVISORI, KI-MIG implementing act 2026

    14EU AI Act, Regulation (EU) 2024/1689

    15Anthropic, MCP and Agentic AI Foundation 2026

    16Google Developers Blog, A2A Protocol (April 2025)
