AI concepts that every founder / PM should know. The no-jargon version.

  1. AI: AI is software that can do things we used to think only humans could do - software that learns from data on its own and uses what it learned to predict or generate something.
  2. Predictive AI vs Generative AI: Predictive AI looks at data and says “what’s likely to happen next?” - fraud detection, demand forecasting, lead scoring. Generative AI looks at data and says “let me create something new” - text, images, code, music.
  3. Machine Learning & Deep Learning: Machine Learning is AI that learns patterns from data instead of being manually programmed. Deep Learning is a subset of ML that uses neural networks (layers of math inspired by the human brain). Deep Learning works well on messy data like text, images, and audio.
  4. Large Language Model (LLM): Think of it as an incredibly well-read intern who has read most of the internet. Under the hood, it just predicts the next word - so well that it can write essays, code, and reason through problems. GPT, Claude, Gemini - all LLMs.
  5. Foundation Model: Instead of building a separate AI for each job, you train one giant model and fine-tune it. Like building one strong engine and putting it in cars, trucks, and boats.
  6. Training vs Inference: Training = teaching the AI. Takes weeks, costs a lot - an indirect / R&D cost. Inference = using the trained AI. Milliseconds per query, cheap per query - a direct cost.
  7. Prompting / Prompt Engineering: The art of asking AI the right question to get the right answer. A vague prompt gives a vague answer. A specific prompt with context, examples, and constraints gives a brilliant answer. Levels of Prompting: Zero-shot: Just ask. No examples. Few-shot: Give 2-3 examples in your prompt. Chain-of-thought: Ask the AI to think step by step.
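The three levels are just three ways of building the same prompt. A minimal sketch, assuming a chat-style message format (`{"role", "content"}` dicts) like most LLM APIs use - no specific SDK implied:

```python
# Zero-shot: just ask, no examples.
zero_shot = [
    {"role": "user", "content": "Classify this review: 'Slow shipping.'"},
]

# Few-shot: show 2-3 worked examples before the real question.
few_shot = [
    {"role": "user", "content": "Classify: 'Love it!'"},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "Classify: 'Broke in a day.'"},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "Classify: 'Slow shipping.'"},
]

# Chain-of-thought: same question, plus an instruction to reason first.
chain_of_thought = [
    {"role": "user",
     "content": "Classify this review: 'Slow shipping.' "
                "Think step by step before answering."},
]
```

Same model, three prompts - and often three very different answer qualities.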
  8. Tokens: LLMs don’t read words - they read tokens. A token ≈ ¾ of a word. You pay per token. Every API call costs money based on tokens in (prompt) and tokens out (response).
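The billing math is simple enough to sketch. Prices below ($3 per million input tokens, $15 per million output tokens) are illustrative only - real prices vary by model and change often:

```python
def estimate_cost(prompt_words, response_words,
                  in_price_per_m=3.0, out_price_per_m=15.0):
    # Rule of thumb from above: a token ≈ ¾ of a word,
    # so 1 word ≈ 4/3 tokens.
    tokens_in = prompt_words * 4 / 3
    tokens_out = response_words * 4 / 3
    return (tokens_in / 1e6) * in_price_per_m \
         + (tokens_out / 1e6) * out_price_per_m

# e.g. a 750-word prompt and a 300-word answer:
cost = estimate_cost(750, 300)
```

Note that output tokens usually cost several times more than input tokens - long answers, not long prompts, are often the bigger line item.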
  9. Context Window: The amount of text an AI can “see” at once. Its working memory. Small context window = AI forgets earlier parts of your conversation.
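Here is a toy sketch of why that "forgetting" happens: when the conversation exceeds the window, the oldest messages get dropped. The 20-token budget is artificially tiny, and counting words as tokens is a crude stand-in for a real tokenizer:

```python
def fit_to_window(messages, max_tokens=20):
    kept, used = [], 0
    for msg in reversed(messages):     # keep the most recent messages first
        tokens = len(msg.split())      # crude token count: one word ≈ one token
        if used + tokens > max_tokens:
            break
        kept.append(msg)
        used += tokens
    return list(reversed(kept))

history = ["my name is Priya and I run a fintech startup",
           "we process about two million payments a month",
           "what was my name again?"]
visible = fit_to_window(history)       # the first message no longer fits
```

The model never "sees" the dropped first message - so it genuinely cannot answer "what was my name again?".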
  10. Hallucination: When AI confidently makes up facts that sound completely real. It doesn’t “know” it’s lying. It’s pattern-matching and sometimes the pattern leads somewhere wrong.
  11. RAG (Retrieval-Augmented Generation): Instead of relying on what AI memorized during training, you feed it your own documents at query time. Reduces hallucination.
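The RAG flow fits in a few lines: retrieve the most relevant document, then paste it into the prompt. A toy sketch - real systems rank by embedding similarity in a vector database; simple word overlap stands in for that here:

```python
DOCS = {
    "refunds": "Refunds are processed within 5 business days.",
    "shipping": "We ship worldwide; delivery takes 7-14 days.",
}

def retrieve(query):
    # Toy scoring: pick the doc sharing the most words with the query.
    def overlap(text):
        return len(set(query.lower().split()) & set(text.lower().split()))
    return max(DOCS.values(), key=overlap)

def build_prompt(query):
    context = retrieve(query)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How long do refunds take?")
```

The "answer using only this context" instruction is the anti-hallucination lever: the model is grounded in your documents instead of its memory.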
  12. Fine-Tuning: Taking a pre-trained model and training it further on your specific data. Like hiring a generalist doctor and sending her to specialize in cardiology. More expensive than prompting. Cheaper than training from scratch.
  13. Embedding: Converting text (or images) into numbers that capture meaning for AI. “Happy” and “Joyful” → similar numbers. “Happy” and “Refrigerator” → very different. This is how AI “understands” similarity.
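To make "similar numbers" concrete, here is a toy sketch: hand-made 3-number vectors standing in for the thousands of dimensions a real embedding model produces, compared with cosine similarity (the standard similarity measure for embeddings):

```python
import math

# Hand-made toy vectors - a real embedding model produces these.
EMBEDDINGS = {
    "happy":        [0.90, 0.80, 0.10],
    "joyful":       [0.85, 0.75, 0.15],
    "refrigerator": [0.10, 0.05, 0.90],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm  # 1.0 = same direction (same meaning), ~0 = unrelated

sim_happy_joyful = cosine(EMBEDDINGS["happy"], EMBEDDINGS["joyful"])
sim_happy_fridge = cosine(EMBEDDINGS["happy"], EMBEDDINGS["refrigerator"])
```

"Happy" vs "joyful" scores near 1.0; "happy" vs "refrigerator" scores far lower - that gap is what "understanding similarity" means in practice.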
  14. Vector Database: Traditional databases find exact “word” matches. Vector databases find “similar” things. User searches “shoes for flat feet” → vector DB finds relevant products even if no description uses those exact words.
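A vector search is then just "rank everything by similarity, return the top few". A toy sketch with hand-made 2-number vectors (a real system would embed products and queries with the same embedding model):

```python
import math

PRODUCTS = {
    "orthotic insoles":   [0.9, 0.1],
    "arch-support shoes": [0.8, 0.2],
    "phone charger":      [0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_vec, top_k=2):
    ranked = sorted(PRODUCTS,
                    key=lambda name: cosine(query_vec, PRODUCTS[name]),
                    reverse=True)
    return ranked[:top_k]

# Pretend "shoes for flat feet" embeds near [1.0, 0.0]:
results = search([1.0, 0.0])
```

No product description contains "flat feet", yet the right products surface - that is the whole pitch of vector search over keyword search.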
  15. Agent: An AI that doesn’t just answer questions - it takes actions. Browse the web, call APIs, write code, book meetings.
  16. Agentic Workflow: Instead of one big prompt → one big answer, you break the task into steps. Agent 1 researches. Agent 2 drafts. Agent 3 reviews.
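Structurally, an agentic workflow is just steps chained together, each consuming the previous step's output. In this toy sketch each "agent" is a plain function; in a real system each would be its own LLM call with its own prompt:

```python
def research(topic):
    return f"notes on {topic}"          # agent 1: gather material

def draft(notes):
    return f"draft based on {notes}"    # agent 2: write it up

def review(text):
    return f"approved: {text}"          # agent 3: check and sign off

def pipeline(topic):
    return review(draft(research(topic)))

output = pipeline("pricing page")
```

Breaking the task up like this makes each step easier to prompt, test, and swap out than one giant do-everything prompt.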
  17. Tool Use / Function Calling: Giving AI the ability to use external tools - search engines, databases, APIs. The AI decides when to use which tool.
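The mechanics are simpler than they sound: the model replies with a tool name plus arguments, and your code runs the matching function. A minimal sketch - the "model response" is hard-coded here, where in reality it comes back from the LLM API as structured JSON:

```python
def get_weather(city):
    # Stub; a real tool would call a weather API.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# What the model might return for "What's the weather in Pune?":
model_response = {"tool": "get_weather", "arguments": {"city": "Pune"}}

def dispatch(response):
    fn = TOOLS[response["tool"]]
    return fn(**response["arguments"])

result = dispatch(model_response)
```

The key design point: the model never executes anything itself. It only *requests* a tool call; your code decides whether and how to run it.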
  18. Coding Agents: AI that writes, tests, debugs, and ships code autonomously. You describe what you want in plain English, the agent writes the code. Ex: Claude Code, Codex, Cursor.
  19. Computer-Using Agents (CUA): AI that can see and use your screen - click buttons, fill forms, navigate apps - just like a human would. Ex: Claude Cowork, OpenAI’s Operator.
  20. Multimodal AI: AI that understands and generates all modes - text, images, audio, video, and code. Your product can accept a photo and return text, or accept text and return an image.
  21. Diffusion Model: The tech behind AI image generation (Midjourney, DALL-E, Stable Diffusion). Starts with noise, gradually removes it to create an image.
  22. Synthetic Data: Training data generated by AI itself, not collected from the real world. When you don’t have enough real data, or real data has privacy issues, you generate synthetic data.
  23. Guardrails: Rules and filters around AI to prevent it from going off-script. Prevent harmful content, off-topic answers, leaking confidential data.
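The crudest possible guardrail is a filter between the model and the user. A toy sketch - real guardrails use classifiers and policy models, not keyword lists, but the placement in the pipeline is the same:

```python
BANNED = {"password", "ssn"}  # illustrative; real policies are far richer

def guard(response):
    # Block the response before it reaches the user if it trips a rule.
    if any(word in response.lower() for word in BANNED):
        return "Sorry, I can't share that."
    return response

safe = guard("Your balance is $120.")
blocked = guard("The admin password is hunter2.")
```

The lesson for builders: guardrails live *around* the model, in your application code, not inside it.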
  24. Human-in-the-Loop (HITL): AI does 90% of the work. A human reviews the last 10%. Needed when mistakes are costly - medical, legal, financial.
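In code, HITL is usually just a routing rule on model confidence. A sketch - the 0.9 threshold is illustrative and should be tuned against the real cost of a mistake:

```python
def route(prediction, confidence, threshold=0.9):
    # Confident enough → ship automatically; otherwise queue for a human.
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

decisions = [
    route("approve claim", 0.97),  # goes straight through
    route("deny claim", 0.62),     # lands in the review queue
]
```

Lowering the threshold saves reviewer time; raising it saves you from costly mistakes. Picking that number is a product decision, not a model decision.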
  25. AI-Native vs AI-Enabled: AI-enabled = existing product with AI bolted on (a chatbot added to PowerPoint). AI-native = built from the ground up around AI (Gamma).
  26. Temperature: Controls how “creative” vs “predictable” the AI is. Low temperature = safe, consistent, factual. Ex: Customer Support AI. High temperature = creative, varied. Ex: Brainstorming.
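Temperature is one division: the model's raw scores (logits) get divided by T before being turned into probabilities. Low T sharpens the distribution toward the top choice; high T flattens it. A sketch with toy logits:

```python
import math

def softmax(logits, temperature=1.0):
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                     # toy scores for 3 next words
cold = softmax(logits, temperature=0.2)      # near-deterministic
hot = softmax(logits, temperature=2.0)       # much more varied
```

At T=0.2 the top word gets almost all the probability; at T=2.0 the runners-up get real chances - which is exactly "consistent" vs "creative".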
  27. Overfitting: When AI memorizes training data instead of learning patterns. Like a student who memorized past exam answers but can’t solve a new problem.
  28. Eval (Evaluation): QA for the AI world. How you check if your AI is actually doing what it was supposed to do in all cases. Checks for Accuracy, Relevance, Helpfulness, Safety. Eval is pre-ship. Post-ship, you need what’s called monitoring to catch drift, cost spikes, and latency.
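A minimal eval harness is just labeled cases plus a scoring loop. A sketch - `fake_model` stands in for a real LLM call, and accuracy is the simplest of the checks listed above:

```python
def fake_model(question):
    # Stand-in for an LLM call; canned answers for illustration.
    canned = {"2+2": "4", "Capital of France?": "Paris"}
    return canned.get(question, "I don't know")

EVAL_SET = [
    ("2+2", "4"),
    ("Capital of France?", "Paris"),
    ("Capital of Australia?", "Canberra"),  # this one will fail
]

def run_eval(model, cases):
    passed = sum(1 for q, expected in cases if model(q) == expected)
    return passed / len(cases)

accuracy = run_eval(fake_model, EVAL_SET)
```

Real eval sets grade fuzzier qualities (relevance, helpfulness, safety), often using another LLM as the judge - but the shape stays the same: cases in, score out, run on every change before you ship.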
  29. MCP (Model Context Protocol): A standard that lets AI connect to external tools and data in a plug-and-play way. Think USB-C for AI - one standard connector instead of custom integrations for every tool.
  30. Explainability (XAI): Can you explain why AI made a decision? In finance, healthcare, insurance - “the AI said so” is not acceptable.
  31. The “Last Mile” Problem: AI can get you 80% of the way shockingly fast. That last 20% - handling edge cases, weird inputs/outputs, industry-specific nuance, getting output just right - is where all the time and money goes. Most failed AI products didn’t fail at the AI. They failed at the last mile.