An AI engineer in 2026 builds products powered by large language models — not trains models from scratch. The six skills to learn, in order: a programming language (Python by default), prompt engineering, LLM APIs (OpenAI/Anthropic/Gemini), embeddings, RAG pipelines, and an agent framework (LangChain or CrewAI). A CS degree helps but isn't required — shipped projects matter more than credentials.
This article was researched and written by the team at Careery, which helps people land higher-paying jobs faster than ever. Learn more about Careery →
Quick Answers
How long does it take to become an AI engineer?
With programming experience: 1-3 months of focused learning on the GenAI stack. Without coding background: 2-6 months — thanks to vibe coding tools like Cursor, the barrier is lower than ever, but you still need to understand what the code does. The bottleneck isn't theory — it's building projects that demonstrate you can ship AI-powered products to production.
Can I become an AI engineer without knowing programming?
Yes — with vibe coding tools like Cursor and advanced models like GPT and Claude, you can build real AI applications by describing what you want in natural language. However, you must quickly learn to understand the architecture of the solutions being generated. Without that understanding, your growth will slow over time. If you have the time, start learning programming fundamentals in parallel — it gives you a much stronger foundation.
Do you need a degree to become an AI engineer?
No. AI engineering in 2026 is one of the most portfolio-driven fields in tech. A GitHub repo with a working RAG application speaks louder than a diploma. Companies like OpenAI, Anthropic, and hundreds of AI startups hire based on what you've built, not where you studied.
What skills do AI engineers need in 2026?
A programming language (Python by default, but the choice depends on your project), prompt engineering, LLM APIs (OpenAI/Anthropic/Gemini), embeddings and vector databases, RAG pipelines, and an agent framework like LangChain or CrewAI. Traditional ML (training models from scratch) is useful but not required for most AI engineering roles.
What is the difference between an AI engineer and an ML engineer?
ML engineers train and optimize models. AI engineers build products using pre-trained models (GPT, Claude, Gemini). Think of it this way: ML engineers build the engine, AI engineers build the car. Most new AI engineering roles focus on integrating LLMs into applications, not training them.
Here are the six things to learn, in order, starting today: a programming language (Python by default), prompt engineering, LLM APIs, embeddings, RAG, and agent frameworks. That's the AI engineering stack in 2026. Everything else — degrees, certifications, theoretical ML — is secondary to actually building with these tools.
AI engineering is not machine learning. AI engineers don't train models from scratch. They take powerful pre-trained models like GPT, Claude, and Gemini and build real products on top of them — chatbots, search engines, document processors, AI agents, and full-stack applications. It's one of the fastest-growing and highest-paying specializations in tech, and the barrier to entry is lower than most people think.
AI Engineer
An AI engineer designs, builds, and deploys applications powered by large language models (LLMs) and other AI systems. Unlike ML engineers who train models, AI engineers focus on integrating pre-trained models into products — building RAG pipelines, AI agents, chatbots, and intelligent features using APIs from OpenAI, Anthropic, Google, and open-source alternatives.
AI Engineering vs ML Engineering vs Data Science
This is the most common confusion. All three involve AI, but the day-to-day work is fundamentally different.
The Real Day-to-Day
Here's what AI engineers actually do — not the job posting fantasy, but real work:
Morning
- Debug why the RAG pipeline is returning irrelevant results for a specific query type
- Review a PR that adds streaming response support to the chatbot
- Write a prompt chain that extracts structured data from unstructured PDF documents
- Test a new embedding model to see if it improves retrieval accuracy
Afternoon
- Build an AI agent that can search a knowledge base, call external APIs, and synthesize answers
- Optimize token usage — the current implementation is burning through the API budget
- Implement guardrails to prevent the LLM from hallucinating about sensitive topics
- Deploy a new version of the AI feature behind a feature flag and monitor user feedback
AI engineering in 2026 means building with LLMs, not training them. The role is closer to full-stack development than to research — it's about shipping AI-powered products.
Here's the stack, in the order to learn it. Each layer builds on the previous one.
1. A Programming Language — Python by Default
The language you choose depends on your project. Python is the default — most LLM APIs, frameworks (LangChain, LlamaIndex), and AI tools are Python-first. But it's not the only option: TypeScript/JavaScript works well for AI-powered web apps (Vercel AI SDK, Next.js), Go and Rust are used for high-performance AI infrastructure. If you're not sure, start with Python.
What matters at this stage is not mastering the language — it's being able to:
- Call APIs: making HTTP requests, handling JSON responses, managing API keys
- Work with data: processing text, files, and structured data
- Handle async patterns: for streaming LLM responses
- Manage dependencies: packages, virtual environments, project setup
If you can write a script that calls an API, processes the response, and saves the result — your programming is ready for AI engineering.
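If that bar sounds abstract, here's roughly what it looks like as a script. This is a minimal sketch: the endpoint URL and the response shape are hypothetical placeholders, not a real service.

```python
# A minimal "ready for AI engineering" script: call an API, process the
# response, save the result. The URL and JSON shape are hypothetical.
import json
import requests

API_URL = "https://api.example.com/v1/items"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # in real projects, load this from an env variable

# 1. Call the API with an auth header
response = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()

# 2. Process the JSON response
data = response.json()
names = [item["name"] for item in data.get("items", [])]

# 3. Save the result
with open("result.json", "w") as f:
    json.dump({"names": names}, f, indent=2)

print(f"Saved {len(names)} names to result.json")
```

If you can read every line of that and explain what it does, you're ready for the rest of the stack.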
You can start building AI applications today using vibe coding tools like Cursor — describe what you want, and the AI writes the code. But you should learn to understand the architecture of what's being generated. Without that understanding, your growth will plateau. If you have the time, learn programming fundamentals in parallel — it gives you a much stronger foundation long-term.
2. Prompt Engineering — The Core Skill
Prompt engineering is not "just writing instructions." It's a systematic approach to getting reliable, consistent output from LLMs.
Key techniques to master:
- System prompts: defining the LLM's role, constraints, and output format
- Few-shot prompting: providing examples so the model understands the pattern
- Chain-of-thought: asking the model to reason step by step before answering
- Output formatting: structured output with JSON mode, function calling
- Prompt chaining: breaking complex tasks into sequential LLM calls
Build a prompt that extracts the name, company, role, and email from any cold outreach email — and returns clean JSON every time, even for poorly formatted messages. If you can do that reliably, your prompt engineering is job-ready.
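As a concrete illustration, here's a minimal sketch of that exercise using the OpenAI SDK's JSON mode. The model name, system prompt wording, and sample email are placeholder choices, not the only way to do it.

```python
# Structured extraction sketch: system prompt + JSON mode forces valid JSON.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """You extract contact details from cold outreach emails.
Return a JSON object with exactly these keys: name, company, role, email.
Use null for any field you cannot find. Never invent values."""

email_text = "hey! saw your repo. im Dana, growth @ Acme Corp... dana@acme.io"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": email_text},
    ],
    response_format={"type": "json_object"},  # constrains output to valid JSON
)

contact = json.loads(response.choices[0].message.content)
print(contact)  # {"name": "Dana", "company": "Acme Corp", "role": "growth", ...}
```

The key moves: a system prompt that pins down the schema, an explicit rule for missing fields, and JSON mode so the output parses every time.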
3. LLM APIs — OpenAI, Anthropic, Google
Knowing how to call LLM APIs and choosing the right model for each task is bread-and-butter AI engineering.
What to learn:
- Chat completions API (messages format, roles, parameters)
- Streaming responses for real-time UI
- Function calling / tool use — letting the LLM invoke your code
- Token counting and cost optimization
- Model selection — when to use a fast cheap model vs a powerful expensive one
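Here's a minimal sketch of the first two items using the OpenAI Python SDK with streaming enabled. The model name is a placeholder; the Anthropic and Gemini SDKs follow similar patterns.

```python
# Chat completion with streaming: tokens print as they're generated.
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain RAG in two sentences."},
    ],
    temperature=0.2,  # lower = more deterministic output
    stream=True,      # tokens arrive incrementally instead of all at once
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)  # render tokens in real time
```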
4. Embeddings — How Computers Understand Meaning
Before diving into RAG and vector databases, you need to understand the concept that powers all of it: embeddings.
Embeddings sound complicated. They're not. Here's the simplest explanation:
Imagine every word, sentence, or document has a GPS coordinate. Not a real location — a location in "meaning space." Things with similar meanings are close together on the map. Things with different meanings are far apart.
- "puppy" and "dog" → very close on the map
- "puppy" and "rocket" → very far apart
- "How do I reset my password?" and "I forgot my login credentials" → almost the same spot
That's what an embedding is: a list of numbers (a coordinate) that captures what something means. When the computer compares embeddings, it's literally measuring how close two pieces of meaning are.
Why this matters: Embeddings are the foundation of RAG, semantic search, vector databases, and recommendation systems. Every time an AI system "finds relevant documents" or "searches by meaning" — it's using embeddings under the hood.
How to use them in practice:
- Generate — call an API with your text, get back a list of numbers. Paid options: OpenAI Embeddings API, Cohere Embed API. Free/local: Sentence-Transformers (Hugging Face) — models like all-MiniLM-L6-v2 run on your machine for free.
- Store — put the embeddings in a vector database (Pinecone, Weaviate, Chroma, or pgvector for Postgres).
- Query — embed the user's question with the same model, search the vector DB for closest matches. Those matches become context for the LLM.
Think of embeddings as GPS coordinates for meaning. Similar meanings = nearby coordinates. Different meanings = distant coordinates. Vector databases = a map search engine that finds the closest points to your query.
The analogy above is a simplification. Technically, an embedding is a high-dimensional numerical vector produced by a neural network that encodes semantic relationships between inputs in a continuous vector space.
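Here's a minimal sketch of the generate-and-compare step using the free Sentence-Transformers option mentioned above. It runs locally with no API key.

```python
# Embed text locally and measure how close two meanings are.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "How do I reset my password?",
    "I forgot my login credentials",
    "puppy",
    "rocket",
]

# Each text becomes a 384-dimensional vector: its coordinate in "meaning space"
embeddings = model.encode(sentences)

# Cosine similarity: near 1.0 = similar meaning, near 0 = unrelated
print(util.cos_sim(embeddings[0], embeddings[1]))  # high: same intent
print(util.cos_sim(embeddings[2], embeddings[3]))  # low: unrelated
```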
5. RAG Pipelines — Retrieval-Augmented Generation
RAG is the most common architecture in production AI applications. It solves the biggest LLM problem: models don't know your private data.
RAG (Retrieval-Augmented Generation)
A pattern where the AI system first retrieves relevant documents from a knowledge base, then passes those documents to an LLM along with the user's question. This lets the LLM answer questions about data it was never trained on — company documents, product catalogs, support tickets, or any private dataset.
The RAG pipeline in practice:
- Ingest: split documents into chunks, generate embeddings, store in a vector database
- Retrieve: when a user asks a question, embed the query and find the most similar chunks
- Generate: pass the retrieved chunks + question to an LLM, get a grounded answer
Tools to know: LangChain, LlamaIndex, vector databases (Pinecone, Weaviate, Chroma, pgvector).
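To make the three steps concrete, here's a minimal sketch using Chroma, which applies a default local embedding model for you. The sample documents are placeholders, and the final LLM call is elided; any chat completions API from the previous section works there.

```python
# Ingest → retrieve → generate, in miniature, with an in-memory Chroma DB.
import chromadb

client = chromadb.Client()
collection = client.create_collection("docs")

# 1. Ingest: in a real pipeline these would be chunks split from documents
collection.add(
    documents=[
        "Refunds are available within 30 days of purchase.",
        "Premium plans include priority support.",
    ],
    ids=["chunk-1", "chunk-2"],
)

# 2. Retrieve: find the chunks most similar to the user's question
question = "Can I get my money back?"
results = collection.query(query_texts=[question], n_results=1)
context = "\n".join(results["documents"][0])

# 3. Generate: ground the LLM's answer in the retrieved context
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# answer = call_your_llm(prompt)  # e.g. the chat completions call shown earlier
```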
6. Agent Frameworks — AI That Takes Action
Agents are LLMs that can use tools — search the web, query databases, call APIs, write code, and chain actions together to accomplish complex tasks.
Key frameworks:
- LangChain / LangGraph — the most popular, with the largest ecosystem
- CrewAI — multi-agent orchestration (multiple AI agents collaborating)
- AutoGen (Microsoft) — conversation-based multi-agent patterns
- Vercel AI SDK — for building AI features in web applications
Agent frameworks are evolving rapidly. Don't spend months mastering one framework — learn the patterns (tool use, memory, planning, multi-agent coordination) and you'll be able to pick up any framework quickly.
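Whichever framework you choose, the core loop underneath is the same: the model requests a tool, your code runs it, and the result feeds back into the conversation. Here's a framework-free sketch using OpenAI function calling; the search_kb tool and its canned result are hypothetical stand-ins.

```python
# The core agent pattern, without a framework: LLM → tool call → tool result → LLM.
import json
from openai import OpenAI

client = OpenAI()

def search_kb(query: str) -> str:
    """Hypothetical knowledge-base search; a real tool would query a vector DB."""
    return "Refunds are available within 30 days of purchase."

tools = [{
    "type": "function",
    "function": {
        "name": "search_kb",
        "description": "Search the company knowledge base",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the refund policy?"}]

# First call: the model may respond with a tool call instead of text
response = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages, tools=tools
)
msg = response.choices[0].message

if msg.tool_calls:
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = search_kb(**args)  # run the tool the model asked for
        messages.append(
            {"role": "tool", "tool_call_id": call.id, "content": result}
        )
    # Second call: the model synthesizes an answer from the tool result
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=tools
    )

print(response.choices[0].message.content)
```

Once this loop makes sense, LangChain, CrewAI, and AutoGen are recognizable as layers of memory, planning, and orchestration on top of it.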
We interviewed an Amazon Applied Scientist who built production GenAI systems using AWS Bedrock — covering Claude integration, Knowledge Bases, Guardrails, and Agents that process billions of customer interactions. Read the full Careery Insight: AWS Bedrock: Complete Guide from an Amazon Applied Scientist.
The GenAI stack has six layers: programming language (Python by default) → prompt engineering → LLM APIs → embeddings → RAG pipelines → agent frameworks. Learn them in this order. Each builds on the previous one.
Here's something most "how to become an AI engineer" guides skip: the tools you use to write AI are also AI.
Vibe Coding
A development workflow where engineers use AI-powered code editors and assistants to write, debug, and iterate on code through natural language conversation. Instead of typing every line manually, the developer describes intent and the AI generates, modifies, or refactors the code. The term was coined by Andrej Karpathy, co-founder of OpenAI.
This isn't a gimmick. AI-assisted development is how the fastest AI engineers ship products. It's a core professional skill in 2026, not a crutch.
The Tools
- Cursor — an AI-native code editor built around chat-driven editing
- Windsurf — an AI-native code editor in the same category
- GitHub Copilot — an AI assistant that plugs into existing editors
How AI Engineers Actually Use These Tools
This isn't about lazy coding. It's about speed and iteration:
- Scaffolding: "Build a FastAPI endpoint that accepts a question, queries Pinecone, and returns an LLM-generated answer" → working code in 30 seconds instead of 30 minutes (see the sketch after this list)
- Debugging: paste an error traceback into the chat → get an explanation and fix
- Refactoring: "Convert this synchronous API call to async streaming" → done
- Learning: "Explain what this LangChain callback handler does" → contextual explanation with your actual code
- Writing AI with AI: using Cursor to build a RAG pipeline, an agent, or a prompt chain — the AI helps write the AI
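As a rough picture of what that scaffolding prompt should produce, here's a minimal FastAPI sketch. The retrieval and generation helpers are placeholders for the pieces covered earlier, not a production implementation.

```python
# Minimal FastAPI endpoint: question in, retrieved context + LLM answer out.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Question(BaseModel):
    text: str

def retrieve_context(query: str) -> str:
    """Placeholder: query your vector DB (e.g. the Chroma example above)."""
    return "...retrieved chunks..."

def generate_answer(prompt: str) -> str:
    """Placeholder: call an LLM API (e.g. the chat completions example above)."""
    return "...model answer..."

@app.post("/ask")
def ask(question: Question) -> dict:
    context = retrieve_context(question.text)
    answer = generate_answer(
        f"Context:\n{context}\n\nQuestion: {question.text}"
    )
    return {"answer": answer}

# Run with: uvicorn main:app --reload
```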
AI-generated code still needs human judgment. Understanding what the code does, why it works, and when it's wrong is what separates an AI engineer from someone who just pastes AI output. The tool writes faster — the engineer decides what to write.
AI-powered development tools like Cursor, Windsurf, and GitHub Copilot are core skills for AI engineers in 2026. Vibe coding isn't about replacing thinking — it's about shipping faster.
Path 1: Self-Taught (Projects-First) — Recommended
The self-taught path is not only viable for AI engineering — it's arguably the best path, because the field moves so fast that formal programs can't keep up. This is how most working AI engineers learned the GenAI stack.
Recommended learning order:
- Set up your dev environment (3-7 days) — install Cursor (or another AI-powered editor), configure your IDE, terminal, Git, and Python/Node. This sounds trivial but getting comfortable with the tools you'll use every day — especially AI-assisted coding — is the real first step. Don't skip it.
- Programming basics (2-4 weeks) — if you already know a language, skip this step and use it. If not, start with Python — it's the default for AI.
- Prompt engineering (1-2 weeks) — OpenAI playground, Anthropic console, systematic prompting
- LLM APIs (1-2 weeks) — build a simple chatbot with the OpenAI or Anthropic API
- Embeddings + vector databases (1-2 weeks) — embed documents, store in Chroma, query by similarity
- RAG (2-3 weeks) — build a complete RAG application over your own data
- Agent frameworks (2-3 weeks) — build an agent with LangChain or CrewAI that uses tools
- Full-stack AI app (3-4 weeks) — deploy a complete AI product with a web frontend
- Read AI Engineering by Chip Huyen (ongoing) — covers the production side: evaluation, deployment, monitoring, and the real challenges of shipping AI
If you hit a wall and don't know how to move forward — ask an LLM. This is not cheating. This is the core skill of an AI engineer: using AI to build AI. Describe your problem to GPT, Claude, or your Cursor chat. Ask it to explain the concept, debug the error, or suggest an approach. The best AI engineers are the ones who know how to get unstuck by talking to the models they're building with.
The biggest mistake self-taught learners make: watching tutorials instead of building. After learning each skill, immediately build something. A half-finished AI project that handles real data is worth more than 20 completed courses.
Path 2: Bootcamp or Short Courses
Programs like DeepLearning.AI short courses (Andrew Ng), AI/ML bootcamps, and structured online programs (fast.ai, Full Stack Deep Learning) teach practical AI skills in weeks to months.
- + Fast: weeks to months, not years
- + Practical: focused on building, not theory
- + DeepLearning.AI short courses cover LangChain, RAG, and agents specifically
- + Lower cost: free to $5,000
- − Shallow depth — you'll need to go deeper independently
- − Credential less recognized than a degree
- − Quality varies wildly between programs
- − Still need to build projects beyond the curriculum
Best for: Career changers who want structure, developers adding AI skills, anyone who learns best with guided instruction.
Path 3: Computer Science Degree
- + Strongest credential signal for large companies
- + Deep fundamentals: algorithms, systems, distributed computing
- + Research opportunities in AI/ML labs
- + Network of peers and professors in the field
- − 4 years and $40,000-$200,000+ in cost
- − Curriculum usually lags industry by 2-3 years — most CS programs don't teach the GenAI stack
- − Heavy math and theory requirements that aren't needed for AI engineering
- − Opportunity cost: 4 years of missed salary and project-building time
Best for: People early in their career (18-22) who want the broadest optionality, or those targeting AI research labs where a CS/ML degree is expected.
All three paths work. For AI engineering specifically, the self-taught/projects-first path is the strongest because the field moves faster than any curriculum. What matters is what you've built, not where you learned it.
A portfolio is not optional for AI engineers. It's the primary signal hiring managers use — especially for candidates without traditional ML backgrounds.
Build these three projects. Each demonstrates a different skill:
Project 1: A RAG Application
Build a system that answers questions about a specific knowledge base (company docs, research papers, a book, legal documents).
What it demonstrates: embeddings, vector databases, retrieval pipelines, prompt engineering, LLM integration.
Example: "Chat with the Python documentation" — upload the Python docs, embed them, let users ask natural language questions and get accurate answers with source citations.
Deploy it. A live URL is 10x more impressive than a GitHub repo.
Project 2: An AI Agent
Build an agent that can use tools to accomplish a goal — search the web, query a database, call APIs, and synthesize information.
What it demonstrates: agent frameworks, tool use, multi-step reasoning, error handling.
Example: A research agent that takes a topic, searches multiple sources, evaluates relevance, and produces a structured summary with citations.
Project 3: A Full-Stack AI Product
Build a complete application with a real UI, user authentication, and an AI backend. This is the project that shows you can ship, not just prototype.
What it demonstrates: full-stack development, production deployment, UX design for AI features, streaming responses, error states.
Example: An AI writing assistant that helps users draft emails, with tone controls, edit suggestions, and conversation history.
Not sure what to build? Our GenAI Project Ideas for AI Engineers guide ranks 8 projects from beginner to advanced — each with the tech stack, what it proves to employers, and a ready-to-use Cursor prompt to scaffold the entire app.
Portfolio Presentation Tips
- GitHub: clean README with architecture diagram, setup instructions, and demo GIF/video
- Live deployment: Vercel, Railway, or Fly.io — a live URL beats a README every time
- Write about it: a blog post or Twitter thread explaining what you built and what you learned
- Show the iteration: git history that shows you debugging, improving, and refactoring is more impressive than a single perfect commit
Portfolio Checklist
- Do you have a RAG application that handles real documents?
- Do you have an AI agent that uses tools and handles multi-step tasks?
- Do you have a full-stack AI product deployed to a live URL?
- Is your code on GitHub with clear documentation?
- Can you explain the architectural decisions you made and why?
- Have you written about or presented your projects publicly?
Build three projects: a RAG app, an AI agent, and a full-stack AI product. Deploy them live. Write about them. This portfolio will generate more interviews than any degree or certification.
Certifications don't replace a portfolio, but they can signal baseline knowledge — especially for career changers. Here's every GenAI-relevant certification worth considering, ranked by value for AI engineers.
DeepLearning.AI Short Courses — Start Here
Free short courses by Andrew Ng and partners (OpenAI, LangChain, Anthropic, etc.). These are the fastest way to learn the GenAI stack hands-on:
- ChatGPT Prompt Engineering for Developers — prompt engineering fundamentals with OpenAI
- LangChain for LLM Application Development — building chains, agents, and tools
- Building RAG Agents with LLMs — retrieval-augmented generation from scratch
- Building Systems with the ChatGPT API — chaining LLM calls, evaluation, deployment
No exam, no credential — but the practical skills are directly applicable. Combine with portfolio projects for maximum impact.
AWS AI Practitioner
Entry-level AWS certification covering AI/ML services, Amazon Bedrock, responsible AI, and prompt engineering. Good credential signal for companies in the AWS ecosystem.
Relevant for this article because: AWS Bedrock is one of the main ways enterprises deploy LLM applications (GPT, Claude, Llama via a single API).
Azure AI Engineer Associate (AI-102)
Covers Azure AI services including Azure OpenAI Service — Microsoft's hosted GPT and embedding models. Focuses on building AI solutions with Cognitive Services, document intelligence, and knowledge mining.
Relevant for this article because: Azure OpenAI Service is how many enterprises access GPT models. If your target companies are Microsoft shops, this cert opens doors.
LangChain Academy
Free courses from the LangChain team on building with LangChain and LangGraph — covering chains, agents, tool use, and multi-agent systems. Certificate of completion available.
Relevant for this article because: LangChain is the most popular framework for building LLM applications. Learning it directly from the creators is the most efficient path.
When Certifications Help
- Career changers — signals commitment to hiring managers who see no AI experience on the resume
- Enterprise roles — companies using Azure or AWS often filter for platform-specific certifications
- Combined with projects — one cert + three strong projects is the optimal combination
When They Don't Help
- Before building anything — a certification without a portfolio is an empty signal
- Collecting multiple certs — one certification plus real projects beats four certifications with no projects
- AI-native startups — most startups don't care about certifications at all; they want to see your GitHub
Start with free resources: DeepLearning.AI short courses and LangChain Academy. If you're targeting enterprise roles, add one cloud cert (AWS or Azure) based on your target company's stack. One cert + three strong projects is the winning formula.
Where AI Engineering Jobs Are
- AI-native startups — the highest concentration of AI engineering roles. Check Y Combinator's Work at a Startup, AI-focused job boards
- Big tech AI teams — OpenAI, Anthropic, Google DeepMind, Meta AI, Microsoft AI (competitive but high-paying)
- Enterprise AI teams — large companies building internal AI tools (every Fortune 500 is hiring for this)
- Consulting/agencies — firms building AI products for clients
- Freelance/contract — growing demand for AI engineers who can build MVPs and prototypes
What Hiring Managers Actually Look For
Above all, shipped projects: a deployed RAG app or working agent on your GitHub carries more weight than degrees or certifications, which is exactly why the portfolio section above matters.
How to Position Yourself
- LinkedIn headline: "AI Engineer | Building with LLMs, RAG, and Agent Frameworks" — be specific about what you build. See our Personal Brand Keywords for AI Engineers for 20+ GenAI terms and headline formulas
- GitHub profile: pin your three portfolio projects, write clear READMEs
- Content: share what you're building. A tweet thread about a technical challenge you solved while building a RAG app gets noticed
- Network in AI communities: AI Twitter/X, local AI meetups, Discord servers for LangChain/OpenAI/Anthropic
- Target the right companies: start with startups and mid-size companies where AI engineers wear many hats — it's easier to get in and you'll learn faster
Job Search Mistakes AI Engineers Make
- Applying only to FAANG — AI startups are hiring faster and have lower bars for entry
- Listing ChatGPT usage as 'AI experience' on a resume — hiring managers want to see what you've BUILT
- Spending months on theory before applying — apply while you're building, not after you feel 'ready'
- Ignoring the portfolio — no GitHub projects is a red flag for AI engineering roles
- Not showing deployed projects — a live demo URL is worth 100 bullet points on a resume
AI engineering roles exist across startups, big tech, and enterprise. Position yourself with a specific LinkedIn presence, a strong GitHub portfolio, and deployed projects. Apply to startups first — they hire faster and let you learn more.
Ready to apply? We've built the complete AI engineer job search toolkit: AI Engineer Resume Guide (templates and bullet formulas), Interview Questions & Answers (50+ questions covering LLMs, RAG, agents, and system design), and Personal Branding for AI Engineers (LinkedIn, GitHub, and content strategy).
AI engineering has a clear trajectory, though the field is young enough that titles and levels are still being defined.
Specialization Paths
As the field matures, specializations are emerging:
- AI Product Engineer — builds user-facing AI features (closest to full-stack development)
- AI Infrastructure Engineer — builds the platform: model serving, evaluation pipelines, cost optimization
- AI Agent Specialist — designs and builds complex multi-agent systems
- AI Safety / Evaluation — builds guardrails, tests for hallucination, ensures responsible AI
AI engineering is a young field with rapid career growth. Junior to senior can happen in 4-7 years. Specialization paths are still forming — choose what excites you most.
The Bottom Line
1. AI engineering in 2026 means building with LLMs, not training them — it's product engineering powered by GenAI
2. The core stack: a programming language (Python by default), prompt engineering, LLM APIs (OpenAI/Anthropic/Gemini), embeddings + RAG, agent frameworks
3. Embeddings are simple: they turn text into numbers that capture meaning, powering RAG and semantic search
4. AI-powered dev tools (Cursor, Windsurf, Copilot) are core professional skills — vibe coding is how fast AI engineers ship
5. Portfolio > degrees: build a RAG app, an AI agent, and a full-stack AI product — deploy them live
6. Start applying to AI startups while building — they hire based on projects, not credentials
Frequently Asked Questions
Can you become an AI engineer without knowing programming?
Yes — and this is new. With vibe coding tools like Cursor and advanced models like GPT and Claude, it's possible to build real AI applications by describing what you want in plain language. The AI writes the code. However, you must quickly learn to understand the architecture of the solutions being generated — what the code does, why it's structured that way, and where it can break. Without that understanding, your growth will stall over time. If you have the time, the best approach is to start learning programming fundamentals in parallel. It gives you a much stronger foundation.
Can you become an AI engineer without a CS degree?
Yes. AI engineering is one of the most accessible engineering specializations because the tools are high-level (API calls, not low-level math). A strong portfolio of AI projects — especially a RAG app, an agent, and a deployed product — can substitute for formal education. Many successful AI engineers are self-taught or transitioned from other development roles.
What programming language should I learn for AI engineering?
Python is the default choice — most LLM APIs, frameworks (LangChain, LlamaIndex), and AI tools are Python-first. But the language depends on your project: TypeScript/JavaScript for AI-powered web apps (Vercel AI SDK, Next.js), Go or Rust for high-performance AI infrastructure. Start with Python unless you have a specific reason not to.
Is AI engineering a good career in 2026?
Yes. Demand for AI engineers far exceeds supply. Every company from startups to Fortune 500s is building AI features, and the pool of engineers who can ship production AI applications is small. Compensation reflects this: AI engineering roles typically pay 20-40% more than equivalent software engineering positions.
What is the difference between AI engineering and prompt engineering?
Prompt engineering is one skill within AI engineering. A prompt engineer focuses exclusively on crafting effective prompts. An AI engineer uses prompt engineering as part of building complete systems — they also handle RAG pipelines, vector databases, agent frameworks, deployment, and full-stack development. AI engineering is the broader, more technical role.
Do I need to understand machine learning to be an AI engineer?
Not deeply. Understanding the basics — what a model is, how training works, what tokens are, how embeddings represent meaning — is sufficient. You don't need to know linear algebra, backpropagation, or how to train models from scratch. That's ML engineering, not AI engineering.
What are the best resources to learn AI engineering?
Start with the official documentation: OpenAI docs, Anthropic docs, LangChain docs. For structured learning, DeepLearning.AI short courses are excellent and free. For the production side, read AI Engineering by Chip Huyen (O'Reilly, 2025). For hands-on practice, build projects using Cursor or another AI-powered editor — it accelerates learning significantly.
How much do AI engineers make?
AI engineering is among the highest-paid software engineering specializations. Junior roles typically start at $120,000-$150,000. Mid-level: $150,000-$200,000. Senior: $200,000-$300,000+. At top AI companies (OpenAI, Anthropic, Google DeepMind), total compensation can exceed $400,000 for senior roles. Compensation varies significantly by location, company stage, and specialization.


Sources & References
- AI Engineering: Building Applications with Foundation Models — Chip Huyen (2025)
- Claude Model Documentation — Anthropic (2025)
- Embeddings Guide — OpenAI (2025)
- Cursor — The AI Code Editor — Anysphere Inc. (2025)
- LangChain Documentation — Introduction — LangChain (2025)
- Sentence-Transformers Documentation — Hugging Face / UKP Lab (2025)