Best GenAI Project Ideas for AI Engineers: Portfolio Projects with Cursor Prompts (2026)

Published: 2026-02-12

TL;DR

The best portfolio projects for AI engineers in 2026 are GenAI-focused — RAG chatbots, AI agents, and full-stack AI products. Not image classifiers or recommendation engines. This guide ranks 8 project ideas from beginner to advanced, each with the tech stack, what it proves to employers, and a ready-to-use Cursor prompt to build the entire project from scratch. The formula: 3 deployed projects (RAG app + AI agent + full-stack AI SaaS) beats any number of certifications.

Brought to you by Careery

This article was researched and written by the Careery team, which helps professionals land higher-paying jobs faster than ever. Learn more about Careery

Quick Answers

What AI projects should I build for my portfolio?

Build three GenAI projects: (1) a RAG chatbot that answers questions over documents, (2) an AI agent that uses tools to complete tasks, and (3) a full-stack AI product with authentication and a real use case. These three prove the core GenAI skills hiring managers look for.

What is the best beginner AI project?

A RAG chatbot — upload documents, embed them, and chat with them. It's the 'hello world' of GenAI engineering. The project teaches embeddings, vector databases, retrieval, and prompt engineering — all core skills — and can be built in a weekend with LangChain and OpenAI.

Do AI projects matter more than certifications?

Yes. Portfolio projects are the #1 hiring signal for AI engineers. A deployed RAG app on GitHub proves you can build. A certification proves you can study. Three projects + one cloud certification is the winning formula. Projects first, certifications second.

Can I build AI projects with Cursor?

Yes — Cursor is one of the fastest ways to build AI projects. Each project in this guide includes a detailed prompt you can paste directly into Cursor to scaffold the entire application. Cursor's AI agent understands project architecture and can generate working code from a clear specification.

Hiring managers screening AI engineer candidates look at one thing first: what have you built? Not what courses you took, not what certifications you hold — what working AI applications exist on your GitHub. The AI engineering job market is portfolio-driven. Three deployed GenAI projects are worth more than a computer science degree when it comes to landing interviews.

This guide covers 8 project ideas specifically chosen for AI engineers — not generic "build a chatbot" tutorials, but projects that demonstrate the skills enterprise and startup AI teams actually hire for: RAG pipelines, AI agents, prompt engineering, multi-model integration, and full-stack AI product development.

Every project includes a Cursor prompt — a detailed specification that can be pasted directly into Cursor to scaffold the entire application from scratch.


Why Projects Beat Certifications

AI Engineer Portfolio Project

A deployed application that demonstrates GenAI engineering skills — building with LLMs, embeddings, vector databases, RAG pipelines, and AI agents. Unlike traditional software projects, AI portfolio projects prove the ability to work with non-deterministic systems, engineer prompts, and integrate foundation models into real products.

Three reasons projects matter more than certifications for AI engineers:

  1. Projects prove building ability. A certification proves studying. A deployed RAG app proves building. Hiring managers can inspect your code, test your application, and evaluate your engineering decisions. Certifications are pass/fail black boxes.

  2. Projects demonstrate judgment. Choosing the right model, designing effective prompts, handling edge cases, managing costs — these decisions only show up in real projects, not multiple-choice exams.

  3. Projects compound. Each project adds to a public portfolio that grows over time. Certifications expire. GitHub repositories don't.

The 3-Project Formula

The minimum viable portfolio for an AI engineer: (1) a RAG application — proves you understand embeddings, retrieval, and grounding, (2) an AI agent — proves you can build autonomous systems with tool use, and (3) a full-stack AI product — proves you can ship. Three projects, three core skills.

Complete AI Engineer Roadmap

Portfolio projects are step 7 in the AI engineer learning path. For the complete journey — from programming fundamentals to your first AI role — see our How to Become an AI Engineer: The Only Free Guide You Need.


The Complete Project Ranking

# | Project | Difficulty | Build Time | Core Skill Proven
1 | AI Resume Analyzer | Beginner | 1-2 days | Prompt engineering, file processing
2 | RAG Chatbot: Chat with Your Docs | Beginner | 2-3 days | RAG pipeline, embeddings, vector DB
3 | Multi-Model AI Playground | Beginner+ | 2-3 days | Multi-provider APIs, streaming
4 | AI Content Pipeline | Intermediate | 3-5 days | Prompt chaining, structured output
5 | AI Code Review Agent | Intermediate | 4-6 days | Agent architecture, tool use
6 | Voice AI Assistant | Intermediate+ | 4-6 days | Multimodal AI, real-time processing
7 | AI Data Analyst Agent | Advanced | 1-2 weeks | Function calling, complex agents
8 | Full-Stack AI SaaS | Advanced | 2-4 weeks | Production deployment, full-stack
🔑 Start with Project 1 or 2 as your first GenAI project. Build Project 5 or 7 to prove agent skills. Finish with Project 8 to prove you can ship a complete product. Three projects from different tiers = a well-rounded AI engineer portfolio.


Beginner Projects

1. AI Resume Analyzer

Upload a resume and a job description. The AI analyzes how well the resume matches the job, identifies missing keywords, suggests improvements, and scores the match.

Key Stats
Difficulty: Beginner
Build time: 1-2 days
Tech stack: Next.js + GPT-5.2 Mini

What it proves to employers:

  • Prompt engineering with structured output (JSON scores, categorized feedback)
  • File processing (PDF parsing, text extraction)
  • Practical AI integration into a useful product
  • Understanding of the job market domain (relevant for Careery-adjacent roles)

Key features to build:

  • PDF upload and text extraction
  • Side-by-side resume vs job description analysis
  • Match score with breakdown (skills, experience, keywords)
  • Specific improvement suggestions from the LLM
  • Clean UI with clear results display
Cursor Prompt: AI Resume Analyzer
Build a full-stack AI Resume Analyzer app using Next.js 15 (App Router) and the OpenAI API.

CORE FEATURES:
- Upload page: user uploads a PDF resume and pastes a job description
- PDF parsing: extract text from the resume PDF using pdf-parse
- AI analysis: send resume text + job description to GPT-5.2 Mini with a structured prompt
- Results page showing: match score (0-100), matched skills, missing skills, specific improvement suggestions
- The LLM response must be structured JSON (use OpenAI's response_format: { type: "json_object" })

TECH STACK:
- Next.js 15 with App Router and Server Actions
- OpenAI SDK — GPT-5.2 Mini for analysis (fast, cheap at $0.25/1M input tokens)
- pdf-parse for PDF text extraction
- Tailwind CSS + shadcn/ui for the UI
- File upload using Next.js API route with formData

ARCHITECTURE:
- /app/page.tsx — upload form (resume PDF + job description textarea)
- /app/api/analyze/route.ts — API route: parse PDF, call GPT-5.2 Mini, return structured JSON
- /app/results/page.tsx — display analysis results with score visualization
- /lib/prompts.ts — system prompt and analysis prompt templates
- /lib/types.ts — TypeScript types for the analysis response

MODEL CHOICE: GPT-5.2 Mini — this task needs structured JSON output but not deep reasoning.
Mini is roughly 5x cheaper than Instant on input tokens and fast enough for single-document analysis.

The system prompt should instruct the model to:
1. Compare the resume against the job description
2. Score the match from 0-100
3. List matched skills, missing skills, and keyword gaps
4. Provide 3-5 specific, actionable improvement suggestions
5. Return everything as structured JSON

Style the app clean and modern. Use a progress indicator during analysis.
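To make the structured-output requirement concrete, here is a minimal sketch of the call the /app/api/analyze route would make. It assumes the official OpenAI Node SDK; the "gpt-5.2-mini" identifier and the AnalysisResult shape follow this article's spec and are illustrative, not guaranteed strings.

import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Shape the prompt asks the model to return as JSON.
interface AnalysisResult {
  matchScore: number;      // 0-100
  matchedSkills: string[];
  missingSkills: string[];
  suggestions: string[];   // 3-5 actionable improvements
}

export async function analyzeResume(resume: string, jobDescription: string): Promise<AnalysisResult> {
  const completion = await openai.chat.completions.create({
    model: "gpt-5.2-mini", // illustrative identifier from this article
    response_format: { type: "json_object" }, // forces valid JSON output
    messages: [
      {
        role: "system",
        content: "Compare the resume to the job description. Return JSON with matchScore (0-100), matchedSkills, missingSkills, and suggestions (3-5 items).",
      },
      { role: "user", content: `RESUME:\n${resume}\n\nJOB DESCRIPTION:\n${jobDescription}` },
    ],
  });
  return JSON.parse(completion.choices[0].message.content ?? "{}") as AnalysisResult;
}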

2. RAG Chatbot: Chat with Your Documents

Upload PDF documents. The system chunks them, creates embeddings, stores them in a vector database, and enables a chat interface where users can ask questions about their documents with cited sources.

Key Stats
Difficulty: Beginner
Build time: 2-3 days
Tech stack: Next.js + LangChain + ChromaDB + GPT-5.2 Mini

What it proves to employers:

  • RAG pipeline architecture (the most in-demand GenAI pattern)
  • Embeddings and vector database knowledge
  • Document chunking strategies
  • Source citation and grounding (proving the answer came from the data)

Key features to build:

  • Multi-file PDF upload with chunking
  • Embedding generation with OpenAI embeddings
  • Vector storage and similarity search (ChromaDB or Pinecone)
  • Chat interface with streaming responses
  • Source citations: show which document chunks were used for each answer
Cursor Prompt: RAG Chatbot
Build a RAG (Retrieval-Augmented Generation) chatbot using Next.js 15, LangChain, and ChromaDB.

CORE FEATURES:
- Document upload: accept multiple PDF files
- Document processing pipeline: PDF → text extraction → chunking → embedding → vector store
- Chat interface: user asks questions, system retrieves relevant chunks, generates answer with citations
- Source display: show which document chunks were used for each answer (with page numbers)
- Conversation memory: maintain chat history within a session

TECH STACK:
- Next.js 15 with App Router
- LangChain.js for the RAG pipeline (document loaders, text splitters, retrieval chain)
- OpenAI API — GPT-5.2 Mini for chat (cheap, fast), text-embedding-3-large for embeddings
- ChromaDB (local, via chromadb npm package) as the vector store
- Tailwind CSS + shadcn/ui for the chat UI
- Vercel AI SDK for streaming chat responses

ARCHITECTURE:
- /app/page.tsx — document upload + chat interface (split layout)
- /app/api/upload/route.ts — handle PDF upload, chunk, embed, store in ChromaDB
- /app/api/chat/route.ts — retrieve relevant chunks, generate streaming response with GPT-5.2 Mini
- /lib/rag.ts — RAG pipeline: createRetrievalChain with LangChain
- /lib/embeddings.ts — embedding generation (text-embedding-3-large) and ChromaDB operations
- /lib/chunking.ts — RecursiveCharacterTextSplitter config (chunk size: 1000, overlap: 200)

MODEL CHOICE: GPT-5.2 Mini for chat — RAG grounding means the model doesn't need
deep reasoning, just clear synthesis of retrieved context. Mini is perfect here.
Use text-embedding-3-large for embeddings (better retrieval quality than small).

IMPORTANT DETAILS:
- Use RecursiveCharacterTextSplitter with chunk_size=1000, chunk_overlap=200
- Retrieve top 4 chunks per query using similarity search
- Include source metadata (filename, page) in each chunk
- The system prompt should instruct the model to ONLY answer from provided context
- If the context doesn't contain the answer, say "I don't have enough information"
- Stream the response using Vercel AI SDK's streamText

Chat UI should show user/assistant messages, a typing indicator, and expandable source citations below each answer.
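For reference, a minimal sketch of the ingestion and retrieval helpers in /lib/rag.ts, assuming LangChain.js and a local ChromaDB instance. Import paths and option names vary between LangChain.js versions, and the "docs" collection name is a placeholder.

import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Chroma } from "@langchain/community/vectorstores/chroma";

const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-large" });

// Chunk an uploaded document and store the chunks with source metadata.
export async function ingestDocument(text: string, filename: string) {
  const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200 });
  const chunks = await splitter.createDocuments([text], [{ source: filename }]);
  await Chroma.fromDocuments(chunks, embeddings, { collectionName: "docs" });
}

// Retrieve the top 4 chunks for a question; the chat route stuffs them into the prompt as context.
export async function retrieveContext(question: string) {
  const store = await Chroma.fromExistingCollection(embeddings, { collectionName: "docs" });
  return store.similaritySearch(question, 4);
}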
Learn RAG Before Building

The free DeepLearning.AI course "Building RAG Agents with LLMs" teaches the RAG architecture in 90 minutes. Take it before building this project. See our DeepLearning.AI Courses Guide for the recommended learning order.


3. Multi-Model AI Playground

A side-by-side comparison tool that sends the same prompt to GPT-5.2 Instant, Claude Sonnet 4.5, and Gemini 3 Flash — displaying streaming responses simultaneously. Users can compare quality, speed, cost, and style across models.

Key Stats
Difficulty: Beginner+
Build time: 2-3 days
Tech stack: Next.js + GPT-5.2 + Claude Sonnet 4.5 + Gemini 3

What it proves to employers:

  • Multi-provider API integration (not locked into one vendor)
  • Streaming response handling (real-time UI updates)
  • Comparison and evaluation thinking (which model for which task)
  • Clean UX for complex data (three simultaneous streams)

Key features to build:

  • Single prompt input → simultaneous requests to 3 models
  • Side-by-side streaming responses (three columns, real-time)
  • Response metadata: tokens used, latency, cost estimate
  • Prompt templates library (coding, writing, analysis, summarization)
  • History: save and compare previous prompts
Cursor Prompt: Multi-Model AI Playground
Build a Multi-Model AI Playground that compares GPT-5.2 Instant, Claude Sonnet 4.5, and Gemini 3 Flash side-by-side.

CORE FEATURES:
- Single prompt textarea at the top
- Three-column layout below showing streaming responses from each model simultaneously
- Each column shows: model name, streaming response, token count, latency (ms), estimated cost
- Model selector per column (swap in GPT-5.2 Thinking, Claude Opus 4.6, Gemini 3 Pro for comparison)
- Prompt templates dropdown (coding, writing, analysis, summarization, reasoning)
- System prompt customization (optional, shared across models)
- Response history with ability to save/load comparisons

TECH STACK:
- Next.js 15 with App Router
- Vercel AI SDK for unified streaming across all three providers
- OpenAI provider (@ai-sdk/openai — default: GPT-5.2 Instant)
- Anthropic provider (@ai-sdk/anthropic — default: Claude Sonnet 4.5)
- Google provider (@ai-sdk/google — default: Gemini 3 Flash)
- Tailwind CSS + shadcn/ui
- localStorage for response history

AVAILABLE MODELS (user can switch per column):
- OpenAI: GPT-5.2 Mini, GPT-5.2 Instant, GPT-5.2 Thinking, GPT-5.3-Codex
- Anthropic: Claude Haiku 4.5, Claude Sonnet 4.5, Claude Opus 4.6
- Google: Gemini 3 Flash, Gemini 3 Pro

ARCHITECTURE:
- /app/page.tsx — main playground: prompt input + three-column response grid
- /app/api/chat/[provider]/route.ts — dynamic route for each provider, returns streaming response
- /lib/providers.ts — unified provider config with all models, pricing per token, API setup
- /lib/templates.ts — prompt template definitions
- /lib/history.ts — localStorage-based history management
- /components/ResponseColumn.tsx — single model response display with streaming + metadata
- /components/ModelSelector.tsx — dropdown to switch model per column

KEY IMPLEMENTATION DETAILS:
- Use Promise.allSettled to fire all three API calls simultaneously
- Each column streams independently using Vercel AI SDK's streamText
- Calculate cost per response: (input_tokens * input_price + output_tokens * output_price)
- Track latency: time from request start to first token (TTFT), and to last token
- Handle errors per-provider gracefully (one failing shouldn't block others)
- Responsive: on mobile, stack columns vertically with tabs

PRICING CONFIG (per 1M tokens — input/output):
- GPT-5.2 Mini: $0.25/$2.00
- GPT-5.2 Instant: $1.25/$5.00
- GPT-5.2 Thinking: $5.00/$20.00
- Claude Haiku 4.5: $1.00/$5.00
- Claude Sonnet 4.5: $3.00/$15.00
- Claude Opus 4.6: $5.00/$25.00
- Gemini 3 Flash: $0.50/$3.00
- Gemini 3 Pro: $2.00/$12.00
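Two of the trickier details are easy to sketch: estimating cost from token counts and fanning out to all providers without letting one failure block the rest. A minimal sketch using the article's prices above; the model keys are placeholder identifiers, and the caller functions are whatever wrappers you build around each provider.

// Prices per 1M tokens, [input, output] in USD, taken from the pricing config above.
const PRICING: Record<string, [number, number]> = {
  "gpt-5.2-mini": [0.25, 2.0],
  "gpt-5.2-instant": [1.25, 5.0],
  "claude-sonnet-4.5": [3.0, 15.0],
  "gemini-3-flash": [0.5, 3.0],
};

export function estimateCost(model: string, inputTokens: number, outputTokens: number): number {
  const [inPrice, outPrice] = PRICING[model] ?? [0, 0];
  return (inputTokens * inPrice + outputTokens * outPrice) / 1_000_000;
}

// Fire every provider at once; Promise.allSettled keeps one failure from blocking the others.
export async function compareAll(prompt: string, callers: Array<(p: string) => Promise<string>>) {
  const results = await Promise.allSettled(callers.map((call) => call(prompt)));
  return results.map((r) => (r.status === "fulfilled" ? r.value : `Error: ${String(r.reason)}`));
}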

Intermediate Projects

4. AI Content Pipeline

Input a topic. The AI researches it, creates an outline, writes a draft, generates SEO metadata (title, description, keywords), and produces social media posts — all through a multi-step prompt chain.

Key Stats
Difficulty: Intermediate
Build time: 3-5 days
Tech stack: Next.js + LangChain + GPT-5.2

What it proves to employers:

  • Prompt chaining (multi-step LLM workflows — the backbone of production AI)
  • Structured output (JSON schema enforcement for SEO metadata)
  • System design (breaking a complex task into orchestrated steps)
  • Real-world AI product thinking (content is a massive AI use case)

Key features to build:

  • Topic input → multi-step pipeline: research → outline → draft → SEO metadata → social posts
  • Each step's output is visible and editable before proceeding
  • Structured output: SEO metadata returned as typed JSON
  • Export: download the complete content package (article + metadata + social posts)
  • Pipeline visualization: show the user which step is running
Cursor Prompt: AI Content Pipeline
Build an AI Content Pipeline app using Next.js 15 and LangChain.js that generates complete content packages from a topic.

CORE FEATURES:
- User inputs a topic and optional context/angle
- Multi-step pipeline that runs sequentially:
  Step 1: Research — generate key points, statistics, and subtopics for the topic
  Step 2: Outline — create a structured article outline based on research
  Step 3: Draft — write a full article draft section by section
  Step 4: SEO — generate title, meta description, keywords, slug (structured JSON)
  Step 5: Social — generate Twitter thread, LinkedIn post, and email subject lines
- Each step shows progress and intermediate output
- Users can edit any step's output before proceeding to the next
- Export button: download all outputs as a ZIP (markdown + JSON)

TECH STACK:
- Next.js 15 with App Router and Server Actions
- LangChain.js for prompt chaining (SequentialChain or LCEL pipe)
- OpenAI API — GPT-5.2 Mini for research/outline/SEO steps (fast, cheap),
  GPT-5.2 Instant for the draft step (better writing quality)
- Tailwind CSS + shadcn/ui
- JSZip for export functionality

MODEL STRATEGY (different models per step for cost/quality optimization):
- Step 1 (Research): GPT-5.2 Mini — extraction task, doesn't need deep reasoning
- Step 2 (Outline): GPT-5.2 Mini — structural task, fast
- Step 3 (Draft): GPT-5.2 Instant — writing quality matters here, use the better model
- Step 4 (SEO): GPT-5.2 Mini with structured output — simple extraction
- Step 5 (Social): GPT-5.2 Mini — short-form content, fast

ARCHITECTURE:
- /app/page.tsx — topic input form + pipeline visualization
- /app/api/pipeline/route.ts — orchestrates all 5 steps, streams status updates via SSE
- /lib/chains/research.ts — research prompt chain (GPT-5.2 Mini)
- /lib/chains/outline.ts — outline prompt chain (GPT-5.2 Mini)
- /lib/chains/draft.ts — draft prompt chain (GPT-5.2 Instant — better quality)
- /lib/chains/seo.ts — SEO metadata chain (GPT-5.2 Mini, structured JSON output)
- /lib/chains/social.ts — social media chain (GPT-5.2 Mini)
- /lib/types.ts — TypeScript types for each step's output
- /components/PipelineStep.tsx — reusable step component with status, output, edit

IMPORTANT:
- Use OpenAI's response_format: { type: "json_schema" } for the SEO step
- Each chain passes its output to the next chain's input
- Show a stepper UI: step 1 ✓ → step 2 (running) → step 3 (pending)...
- Allow cancellation mid-pipeline
- Handle errors per-step with retry option
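The heart of the pipeline is plain prompt chaining: each step's output becomes the next step's input, with a cheaper model for extraction steps and a stronger one for the draft. Here is a minimal sketch using the OpenAI SDK directly (LangChain's LCEL pipe works just as well); the model identifiers follow this article's naming and are assumptions.

import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Run one pipeline step: a system instruction applied to the previous step's output.
async function runStep(model: string, instruction: string, input: string): Promise<string> {
  const res = await openai.chat.completions.create({
    model,
    messages: [
      { role: "system", content: instruction },
      { role: "user", content: input },
    ],
  });
  return res.choices[0].message.content ?? "";
}

export async function runPipeline(topic: string) {
  const research = await runStep("gpt-5.2-mini", "List key points, statistics, and subtopics for this topic.", topic);
  const outline = await runStep("gpt-5.2-mini", "Create a structured article outline from this research.", research);
  const draft = await runStep("gpt-5.2-instant", "Write a full article draft from this outline.", outline); // writing quality matters here
  return { research, outline, draft };
}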

5. AI Code Review Agent

Connect to a GitHub repository. The agent analyzes pull requests — reviewing code quality, identifying bugs, suggesting improvements, checking for security issues, and writing review comments. An AI agent that uses GitHub as a tool.

Key Stats
Difficulty: Intermediate
Build time: 4-6 days
Tech stack: Next.js + LangGraph + GitHub API + GPT-5.3-Codex

What it proves to employers:

  • AI agent architecture (the hottest skill in AI engineering)
  • Tool use and function calling (GitHub API as a tool)
  • Real-world integration (connecting AI to existing developer workflows)
  • Engineering judgment (knowing what makes a good code review)

Key features to build:

  • GitHub OAuth: connect to repositories
  • PR analysis: fetch diff, files changed, commit messages
  • Multi-aspect review: code quality, bugs, security, performance, readability
  • Review comments: generate specific, actionable review comments with line references
  • Summary: overall PR assessment with approve/request-changes recommendation
Cursor Prompt: AI Code Review Agent
Build an AI Code Review Agent using Next.js 15, LangGraph.js, and the GitHub API.

CORE FEATURES:
- GitHub OAuth login to access user's repositories
- Repository selector: pick a repo, see open pull requests
- PR Review: select a PR → agent analyzes the diff and generates a comprehensive code review
- The review includes: summary, bugs found, security issues, code quality suggestions, performance concerns
- Each finding references specific files and line numbers from the diff
- Overall verdict: "Approve", "Request Changes", or "Comment" with reasoning

TECH STACK:
- Next.js 15 with App Router
- LangGraph.js for the agent workflow (multi-step review pipeline)
- GitHub REST API (via Octokit) for fetching PRs, diffs, file contents
- OpenAI API — GPT-5.3-Codex for code analysis (OpenAI's best coding model,
  purpose-built for agentic code tasks)
- NextAuth.js with GitHub OAuth provider
- Tailwind CSS + shadcn/ui

MODEL CHOICE: GPT-5.3-Codex — this is OpenAI's most capable coding model (Feb 2026).
It's specifically designed for agentic coding tasks: understanding diffs, identifying bugs,
and reasoning about code quality. ~25% faster than GPT-5.2 for code tasks.
Alternative: Claude Opus 4.6 (also excellent at code review, best on Terminal-Bench 2.0).

ARCHITECTURE:
- /app/page.tsx — repo selector + PR list
- /app/review/[pr]/page.tsx — PR review results display
- /app/api/auth/[...nextauth]/route.ts — GitHub OAuth with NextAuth
- /app/api/review/route.ts — triggers the LangGraph review agent
- /lib/agent/graph.ts — LangGraph agent definition:
    Node 1: "fetch_pr" — fetch PR diff and metadata from GitHub
    Node 2: "analyze_chunks" — split diff into file chunks, analyze each with GPT-5.3-Codex
    Node 3: "security_check" — dedicated security analysis pass
    Node 4: "synthesize" — combine all analyses into final review
- /lib/agent/tools.ts — GitHub tools: getPRDiff, getFileContent, getPRComments
- /lib/prompts.ts — review prompts for each analysis type
- /components/ReviewResult.tsx — display findings with file/line references

LANGGRAPH DETAILS:
- Define a StateGraph with: pr_data, file_analyses, security_findings, final_review
- Each node is a function that reads state and returns updated state
- The graph flows: fetch_pr → analyze_chunks → security_check → synthesize
- Use conditional edges: if diff is very large (>5000 lines), split into batches

Style the review results like a GitHub PR review: findings grouped by file, with severity badges (critical, warning, suggestion, nitpick).
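The LangGraph part is less code than it sounds: define the state, register one function per node, and wire the edges. A minimal sketch of /lib/agent/graph.ts, assuming a recent @langchain/langgraph release (the Annotation API has shifted across versions); the four declared helpers are hypothetical stand-ins for the Octokit and GPT-5.3-Codex calls.

import { StateGraph, Annotation, START, END } from "@langchain/langgraph";

// Shared state that flows through the graph.
const ReviewState = Annotation.Root({
  prData: Annotation<string>(),
  fileAnalyses: Annotation<string[]>(),
  securityFindings: Annotation<string[]>(),
  finalReview: Annotation<string>(),
});

// Hypothetical helpers: in the real app these wrap Octokit and the LLM calls.
declare function fetchDiffFromGitHub(): Promise<string>;
declare function analyzeDiff(diff: string): Promise<string[]>;
declare function securityPass(diff: string): Promise<string[]>;
declare function writeReview(state: typeof ReviewState.State): Promise<string>;

// Each node reads the state and returns a partial update.
const graph = new StateGraph(ReviewState)
  .addNode("fetch_pr", async () => ({ prData: await fetchDiffFromGitHub() }))
  .addNode("analyze_chunks", async (s) => ({ fileAnalyses: await analyzeDiff(s.prData) }))
  .addNode("security_check", async (s) => ({ securityFindings: await securityPass(s.prData) }))
  .addNode("synthesize", async (s) => ({ finalReview: await writeReview(s) }))
  .addEdge(START, "fetch_pr")
  .addEdge("fetch_pr", "analyze_chunks")
  .addEdge("analyze_chunks", "security_check")
  .addEdge("security_check", "synthesize")
  .addEdge("synthesize", END)
  .compile();

export const runReview = () => graph.invoke({});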
Learn LangGraph First

The LangGraph course on LangChain Academy (free) teaches agent architecture patterns. Take it before building this project. See our LangChain Academy Guide.



6. Voice AI Assistant

Speak into the microphone. The app transcribes speech to text, sends it to an LLM, generates a response, and speaks it back — a full voice conversation loop with an AI assistant.

Key Stats
Difficulty: Intermediate+
Build time: 4-6 days
Tech stack: Next.js + Whisper + GPT-5.2 Mini + TTS

What it proves to employers:

  • Multimodal AI (audio → text → LLM → audio pipeline)
  • Real-time processing and streaming
  • Browser APIs (MediaRecorder, audio playback)
  • Production-quality UX for conversational AI

Key features to build:

  • Push-to-talk or voice activity detection
  • Speech-to-text via OpenAI Whisper API
  • LLM response generation with conversation memory
  • Text-to-speech playback via OpenAI TTS API
  • Conversation transcript display alongside audio
Cursor Prompt: Voice AI Assistant
Build a Voice AI Assistant using Next.js 15, OpenAI Whisper (STT), GPT-5.2 Mini (LLM), and OpenAI TTS.

CORE FEATURES:
- Push-to-talk button: hold to record, release to send
- Speech-to-text: send audio to Whisper API, get transcript
- LLM response: send transcript (with conversation history) to GPT-5.2 Mini
- Text-to-speech: convert LLM response to audio using OpenAI TTS API (alloy voice)
- Auto-play the audio response
- Show full conversation transcript alongside audio controls
- Conversation memory: maintain last 10 exchanges for context

TECH STACK:
- Next.js 15 with App Router
- OpenAI SDK (Whisper for STT, GPT-5.2 Mini for chat, TTS-1 for speech)
- Browser MediaRecorder API for audio capture
- Web Audio API for playback
- Tailwind CSS + shadcn/ui

MODEL CHOICE: GPT-5.2 Mini — voice assistants need LOW LATENCY above all.
Mini is the fastest OpenAI model and cheapest ($0.25/1M input). Users won't
tolerate a 3-second pause between speaking and getting a response. Mini's
speed makes the conversation feel natural. For a premium mode, offer
Gemini 3 Flash as an alternative (also very fast at $0.50/1M input).

ARCHITECTURE:
- /app/page.tsx — main voice assistant UI (large mic button, transcript below)
- /app/api/transcribe/route.ts — receive audio blob, send to Whisper, return text
- /app/api/chat/route.ts — receive text + history, generate GPT-5.2 Mini response
- /app/api/speak/route.ts — receive text, generate TTS audio, return audio buffer
- /lib/audio.ts — MediaRecorder wrapper: start/stop recording, get audio blob
- /lib/conversation.ts — conversation history management
- /components/VoiceButton.tsx — animated push-to-talk button with recording state
- /components/Transcript.tsx — conversation transcript with timestamps

KEY DETAILS:
- Record audio as webm, convert to supported format for Whisper if needed
- Show visual feedback during each phase: "Listening..." → "Thinking..." → "Speaking..."
- Track and display time-to-first-token for the LLM response (target: <500ms)
- Add a "Stop" button to interrupt TTS playback
- Handle errors: microphone permission denied, API failures
- Mobile-friendly: large touch target for the mic button
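The transcription leg is a small API route: receive the recorded blob, hand it to Whisper, return text. A minimal sketch of /app/api/transcribe/route.ts, assuming the OpenAI Node SDK and a client that posts the MediaRecorder blob as multipart form data under an "audio" field.

import OpenAI from "openai";
import { NextRequest, NextResponse } from "next/server";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(req: NextRequest) {
  const form = await req.formData();
  const audio = form.get("audio") as File; // webm blob recorded in the browser

  const transcription = await openai.audio.transcriptions.create({
    file: audio,
    model: "whisper-1",
  });

  return NextResponse.json({ text: transcription.text });
}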

Advanced Projects

7. AI Data Analyst Agent

Type a question in natural language ("What were our top 10 products by revenue last quarter?"). The agent writes SQL, executes it against a database, analyzes the results, and generates a chart — all autonomously.

Key Stats
Difficulty: Advanced
Build time: 1-2 weeks
Tech stack: Next.js + LangGraph + PostgreSQL + GPT-5.2 Thinking

What it proves to employers:

  • Complex agent orchestration (multi-step reasoning with real tools)
  • Function calling and tool use (SQL execution, chart generation)
  • Database integration (SQL generation from natural language)
  • Production safety (SQL injection prevention, read-only queries)

Key features to build:

  • Natural language question input
  • Agent generates SQL based on database schema
  • SQL execution against real database (read-only)
  • Result analysis and insights generation
  • Automatic chart creation (bar, line, pie based on data shape)
  • Query history with re-run capability
Cursor Prompt: AI Data Analyst Agent
Build an AI Data Analyst Agent using Next.js 15, LangGraph.js, PostgreSQL, and GPT-5.2 Thinking.

CORE FEATURES:
- User types a natural language question about data
- Agent workflow:
  1. Inspect database schema (tables, columns, types, relationships)
  2. Generate SQL query based on the question + schema
  3. Execute query (READ-ONLY) against PostgreSQL
  4. Analyze results: summarize findings, identify trends, flag anomalies
  5. Generate a chart if appropriate (bar, line, pie) based on data shape
- Display: SQL query, raw results table, analysis text, and chart
- Query history sidebar with re-run capability

TECH STACK:
- Next.js 15 with App Router
- LangGraph.js for agent orchestration
- OpenAI API — GPT-5.2 Thinking for SQL generation and data analysis
- PostgreSQL with Drizzle ORM (for schema inspection and query execution)
- Recharts for data visualization (bar, line, pie charts)
- Tailwind CSS + shadcn/ui

MODEL CHOICE: GPT-5.2 Thinking — SQL generation from natural language requires
multi-step reasoning: understanding the question, mapping to schema, writing correct
joins and aggregations. Thinking mode excels here. The analysis step also benefits
from deep reasoning to identify trends and anomalies.
Alternative: Claude Opus 4.6 (best reasoning model overall, excels at data analysis).
For simple queries, fall back to GPT-5.2 Mini to save costs.

ARCHITECTURE:
- /app/page.tsx — question input + results display (SQL, table, chart, analysis)
- /app/api/analyze/route.ts — triggers the LangGraph agent, streams results
- /lib/agent/graph.ts — LangGraph StateGraph:
    Node 1: "inspect_schema" — read table names, columns, types, sample data
    Node 2: "generate_sql" — GPT-5.2 Thinking generates SQL using schema + function calling
    Node 3: "execute_sql" — run the query (READ-ONLY transaction)
    Node 4: "analyze_results" — GPT-5.2 Thinking analyzes results, generates insights
    Node 5: "generate_chart" — GPT-5.2 Mini decides chart type and config (simple task)
    Conditional: if SQL fails → "fix_sql" (GPT-5.2 Thinking) → retry execute_sql (max 2 retries)
- /lib/agent/tools.ts — tools: inspectSchema, executeQuery, describeTable
- /lib/db.ts — PostgreSQL connection with Drizzle, READ-ONLY mode
- /lib/chart.ts — chart type selection logic and Recharts config generation
- /components/SqlDisplay.tsx — syntax-highlighted SQL display
- /components/DataTable.tsx — paginated results table
- /components/Chart.tsx — dynamic chart component

SAFETY:
- ALL queries must run inside a READ-ONLY transaction (SET TRANSACTION READ ONLY)
- Block any SQL containing DROP, DELETE, UPDATE, INSERT, ALTER, TRUNCATE
- Limit query execution time to 10 seconds
- Limit result set to 1000 rows

Seed the database with sample e-commerce data: products, orders, customers, revenue.
Include 5 example questions in the UI that users can click to try.
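The safety rules translate into a small execution guard. A minimal sketch, assuming node-postgres (pg) rather than Drizzle for brevity; the keyword blocklist is defense in depth on top of the read-only transaction, not a substitute for it.

import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const FORBIDDEN = /\b(DROP|DELETE|UPDATE|INSERT|ALTER|TRUNCATE)\b/i;

export async function executeQuery(sql: string) {
  if (FORBIDDEN.test(sql)) throw new Error("Only read-only queries are allowed");

  const client = await pool.connect();
  try {
    await client.query("BEGIN TRANSACTION READ ONLY");          // writes fail inside this transaction
    await client.query("SET LOCAL statement_timeout = 10000");  // 10-second execution cap
    const result = await client.query(sql);
    return result.rows.slice(0, 1000);                          // cap the result set at 1000 rows
  } finally {
    await client.query("ROLLBACK").catch(() => {});
    client.release();
  }
}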

8. Full-Stack AI SaaS

The capstone project: a complete AI-powered SaaS product with user authentication, subscription billing, a core AI feature, and production deployment. This is the project that proves an AI engineer can ship a real product.

Key Stats
Difficulty: Advanced
Build time: 2-4 weeks
Tech stack: Next.js + Supabase + Stripe + GPT-5.2 Mini

What it proves to employers:

  • Full-stack product development (not just AI skills — shipping skills)
  • Authentication and authorization (real user management)
  • Billing integration (Stripe subscriptions, usage limits)
  • Production deployment (hosting, environment variables, monitoring)
  • Product thinking (solving a real problem, not just a demo)

Key features to build:

  • Authentication: email/password + Google OAuth
  • Subscription tiers: Free (limited), Pro (unlimited), with Stripe Checkout
  • Core AI feature: pick one domain (writing assistant, research tool, data analyzer)
  • Usage tracking: token/request limits per subscription tier
  • Dashboard: usage stats, billing management, AI interaction history
  • Production: deployed to Vercel with proper env management
Cursor Prompt: Full-Stack AI SaaS (AI Writing Assistant)
Build a full-stack AI Writing Assistant SaaS using Next.js 15, Supabase, Stripe, and GPT-5.2 Mini.

THE PRODUCT:
An AI writing assistant where users input rough ideas/notes and get polished content: blog posts, emails, social media posts, product descriptions. Free tier: 10 generations/month. Pro tier ($9/month): unlimited.

CORE FEATURES:
- Auth: email/password + Google OAuth via Supabase Auth
- Dashboard: new generation, history of past generations, usage counter
- Generation flow: pick content type (blog, email, social, product desc) → input rough notes → AI generates polished content with multiple variations
- Subscription: Free (10/month) and Pro ($9/month unlimited) via Stripe Checkout
- Usage enforcement: track generations per user per month, block when limit reached
- History: all past generations saved and searchable
- Export: copy to clipboard, download as markdown

TECH STACK:
- Next.js 15 with App Router and Server Actions
- Supabase (Auth, PostgreSQL database, Row Level Security)
- Stripe (Checkout, Customer Portal, Webhooks for subscription sync)
- OpenAI API — GPT-5.2 Mini for generation ($0.25/1M input — critical for SaaS margins)
- Tailwind CSS + shadcn/ui
- Deploy to Vercel

MODEL CHOICE: GPT-5.2 Mini — for a SaaS product, unit economics matter.
At $0.25/1M input and $2.00/1M output, Mini keeps per-generation costs
under $0.01, making the $9/mo Pro tier highly profitable.
For a "Premium quality" toggle, offer GPT-5.2 Instant at higher cost.

DATABASE SCHEMA (Supabase/PostgreSQL):
- users: id, email, stripe_customer_id, subscription_status, subscription_tier
- generations: id, user_id, content_type, input_text, output_text, created_at
- usage: user_id, month, generation_count

ARCHITECTURE:
- /app/page.tsx — landing page with pricing
- /app/login/page.tsx — auth page (Supabase Auth UI)
- /app/dashboard/page.tsx — main dashboard: new generation + history
- /app/dashboard/generate/page.tsx — generation flow: type → input → output
- /app/dashboard/billing/page.tsx — Stripe Customer Portal redirect
- /app/api/generate/route.ts — AI generation with GPT-5.2 Mini (checks usage limits)
- /app/api/stripe/webhook/route.ts — Stripe webhook handler
- /app/api/stripe/checkout/route.ts — create Stripe Checkout Session
- /lib/supabase/server.ts — Supabase server client
- /lib/stripe.ts — Stripe client and helpers
- /lib/usage.ts — usage tracking and limit enforcement
- /lib/prompts.ts — generation prompts per content type

STRIPE FLOW:
- Free users see upgrade banner on dashboard
- "Upgrade to Pro" → Stripe Checkout → webhook updates user subscription_status
- "Manage Billing" → Stripe Customer Portal (cancel, update payment)
- Webhook events: checkout.session.completed, customer.subscription.updated/deleted

ROW LEVEL SECURITY:
- Users can only read/write their own generations
- Usage table scoped to authenticated user
- Admin can read all

This should be production-ready: proper error handling, loading states, mobile responsive, SEO meta tags on landing page.
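Usage enforcement is the piece that protects the free tier. A minimal sketch of /lib/usage.ts, assuming the Supabase JS client and the usage table from the schema above (user_id, month, generation_count), plus an assumed unique constraint on (user_id, month) for the upsert.

import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_ROLE_KEY!);
const FREE_LIMIT = 10; // generations per month on the Free tier

// Returns true if the generation may proceed; increments the counter for free users.
export async function checkAndIncrementUsage(userId: string, tier: "free" | "pro"): Promise<boolean> {
  if (tier === "pro") return true; // Pro is unlimited

  const month = new Date().toISOString().slice(0, 7); // e.g. "2026-02"
  const { data } = await supabase
    .from("usage")
    .select("generation_count")
    .eq("user_id", userId)
    .eq("month", month)
    .maybeSingle();

  const count = data?.generation_count ?? 0;
  if (count >= FREE_LIMIT) return false; // limit reached; block and show the upgrade banner

  await supabase
    .from("usage")
    .upsert({ user_id: userId, month, generation_count: count + 1 }, { onConflict: "user_id,month" });
  return true;
}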
🔑 The Full-Stack AI SaaS is the capstone project. It proves that an AI engineer can build a complete product — not just an AI feature. Authentication, billing, usage limits, and deployment are the skills that separate "can build demos" from "can ship products."


How to Present Projects for Maximum Impact

Building the project is half the work. Presenting it well is the other half.

1. GitHub Repository: Production-Quality README

Every project needs a README with: problem statement, architecture diagram (even a simple one), tech stack, setup instructions, screenshots/demo GIF, and lessons learned. Hiring managers spend 30 seconds on a README — make those seconds count.

2. Deploy It: Live Demo Link

Deploy every project to a public URL (Vercel, Railway, Fly.io). A live demo is 10x more impressive than a GitHub link alone. Hiring managers click the demo first, code second.

3. LinkedIn Post: Show the Build Process

Write a short LinkedIn post for each project: what you built, why, what you learned, and a link to the demo. This turns each project into a networking signal — recruiters search for AI engineers sharing their work.

4. Write About Your Decisions

Add a "Technical Decisions" section to each README explaining your choices: why this model, why this vector database, how you handled edge cases. This demonstrates engineering judgment — the #1 thing senior engineers look for in candidates.

Add Certifications to Complement Your Projects

Projects prove building ability. Certifications prove domain knowledge. The winning combination: 3 projects + 1 cloud certification. For which cert to get, see our Best GenAI & AI Certifications in 2026.

Turn Projects Into a Resume That Gets Interviews

Built the projects? Now present them on your resume. Our AI Engineer Resume Guide has bullet point formulas, ATS keywords, and templates specifically for GenAI engineers. For interview prep, see 50+ AI Engineer Interview Questions.


GenAI Project Ideas: Key Takeaways

  1. Portfolio projects are the #1 hiring signal for AI engineers — more valuable than certifications or degrees
  2. Build 3 projects from different tiers: one RAG app (beginner), one AI agent (intermediate), one full-stack AI SaaS (advanced)
  3. Every project includes a Cursor prompt — paste it into Cursor to scaffold the entire application
  4. Focus on GenAI projects (RAG, agents, LLMs) — not traditional ML (image classifiers, recommendation engines)
  5. Deploy every project with a live demo link — a working URL is 10x more impressive than a GitHub repo alone
  6. Present projects well: production-quality README, live demo, LinkedIn post, technical decision documentation

Frequently Asked Questions

How many AI projects do I need for a portfolio?

Three is the sweet spot: one RAG application, one AI agent, and one full-stack AI product. Three quality projects that demonstrate different skills are better than ten half-finished demos. Each project should be deployed with a live demo link.

Can I build AI projects without a computer science degree?

Absolutely. AI engineering is the most accessible engineering field right now — pre-trained models (GPT, Claude, Gemini) handle the hard AI parts. You need programming skills (Python or TypeScript), understanding of APIs, and the ability to build web applications. The projects in this guide don't require ML theory or math.

Which programming language should I use for AI projects?

Python or TypeScript. Python has the deepest AI ecosystem (LangChain, OpenAI SDK, most tutorials). TypeScript/JavaScript is better for full-stack web apps (Next.js + AI). The projects in this guide use Next.js (TypeScript) because they're full-stack web applications. Both are excellent choices.

How do I handle API costs for AI projects?

Use GPT-5.2 Mini during development — at $0.25 per million input tokens, it's extremely cheap and fast. Switch to GPT-5.2 Instant or Claude Sonnet 4.5 for production demos where quality matters. Set hard spending limits in your OpenAI dashboard. Most portfolio projects cost $2-$10 in API calls total during development. Use environment variables so API keys never appear in code.

Should I build AI projects from scratch or use templates?

Build from scratch using the Cursor prompts in this guide. Templates teach you nothing — the learning happens in the building. Cursor generates the scaffolding, but you should understand every line of code it produces. If you can't explain your own project in an interview, it hurts more than it helps.

What if my project isn't original?

Originality is overrated for portfolio projects. A well-built RAG chatbot is more impressive than a poorly-built 'original' idea. What matters is execution quality: clean code, good UX, proper error handling, and thoughtful technical decisions. Thousands of engineers build chatbots — the ones who get hired build them well.

How long does it take to build a portfolio?

With focused effort: 4-8 weeks for three projects. The beginner project: about a weekend. The intermediate project: about a week. The advanced project: 2-3 weeks. The Cursor prompts in this guide accelerate the scaffolding, but budget time for learning, debugging, and polishing.

Do these projects work on the Cursor free plan?

The Cursor prompts work on any plan — they're text prompts you paste into Cursor's Agent mode. You'll need API keys for OpenAI ($5-$20 for development), and optionally Anthropic and Google. The projects themselves are free to deploy on Vercel's free tier.


Editorial Policy

Reviewed by Bogdan Serebryakov

Researching Job Market & Building AI Tools for careerists since December 2020

Sources & References

  1. AI Engineering: Building Applications with Foundation Models, Chip Huyen (2025)
  2. A Practical Guide to Building Agents, OpenAI (2025)
  3. LangGraph Documentation, LangChain Inc. (2025)
  4. Vercel AI SDK Documentation, Vercel (2025)

Careery is an AI-driven career acceleration service that helps professionals land high-paying jobs and get promoted faster through job search automation, personal branding, and real-world hiring psychology.

© 2026 Careery. All rights reserved.