GenAI Use Cases for Developers
You understand how LLMs work. Now let's explore what you can actually build with them. This tutorial covers the most practical GenAI use cases for developers, with real code examples you can adapt.
1. Code Generation and Assistance
Code assistance is the most immediately useful GenAI application for developers: LLMs can generate, explain, refactor, and debug code.
Generate Code from Natural Language
```typescript
async function generateCode(description: string): Promise<string> {
  const response = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": process.env.ANTHROPIC_API_KEY!,
      "anthropic-version": "2023-06-01",
    },
    body: JSON.stringify({
      model: "claude-3-5-sonnet-20241022",
      max_tokens: 4096,
      temperature: 0,
      system: `You are a senior TypeScript developer. Generate clean, well-typed code with comments. Only return the code, no explanations.`,
      messages: [{ role: "user", content: description }],
    }),
  });

  const data = await response.json();
  return data.content[0].text;
}

// Usage
const code = await generateCode(
  "Create a React hook that debounces a value with a configurable delay"
);
```
AI-Powered Code Review
```typescript
async function reviewCode(code: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o",
      temperature: 0.3,
      messages: [
        {
          role: "system",
          content: `Review this code for:
1. Bugs and potential errors
2. Performance issues
3. Security vulnerabilities
4. Code style improvements
Format your response as a markdown checklist.`,
        },
        { role: "user", content: code },
      ],
    }),
  });

  const data = await response.json();
  return data.choices[0].message.content;
}
```
2. Content Creation and Summarization
Generate, summarize, and transform content at scale.
Summarize Long Documents
```typescript
async function summarize(
  text: string,
  style: "brief" | "detailed" = "brief"
): Promise<string> {
  const instruction =
    style === "brief"
      ? "Summarize this in 2-3 sentences."
      : "Provide a detailed summary with key points as bullet points.";

  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      temperature: 0.3,
      messages: [
        { role: "system", content: instruction },
        { role: "user", content: text },
      ],
    }),
  });

  const data = await response.json();
  return data.choices[0].message.content;
}
```
Generate Multiple Content Variations
```typescript
async function generateVariations(
  topic: string,
  count: number = 3
): Promise<string[]> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o",
      temperature: 1.0,
      messages: [
        {
          role: "system",
          content: `Generate exactly ${count} different variations. Return them as a JSON array of strings. Each variation should have a different angle or tone.`,
        },
        { role: "user", content: `Write marketing copy for: ${topic}` },
      ],
    }),
  });

  const data = await response.json();
  // Note: without a response_format constraint, the model may wrap the JSON
  // in markdown fences; strip or validate before parsing in production.
  return JSON.parse(data.choices[0].message.content);
}
```
3. Data Extraction and Structured Output
LLMs excel at extracting structured data from unstructured text.
Extract Data into a Schema
```typescript
interface ContactInfo {
  name: string;
  email: string | null;
  phone: string | null;
  company: string | null;
  role: string | null;
}

async function extractContactInfo(text: string): Promise<ContactInfo> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      temperature: 0,
      response_format: { type: "json_object" },
      messages: [
        {
          role: "system",
          content: `Extract contact information from the text. Return JSON with fields: name, email, phone, company, role. Use null for any field not found in the text.`,
        },
        { role: "user", content: text },
      ],
    }),
  });

  const data = await response.json();
  return JSON.parse(data.choices[0].message.content);
}

// Usage
const contact = await extractContactInfo(
  "Hi, I'm Sarah Chen, VP of Engineering at TechCorp. Reach me at sarah@techcorp.com or 555-0123."
);
// { name: "Sarah Chen", email: "sarah@techcorp.com", phone: "555-0123", company: "TechCorp", role: "VP of Engineering" }
```
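Even with `response_format: { type: "json_object" }`, the model can still omit fields or return the wrong types, so it's worth validating the parsed object before trusting it. A minimal hand-rolled guard (a sketch, assuming no validation library like Zod in the project; the interface is repeated here so the snippet is self-contained):

```typescript
interface ContactInfo {
  name: string;
  email: string | null;
  phone: string | null;
  company: string | null;
  role: string | null;
}

// Throws if the parsed value doesn't match the ContactInfo shape.
function assertContactInfo(value: unknown): ContactInfo {
  if (typeof value !== "object" || value === null) {
    throw new Error("Expected a JSON object");
  }
  const obj = value as Record<string, unknown>;
  if (typeof obj.name !== "string") {
    throw new Error("Field 'name' must be a string");
  }
  // The remaining fields are nullable strings.
  for (const field of ["email", "phone", "company", "role"] as const) {
    if (obj[field] !== null && typeof obj[field] !== "string") {
      throw new Error(`Field '${field}' must be a string or null`);
    }
  }
  return obj as unknown as ContactInfo;
}
```

Call it on the result of `JSON.parse` so a malformed response fails loudly at the boundary instead of propagating `undefined` through your app.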
4. Chatbots and Conversational AI
Build intelligent conversational interfaces with memory and personality.
Simple Chatbot with Conversation History
```typescript
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

class Chatbot {
  private messages: Message[] = [];
  private model: string;

  constructor(systemPrompt: string, model: string = "gpt-4o-mini") {
    this.model = model;
    this.messages = [{ role: "system", content: systemPrompt }];
  }

  async chat(userMessage: string): Promise<string> {
    this.messages.push({ role: "user", content: userMessage });

    const response = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        model: this.model,
        messages: this.messages,
        temperature: 0.7,
      }),
    });

    const data = await response.json();
    const assistantMessage = data.choices[0].message.content;
    this.messages.push({ role: "assistant", content: assistantMessage });
    return assistantMessage;
  }

  getHistory(): Message[] {
    return [...this.messages];
  }
}

// Usage
const bot = new Chatbot(
  "You are a helpful coding tutor. Explain concepts clearly with examples. Keep responses concise."
);
const answer1 = await bot.chat("What is a Promise in JavaScript?");
const answer2 = await bot.chat("Can you show me an example?");
// The bot remembers the conversation context
```
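One caveat with a class like this: the history grows without bound and will eventually exceed the model's context window. A simple mitigation is to keep the system prompt plus only the most recent messages before each request. This is a sketch; production code would typically count tokens rather than messages:

```typescript
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

// Keep all system messages plus the last `maxRecent` non-system messages.
function trimHistory(messages: Message[], maxRecent: number): Message[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  return [...system, ...rest.slice(-maxRecent)];
}
```

Dropping old turns loses context; a common refinement is to summarize the dropped messages into a single synthetic message instead of discarding them outright.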
5. RAG (Retrieval Augmented Generation)
RAG is a pattern that combines search with generation. Instead of relying on the model's training data, you retrieve relevant documents and include them in the prompt.
Why RAG Matters
- Reduces hallucinations (model has actual documents to reference)
- Provides up-to-date information (not limited by training cutoff)
- Can work with your private data (company docs, codebases, etc.)
RAG Architecture
User asks a question
↓
1. EMBED the question (convert to vector)
↓
2. SEARCH your vector database for similar documents
↓
3. RETRIEVE the top N most relevant documents
↓
4. AUGMENT the prompt with those documents as context
↓
5. GENERATE a response using the LLM + context
Simple RAG Implementation
```typescript
// Simplified RAG pipeline
interface Document {
  id: string;
  content: string;
  embedding: number[];
}

class SimpleRAG {
  private documents: Document[] = [];

  // Step 1: Index documents (one-time setup)
  async addDocument(id: string, content: string): Promise<void> {
    const embedding = await this.getEmbedding(content);
    this.documents.push({ id, content, embedding });
  }

  // Step 2: Find relevant documents
  findRelevant(queryEmbedding: number[], topK: number = 3): Document[] {
    return this.documents
      .map((doc) => ({
        ...doc,
        similarity: this.cosineSimilarity(queryEmbedding, doc.embedding),
      }))
      .sort((a, b) => b.similarity - a.similarity)
      .slice(0, topK);
  }

  // Step 3: Generate answer with context
  async answer(question: string): Promise<string> {
    const queryEmbedding = await this.getEmbedding(question);
    const relevantDocs = this.findRelevant(queryEmbedding);
    const context = relevantDocs
      .map((doc) => doc.content)
      .join("\n\n---\n\n");

    const response = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        model: "gpt-4o",
        temperature: 0.3,
        messages: [
          {
            role: "system",
            content: `Answer questions based ONLY on the provided context. If the context doesn't contain the answer, say so.

Context:
${context}`,
          },
          { role: "user", content: question },
        ],
      }),
    });

    const data = await response.json();
    return data.choices[0].message.content;
  }

  private async getEmbedding(text: string): Promise<number[]> {
    const response = await fetch("https://api.openai.com/v1/embeddings", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        model: "text-embedding-3-small",
        input: text,
      }),
    });
    const data = await response.json();
    return data.data[0].embedding;
  }

  private cosineSimilarity(a: number[], b: number[]): number {
    let dot = 0, normA = 0, normB = 0;
    for (let i = 0; i < a.length; i++) {
      dot += a[i] * b[i];
      normA += a[i] ** 2;
      normB += b[i] ** 2;
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
  }
}
```
6. AI-Powered Search
Go beyond keyword matching — use AI to understand search intent.
```typescript
// Semantic search: find results by meaning, not just keywords.
// getEmbedding and cosineSimilarity are the same helpers used in the
// SimpleRAG class above.
async function semanticSearch(
  query: string,
  documents: { id: string; content: string; embedding: number[] }[]
): Promise<{ id: string; content: string; score: number }[]> {
  // Get embedding for the search query
  const queryEmbedding = await getEmbedding(query);

  // Rank documents by similarity
  return documents
    .map((doc) => ({
      id: doc.id,
      content: doc.content,
      score: cosineSimilarity(queryEmbedding, doc.embedding),
    }))
    .sort((a, b) => b.score - a.score)
    .filter((doc) => doc.score > 0.3); // Only return relevant results
}

// "How do I handle errors in async code?"
// Finds documents about: try/catch, Promise.catch, error boundaries, etc.
// Even if none of them contain the exact word "handle"
```
Use Case Decision Matrix
| Use Case | Best Model | Temperature | Complexity |
|---|---|---|---|
| Code generation | Claude 3.5 Sonnet | 0 | Low |
| Content summarization | GPT-4o mini | 0.3 | Low |
| Data extraction | GPT-4o mini | 0 | Low |
| Simple chatbot | GPT-4o mini | 0.7 | Medium |
| RAG system | GPT-4o + embeddings | 0.3 | Medium-High |
| AI-powered search | Embeddings model | N/A | Medium |
| Creative writing | GPT-4o | 1.0 | Low |
| Code review | Claude 3.5 Sonnet | 0.3 | Low |
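If you're wiring several of these use cases into one app, the matrix above can be encoded as a config map so call sites don't hard-code model names and temperatures. The entries here just mirror the table; the key names are illustrative, and you'd swap in whichever models you actually use:

```typescript
interface TaskConfig {
  model: string;
  temperature: number;
}

// Mirrors the decision matrix above.
const TASK_CONFIGS: Record<string, TaskConfig> = {
  codeGeneration: { model: "claude-3-5-sonnet-20241022", temperature: 0 },
  summarization: { model: "gpt-4o-mini", temperature: 0.3 },
  dataExtraction: { model: "gpt-4o-mini", temperature: 0 },
  chatbot: { model: "gpt-4o-mini", temperature: 0.7 },
  rag: { model: "gpt-4o", temperature: 0.3 },
  creativeWriting: { model: "gpt-4o", temperature: 1.0 },
  codeReview: { model: "claude-3-5-sonnet-20241022", temperature: 0.3 },
};

// Look up a task's config, failing fast on unknown task names.
function configFor(task: string): TaskConfig {
  const config = TASK_CONFIGS[task];
  if (!config) throw new Error(`Unknown task: ${task}`);
  return config;
}
```

Centralizing this also gives you one place to change when a provider deprecates a model.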
Key Takeaways
- Code generation is the most immediately useful GenAI skill for developers
- Data extraction with structured output turns unstructured data into usable formats
- RAG is the pattern for building AI that uses your own data — reduces hallucinations dramatically
- Embeddings enable semantic search and similarity matching
- Start with simple use cases (summarization, extraction) before building complex systems (RAG, agents)
- Most use cases need just a few API calls — the infrastructure is simpler than you think
What's Next?
Let's wrap up with a comprehensive GenAI Fundamentals Cheat Sheet — your quick reference for models, parameters, concepts, and decision trees.
What to ask your AI: "I want to add AI features to my [type of app]. What are the top 3 features I should implement first, and which models should I use?"