
    AI APIs Cheat Sheet

    Your complete quick reference for working with AI APIs. Bookmark this page!

    Side-by-Side API Comparison

    Feature           | OpenAI                       | Anthropic                  | Google Gemini
    ------------------|------------------------------|----------------------------|-----------------------------------
    Package           | openai                       | @anthropic-ai/sdk          | @google/generative-ai
    Env variable      | OPENAI_API_KEY               | ANTHROPIC_API_KEY          | Manual
    Auto-reads key    | Yes                          | Yes                        | No
    System prompt     | Message with role: "system"  | Top-level system param     | systemInstruction in model config
    max_tokens        | Optional                     | Required                   | Optional (maxOutputTokens)
    Temperature range | 0–2                          | 0–1                        | 0–2
    Streaming         | stream: true                 | .stream() or stream: true  | generateContentStream()
    Chat management   | Manual message array         | Manual message array       | Built-in chat object
    Multimodal        | Images                       | Images                     | Images, audio, video
    Free tier         | No                           | No                         | Yes

    Installation

    # Install all three
    npm install openai @anthropic-ai/sdk @google/generative-ai
    
    # Or install individually
    npm install openai
    npm install @anthropic-ai/sdk
    npm install @google/generative-ai

    Environment Setup

    # .env
    OPENAI_API_KEY=sk-proj-your-key-here
    ANTHROPIC_API_KEY=sk-ant-your-key-here
    GOOGLE_API_KEY=AIzaSy-your-key-here
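    The three SDKs differ in how they pick up these variables: OpenAI and Anthropic read their keys from the environment automatically, while Gemini's client takes the key as a constructor argument. A minimal setup sketch (assuming you load `.env` with the dotenv package; on Node 20+ you could instead run with `--env-file=.env`):

```typescript
import "dotenv/config"; // assumes the dotenv package is installed
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";
import { GoogleGenerativeAI } from "@google/generative-ai";

const openai = new OpenAI();       // picks up OPENAI_API_KEY automatically
const anthropic = new Anthropic(); // picks up ANTHROPIC_API_KEY automatically

// Gemini does not auto-read an env variable — pass the key yourself:
const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);
```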

    Code Templates

    OpenAI — Basic Chat

    import OpenAI from "openai";
    
    const openai = new OpenAI();
    
    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "Hello!" },
      ],
    });
    
    console.log(response.choices[0].message.content);

    OpenAI — Streaming

    const stream = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: "Hello!" }],
      stream: true,
    });
    
    for await (const chunk of stream) {
      const content = chunk.choices[0]?.delta?.content;
      if (content) process.stdout.write(content);
    }

    Anthropic — Basic Chat

    import Anthropic from "@anthropic-ai/sdk";
    
    const anthropic = new Anthropic();
    
    const message = await anthropic.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 1024,
      system: "You are a helpful assistant.",
      messages: [
        { role: "user", content: "Hello!" },
      ],
    });
    
    console.log(message.content[0].type === "text" ? message.content[0].text : "");

    Anthropic — Streaming

    const stream = anthropic.messages.stream({
      model: "claude-sonnet-4-20250514",
      max_tokens: 1024,
      messages: [{ role: "user", content: "Hello!" }],
    });
    
    stream.on("text", (text) => process.stdout.write(text));
    await stream.finalMessage();

    Google Gemini — Basic Chat

    import { GoogleGenerativeAI } from "@google/generative-ai";
    
    const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);
    const model = genAI.getGenerativeModel({
      model: "gemini-2.5-flash",
      systemInstruction: "You are a helpful assistant.",
    });
    
    const result = await model.generateContent("Hello!");
    console.log(result.response.text());

    Google Gemini — Streaming

    const result = await model.generateContentStream("Hello!");
    
    for await (const chunk of result.stream) {
      process.stdout.write(chunk.text());
    }

    Google Gemini — Chat

    const chat = model.startChat({ history: [] });
    
    const result1 = await chat.sendMessage("Hello!");
    console.log(result1.response.text());
    
    const result2 = await chat.sendMessage("Tell me more.");
    console.log(result2.response.text());

    Pricing Comparison

    Model            | Input (per 1M tokens) | Output (per 1M tokens) | Best For
    -----------------|-----------------------|------------------------|------------------------
    GPT-4o-mini      | $0.15                 | $0.60                  | Cheap, fast tasks
    GPT-4o           | $2.50                 | $10.00                 | Quality general purpose
    Claude Haiku     | $0.25                 | $1.25                  | Cheap, fast tasks
    Claude Sonnet 4  | $3.00                 | $15.00                 | Quality general purpose
    Gemini 2.5 Flash | $0.15                 | $0.60                  | Cheap, fast tasks
    Gemini 2.5 Pro   | $1.25                 | $10.00                 | Complex reasoning
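    Per-request cost follows directly from the table: multiply input and output token counts by the per-million rates. A small sketch of that arithmetic (the price map mirrors the table above as a snapshot — real prices drift, so check the provider's pricing page):

```typescript
// Snapshot of the pricing table above, in USD per 1M tokens.
const PRICES: Record<string, { input: number; output: number }> = {
  "gpt-4o-mini": { input: 0.15, output: 0.6 },
  "gpt-4o": { input: 2.5, output: 10.0 },
  "claude-haiku": { input: 0.25, output: 1.25 },
  "claude-sonnet-4": { input: 3.0, output: 15.0 },
  "gemini-2.5-flash": { input: 0.15, output: 0.6 },
  "gemini-2.5-pro": { input: 1.25, output: 10.0 },
};

function estimateCostUSD(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICES[model];
  if (!p) throw new Error(`Unknown model: ${model}`);
  // Rates are per 1M tokens, so scale the token counts down accordingly.
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}

// 2,000 input + 500 output tokens on GPT-4o: $0.005 + $0.005 = $0.01
console.log(estimateCostUSD("gpt-4o", 2000, 500)); // → 0.01
```

    Feeding it the `usage` fields shown in "Response Access Patterns" below gives you rough per-request cost logging for free.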

    Cost Tiers

    Budget      | Recommended Models
    ------------|-----------------------------------------
    Free        | Gemini 2.5 Flash (free tier)
    $5/month    | GPT-4o-mini, Claude Haiku, Gemini Flash
    $20/month   | GPT-4o, Claude Sonnet 4, Gemini Pro
    $100+/month | Any model at production scale

    Model Selection Guide

    By Task

    Task                   | Recommended                 | Why
    -----------------------|-----------------------------|----------------------
    Simple Q&A             | GPT-4o-mini / Haiku / Flash | Fast and cheap
    Code generation        | GPT-4o / Claude Sonnet 4    | High quality code
    Long document analysis | Claude Sonnet 4             | Great at long context
    Image understanding    | Gemini Flash / GPT-4o       | Strong multimodal
    Creative writing       | Claude Sonnet 4 / GPT-4o    | Nuanced output
    Data extraction (JSON) | GPT-4o (structured output)  | Guaranteed format
    High volume / batch    | GPT-4o-mini / Haiku / Flash | Lowest cost
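    For the data-extraction row, the simplest form of OpenAI structured output is JSON mode, which constrains the model to emit syntactically valid JSON. A minimal sketch (note the API requires the word "JSON" to appear somewhere in your messages; the extraction schema here is just an illustrative example):

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  // Forces the model to return valid JSON.
  response_format: { type: "json_object" },
  messages: [
    { role: "system", content: "Extract the name and email as JSON with keys name and email." },
    { role: "user", content: "Reach me at jane@example.com — Jane Doe" },
  ],
});

// Valid JSON is guaranteed, so parsing is safe.
console.log(JSON.parse(response.choices[0].message.content!));
```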

    By Priority

    Priority          | Best Choice
    ------------------|---------------------------------------------
    Cheapest          | Gemini 2.5 Flash (free tier) or GPT-4o-mini
    Fastest           | Claude Haiku or GPT-4o-mini
    Highest quality   | Claude Sonnet 4 or GPT-4o
    Best multimodal   | Gemini 2.5 Flash or Pro
    Widest ecosystem  | OpenAI (most tools and tutorials)

    Response Access Patterns

    // OpenAI
    const text = response.choices[0].message.content;
    const tokens = response.usage?.total_tokens;
    
    // Anthropic
    const text = message.content[0].type === "text" ? message.content[0].text : "";
    const tokens = message.usage.input_tokens + message.usage.output_tokens;
    
    // Gemini
    const text = result.response.text();
    const tokens = result.response.usageMetadata?.totalTokenCount;

    Error Handling Template

    async function callAI(provider: "openai" | "anthropic" | "gemini") {
      try {
        // Your API call here
      } catch (error: any) {
        if (error?.status === 429 || error?.message?.includes("429")) {
          console.error("Rate limited — implement retry with backoff");
        } else if (error?.status === 401 || error?.message?.includes("API_KEY")) {
          console.error("Authentication failed — check your API key");
        } else if (error?.status === 400) {
          console.error("Bad request — check your parameters");
        } else if (error?.status >= 500) {
          console.error("Server error — retry later");
        } else {
          throw error;
        }
      }
    }

    Retry with Exponential Backoff

    async function withRetry<T>(
      fn: () => Promise<T>,
      maxRetries = 3,
      baseDelay = 1000
    ): Promise<T> {
      for (let i = 0; i <= maxRetries; i++) {
        try {
          return await fn();
        } catch (error: any) {
          if (error?.status === 429 && i < maxRetries) {
            const delay = baseDelay * Math.pow(2, i);
            await new Promise((r) => setTimeout(r, delay));
          } else {
            throw error;
          }
        }
      }
      throw new Error("Unreachable");
    }
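    You can exercise the backoff without hitting a real API by wrapping a mock call that rate-limits a couple of times before succeeding. A self-contained sketch (the helper is reproduced from above, with a tiny baseDelay so the demo runs fast; the mock error only needs a status field):

```typescript
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelay = 10 // shortened from 1000 ms for the demo
): Promise<T> {
  for (let i = 0; i <= maxRetries; i++) {
    try {
      return await fn();
    } catch (error: any) {
      if (error?.status === 429 && i < maxRetries) {
        await new Promise((r) => setTimeout(r, baseDelay * Math.pow(2, i)));
      } else {
        throw error;
      }
    }
  }
  throw new Error("Unreachable");
}

// Mock "API call": rate-limits twice, then succeeds.
let attempts = 0;
const flaky = async () => {
  attempts++;
  if (attempts < 3) throw Object.assign(new Error("rate limited"), { status: 429 });
  return "ok";
};

const result = await withRetry(flaky);
console.log(result, attempts); // → ok 3
```

    In production you would wrap the real call the same way, e.g. `withRetry(() => openai.chat.completions.create({ ... }))`.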

    AI Prompts for API Integration

    Getting Started

    • "Set up a Node.js project with TypeScript that can call the [OpenAI/Anthropic/Google] API. Include environment variables and error handling."
    • "I want to compare responses from OpenAI, Anthropic, and Google for the same prompt. Build a script that calls all three and shows the results."
    • "Create a reusable AI client class that supports multiple providers. I should be able to switch between OpenAI, Anthropic, and Google easily."

    Building Features

    • "Build a chat API endpoint that supports streaming responses using [provider]. The frontend is built with React."
    • "Create a text summarization feature using [provider]. It should accept long text and return a 3-sentence summary."
    • "Build a code review tool that analyzes code and returns structured feedback as JSON using OpenAI's structured output."
    • "Create a function-calling setup with OpenAI where the AI can search a database and return results."

    Production Readiness

    • "Add rate limiting, retry logic, and error handling to my AI API calls. I'm using [provider]."
    • "Set up cost tracking for my AI API usage. Log tokens used per request and estimate daily costs."
    • "Build a prompt caching layer that stores AI responses in [Redis/Firestore] to avoid redundant API calls."
    • "Create a fallback system that tries OpenAI first, then falls back to Anthropic if it fails."

    Optimization

    • "My AI API costs are too high. Here's my current setup: [describe]. How can I reduce costs?"
    • "Optimize my prompts for token efficiency. Here are my current prompts: [paste]. Make them shorter without losing quality."
    • "I'm hitting rate limits. Help me implement request queuing with concurrency control."

    Debugging

    • "I'm getting a 429 error from [provider]. What does this mean and how do I fix it?"
    • "My streaming implementation isn't working. Here's my code: [paste]. What's wrong?"
    • "The AI response doesn't match my expected JSON format. Here's my prompt and the response: [paste]. How do I fix this?"
    • "My API key works in curl but not in my Node.js app. Here's my code: [paste]."

    Quick Start Checklist

    1. SIGN UP for an API account (OpenAI, Anthropic, or Google)
    2. GET your API key from the provider's dashboard
    3. INSTALL the SDK: npm install [package]
    4. CREATE a .env file with your API key
    5. ADD .env to .gitignore
    6. SET a spending limit on your account
    7. WRITE your first API call
    8. TEST with a cheap model first (mini/haiku/flash)
    9. ADD error handling and retries
    10. UPGRADE to a better model if needed
    

    Key URLs

    • OpenAI docs: https://platform.openai.com/docs
    • Anthropic docs: https://docs.anthropic.com
    • Google Gemini docs: https://ai.google.dev

    You now have everything you need to integrate AI into your applications. Start with one provider, build something small, and expand from there. The best way to learn AI APIs is to build with them.

    Happy building!


    🌐 www.genai-mentor.ai