Understanding AI APIs
Before you can build AI-powered applications, you need to understand how your code communicates with AI models. The answer is APIs — and specifically, the APIs provided by companies like OpenAI, Anthropic, and Google.
What is an API?
API stands for Application Programming Interface. It's a way for two programs to talk to each other. When your app needs to use an AI model, it doesn't run the model directly — it sends a request to a server that runs the model and gets a response back.
Think of it like ordering food at a restaurant:
- You (your app) tell the waiter what you want
- The waiter (the API) takes your order to the kitchen
- The kitchen (the AI model) prepares your food
- The waiter brings back your meal (the response)
You don't need to know how the kitchen works. You just need to know how to place your order.
REST API Basics
Most AI APIs use REST (Representational State Transfer), which is the standard way web applications communicate. REST uses regular HTTP — the same protocol your browser uses to load websites.
Key Concepts
| Concept | What It Means |
|---|---|
| Endpoint | A URL where you send requests (e.g., https://api.openai.com/v1/chat/completions) |
| HTTP Method | The type of action (GET, POST, PUT, DELETE) |
| Headers | Metadata sent with the request (authentication, content type) |
| Body | The data you're sending (your prompt, model choice, etc.) |
| Response | The data that comes back (the AI's answer) |
Common HTTP Methods for AI APIs
| Method | Purpose | AI API Example |
|---|---|---|
| POST | Send data and get a result | Send a prompt, get a completion |
| GET | Retrieve information | List available models |
| DELETE | Remove something | Cancel a fine-tuning job |
For AI chat completions, you'll almost always use POST — because you're sending data (your prompt) and receiving data (the AI's response).
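The only things that change between these calls are the HTTP method and whether a JSON body is attached. Here's a minimal sketch — `buildRequest` is a hypothetical helper for illustration, not part of any SDK:

```javascript
// Sketch: build fetch options for an AI API call.
// buildRequest is a hypothetical helper, not part of any SDK.
function buildRequest(apiKey, method, body) {
  const options = {
    method,
    headers: { Authorization: `Bearer ${apiKey}` },
  };
  if (body !== undefined) {
    // Only requests that send data (POST) carry a JSON body.
    options.headers["Content-Type"] = "application/json";
    options.body = JSON.stringify(body);
  }
  return options;
}

// A chat completion is a POST with a body...
const post = buildRequest("sk-...", "POST", { model: "gpt-4o", messages: [] });
// ...while listing models is a bare GET with no body.
const get = buildRequest("sk-...", "GET");
```

You'd pass the result straight to `fetch(url, options)`; the next section shows the full request in context.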
How AI APIs Work
Every AI API follows the same basic pattern:
1. Your app sends a POST request with:
- Your API key (authentication)
- The model you want to use
- Your prompt or messages
- Optional settings (temperature, max tokens, etc.)
2. The AI server:
- Validates your API key
- Runs your prompt through the model
- Generates a response
3. Your app receives:
- The generated text
- Usage information (tokens used)
- Metadata (model used, finish reason)
Here's what that looks like in code using fetch:
```javascript
const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer sk-your-api-key-here",
  },
  body: JSON.stringify({
    model: "gpt-4o",
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "What is an API?" },
    ],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```
You don't need to memorize this — AI SDKs make it much simpler. But understanding the underlying flow helps you debug issues and read documentation.
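One place that flow knowledge pays off is debugging: the HTTP status code usually tells you what went wrong before you even read the response body. Here's a sketch — `describeStatus` is a hypothetical helper, and the status meanings reflect common AI API conventions (e.g. 401 for bad credentials, 429 for rate limits):

```javascript
// Sketch: map common AI API status codes to a debugging hint.
// describeStatus is a hypothetical helper for illustration.
function describeStatus(status) {
  switch (status) {
    case 401: return "Invalid or missing API key — check your Authorization header.";
    case 404: return "Unknown endpoint or model name — check the URL and model field.";
    case 429: return "Rate limit or quota exceeded — slow down or check your billing.";
    case 500: return "Server-side error — safe to retry with backoff.";
    default:
      return status >= 200 && status < 300 ? "OK" : `Unexpected status ${status}`;
  }
}
```

In practice you'd check `if (!response.ok) console.error(describeStatus(response.status));` before trying to parse the body.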
Authentication with API Keys
Every AI API requires an API key — a secret string that identifies you and tracks your usage. Without a valid key, the API rejects your requests with an authentication error.
How to Get an API Key
| Provider | Where to Get Your Key |
|---|---|
| OpenAI | platform.openai.com/api-keys |
| Anthropic | console.anthropic.com/settings/keys |
| Google | aistudio.google.com/apikey |
Keeping Your Key Safe
API keys are like passwords. Never put them directly in your code or commit them to Git.
```bash
# .env file (add to .gitignore!)
OPENAI_API_KEY=sk-proj-abc123...
ANTHROPIC_API_KEY=sk-ant-abc123...
GOOGLE_API_KEY=AIzaSy...
```
```javascript
// Access in your code
const apiKey = process.env.OPENAI_API_KEY;
```
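If the variable isn't set, `process.env.OPENAI_API_KEY` is silently `undefined`, and you end up sending `Authorization: Bearer undefined`. A small guard — `requireEnv` is a hypothetical helper — fails fast with a clear message instead:

```javascript
// Sketch: fail fast if a required key is missing from the environment.
// requireEnv is a hypothetical helper, not part of any SDK.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing environment variable ${name} — did you create a .env file?`);
  }
  return value;
}

// Usage: const apiKey = requireEnv("OPENAI_API_KEY");
```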
What to ask your AI: "Help me set up environment variables for my AI API keys in a Node.js project. Include a .env.example file."
Request and Response Format
All major AI APIs use JSON (JavaScript Object Notation) for both requests and responses.
Typical Request Body
```json
{
  "model": "gpt-4o",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Explain APIs in one sentence." }
  ],
  "temperature": 0.7,
  "max_tokens": 200
}
```
Typical Response
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "An API is a set of rules that allows different software programs to communicate with each other."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 22,
    "total_tokens": 47
  }
}
```
Key Fields Explained
| Field | What It Means |
|---|---|
| model | Which AI model processed the request |
| choices | Array of responses (usually just one) |
| message.content | The actual generated text |
| finish_reason | Why the model stopped (stop = natural end, length = hit token limit) |
| usage | How many tokens were used (affects cost) |
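Once the JSON is parsed, pulling out these fields is plain object access. A sketch using a hard-coded object shaped like the response above:

```javascript
// Sketch: extract the fields you usually care about from a parsed
// chat-completion response (shape matches the example above).
const data = {
  model: "gpt-4o",
  choices: [
    {
      index: 0,
      message: { role: "assistant", content: "An API is a set of rules..." },
      finish_reason: "stop",
    },
  ],
  usage: { prompt_tokens: 25, completion_tokens: 22, total_tokens: 47 },
};

const text = data.choices[0].message.content;                 // the generated answer
const truncated = data.choices[0].finish_reason === "length"; // hit max_tokens?
const billedTokens = data.usage.total_tokens;                 // what you're charged for
```

Checking `finish_reason` is worth the extra line: a `"length"` value means the answer was cut off and you may want to raise `max_tokens` or continue the conversation.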
What Are Tokens?
Tokens are the units AI models use to process text. Roughly:
- 1 token is about 4 characters in English
- 1 token is about 0.75 words
- "Hello, world!" is about 4 tokens
You're charged based on tokens used — both the tokens you send (input/prompt) and the tokens the model generates (output/completion).
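These rules of thumb are easy to turn into a quick estimator. A sketch — note that real tokenizers give exact counts, and the per-million-token prices below are hypothetical placeholders, not real rates; check your provider's pricing page:

```javascript
// Sketch: rough token count using the ~4 characters/token rule of thumb.
// A real tokenizer (e.g. the provider's own) gives exact counts.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Rough cost estimate: input and output tokens are usually priced
// differently. These per-million-token prices are HYPOTHETICAL.
function estimateCostUSD(inputTokens, outputTokens, inPricePerM = 2.5, outPricePerM = 10) {
  return (inputTokens * inPricePerM + outputTokens * outPricePerM) / 1_000_000;
}

console.log(estimateTokens("Hello, world!")); // ~4 tokens, matching the rule above
```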
What to ask your AI: "How many tokens would my prompt use? Here it is: [paste your prompt]. What would this cost with GPT-4o?"
SDKs vs. Raw HTTP
While you can call AI APIs with raw fetch calls, every provider offers an SDK (Software Development Kit) — a library that simplifies the process:
| Approach | Pros | Cons |
|---|---|---|
| Raw fetch | No dependencies, full control | Verbose, handle errors manually |
| Official SDK | Simple, typed, handles retries | Extra dependency |
SDK Example (OpenAI)
```javascript
import OpenAI from "openai";

const openai = new OpenAI(); // Uses OPENAI_API_KEY env var

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What is an API?" }],
});

console.log(response.choices[0].message.content);
```
Much cleaner than the raw fetch version, right? SDKs also handle:
- Authentication — Reads your API key from environment variables
- Error handling — Throws typed errors you can catch
- Retries — Automatically retries failed requests
- Type safety — Full TypeScript support
What's Next?
Now that you understand the basics, let's dive deep into each provider. We'll start with the OpenAI API — the most widely used AI API.
What to ask your AI: "I want to call an AI API from my Node.js app. What's the simplest way to get started?"