# OpenAI API Deep Dive
OpenAI is the company behind ChatGPT, and its API is one of the most widely used AI APIs in the world. Let's learn how to use it in your applications.
## Setting Up the OpenAI SDK
### Install

```bash
npm install openai
```
### Configure Your API Key

Get your key from platform.openai.com/api-keys, then add it to your `.env`:

```bash
OPENAI_API_KEY=sk-proj-your-key-here
```
### Initialize the Client

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // Automatically reads OPENAI_API_KEY from environment
```
That's it. The SDK automatically finds your API key in the `OPENAI_API_KEY` environment variable.
If you need to pass the key explicitly:
```typescript
const openai = new OpenAI({
  apiKey: process.env.MY_CUSTOM_KEY_NAME,
});
```
## The Chat Completions Endpoint
This is the main endpoint you'll use. It takes a conversation (array of messages) and returns the AI's response.
### Basic Example

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What is TypeScript?" }],
});

console.log(response.choices[0].message.content);
```
### With a System Prompt
System prompts tell the AI how to behave:
```typescript
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "system",
      content:
        "You are a senior software engineer. Give concise, practical answers with code examples.",
    },
    {
      role: "user",
      content: "How do I handle errors in async/await?",
    },
  ],
});
```
### Multi-Turn Conversations
To create a back-and-forth conversation, include the full message history:
```typescript
const messages: OpenAI.ChatCompletionMessageParam[] = [
  { role: "system", content: "You are a helpful coding assistant." },
  { role: "user", content: "What is a Promise in JavaScript?" },
  {
    role: "assistant",
    content:
      "A Promise is an object representing the eventual completion or failure of an asynchronous operation...",
  },
  { role: "user", content: "Can you show me an example?" },
];

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages,
});
```
The AI sees the entire conversation and responds in context.
What to ask your AI: "Build a simple chatbot with the OpenAI API that maintains conversation history."
## OpenAI Models
| Model | Best For | Speed | Cost |
|---|---|---|---|
| `gpt-4o` | Complex reasoning, coding, analysis | Fast | Medium |
| `gpt-4o-mini` | Simple tasks, high volume | Very fast | Very low |
| `o3` | Advanced reasoning, math, science | Slower | Higher |
| `o3-mini` | Reasoning with lower cost | Moderate | Medium |
### Choosing a Model
- Start with `gpt-4o-mini` for development and simple tasks: it's cheap and fast
- Use `gpt-4o` for production features that need quality
- Use `o3-mini` or `o3` only when you need step-by-step reasoning (math, logic puzzles, complex planning)
```typescript
// For quick, cheap tasks
const quick = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Summarize this text..." }],
});

// For complex tasks
const detailed = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Review this code for bugs..." }],
});
```
## Important Parameters
| Parameter | Default | What It Does |
|---|---|---|
| `temperature` | 1.0 | Creativity level (0 = deterministic, 2 = very creative) |
| `max_tokens` | Model limit | Maximum response length |
| `top_p` | 1.0 | Alternative to temperature (nucleus sampling) |
| `frequency_penalty` | 0 | Reduce repetition (-2 to 2) |
| `presence_penalty` | 0 | Encourage new topics (-2 to 2) |
```typescript
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a haiku about coding" }],
  temperature: 0.9, // More creative
  max_tokens: 100, // Keep it short
});
```
Tip: Use `temperature: 0` for tasks that need consistent, factual answers (like code generation). Use higher values for creative tasks.
## Structured Output
Sometimes you need the AI to return data in a specific JSON format, not free-form text. OpenAI supports this with structured outputs: you supply a JSON schema and the model's response is constrained to match it:
```typescript
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "system",
      content: "Extract the product information and return it as JSON.",
    },
    {
      role: "user",
      content: "The iPhone 16 Pro costs $999 and has 256GB storage.",
    },
  ],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "product_info",
      strict: true, // Enforce the schema exactly
      schema: {
        type: "object",
        properties: {
          name: { type: "string" },
          price: { type: "number" },
          storage: { type: "string" },
        },
        required: ["name", "price", "storage"],
        additionalProperties: false, // Required when strict is true
      },
    },
  },
});

const product = JSON.parse(response.choices[0].message.content!);
// { name: "iPhone 16 Pro", price: 999, storage: "256GB" }
```
This guarantees the response matches your schema — no parsing guesswork.
What to ask your AI: "I need to extract structured data from user input. Set up OpenAI structured output with a JSON schema for [describe your data]."
## Function Calling (Tool Use)
Function calling lets the AI decide when to call functions you define. The AI doesn't execute the function — it tells you which function to call and with what arguments.
```typescript
const tools: OpenAI.ChatCompletionTool[] = [
  {
    type: "function",
    function: {
      name: "get_weather",
      description: "Get the current weather for a city",
      parameters: {
        type: "object",
        properties: {
          city: {
            type: "string",
            description: "The city name, e.g., San Francisco",
          },
        },
        required: ["city"],
      },
    },
  },
];

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
  tools,
});

// Check if the model wants to call a function
const toolCall = response.choices[0].message.tool_calls?.[0];
if (toolCall) {
  console.log("Function:", toolCall.function.name);
  console.log("Args:", JSON.parse(toolCall.function.arguments));
  // Function: get_weather
  // Args: { city: "Tokyo" }
  // You would call your actual function here, then send the result back
}
```
### The Function Calling Flow
1. You define available functions (tools)
2. Send the user's message along with the tool definitions
3. The AI decides whether to call a function
4. If yes, you execute the function and send the result back
5. The AI uses the result to form its final response
This pattern is the foundation of AI agents — AI that can take actions, not just generate text.
What to ask your AI: "Build a function-calling example where the AI can look up products in a database and check inventory."
## Error Handling
```typescript
import OpenAI from "openai";

const openai = new OpenAI();

try {
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello!" }],
  });
  console.log(response.choices[0].message.content);
} catch (error) {
  if (error instanceof OpenAI.APIError) {
    console.error("Status:", error.status); // e.g., 429, 500
    console.error("Message:", error.message);
    if (error.status === 429) {
      console.error("Rate limited — wait and retry");
    }
  } else {
    throw error;
  }
}
```
## What's Next?
Now let's explore the Anthropic Claude API — a powerful alternative with its own strengths.
What to ask your AI: "Help me build a reusable OpenAI helper function with error handling, retries, and configurable model selection."