Anthropic Claude API
Anthropic makes Claude — an AI model known for strong reasoning, careful instruction following, and long-context capabilities. Let's learn how to use the Claude API in your applications.
Setting Up the Anthropic SDK
Install
```bash
npm install @anthropic-ai/sdk
```
Configure Your API Key
Get your key from console.anthropic.com/settings/keys, then add it to your .env:
```
ANTHROPIC_API_KEY=sk-ant-your-key-here
```
Initialize the Client
```typescript
import Anthropic from "@anthropic-ai/sdk";

// Automatically reads ANTHROPIC_API_KEY from the environment
const anthropic = new Anthropic();
```
Like the OpenAI SDK, the Anthropic SDK automatically reads the API key from the environment variable.
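If the variable is missing, the SDK will only fail when you make your first request. A small fail-fast check at startup gives a clearer error. This helper is a sketch (the function name is our own, not part of the SDK):

```typescript
// Fail fast with a clear message if the key is missing, instead of
// waiting for the first API call to error out.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.ANTHROPIC_API_KEY;
  if (!key) {
    throw new Error("ANTHROPIC_API_KEY is not set. Add it to your .env file.");
  }
  return key;
}

// Usage: call once at startup, e.g. requireApiKey(process.env)
```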
The Messages API
Anthropic's primary endpoint is the Messages API. It's conceptually similar to OpenAI's Chat Completions but has some key differences.
Basic Example
```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic();

const message = await anthropic.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [{ role: "user", content: "What is TypeScript?" }],
});

console.log(
  message.content[0].type === "text" ? message.content[0].text : ""
);
```
Key Difference: max_tokens is Required
Unlike OpenAI, Anthropic requires you to specify max_tokens. This is the maximum number of tokens the model will generate. If you're not sure, start with 1024 for short answers or 4096 for longer responses.
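When a response hits the limit, its `stop_reason` is `"max_tokens"` rather than `"end_turn"`, which tells you the output was cut off. A minimal check (the helper name is our own):

```typescript
// A response that stopped because of max_tokens was truncated mid-answer;
// raise the limit or ask the model to continue.
function wasTruncated(stopReason: string | null): boolean {
  return stopReason === "max_tokens";
}

// Usage: if (wasTruncated(message.stop_reason)) { /* retry with a higher limit */ }
```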
Claude Models
| Model | Best For | Speed | Cost |
|---|---|---|---|
| Claude Sonnet 4 | Complex tasks, coding, analysis | Fast | Medium |
| Claude Haiku | Simple tasks, high volume | Very fast | Very low |
| Claude Opus 4 | Hardest problems, deep reasoning | Slower | Higher |
Model IDs
```typescript
// Current model IDs
const SONNET = "claude-sonnet-4-20250514";
const HAIKU = "claude-3-5-haiku-20241022";
```
Choosing a Model
- Start with Claude Haiku for development, simple tasks, and classification
- Use Claude Sonnet 4 for most production tasks — great balance of quality and speed
- Use Claude Opus 4 for the hardest tasks requiring deep reasoning
```typescript
// Quick classification
const quick = await anthropic.messages.create({
  model: "claude-3-5-haiku-20241022",
  max_tokens: 100,
  messages: [{ role: "user", content: "Is this email spam? ..." }],
});

// Complex code review
const detailed = await anthropic.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 4096,
  messages: [
    { role: "user", content: "Review this code for security issues..." },
  ],
});
```
System Prompts
In the Anthropic API, system prompts are a top-level parameter, not part of the messages array:
```typescript
const message = await anthropic.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  system:
    "You are a senior TypeScript developer. Give concise answers with practical code examples. Always use modern TypeScript best practices.",
  messages: [
    { role: "user", content: "How do I handle errors in async functions?" },
  ],
});
```
This is different from OpenAI, where the system prompt is a message with role: "system". In Anthropic's API, system is its own parameter.
Effective System Prompts
```typescript
// For a code assistant
const system = `You are an expert software engineer.
- Always provide working code examples
- Use TypeScript with proper types
- Explain your reasoning briefly
- If a question is ambiguous, ask for clarification`;

// For a writing assistant
const system2 = `You are a professional technical writer.
- Write clearly and concisely
- Use active voice
- Break complex ideas into simple steps
- Include examples whenever possible`;
```
What to ask your AI: "Help me write a system prompt for Claude that makes it act as a [role] for my [type of app]."
Multi-Turn Conversations
Like OpenAI, you maintain conversation history by passing all previous messages:
```typescript
const conversationHistory: Anthropic.MessageParam[] = [];

// Turn 1
conversationHistory.push({
  role: "user",
  content: "What is a closure in JavaScript?",
});

const response1 = await anthropic.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  system: "You are a helpful coding tutor.",
  messages: conversationHistory,
});

// Add the assistant's response to history
const assistantText =
  response1.content[0].type === "text" ? response1.content[0].text : "";
conversationHistory.push({ role: "assistant", content: assistantText });

// Turn 2
conversationHistory.push({
  role: "user",
  content: "Can you show me a practical example?",
});

const response2 = await anthropic.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  system: "You are a helpful coding tutor.",
  messages: conversationHistory,
});
```
Building a Chat Function
```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic();

class ClaudeChat {
  private history: Anthropic.MessageParam[] = [];
  private system: string;
  private model: string;

  constructor(system: string, model = "claude-sonnet-4-20250514") {
    this.system = system;
    this.model = model;
  }

  async send(userMessage: string): Promise<string> {
    this.history.push({ role: "user", content: userMessage });

    const response = await anthropic.messages.create({
      model: this.model,
      max_tokens: 2048,
      system: this.system,
      messages: this.history,
    });

    const text =
      response.content[0].type === "text" ? response.content[0].text : "";
    this.history.push({ role: "assistant", content: text });
    return text;
  }

  clearHistory() {
    this.history = [];
  }
}

// Usage
const chat = new ClaudeChat("You are a helpful coding tutor.");
const answer1 = await chat.send("What is a Promise?");
const answer2 = await chat.send("How does async/await relate to Promises?");
```
Response Structure
The Anthropic response format differs from OpenAI:
```json
{
  "id": "msg_abc123",
  "type": "message",
  "role": "assistant",
  "model": "claude-sonnet-4-20250514",
  "content": [{ "type": "text", "text": "Here is my response..." }],
  "stop_reason": "end_turn",
  "usage": { "input_tokens": 25, "output_tokens": 150 }
}
```
Key differences from OpenAI:
- `content` is an array — it can contain text blocks and tool-use blocks
- `stop_reason` instead of `finish_reason`
- `input_tokens`/`output_tokens` instead of `prompt_tokens`/`completion_tokens`
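Because `content` is an array, reading `content[0].text` directly can break when a tool-use block appears first. A safer pattern is to concatenate all the text blocks. This is a sketch with a simplified structural type (the SDK's own types are richer):

```typescript
// A minimal structural type for Anthropic content blocks; only "text"
// blocks carry a text field we care about here.
type ContentBlock = { type: string; text?: string };

// Concatenate every text block, skipping tool-use and other block types.
function extractText(content: ContentBlock[]): string {
  return content
    .filter((b) => b.type === "text")
    .map((b) => b.text ?? "")
    .join("");
}

// Usage: const reply = extractText(message.content);
```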
Important Parameters
| Parameter | Default | What It Does |
|---|---|---|
| `max_tokens` | Required | Maximum response length |
| `temperature` | 1.0 | Creativity level (0 to 1) |
| `top_p` | — | Nucleus sampling (alternative to temperature) |
| `stop_sequences` | — | Stop generating when these strings appear |
```typescript
const response = await anthropic.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 500,
  temperature: 0, // Deterministic — good for code
  messages: [
    { role: "user", content: "Write a function to reverse a string" },
  ],
});
```
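To see what `stop_sequences` does conceptually, here is the equivalent post-hoc truncation. This is purely an illustration, not how the API works internally: the API stops generation server-side, and the matched stop string is not included in the returned text.

```typescript
// Illustrative only: cut text at the first occurrence of any stop string,
// mimicking what stop_sequences does to the model's output.
function applyStopSequences(text: string, stops: string[]): string {
  let cut = text.length;
  for (const s of stops) {
    const i = text.indexOf(s);
    if (i !== -1 && i < cut) cut = i;
  }
  return text.slice(0, cut);
}
```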
Error Handling
```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic();

try {
  const message = await anthropic.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [{ role: "user", content: "Hello!" }],
  });
  console.log(
    message.content[0].type === "text" ? message.content[0].text : ""
  );
} catch (error) {
  if (error instanceof Anthropic.APIError) {
    console.error("Status:", error.status);
    console.error("Message:", error.message);
    if (error.status === 429) {
      console.error("Rate limited — wait and retry");
    }
  } else {
    throw error;
  }
}
```
Claude vs. OpenAI — Quick Comparison
| Feature | OpenAI | Anthropic |
|---|---|---|
| System prompt | Message with `role: "system"` | Top-level `system` parameter |
| `max_tokens` | Optional | Required |
| Response text | `choices[0].message.content` | `content[0].text` |
| Temperature range | 0–2 | 0–1 |
| Env variable | `OPENAI_API_KEY` | `ANTHROPIC_API_KEY` |
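If you are porting code from OpenAI, the main structural change is pulling system messages out of the array. A sketch of that conversion (the function and types are our own, not from either SDK):

```typescript
// A simplified OpenAI-style message shape.
type Msg = { role: "system" | "user" | "assistant"; content: string };

// Split an OpenAI-style message list into Anthropic's shape: system
// messages become the top-level `system` string, the rest stay in order.
function toAnthropicShape(messages: Msg[]): {
  system: string;
  messages: { role: "user" | "assistant"; content: string }[];
} {
  const system = messages
    .filter((m) => m.role === "system")
    .map((m) => m.content)
    .join("\n");
  const rest = messages.filter(
    (m): m is Msg & { role: "user" | "assistant" } => m.role !== "system"
  );
  return { system, messages: rest };
}
```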
What's Next?
Let's explore the Google Gemini API — Google's entry in the AI API space with strong multimodal capabilities.
What to ask your AI: "Help me build a Claude-powered chatbot with system prompts, conversation history, and error handling."