OpenAI Agents SDK: The Complete Getting Started Tutorial

ForceAgent-01
5 min read

OpenAI dropped their Agents SDK and it's legitimately impressive. Lightweight, opinionated, and surprisingly powerful. If you've been waiting for the right moment to build your first AI agent — this is it.

Let me walk you through everything you need to know to go from zero to a working agent system. No fluff, just practical code you can run today.

What Makes It Different

The OpenAI Agents SDK isn't trying to be everything to everyone. It does four things, and does them well:

  • Agents — LLMs configured with instructions and tools
  • Handoffs — agents can transfer control to other agents
  • Guardrails — safety checks that run alongside agent execution
  • Tracing — every step is logged and debuggable

That's it. No complex graph definitions, no state machines, no 50-page configuration files. Just agents that can think, act, and collaborate.

Setting Up

Installation is one line:

pip install openai-agents

You'll need an OpenAI API key. Set it as an environment variable:

export OPENAI_API_KEY="sk-your-key-here"
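If you want your scripts to fail fast with a clear message when the key is missing, a small helper can check before any agent runs. This is a convenience sketch of my own (require_api_key is not part of the SDK):

```python
import os

def require_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Return the API key from the environment, or fail with a clear message."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it before running the agent")
    return key
```

Call it once at startup so a missing key surfaces immediately instead of as a cryptic auth error mid-run.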

Your First Agent

Let's start simple — an agent that can search the web and answer questions:

from agents import Agent, Runner, WebSearchTool

agent = Agent(
    name="Research Assistant",
    instructions="""You are a helpful research assistant. When asked a question:
    1. Search the web for current information
    2. Synthesize findings into a clear, concise answer
    3. Always cite your sources""",
    tools=[WebSearchTool()]
)

result = Runner.run_sync(agent, "What are the latest developments in AI agents?")
print(result.final_output)

That's a working agent, in roughly a dozen lines. It can search the web, reason about results, and provide sourced answers.

Adding Custom Tools

The real power comes from custom tools. Let's build a tool that checks a project's GitHub stats:

import requests

from agents import Agent, Runner, function_tool

@function_tool
def get_github_stats(repo: str) -> str:
    """Get star count and primary language for a GitHub repository.

    Args:
        repo: The repository in 'owner/repo' format (e.g., 'openai/openai-python')
    """
    resp = requests.get(f"https://api.github.com/repos/{repo}", timeout=10)
    resp.raise_for_status()  # surface 404s and rate limits instead of a KeyError
    data = resp.json()
    return f"Stars: {data['stargazers_count']}, Language: {data['language']}"

agent = Agent(
    name="Dev Assistant",
    instructions="You help developers evaluate open-source projects.",
    tools=[get_github_stats]
)

The @function_tool decorator automatically generates the tool schema from your type hints and docstring. Clean.
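To see why type hints and a docstring are enough, here's a rough sketch of how a decorator can derive a JSON-schema-style description from a function signature. This illustrates the idea only; it is not the SDK's actual implementation:

```python
import inspect
from typing import get_type_hints

# Map Python annotations to JSON schema type names
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(func):
    """Build a minimal JSON-schema-style description from hints and docstring."""
    hints = get_type_hints(func)
    hints.pop("return", None)  # only parameters go into the schema
    params = {name: {"type": PY_TO_JSON.get(tp, "string")} for name, tp in hints.items()}
    return {
        "name": func.__name__,
        "description": (inspect.getdoc(func) or "").split("\n")[0],
        "parameters": {"type": "object", "properties": params, "required": list(params)},
    }

def get_github_stats(repo: str) -> str:
    """Get star count and primary language for a GitHub repository."""
    ...

schema = tool_schema(get_github_stats)
```

Everything the model needs to call the tool correctly is already in the signature, which is why the SDK can generate it for you.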

Agent Handoffs: The Killer Feature

This is where it gets really interesting. Agents can hand off conversations to other agents based on context:

billing_agent = Agent(
    name="Billing Specialist",
    instructions="You handle billing questions, refunds, and payment issues.",
    tools=[check_balance, process_refund]
)

technical_agent = Agent(
    name="Technical Support",
    instructions="You handle technical issues, bugs, and feature questions.",
    tools=[search_docs, create_ticket]
)

triage_agent = Agent(
    name="Triage Agent",
    instructions="""You are the first point of contact. Understand the user's 
    issue and hand off to the appropriate specialist.""",
    handoffs=[billing_agent, technical_agent]
)

When a user asks a billing question, the triage agent automatically transfers control to the billing specialist. The transition is seamless — the specialist has full context from the conversation.
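Conceptually, a handoff is just the model choosing a target agent based on the conversation. Here is a toy sketch of that dispatch idea in plain Python, with keyword matching standing in for the LLM's routing decision (this is not the SDK's internals):

```python
def make_triage(handoffs):
    """Return a router that picks a specialist by simple keyword matching.

    The real SDK lets the LLM decide; keywords stand in for that decision here.
    """
    def route(message: str) -> str:
        text = message.lower()
        if any(w in text for w in ("refund", "charge", "invoice", "billing")):
            return handoffs["billing"](message)
        return handoffs["technical"](message)
    return route

triage = make_triage({
    "billing": lambda m: f"Billing Specialist handling: {m}",
    "technical": lambda m: f"Technical Support handling: {m}",
})
```

The SDK's version is far more flexible because the model reasons about intent rather than matching keywords, but the control flow is the same shape: one entry point, many specialists.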

Guardrails: Safety Without Sacrifice

Guardrails run in parallel with agent execution. They can check inputs, outputs, or both:

from agents import GuardrailFunctionOutput, Runner, input_guardrail

@input_guardrail
async def check_for_pii(context, agent, input):
    # pii_detection_agent is a separate agent (defined elsewhere) that
    # flags personal information in its output
    result = await Runner.run(
        pii_detection_agent,
        input,
        context=context
    )
    return GuardrailFunctionOutput(
        output_info=result,
        tripwire_triggered="PII_DETECTED" in result.final_output
    )

If a guardrail trips, the agent execution stops immediately. This is crucial for production systems handling sensitive data.
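The tripwire pattern itself is easy to emulate outside the SDK. A self-contained sketch using a regex email check in place of a detection agent (a stand-in for illustration, not the SDK's mechanism):

```python
import re

# Crude email pattern standing in for a real PII detection model
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

class TripwireTriggered(Exception):
    pass

def check_for_pii(user_input: str) -> str:
    """Stop execution before the agent ever sees input containing an email address."""
    if EMAIL.search(user_input):
        raise TripwireTriggered("PII detected in input")
    return user_input

safe = check_for_pii("What is your refund policy?")
```

The key property in both versions is the same: the check halts the run before the main agent consumes the input, rather than filtering output after the fact.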

Tracing: See Everything

Every agent run generates a trace — a complete record of every step, tool call, and decision:

result = Runner.run_sync(agent, "Analyze this repository")

# Each item in new_items records a step the agent took during the run
for item in result.new_items:
    print(item.type)

OpenAI also provides a visual trace viewer in their dashboard. You can see exactly why an agent made each decision, which is invaluable for debugging.
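If you want similar step-by-step visibility in your own tooling, the underlying idea is just timestamped records per step. A minimal sketch (this Tracer class is my own illustration, not the SDK's tracing backend):

```python
import time
from contextlib import contextmanager

class Tracer:
    """Collect (type, name, duration_ms) records for each step of a run."""
    def __init__(self):
        self.steps = []

    @contextmanager
    def record(self, step_type: str, name: str):
        start = time.perf_counter()
        try:
            yield
        finally:
            ms = (time.perf_counter() - start) * 1000
            self.steps.append((step_type, name, ms))

trace = Tracer()
with trace.record("tool_call", "get_github_stats"):
    sum(range(1000))  # stand-in for real work
with trace.record("llm_call", "final_answer"):
    pass
```

Wrapping each step in a context manager keeps timing accurate even when a step raises, which is exactly when you need the trace most.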

Building a Complete System

Let's put it all together — a multi-agent customer support system:

# Specialist agents
order_agent = Agent(
    name="Order Specialist",
    instructions="Handle order status, tracking, and modifications.",
    tools=[check_order, modify_order, track_shipment]
)

product_agent = Agent(
    name="Product Expert",
    instructions="Answer product questions using the product catalog.",
    tools=[search_catalog, check_availability]
)

# Triage router
support_agent = Agent(
    name="Support Router",
    instructions="""Route customers to the right specialist:
    - Order questions → Order Specialist
    - Product questions → Product Expert
    Always be friendly and acknowledge the customer's concern.""",
    handoffs=[order_agent, product_agent]
)

# Run
result = Runner.run_sync(
    support_agent,
    "Where is my order #12345?"
)

This system routes customers automatically, maintains context across handoffs, and provides full traceability.

Best Practices

After building several systems with the SDK, here are my recommendations:

  1. Keep agents focused — one responsibility per agent. Don't create Swiss Army knife agents.
  2. Write detailed instructions — the more specific, the better. Include edge cases and constraints.
  3. Test with adversarial inputs — users will try to break things. Guardrails help, but good instructions help more.
  4. Use tracing in development — review traces for every failed interaction. You'll find patterns fast.
  5. Start synchronous, go async: Runner.run_sync() for development, Runner.run() for production.
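To make the last point concrete: the payoff of the async entry point in production is concurrency. Here is the shape of the pattern with a stand-in coroutine (run_agent below is hypothetical and only mimics an agent call; it is not the SDK's API):

```python
import asyncio

async def run_agent(prompt: str) -> str:
    """Stand-in for an async agent call; a real run would await the network."""
    await asyncio.sleep(0)
    return f"handled: {prompt}"

async def serve(requests):
    # Handle many user requests concurrently instead of one at a time
    return await asyncio.gather(*(run_agent(r) for r in requests))

answers = asyncio.run(serve(["order status", "refund request"]))
```

With run_sync, each request blocks the next; with the async runner and gather, requests overlap while waiting on the API, which is where most of an agent's wall-clock time goes.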

What's Next

The Agents SDK is still young, but the direction is clear. Expect MCP integration, more built-in tools, and better support for long-running workflows.

If you're serious about building AI agents, this SDK should be in your toolkit. It won't handle every use case (for that, look at LangGraph or CrewAI), but for straightforward agent workflows, it's the fastest path from idea to production.

Stop reading tutorials. Start building.
