PHASE 2
9 / 23
🔧

Tool Use & Function Calling

Let your LLM take actions in the real world — calling APIs, running code, and accessing live data.

1

What Is Tool Use?

By default, LLMs can only generate text. Tool use (also called function calling) lets the model request real actions: instead of you scripting every step, the model chooses when and how to use tools based on the user's request.

Key Insight: The Model Decides

The critical difference from RAG: with RAG, your pipeline retrieves documents and passes them to the model on every request. With tool use, the model analyzes the user's request and decides whether a tool is needed, which one to call, and with what arguments.

🤔

Decision-Making

Model analyzes the request and decides if a tool call is needed. It can decline to use tools even if available.

🎯

Argument Selection

The model chooses the parameters for each call. Given a tool like "search for X", the model fills in the X based on context.

🔄

Multi-Step Reasoning

Model can call multiple tools in sequence, using output from one tool as input to the next.

🌍

Real-World Impact

Tool calls can have side effects: make purchases, send emails, update databases. Requires safety considerations.

💡

Analogy: A Smart Assistant

Without tool use: you ask a question and get an answer from memory. With tool use: you ask an assistant who can pick up the phone (call API), check the filing cabinet (database query), or run a calculation (execute code) to give you an accurate answer. The assistant decides what tools to use.

2

How Function Calling Works

Tool use follows a structured loop. Understanding each step is essential for building reliable systems.

1. Define Tools
2. Model Selects
3. You Execute
4. Model Responds

Step 1: Define Tools

You describe available tools in JSON schema format. Tell the model what each tool does, what parameters it accepts, and what it returns. The model reads these definitions and uses them to decide when and how to call each one.

Step 2: Model Selects

When you send a user message, Claude analyzes it against the tool definitions. If it thinks a tool would help, it returns a tool-use response instead of text. It specifies: which tool, with what arguments.

Step 3: You Execute

Your code receives the tool call, validates the arguments, executes it (calls API, queries database, runs function), and gets a result. You're responsible for the actual action.

Step 4: Model Responds

You pass the tool result back to Claude in the conversation. Claude now has real data and can generate an informed response. This may trigger additional tool calls, or Claude can directly answer the user.

⚠️

Critical: The Model Doesn't Execute Tools

The model never actually calls your tools. It only requests that you call them. You must handle execution, error checking, and security validation. This is a feature, not a bug — it keeps the model safe and you in control.
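Because you are the one executing, it pays to validate the model's requested arguments against the tool's schema before running anything. A minimal hand-rolled sketch (real projects might use a JSON Schema library instead; `validate_tool_input` and `weather_schema` are illustrative names):

```python
# Map JSON Schema type names to the Python types they correspond to.
TYPE_MAP = {"string": str, "number": (int, float), "integer": int, "boolean": bool}

def validate_tool_input(schema: dict, tool_input: dict) -> list:
    """Return a list of validation errors; an empty list means the call is safe to run."""
    errors = []
    for key in schema.get("required", []):
        if key not in tool_input:
            errors.append(f"missing required argument: {key}")
    for key, value in tool_input.items():
        prop = schema.get("properties", {}).get(key)
        if prop is None:
            errors.append(f"unexpected argument: {key}")
            continue
        expected = TYPE_MAP.get(prop.get("type"))
        if expected and not isinstance(value, expected):
            errors.append(f"{key}: expected {prop['type']}")
        if "enum" in prop and value not in prop["enum"]:
            errors.append(f"{key}: must be one of {prop['enum']}")
    return errors

weather_schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
    },
    "required": ["city"],
}
```

If validation fails, return the error list to the model as the tool result; it will usually retry with corrected arguments.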

3

Defining Tools (Anthropic)

Anthropic tools use a clear JSON schema. Here's how to define them.

JSON — Tool Definition
{
  "name": "get_weather",
  "description": "Get current weather for a city",
  "input_schema": {
    "type": "object",
    "properties": {
      "city": {
        "type": "string",
        "description": "City name (e.g., 'San Francisco')"
      },
      "unit": {
        "type": "string",
        "enum": ["celsius", "fahrenheit"],
        "description": "Temperature unit"
      }
    },
    "required": ["city"]
  }
}

Complete Example: Multiple Tools

Python — Define and Use Tools
import anthropic
import json

client = anthropic.Anthropic()

# Define available tools
tools = [
    {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string",
                            "description": "City name"},
                "unit": {"type": "string",
                          "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["city"]
        }
    },
    {
        "name": "calculate",
        "description": "Perform a calculation",
        "input_schema": {
            "type": "object",
            "properties": {
                "expression": {"type": "string",
                                  "description": "Math expression (e.g., '2+2*3')"}
            },
            "required": ["expression"]
        }
    }
]

# Make API call with tools
response = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": "What's the weather in Paris and what is 15 * 3?"}
    ]
)
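If Claude decides both tools are needed, `response.stop_reason` will be `"tool_use"` and `response.content` will contain one block per requested call. A sketch of extracting those requests (the `SimpleNamespace` objects below are simulated stand-ins for the SDK's response blocks, so this runs without an API call):

```python
from types import SimpleNamespace

# Simulated response.content: the real objects come from the anthropic SDK,
# but they expose the same attributes used here (type, id, name, input).
response_content = [
    SimpleNamespace(type="tool_use", id="toolu_01", name="get_weather",
                    input={"city": "Paris"}),
    SimpleNamespace(type="tool_use", id="toolu_02", name="calculate",
                    input={"expression": "15 * 3"}),
]

# The model tells you *which* tool and *what arguments*;
# executing each call is up to your code.
requested = [(b.name, b.input) for b in response_content if b.type == "tool_use"]
```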
4

Defining Tools (OpenAI Format)

OpenAI's function calling uses a similar but slightly different format. Here's the comparison.

✅ Anthropic Format
{
  "name": "get_weather",
  "description": "...",
  "input_schema": {
    "type": "object",
    "properties": { ... },
    "required": [ ... ]
  }
}
✅ OpenAI Format
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "...",
    "parameters": {
      "type": "object",
      "properties": { ... },
      "required": [ ... ]
    }
  }
}

When to Use Each

Anthropic format: clean and minimal, with the schema at the top level. OpenAI format: more widely supported across third-party libraries; use it if you're targeting OpenAI models or need broader compatibility. For Anthropic-only projects, the native format is simpler and less error-prone.
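Since the two formats carry the same information, converting between them is mechanical. A sketch of one direction (`anthropic_to_openai` is an illustrative helper, not a library function):

```python
def anthropic_to_openai(tool: dict) -> dict:
    """Wrap an Anthropic-style tool definition in OpenAI's function-calling shape.

    Only the nesting and the schema key differ:
    "input_schema" (Anthropic) becomes "parameters" (OpenAI).
    """
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool["description"],
            "parameters": tool["input_schema"],
        },
    }

weather_tool = {
    "name": "get_weather",
    "description": "Get current weather for a city",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

converted = anthropic_to_openai(weather_tool)
```

Keeping tools in one canonical format and converting at the API boundary avoids maintaining two definitions.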

5

The Tool Use Loop

Handling tool responses requires a loop. The model might call tools, get results, and call more tools.

Python — Complete Tool Loop
def process_tool_call(tool_name, tool_input):
    """Execute a tool and return result"""
    if tool_name == "get_weather":
        return f"Weather in {tool_input['city']}: Sunny, 72°F"
    elif tool_name == "calculate":
        # Demo only: eval() on model-supplied input is unsafe in production
        return f"Result: {eval(tool_input['expression'])}"
    else:
        return "Unknown tool"

# Main loop
messages = [{"role": "user", "content": "What's the weather in NYC and compute 10+5?"}]

while True:
    # Call Claude with tools
    response = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=1024,
        tools=tools,
        messages=messages
    )

    # Check if we need to handle tool calls
    if response.stop_reason == "tool_use":
        # Process each tool call
        tool_calls = [block for block in response.content
                      if block.type == "tool_use"]

        # Add assistant response to messages
        messages.append({"role": "assistant", "content": response.content})

        # Execute tools and collect results
        for tool_call in tool_calls:
            result = process_tool_call(tool_call.name, tool_call.input)

            # Add tool result to messages
            messages.append({
                "role": "user",
                "content": [{
                    "type": "tool_result",
                    "tool_use_id": tool_call.id,
                    "content": result
                }]
            })
    else:
        # Model generated final response (no more tool calls)
        final_response = response.content[0].text
        print(final_response)
        break

Loop Pattern

1. Call Claude with tools in the message list.
2. If the response's stop_reason is "tool_use", process the tool calls.
3. Add the assistant response and tool results back to messages.
4. Loop back to step 1.
5. When stop_reason is "end_turn" (no more tool calls), you have the final answer.

6

Real-World Tool Examples

Tool use enables countless real-world applications. Here are the most common patterns:

🔍

Web Search

Query a search engine API (Google, Tavily, Serper) to find current information. Perfect for breaking knowledge cutoffs.

🧮

Calculator

Send complex math expressions to Python or WolframAlpha. Prevents mathematical hallucinations.

🗄️

Database Query

Model generates SQL, you execute safely with parameterized queries. Enables data-driven responses.

📡

API Calls

Call any REST API. Weather, stock prices, transportation, payment systems — anything accessible via HTTP.

📁

File Operations

Read, write, process files. Model decides what analysis to perform based on file contents.

⚙️

Code Execution

Run Python code in a sandbox. Model analyzes data, creates visualizations, trains models.
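One caution on the calculator pattern above: passing a model-supplied expression to eval() is unsafe. A sketch of a restricted evaluator that walks Python's ast and permits only arithmetic (`safe_calculate` is an illustrative name):

```python
import ast
import operator

# Allowed operations: anything else in the parsed expression raises ValueError,
# so model-supplied strings can't run arbitrary code.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("disallowed expression")
    return _eval(ast.parse(expression, mode="eval").body)
```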

7

Error Handling & Safety

What Happens When Tools Fail?

Tools will fail: network timeouts, invalid inputs, API rate limits, permission errors. You must validate inputs, catch exceptions, and return a useful error message to the model so it can recover:

Python — Error Handling
def process_tool_call(tool_name, tool_input):
    try:
        if tool_name == "web_search":
            # Validate input before calling the external API
            query = tool_input.get("query", "")
            if not query or len(query) > 500:
                return "Error: query required, max 500 chars"

            # Execute with timeout
            result = search_api.search(query, timeout=5)
            return result
        else:
            return f"Error: unknown tool '{tool_name}'"

    except TimeoutError:
        return "Error: search timed out. Try a simpler query."
    except Exception as e:
        return f"Error: {str(e)}"
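Rate limits and timeouts are often transient, so a retry wrapper with exponential backoff is a common companion to an error handler like the one above. A minimal sketch (`call_with_retries` and `flaky_search` are illustrative; the flaky tool simulates two timeouts before succeeding):

```python
import time

def call_with_retries(fn, *args, retries=3, base_delay=1.0,
                      transient=(TimeoutError, ConnectionError)):
    """Retry transient tool failures with exponential backoff.

    Non-transient errors propagate immediately; transient ones are retried
    up to `retries` times before being re-raised.
    """
    for attempt in range(retries):
        try:
            return fn(*args)
        except transient:
            if attempt == retries - 1:
                raise  # out of retries: surface the error to the caller
            time.sleep(base_delay * 2 ** attempt)

# Simulated flaky tool: fails twice with a timeout, then succeeds.
attempts = {"n": 0}
def flaky_search(query):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("search backend busy")
    return f"results for {query}"
```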

Safety & Guardrails

⚠️

Tools Can Have Real Impact

A tool that deletes files, transfers money, or sends emails can cause real harm if misused. Always implement guardrails: approval flows for sensitive actions, rate limiting, audit logs, and permission checks based on user identity.

💎

Approval Flows

For sensitive actions (payments, deletions, API calls with side effects), consider requiring human approval. The model can request an action, but your code asks the user for confirmation before executing.
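One way to sketch such an approval gate: wrap execution so that sensitive tool names require a confirmation callback, and report denials back to the model as the tool result (all names here are illustrative; `confirm` could be a CLI prompt, a web UI dialog, or a test stub):

```python
SENSITIVE_TOOLS = {"send_email", "delete_file", "make_payment"}

def execute_with_approval(tool_name, tool_input, execute, confirm):
    """Run `execute` only if the tool is non-sensitive or the user approves."""
    if tool_name in SENSITIVE_TOOLS:
        approved = confirm(f"Allow {tool_name} with {tool_input}?")
        if not approved:
            # Return the denial as the tool result so the model can explain
            # the outcome to the user instead of silently failing.
            return f"Action '{tool_name}' was denied by the user."
    return execute(tool_name, tool_input)
```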

Check Your Understanding

Quick Quiz — 4 Questions

1. What is the critical difference between tool use and RAG?

2. Who executes the actual tool (API call, database query, etc.)?

3. What is the second step in the function calling pipeline?

4. When the model requests a tool call, what should your code do first?

Topic 8 Summary

Tool use (function calling) lets models take action in the real world by calling APIs, running code, and accessing live data. The model decides when and how to use tools based on user requests. The pipeline is: Define Tools → Model Selects → You Execute → Model Responds. You must implement the loop: call Claude, check for tool requests, execute tools, return results, repeat until done. Tools can integrate web search, calculators, databases, APIs, file operations, and code execution. Critical: You execute tools, not the model. You're responsible for validation, error handling, and safety.

Next up → Topic 9: Prompt Templates & Chaining
Build reusable prompt components and orchestrate them into multi-step workflows.
