
Prompt Templates & Chaining

Build reusable prompt components and chain them into powerful multi-step workflows.

1

Why Templates?

As your AI system grows, you'll write the same prompts over and over with different inputs. Prompt templates solve this problem — they're reusable, consistent, and easier to maintain.

The DRY Principle

DRY = "Don't Repeat Yourself." Hardcoding prompts violates this. If you decide to change your system prompt, you have to find and update every file. With templates, you change one place and it propagates everywhere.

❌ Hardcoded
def analyze_code(code):
    response = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=1024,
        system="You are a code reviewer...",
        messages=[{
            "role": "user",
            "content": f"Review this: {code}"
        }]
    )

def test_code(code):
    response = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=1024,
        system="You are a code reviewer...",
        messages=[{
            "role": "user",
            "content": f"Test: {code}"
        }]
    )
✅ Templated
SYSTEM_PROMPT = "You are a code reviewer..."

CODE_REVIEW_TEMPLATE = """
Review this code for bugs and
quality issues:

{code}
"""

CODE_TEST_TEMPLATE = """
Write test cases for this code:

{code}
"""

def call_claude(template, **kwargs):
    content = template.format(**kwargs)
    return client.messages.create(...)

Benefits of Templates

Reusability across your codebase. Consistency in system prompts. Easy testing and versioning. Reduced bugs from copy-paste errors. Clear separation of prompt logic from application logic.

2

Building Prompt Templates

There are several ways to build templates. Each has trade-offs. Let's compare three common approaches.

Option 1: Python f-strings

Simple, no dependencies, but limited for complex logic.

Python — f-string Template
TEMPLATE = """
You are a data analyst.

Analyze this dataset:

{data}

Focus on: {focus_areas}
"""

result = TEMPLATE.format(
    data="10,20,30,40,50",
    focus_areas="trends and outliers"
)

Option 2: Jinja2

More powerful. Supports loops, conditionals, filters. Industry standard for templating.

Python — Jinja2 Template
from jinja2 import Template

template_str = """
You are a data analyst.

Analyze this dataset:
{% for item in data %}
- {{ item }}
{% endfor %}

{% if include_summary %}
Include a summary.
{% endif %}
"""

template = Template(template_str)
result = template.render(
    data=[10, 20, 30],
    include_summary=True
)

Option 3: LangChain PromptTemplate

Built for LLM workflows. Integrates with LangChain chains and tools. Good for production systems.

Python — LangChain PromptTemplate
from langchain.prompts import PromptTemplate

template = PromptTemplate(
    input_variables=["data", "focus"],
    template="""
You are a data analyst.

Analyze this dataset:
{data}

Focus on: {focus}
"""
)

prompt = template.format(
    data="10,20,30,40,50",
    focus="trends"
)
🎯

f-strings

Best for: Simple one-off prompts. Cons: No validation, limited power.

⚙️

Jinja2

Best for: Complex templates with loops and conditionals. Cons: Extra dependency.

🔗

LangChain

Best for: Chains and production systems. Cons: Heavier framework.

📝

Custom

Best for: Full control. Cons: You maintain it yourself.
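"Custom" can be as little as a thin wrapper around `str.format` that fails loudly on missing variables. A minimal sketch (the class name is illustrative):

```python
import string

class SimpleTemplate:
    """Minimal custom template: str.format plus missing-variable checks."""

    def __init__(self, template: str):
        self.template = template
        # Collect the placeholder names the template expects
        self.variables = {
            name for _, name, _, _ in string.Formatter().parse(template)
            if name
        }

    def render(self, **kwargs) -> str:
        missing = self.variables - kwargs.keys()
        if missing:
            raise ValueError(f"missing template variables: {sorted(missing)}")
        return self.template.format(**kwargs)

prompt = SimpleTemplate("Analyze {data}. Focus on: {focus}.")
print(prompt.render(data="10,20,30", focus="trends"))
```

About 20 lines, no dependencies, and a missing variable becomes a clear error instead of a silently malformed prompt.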

3

Template Variables & Validation

When templates have variables, you need to validate inputs. Missing or invalid data causes bad prompts.

Python — Validated Template
from dataclasses import dataclass

@dataclass
class AnalysisRequest:
    """Request with validation"""
    dataset: str
    focus_areas: list[str]
    max_length: int = 500

    def validate(self):
        if not self.dataset:
            raise ValueError(
                "dataset required"
            )
        if len(self.dataset) > 5000:
            raise ValueError(
                "dataset too large"
            )
        if not self.focus_areas:
            self.focus_areas = ["general trends"]

TEMPLATE = """You are a data analyst.

Analyze:
{dataset}

Focus: {focus}"""

def analyze(request: AnalysisRequest):
    request.validate()
    prompt = TEMPLATE.format(
        dataset=request.dataset,
        focus=", ".join(request.focus_areas)
    )
    return client.create(...)
💎

Production Pattern

Use dataclasses or Pydantic models for request objects. Validate before creating the prompt. Set reasonable defaults for optional fields. This prevents malformed prompts from reaching Claude.
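As a variant on the dataclass above, moving the checks into `__post_init__` means an invalid request can never be constructed in the first place, so no one can forget to call `validate()`. A sketch of that pattern:

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisRequest:
    """Request that validates itself on construction."""
    dataset: str
    focus_areas: list[str] = field(
        default_factory=lambda: ["general trends"]
    )
    max_length: int = 500

    def __post_init__(self):
        # Runs automatically after __init__, so bad requests never exist
        if not self.dataset:
            raise ValueError("dataset required")
        if len(self.dataset) > 5000:
            raise ValueError("dataset too large")
        if not self.focus_areas:
            self.focus_areas = ["general trends"]
```

Pydantic models give the same guarantee, plus type coercion and richer error messages, at the cost of a dependency.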

4

Prompt Chaining Patterns

Prompt chains connect multiple prompts in sequences. The output of one becomes the input to the next. This enables complex workflows that would be hard for a single prompt.

Sequential Chain

Step 1 → Step 2 → Step 3. Each step uses the output of the previous. Best for multi-step workflows.

⤴️

Parallel Chain

Run multiple prompts simultaneously, then combine. Great for analyzing from multiple perspectives.

🔀

Conditional Chain

Route to different chains based on model output. E.g., "Is this positive or negative?" → different handling.

🚦

Router Chain

Model chooses which chain to use based on input. "This is about X, so use the X-handler."
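A parallel chain can be sketched with a thread pool: fire the perspective prompts concurrently, then feed all the results into one combining call. `call_model` below is a stand-in for a real API call (e.g. `client.messages.create`):

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt: str) -> str:
    """Stand-in for a real model API call."""
    return f"analysis of: {prompt}"

def parallel_chain(text: str) -> str:
    # Same input, three perspectives, run concurrently
    prompts = [
        f"Summarize the tone of: {text}",
        f"List the factual claims in: {text}",
        f"Note any risks in: {text}",
    ]
    with ThreadPoolExecutor(max_workers=3) as pool:
        results = list(pool.map(call_model, prompts))
    # Combine step: merge the three analyses in one final call
    combined = "\n\n".join(results)
    return call_model(f"Merge these analyses:\n{combined}")
```

Because API calls are I/O-bound, threads (or `asyncio`) give a near-linear speedup over running the perspectives sequentially.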

Sequential Chain Example

Extract key points → Organize by theme → Generate summary → Quality check. Each step refines the previous output.

Router Chain Example

User input → Router decides topic → If "technical", use technical handler; if "general", use general handler.
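That router reduces to one classification step plus a dictionary dispatch. A sketch with a keyword-based classifier standing in for the model call:

```python
def classify(text: str) -> str:
    """Stand-in for a model call that returns one label:
    'technical' or 'general'."""
    keywords = ("error", "api", "code", "stack")
    return "technical" if any(k in text.lower() for k in keywords) else "general"

def handle_technical(text: str) -> str:
    return f"[technical handler] {text}"

def handle_general(text: str) -> str:
    return f"[general handler] {text}"

HANDLERS = {"technical": handle_technical, "general": handle_general}

def route(text: str) -> str:
    topic = classify(text)
    # Fall back to the general handler on unexpected labels
    return HANDLERS.get(topic, handle_general)(text)
```

With a real model as the classifier, constrain its output to the exact labels in `HANDLERS` and always keep a fallback branch, since models occasionally return unexpected strings.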

5

Building a Chain: Document Summarizer

Let's build a complete 4-step sequential chain to summarize long documents.

1

Extract Key Points

Read document, identify 5-10 main ideas. Focus on substance, not style.

2

Organize by Theme

Group key points into logical themes. E.g., "Introduction", "Methods", "Results".

3

Generate Summary

Write concise summary from organized points. 1-2 paragraphs, 200 words max.

4

Quality Check

Verify summary is accurate, covers main ideas, no fabrications. Rate quality 1-5.

Python — Document Summarization Chain
def summarize_document(doc_text: str):
    """4-step document summarization chain"""

    # Step 1: Extract key points
    extract_prompt = f"""
Extract 5-10 key points from this document:

{doc_text}

Return only the key points, one per line."""

    response1 = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=500,
        messages=[{"role": "user",
                    "content": extract_prompt}]
    )
    key_points = response1.content[0].text

    # Step 2: Organize by theme
    organize_prompt = f"""
Organize these key points into logical themes:

{key_points}

Output format:
[Theme Name]
- point 1
- point 2
"""

    response2 = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=500,
        messages=[{"role": "user",
                    "content": organize_prompt}]
    )
    organized = response2.content[0].text

    # Step 3: Generate summary
    summary_prompt = f"""
Write a 200-word summary from these organized points:

{organized}

Be concise and accurate. No more than 200 words."""

    response3 = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=300,
        messages=[{"role": "user",
                    "content": summary_prompt}]
    )
    summary = response3.content[0].text

    # Step 4: Quality check
    quality_prompt = f"""
Rate this summary on:
1. Accuracy (no fabrications)
2. Completeness (covers main ideas)
3. Clarity (easy to understand)
4. Conciseness (under 200 words)

Summary:
{summary}

Rate each 1-5 and explain any issues."""

    response4 = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=200,
        messages=[{"role": "user",
                    "content": quality_prompt}]
    )
    quality = response4.content[0].text

    return {
        "summary": summary,
        "quality_check": quality
    }
6

Orchestration Libraries

For complex chains, specialized libraries handle the complexity. Here's a quick overview.

🔗

LangChain

Most popular. Chains, memory, agents, RAG integration. Good documentation. Heavier but feature-rich.

🦙

LlamaIndex

Focused on data indexing and RAG. Great for document processing. Lighter weight than LangChain.

🧠

Haystack

Search and RAG focused. Components-based architecture. Good for production pipelines.

🏗️

Build Your Own

Simple chains don't need a framework. Custom code gives you full control, reduces dependencies.

When to Use Each

LangChain: Complex multi-tool agents, memory management, many integrations. LlamaIndex: Document-heavy RAG systems. Custom: Simple 2-3 step chains, maximum control and minimal dependencies. Start simple, add a framework when you need it.
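A "build your own" sequential chain can be this small — a hypothetical `run_chain` helper that threads each step's output into the next template:

```python
def run_chain(steps, initial_input, call_model):
    """Run prompt templates in sequence; each step's output
    fills the {input} slot of the next template."""
    current = initial_input
    for template in steps:
        current = call_model(template.format(input=current))
    return current

# Example with a stubbed model call in place of a real API client
steps = [
    "Extract key points from:\n{input}",
    "Organize by theme:\n{input}",
    "Summarize in 200 words:\n{input}",
]
result = run_chain(
    steps,
    "long document text...",
    call_model=lambda p: f"<output of: {p[:20]}...>",
)
```

The document summarizer from section 5 fits this shape directly; a framework only pays off once you need retries, streaming, memory, or tool use on top.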

Check Your Understanding

Quick Quiz — 4 Questions

1. What is the main benefit of using prompt templates instead of hardcoding prompts?

2. Which templating approach is most powerful and supports loops and conditionals?

3. In a prompt chain, what happens at each step?

4. When should you use orchestration libraries like LangChain?

Topic 9 Summary

Prompt templates follow the DRY principle — write once, use everywhere. You can build them with f-strings (simple), Jinja2 (powerful), LangChain (framework), or custom code. Always validate template variables before using them. Prompt chains connect multiple prompts in sequences: Sequential (step-by-step), Parallel (multiple perspectives), Conditional (branching), or Router (model selects). Chains enable complex workflows that would be impossible with a single prompt. For simple chains, write custom code. For complex multi-tool systems, use LangChain or similar libraries.

Next up → Topic 10: Evaluating Prompt Quality
Measure, score, and systematically improve your prompts — because you can't improve what you can't measure.
