Build reusable prompt components and chain them into powerful multi-step workflows.
As your AI system grows, you'll write the same prompts over and over with different inputs. Prompt templates solve this problem — they're reusable, consistent, and easier to maintain.
The DRY Principle
DRY = "Don't Repeat Yourself." Hardcoding prompts violates this. If you decide to change your system prompt, you have to find and update every file. With templates, you change one place and it propagates everywhere.
```python
def analyze_code(code):
    response = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=1024,
        system="You are a code reviewer...",
        messages=[{"role": "user", "content": f"Review this: {code}"}],
    )
    return response

def test_code(code):
    response = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=1024,
        system="You are a code reviewer...",
        messages=[{"role": "user", "content": f"Test: {code}"}],
    )
    return response
```
```python
SYSTEM_PROMPT = "You are a code reviewer..."

CODE_REVIEW_TEMPLATE = """
Review this code for bugs and quality issues:
{code}
"""

CODE_TEST_TEMPLATE = """
Write test cases for this code:
{code}
"""

def call_claude(template, **kwargs):
    content = template.format(**kwargs)
    return client.messages.create(...)
```
Reusability across your codebase. Consistency in system prompts. Easy testing and versioning. Reduced bugs from copy-paste errors. Clear separation of prompt logic from application logic.
There are several ways to build templates. Each has trade-offs. Let's compare three common approaches.
Option 1: Python f-strings
Simple, no dependencies, but limited for complex logic.
```python
TEMPLATE = """
You are a data analyst.
Analyze this dataset: {data}
Focus on: {focus_areas}
"""

result = TEMPLATE.format(
    data="10,20,30,40,50",
    focus_areas="trends and outliers",
)
```
Option 2: Jinja2
More powerful. Supports loops, conditionals, filters. Industry standard for templating.
```python
from jinja2 import Template

template_str = """
You are a data analyst.
Analyze this dataset:
{% for item in data %}
- {{ item }}
{% endfor %}
{% if include_summary %}
Include a summary.
{% endif %}
"""

template = Template(template_str)
result = template.render(
    data=[10, 20, 30],
    include_summary=True,
)
```
Option 3: LangChain PromptTemplate
Built for LLM workflows. Integrates with LangChain chains and tools. Good for production systems.
```python
from langchain.prompts import PromptTemplate

template = PromptTemplate(
    input_variables=["data", "focus"],
    template="""
You are a data analyst.
Analyze this dataset: {data}
Focus on: {focus}
""",
)

prompt = template.format(
    data="10,20,30,40,50",
    focus="trends",
)
```
f-strings: Best for simple one-off prompts. Cons: no validation, limited power.
Jinja2: Best for complex templates with loops and conditionals. Cons: extra dependency.
LangChain: Best for chains and production systems. Cons: heavier framework.
Custom code: Best for full control. Cons: you maintain it yourself.
When templates have variables, you need to validate inputs. Missing or invalid data causes bad prompts.
```python
from dataclasses import dataclass

@dataclass
class AnalysisRequest:
    """Request with validation"""
    dataset: str
    focus_areas: list[str]
    max_length: int = 500

    def validate(self):
        if not self.dataset:
            raise ValueError("dataset required")
        if len(self.dataset) > 5000:
            raise ValueError("dataset too large")
        if not self.focus_areas:
            self.focus_areas = ["general trends"]

TEMPLATE = """You are a data analyst.
Analyze: {dataset}
Focus: {focus}"""

def analyze(request: AnalysisRequest):
    request.validate()
    prompt = TEMPLATE.format(
        dataset=request.dataset,
        focus=", ".join(request.focus_areas),
    )
    return client.messages.create(...)
```
Use dataclasses or Pydantic models for request objects. Validate before creating the prompt. Set reasonable defaults for optional fields. This prevents malformed prompts from reaching Claude.
Prompt chains connect multiple prompts into a sequence: the output of one step becomes the input to the next. This enables complex workflows that would be hard to achieve with a single prompt.
Sequential: Step 1 → Step 2 → Step 3. Each step uses the output of the previous. Best for multi-step workflows.
Parallel: Run multiple prompts simultaneously, then combine the results. Great for analyzing a problem from multiple perspectives.
Conditional: Branch to a different chain based on model output. E.g., "Is this positive or negative?" → different handling for each case.
Router: The model chooses which chain to use based on the input: "This is about X, so use the X-handler."
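A parallel chain can be sketched in a few lines with a thread pool: fan out one prompt per perspective, then feed all the partial results into a final synthesis prompt. The `call_model` helper below is a hypothetical stand-in for a real client call (e.g., `client.messages.create(...)`) so the sketch runs standalone:

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt: str) -> str:
    # Placeholder: swap in a real client call, e.g. client.messages.create(...)
    return f"analysis of: {prompt}"

def parallel_chain(text: str, perspectives: list[str]) -> str:
    """Run one prompt per perspective concurrently, then combine."""
    prompts = [f"Analyze from the {p} perspective:\n{text}" for p in perspectives]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(call_model, prompts))
    # Combine step: feed all partial analyses into a final synthesis prompt
    combined = "\n\n".join(results)
    return call_model(f"Synthesize these analyses:\n{combined}")

print(parallel_chain("Q3 sales dipped 4%", ["financial", "marketing"]))
```

Because the per-perspective calls are independent, running them concurrently cuts wall-clock latency roughly to that of the slowest single call plus the synthesis step.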
Sequential Chain Example
Extract key points → Organize by theme → Generate summary → Quality check. Each step refines the previous output.
Router Chain Example
User input → Router decides topic → If "technical", use technical handler; if "general", use general handler.
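The router pattern above boils down to two steps: a classification call, then dispatch to the matching handler. In this sketch, `classify_topic` and the two handlers are hypothetical stand-ins for real model calls:

```python
def classify_topic(user_input: str) -> str:
    # Placeholder for a model call such as:
    # "Classify this input as 'technical' or 'general'. Reply with one word."
    return "technical" if "error" in user_input.lower() else "general"

def technical_handler(user_input: str) -> str:
    return f"[technical chain] debugging: {user_input}"

def general_handler(user_input: str) -> str:
    return f"[general chain] answering: {user_input}"

HANDLERS = {
    "technical": technical_handler,
    "general": general_handler,
}

def route(user_input: str) -> str:
    """Router chain: the classifier's output selects which sub-chain runs."""
    topic = classify_topic(user_input)
    handler = HANDLERS.get(topic, general_handler)  # fall back on unknown labels
    return handler(user_input)
```

The fallback in `HANDLERS.get` matters in practice: model classifiers occasionally emit labels outside the expected set, and a default handler keeps the chain from crashing.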
Let's build a complete 4-step sequential chain to summarize long documents.
Step 1 (Extract): Read the document and identify 5-10 main ideas. Focus on substance, not style.
Step 2 (Organize): Group the key points into logical themes, e.g., "Introduction", "Methods", "Results".
Step 3 (Summarize): Write a concise summary from the organized points. 1-2 paragraphs, 200 words max.
Step 4 (Quality check): Verify the summary is accurate, covers the main ideas, and contains no fabrications. Rate quality 1-5.
```python
def summarize_document(doc_text: str):
    """4-step document summarization chain"""
    # Step 1: Extract key points
    extract_prompt = f"""
Extract 5-10 key points from this document:

{doc_text}

Return only the key points, one per line."""
    response1 = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=500,
        messages=[{"role": "user", "content": extract_prompt}],
    )
    key_points = response1.content[0].text

    # Step 2: Organize by theme
    organize_prompt = f"""
Organize these key points into logical themes:

{key_points}

Output format:
[Theme Name]
- point 1
- point 2
"""
    response2 = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=500,
        messages=[{"role": "user", "content": organize_prompt}],
    )
    organized = response2.content[0].text

    # Step 3: Generate summary
    summary_prompt = f"""
Write a 200-word summary from these organized points:

{organized}

Be concise and accurate. No more than 200 words."""
    response3 = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=300,
        messages=[{"role": "user", "content": summary_prompt}],
    )
    summary = response3.content[0].text

    # Step 4: Quality check
    quality_prompt = f"""
Rate this summary on:
1. Accuracy (no fabrications)
2. Completeness (covers main ideas)
3. Clarity (easy to understand)
4. Conciseness (under 200 words)

Summary:
{summary}

Rate each 1-5 and explain any issues."""
    response4 = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=200,
        messages=[{"role": "user", "content": quality_prompt}],
    )
    quality = response4.content[0].text

    return {"summary": summary, "quality_check": quality}
```
For complex chains, specialized libraries handle the complexity. Here's a quick overview.
LangChain: The most popular option. Chains, memory, agents, RAG integration. Good documentation. Heavier, but feature-rich.
LlamaIndex: Focused on data indexing and RAG. Great for document processing. Lighter weight than LangChain.
Haystack: Search- and RAG-focused, with a component-based architecture. Good for production pipelines.
Custom code: Simple chains don't need a framework. Writing your own gives you full control and reduces dependencies.
Use LangChain for complex multi-tool agents, memory management, and broad integrations. Use LlamaIndex for document-heavy RAG systems. Use custom code for simple 2-3 step chains where you want maximum control and minimal dependencies. Start simple, and add a framework only when you need it.
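For the custom-code path, a sequential chain can be as small as a loop over prompt templates, each formatted with the previous step's output. This sketch stubs out the model call (`run_step` is a hypothetical placeholder) so it runs standalone; in a real chain its body would call your client and return the response text:

```python
def run_step(prompt: str) -> str:
    # Placeholder: replace with client.messages.create(...) and return the text
    return f"<output of: {prompt.splitlines()[0]}>"

def run_chain(steps: list[str], initial_input: str) -> str:
    """Each template uses {input}, filled with the previous step's output."""
    result = initial_input
    for template in steps:
        result = run_step(template.format(input=result))
    return result

STEPS = [
    "Extract key points:\n{input}",
    "Organize by theme:\n{input}",
    "Summarize in 200 words:\n{input}",
]
final = run_chain(STEPS, "long document text...")
```

About fifteen lines replicate the core of a framework's sequential chain; what the frameworks add on top is retries, tracing, memory, and integrations.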
1. What is the main benefit of using prompt templates instead of hardcoding prompts?
2. Which templating approach is most powerful and supports loops and conditionals?
3. In a prompt chain, what happens at each step?
4. When should you use orchestration libraries like LangChain?
Prompt templates follow the DRY principle — write once, use everywhere. You can build them with f-strings (simple), Jinja2 (powerful), LangChain (framework), or custom code. Always validate template variables before using them. Prompt chains connect multiple prompts in sequences: Sequential (step-by-step), Parallel (multiple perspectives), Conditional (branching), or Router (model selects). Chains enable complex workflows that would be impossible with a single prompt. For simple chains, write custom code. For complex multi-tool systems, use LangChain or similar libraries.
Next up → Topic 10: Evaluating Prompt Quality
Measure, score, and systematically improve your prompts — because you can't improve what you can't measure.