Certification Study Guide
Claude Certified Architect
Foundations (CCA-F)
Everything you need to pass the CCA-F exam: domain breakdowns, key patterns, anti-patterns, and interactive practice questions.
Exam Blueprint
The CCA-F is a proctored, multiple-choice exam with a scaled passing score of 720/1000. You'll face 4 of 6 possible scenarios.
67% rule: D1 + D3 + D4 account for two-thirds of your score. Prioritize accordingly.
The 6 Scenarios
You'll face 4 of these, randomly selected. Know all 6.
1. Customer Support Agent
Escalation, refund tools, 80%+ first-contact resolution (FCR) target
2. Code Gen with Claude Code
CLAUDE.md, slash commands, plan mode
3. Multi-Agent Research
Coordinator + subagents architecture
4. Developer Productivity
Codebase exploration, built-in tools, MCP
5. Claude Code for CI/CD
Automated code review, test gen, PR feedback
6. Structured Data Extraction
JSON schema extraction, validation, edge cases
Agentic Architecture & Orchestration
The highest-weighted domain. Master the agentic loop, coordinator-subagent pattern, and the deterministic vs. probabilistic distinction.
The Agentic Loop
Anti-Patterns
- Parsing natural language to detect completion
- Arbitrary iteration caps as primary stop
- Checking assistant text for "I'm done"
Correct Pattern
- Always use stop_reason to control the loop
- "tool_use" = execute the tool and continue
- "end_turn" = stop and return the result
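The correct pattern above can be sketched as a loop driven entirely by `stop_reason`, never by parsing assistant text. This is a minimal sketch with the client and tool executor injected as callables so the control flow is self-contained; the function names (`send_message`, `run_tools`) are illustrative, not part of any SDK.

```python
def agentic_loop(send_message, run_tools, user_prompt, max_turns=20):
    """Drive the loop off stop_reason: 'tool_use' means execute tools
    and continue; 'end_turn' means the agent is finished."""
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_turns):  # cap is a safety net, never the primary stop
        response = send_message(messages)
        messages.append({"role": "assistant", "content": response["content"]})
        if response["stop_reason"] == "end_turn":
            return messages  # model signalled completion; stop and return
        if response["stop_reason"] == "tool_use":
            # Execute the requested tools and feed results back in
            tool_results = run_tools(response["content"])
            messages.append({"role": "user", "content": tool_results})
    raise RuntimeError("agent did not finish within max_turns")
```

Note that `max_turns` exists only as a safety net; the loop's primary exit is always `stop_reason == "end_turn"`.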
The Core Exam Distinction
This pattern appears across almost every domain. If the question involves money, identity, or security — the answer is programmatic enforcement.
| Approach | Guarantee | Use When |
|---|---|---|
| Programmatic enforcement (hooks, prerequisite gates) | Deterministic — 100% | Critical business logic, financial ops, identity verification |
| Prompt-based guidance (system prompt, few-shot) | Probabilistic — non-zero failure rate | Judgment calls, tone, escalation phrasing |
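A minimal sketch of what "programmatic enforcement" means in practice: a prerequisite gate in the tool-execution layer that refuses to run a sensitive tool until its precondition has been satisfied in code. The tool names (`verify_identity`, `process_refund`) and return shapes are illustrative assumptions, not a specific API.

```python
class PrerequisiteGate:
    """Deterministic gate: the model may *request* a refund, but only
    code decides whether the prerequisite (identity check) was met."""

    def __init__(self):
        self.verified = set()  # customer IDs verified this session

    def execute(self, tool_name, args):
        if tool_name == "verify_identity":
            # Real verification elided; record success deterministically
            self.verified.add(args["customer_id"])
            return {"status": "verified"}
        if tool_name == "process_refund":
            if args["customer_id"] not in self.verified:
                # Deterministic refusal: no prompt wording can bypass this
                return {"error": "identity_not_verified"}
            return {"status": "refunded", "amount": args["amount"]}
        raise KeyError(f"unknown tool: {tool_name}")
```

Because the check lives in code rather than in the system prompt, the failure rate for "refund without identity verification" is zero, not merely low.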
Tool Design & Claude Code Config
Tool descriptions are the #1 mechanism for LLM tool selection. CLAUDE.md hierarchy controls what Claude knows about your project.
Tool Description Quality
get_customer: "Retrieves customer information"
lookup_order: "Retrieves order details"
Result: because the two descriptions are nearly identical, the agent frequently calls get_customer when users ask about orders.
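A sketch of how those two descriptions could be rewritten to be distinguishable. The definitions follow the common `{name, description, input_schema}` tool shape; the specific fields and wording are illustrative assumptions.

```python
# Distinguishing descriptions: say what the tool returns AND when to use it.
tools = [
    {
        "name": "get_customer",
        "description": (
            "Retrieves a customer's profile: name, email, account tier, "
            "and contact preferences. Use for questions about who the "
            "customer is, NOT about their orders."
        ),
        "input_schema": {
            "type": "object",
            "properties": {"customer_id": {"type": "string"}},
            "required": ["customer_id"],
        },
    },
    {
        "name": "lookup_order",
        "description": (
            "Retrieves a single order: items, totals, shipping status, "
            "and tracking number. Use for any question about an order, "
            "delivery, or refund eligibility."
        ),
        "input_schema": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
]
```

The fix is in the description, not the name: each one states what the tool returns and explicitly disambiguates when to pick it over its neighbor.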
tool_choice Options
| Setting | Behavior | Use When |
|---|---|---|
"auto" | Model may skip tool call entirely | Default — model decides |
"any" | Must call a tool (chooses which) | Need structured output, not text |
{"type":"tool","name":"X"} | Must call specific tool X | Enforce a first step (e.g., extract_metadata) |
CLAUDE.md Hierarchy
Exam trap: new team member doesn't get instructions because they're in ~/.claude/CLAUDE.md (user-level) instead of .claude/CLAUDE.md (project-level).
| Level | Location | Shared? | Scope |
|---|---|---|---|
| User | ~/.claude/CLAUDE.md | No | All projects for that user |
| Project | .claude/CLAUDE.md | Yes (version control) | All team members |
| Directory | Subdirectory CLAUDE.md | Yes | Specific subdirectory |
| Path rules | .claude/rules/*.md | Yes | Files matching glob pattern |
Prompt Engineering & Structured Output
Vague instructions fail. Specific categorical criteria work. tool_use eliminates JSON syntax errors but not semantic errors.
Explicit Criteria for Precision
Vague (Fails)
- "Be conservative"
- "Only report high-confidence findings"
- "Check that comments are accurate"
Specific (Works)
- "Flag only: bugs, security issues. Skip: style, local patterns"
- "Report when claimed behavior contradicts actual code"
- "Flag comments only when they contradict the code"
Structured Output Reliability
tool_use Eliminates
- JSON syntax errors
- Malformed output
- Missing required fields
tool_use Does NOT Fix
- Line items that don't sum to total
- Fabricated values for required fields
- Data in wrong semantic fields
Schema tip: Make fields optional/nullable when source documents may not contain the info. Required fields cause the model to fabricate values.
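The schema tip above can be sketched as a JSON Schema fragment: fields the source document may lack are left out of `required` and allowed to be `null`, so the model can return `null` instead of inventing a value. The field names (`invoice_number`, `po_number`) are illustrative.

```python
# Illustrative extraction schema: only always-present fields are required.
invoice_schema = {
    "type": "object",
    "properties": {
        "invoice_number": {"type": "string"},
        "total": {"type": "number"},
        # Purchase-order numbers are often absent: nullable and NOT required,
        # so the model isn't pressured to fabricate one.
        "po_number": {"type": ["string", "null"]},
    },
    "required": ["invoice_number", "total"],
}
```

Remember the distinction from above: this schema prevents syntactic failures (missing keys, wrong types), but semantic checks (e.g., line items summing to `total`) still need validation code.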
Batch API Facts
Asynchronous processing at a discounted per-token rate, with results that can take hours to arrive. Good for overnight reports; bad for anything latency-sensitive, such as blocking pre-merge checks.
Context Management & Reliability
Keep critical information alive across long conversations, handle errors gracefully, and know when to escalate.
Three Context Killers
Progressive Summarization
Condensing "$127.43 refund for order #8821" into "discussed refund" loses critical details.
Lost in the Middle
Models reliably process beginning and end but may omit findings from the middle.
Tool Result Bloat
40+ fields per tool result when only 5 are relevant. Trim before accumulation.
Fix: Extract transactional facts into a persistent "case facts" block: a structured block of key facts included in every prompt, kept separate from the summarized conversation history.
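The "trim before accumulation" fix can be sketched as a filter applied to every tool result before it is appended to the transcript. The allow-list of relevant fields is an illustrative assumption.

```python
# Keep only the fields the agent actually needs; drop the other ~40
# before they bloat the accumulated context.
RELEVANT_FIELDS = {"order_id", "status", "total", "refund_eligible", "updated_at"}

def trim_tool_result(raw: dict) -> dict:
    """Return a copy of a tool result containing only relevant fields."""
    return {k: v for k, v in raw.items() if k in RELEVANT_FIELDS}
```

Trimming at the point of accumulation is cheaper and more reliable than summarizing later, because the irrelevant fields never enter the context in the first place.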
Escalation Rules
Escalate When
- Customer explicitly requests a human
- Policy exceptions or gaps
- Unable to make meaningful progress
Unreliable Signals
- Sentiment-based escalation
- Self-reported confidence scores
- Frustration level alone
Exam rule: Customer says "I want to talk to a real person" → honor the request immediately, even if the issue is simple.
Error Categories
| Category | Retryable? | Example |
|---|---|---|
| Transient | Yes | Timeout, service unavailable |
| Validation | No | Invalid input format |
| Business | No | Policy violation (refund > limit) |
| Permission | No | Unauthorized access |
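The error-category table maps directly onto a retry decision: only transient errors are worth retrying, while validation, business, and permission errors fail fast because retrying cannot change the outcome. The error codes below are illustrative.

```python
# Transient errors (timeouts, service unavailable) are retryable;
# the other three categories are not.
RETRYABLE = {"timeout", "service_unavailable"}            # transient
NON_RETRYABLE = {"invalid_input", "policy_violation", "unauthorized"}

def should_retry(error_code: str, attempt: int, max_attempts: int = 3) -> bool:
    """Retry transient errors up to max_attempts; fail fast otherwise."""
    if error_code in RETRYABLE:
        return attempt < max_attempts
    return False  # validation/business/permission: retrying cannot help
```

In practice, retries of transient errors would also use exponential backoff, which is omitted here for brevity.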
Test Your Knowledge
5 exam-style questions spanning all domains.
D1 · Question 1 of 5
Your customer support agent occasionally processes refunds without first verifying customer identity. The error rate is 8%. What's the most effective fix?
Study Complete
You've Got the Blueprint
You've covered all 5 domains, the core exam patterns, and tested yourself with practice questions.