CCA Prep / Exam Results
3%
1 correct out of 29 questions
Not yet — keep studying

Passing threshold: 75%

Performance by Domain

Agentic Architecture & Orchestration 0/6 (0%)
Tool Design & MCP Integration 0/6 (0%)
Claude Code Configuration & Workflows 0/6 (0%)
Prompt Engineering & Structured Output 0/5 (0%)
Context Management & Reliability 1/6 (17%)

Review — 28 questions to revisit

A developer productivity tool is being built on top of the Claude Agent SDK. The tool runs long-running refactoring...
A. Use the AskUserQuestion tool to inject a clarifying prompt at the next natural pause in Claude's execution, then resume the task with the user's updated direction.
  B. Use the canUseTool callback to intercept every tool call and check whether the user wants to change direction before each step.
C. Use streaming input to send a cancel signal or updated direction to Claude while it is actively working, without waiting for an approval checkpoint.
  D. Display an "Other" free-text input option alongside Claude's predefined choices so the user can type a custom redirect instruction at any approval checkpoint.
Why A is wrong: AskUserQuestion is designed for Claude to proactively solicit clarification from users at decision points it identifies. It does not allow users to proactively interrupt Claude mid-task; it still depends on Claude reaching a pause in execution where it chooses to ask.
Why C is correct: Streaming input is explicitly designed for scenarios where users need to interrupt the agent mid-task — sending a cancel signal or changing direction while Claude is working. This allows the developer tool to redirect Claude without waiting for it to reach a formal checkpoint, which is exactly what the scenario requires.
A team is deploying a multi-agent research system where each agent runs Claude Code on researcher workstations. The...
  A. The endpoint-managed plist cannot activate while any prior server-managed configuration exists in the admin console history; the platform team must delete all server-managed policy versions before the plist takes effect.
B. The researcher's machine is still serving a cached copy of the server-managed settings, which blocks the endpoint-managed plist from applying; the researcher should run /status to see which managed source is currently active.
C. Server-managed and endpoint-managed settings merge by default, so clearing the server-managed config causes a conflict that silently disables both sources; the researcher should run /permissions to reset the policy merge.
  D. The endpoint-managed plist was ignored because it was delivered after startup; the researcher should restart Claude Code so the plist is loaded during initialization before any settings source is evaluated.
Why C is wrong: The corpus is explicit that sources do not merge: if server-managed settings deliver any keys, endpoint-managed settings are ignored entirely, and vice versa. There is no merging behavior that could create a conflict. This answer reflects a misconception about how the managed tier combines sources.
Why B is correct: The corpus states explicitly that cached settings persist on client machines until the next successful fetch, even after the server-managed configuration is cleared in the admin console. Until the cache expires and a fresh fetch returns nothing, the client continues to treat the (now-empty) server-managed source as authoritative and ignores the endpoint-managed plist. The recommended diagnostic is /status, which shows which managed source is currently active.
A multi-agent research system uses the Claude Agent SDK to coordinate several specialized sub-agents. One sub-agent...
A. Use the Python SDK's built-in session persistence feature to serialize the pending canUseTool callback state and resume it after the reviewer responds.
B. Implement the TypeScript SDK's defer hook decision, which allows the process to exit and resume later from a persisted session while the callback remains pending.
  C. Switch to streaming input mode so the human reviewer's response can be injected mid-task without requiring the process to stay alive for the full wait period.
  D. Add a timeout to the canUseTool callback that auto-approves the operation after a configurable delay, keeping the process alive and bounded in duration.
Why A is wrong: The corpus states that the defer hook for long-running waits is not available in the Python SDK — only in the TypeScript SDK. Attributing this capability to the Python SDK is a direct contradiction of the documented constraint.
Why B is correct: The corpus explicitly states that when a user might take longer to respond than the process can reasonably stay running, the TypeScript SDK supports the defer hook decision, which lets the process exit and resume later from the persisted session. This is the only SDK-provided mechanism for handling long-lived human-in-the-loop waits at approval checkpoints.
A developer is building a Claude Code-based assistant that supports a wide variety of code-related tasks: generating...
  A. Include the full instructions for every capability directly in the system prompt so Claude has everything available before the user's first request, avoiding any latency from deferred loading.
B. Package each capability as an Agent Skill with its own SKILL.md, letting Claude read only the relevant skill file via bash when a matching user request is detected.
C. Store all capability instructions in a single shared SKILL.md and let Claude selectively parse the relevant sections from that file at runtime.
  D. Pre-load all skill files into the context window at startup using a batch bash command, then instruct Claude to ignore sections that are not relevant to the current task.
Why C is wrong: A single monolithic SKILL.md collapses the skill boundary that makes selective loading possible. Claude reads the entire file when it triggers a skill; there is no mechanism for Claude to parse and suppress irrelevant sections within a single file, so this approach still loads unnecessary content.
Why B is correct: Agent Skills use dynamic loading: only skill metadata is present at startup, and the full SKILL.md content enters the context window only when Claude reads it via bash in response to a matching request. This means unneeded capabilities never occupy context space. Additional bundled files (e.g., FORMS.md) are loaded only if the specific sub-task requires them, further minimizing context bloat.
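The dynamic-loading behavior described above can be sketched in plain Python. The file paths, frontmatter fields, and skill contents below are hypothetical; the point is that only the lightweight metadata is held at startup, and a full skill body is read only when a request matches.

```python
import textwrap

# Hypothetical skill files keyed by path; in a real setup these would be
# separate SKILL.md files on disk, each with frontmatter plus a body.
SKILLS = {
    "skills/pdf-forms/SKILL.md": textwrap.dedent("""\
        ---
        name: pdf-forms
        description: Fill and extract PDF form fields
        ---
        Full instructions for working with PDF forms...
    """),
    "skills/changelog/SKILL.md": textwrap.dedent("""\
        ---
        name: changelog
        description: Draft changelog entries from commits
        ---
        Full instructions for drafting changelogs...
    """),
}

def metadata_only(source: str) -> dict:
    """Parse just the frontmatter block, the only part held at startup."""
    _, frontmatter, _body = source.split("---", 2)
    return dict(
        line.split(": ", 1) for line in frontmatter.strip().splitlines()
    )

def load_full_skill(path: str) -> str:
    """Simulates Claude reading the whole file via bash on a matching request."""
    return SKILLS[path].split("---", 2)[2].strip()

# At startup only the lightweight metadata occupies context space:
index = {path: metadata_only(src) for path, src in SKILLS.items()}
print([meta["name"] for meta in index.values()])

# A matching user request triggers a full read of exactly one file:
print(load_full_skill("skills/pdf-forms/SKILL.md"))
```

The unselected skill's body never enters the context, which is the property option B relies on.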
A CI pipeline uses Claude Code to automatically update configuration files across 60 microservices based on a new...
  A. Prompt Claude to double-check its own work by asking it to re-read each config file after writing it and confirm the changes look correct.
B. Break the task into smaller batches of 10 services per run so that any errors affect a smaller blast radius.
C. Have Claude produce a structured changes.json plan file after analysis, validate it with a script before applying any changes, and surface specific error messages that allow Claude to iterate on the plan.
  D. Add verbose logging to Claude's system prompt so it narrates each field mapping step-by-step as it applies changes across all services.
Why B is wrong: Reducing batch size limits the blast radius of a single run but does not prevent incorrect field mappings from being applied. Errors still reach production before detection, just in smaller increments. The fundamental problem — no validation step before execution — is unaddressed.
Why C is correct: The "plan-validate-execute" pattern inserts a machine-verifiable checkpoint between analysis and execution. The structured plan file externalizes Claude's intent so a deterministic script can catch errors — like referencing non-existent fields — before any originals are touched. This converts a silent failure mode into an explicit, early error, which is the core principle of verifiable intermediate outputs.
A customer support resolution agent built on Claude Code has been running long sessions handling complex tickets....
  A. Increase max_tokens in the API configuration to allow Claude to hold more conversation history before truncation occurs.
  B. Use a subagent to handle deep research or large file lookups on individual tickets, so those token-heavy reads stay out of the main session context.
C. Run /compact at a natural pause point to replace the growing conversation with a structured summary, so startup content reloads and resolution state is preserved efficiently.
D. Run /context periodically to monitor usage by category, then manually delete earlier turns from the conversation log to reclaim context space.
Why D is wrong: /context provides a live breakdown of context usage with optimization suggestions — it is a diagnostic tool, not a remediation mechanism. Manually deleting earlier conversation turns is not a supported Claude Code workflow and could corrupt the session state or remove information Claude still needs for resolution.
Why C is correct: /compact is explicitly designed to manage context growth mid-session: it replaces the accumulated conversation with a structured summary, most startup content (like CLAUDE.md and auto memory) reloads automatically, and the session can continue without a restart. This directly addresses context exhaustion while preserving resolution continuity — the core tradeoff in long-running support sessions.
A team is building a Customer Support Resolution Agent that uses Claude Code's auto-fix feature on their GitHub...
  A. Claude will post review comment replies under the team member's GitHub username, which could confuse reviewers who don't realize an agent is responding on their behalf.
B. Claude's auto-fix replies, posted under the team member's GitHub account, could trigger Atlantis deployments to production if Claude's comment text matches the automation's trigger pattern.
  C. Claude may misinterpret ambiguous review comments and push incorrect code changes directly to the PR branch without asking for clarification.
  D. Claude Code on the web runs in a shared environment, so auto-fix could expose repository secrets to other users' sessions running concurrently.
Why B is correct: The corpus chunk explicitly warns that Claude can reply to PR review comment threads using your GitHub account, which can trigger comment-triggered automation such as Atlantis. Repositories where a PR comment can deploy infrastructure or run privileged operations represent a specific, named risk — the documentation advises reviewing automation and considering disabling auto-fix in exactly this kind of repository. This is a direct consequence of auto-fix posting comments under the user's account on issue_comment events.
A CI pipeline team is using the advisor tool to validate code generation steps in their automated build system. Each...
  A. Append the instruction at the end of the system prompt after all other advisor directives, using a paragraph that explains the need for brevity in CI contexts.
B. Prepend the instruction before any other sentence that mentions the advisor, using a concise line such as "The advisor should respond in under 100 words and use enumerated steps, not explanations."
  C. Inject the instruction as a user-turn message at the start of each pipeline stage, using a brief reminder to keep responses short and avoid prose explanations.
  D. Set the instruction inside the advisor tool's model parameter configuration block, specifying a maximum word count alongside the model name used for the advisor.
Why B is correct: The corpus specifies both the placement (prepend before any other sentence that mentions the advisor) and the exact format (a single conciseness instruction directing under-100-word, enumerated-step responses) as the combination that cut total advisor output tokens by roughly 35–45% in internal testing. Position matters: prepending ensures the constraint is read before the advisor's role is established, giving it the highest priority in the system prompt.
A DevOps team is integrating Claude Code into their CI pipeline and wants to automate repetitive debugging tasks —...
  A. Bundled skills execute fixed, deterministic logic that is faster and more predictable than Claude's reasoning, making them preferable to shell scripts for CI environments where reproducibility matters.
B. Bundled skills like /loop and /debug are prompt-based playbooks that let Claude orchestrate the work using its tools, so the pipeline gains Claude's judgment and tool-use capabilities without the team writing custom orchestration logic.
  C. Bundled skills are only available in interactive sessions and cannot be invoked from a non-interactive CI pipeline, so a custom shell script is required for automation contexts.
  D. Bundled skills delegate work to external MCP servers, so the team must configure MCP integration before any bundled skill can execute in the CI environment.
Why B is correct: Unlike built-in commands that execute fixed logic, bundled skills are prompt-based: they give Claude a detailed playbook and let it orchestrate work using its tools. This means a skill like /loop or /debug brings Claude's reasoning and tool-use capabilities to bear on the task, replacing the need for the team to write custom orchestration logic in shell scripts.
A team is running a multi-turn structured data extraction pipeline using Claude sessions. Each session processes a...
  A. The output_tokens field undercounts tokens because it only reports tokens from the final model call, not all calls across the session, so the real total is higher than expected.
B. The cache_read_input_tokens field represents tokens served from cache rather than processed fresh, which carry a lower per-token cost and reduce total spend relative to a naive input × price estimate.
  C. The input_tokens field includes both cached and uncached tokens in its count, so the effective price should be applied to the full input_tokens value to get the correct cost.
  D. Cache entries persist indefinitely within a session, so once the reference document is cached on the first turn it is never re-billed regardless of how much time passes between turns.
Why B is correct: The corpus specifies that input_tokens reports only uncached input tokens, while cache_read_input_tokens tracks tokens served from the prompt cache. Cache reads carry a reduced per-token cost, which is why total spend falls below a naive calculation that multiplies all input tokens by the full price. The separation of these fields in the usage object is precisely what enables accurate cost tracking by distinguishing cheap cache-read tokens from full-price uncached tokens.
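The gap between the naive estimate and actual spend can be shown with a quick calculation. The per-token rates below are illustrative placeholders, not current pricing; the shape of the usage object mirrors the fields named in the explanation.

```python
# Illustrative per-million-token rates; check current pricing before relying
# on these numbers. Cache reads are billed at a steep discount to fresh input.
PRICE_INPUT = 3.00        # $/MTok, uncached input (assumed rate)
PRICE_CACHE_READ = 0.30   # $/MTok, cache reads (assumed rate)

usage = {  # shape mirrors the usage object on an API response
    "input_tokens": 2_000,              # uncached tokens only
    "cache_read_input_tokens": 48_000,  # served from the prompt cache
    "output_tokens": 1_200,
}

# Naive estimate: treat every input token as full price.
naive = (usage["input_tokens"] + usage["cache_read_input_tokens"]) / 1e6 * PRICE_INPUT

# Actual input cost: cache reads billed at the discounted rate.
actual = (
    usage["input_tokens"] / 1e6 * PRICE_INPUT
    + usage["cache_read_input_tokens"] / 1e6 * PRICE_CACHE_READ
)
print(f"naive estimate: ${naive:.4f}")
print(f"actual input cost: ${actual:.4f}")
```

With most tokens served from cache, the real cost lands well below the naive input × price figure, which is exactly the discrepancy the team observed.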
A developer is using Claude to automate updates to a large internal developer portal — specifically, updating 80...
  A. Instruct Claude to apply updates directly to the YAML files and then run a diff to review all changes after the fact, rolling back any errors.
  B. Ask Claude to produce verbose, step-by-step commentary alongside each field update so that the developer can read through the reasoning and catch mistakes manually.
C. Have Claude first output a structured plan file listing all intended changes, validate that plan with a script before execution, and only then apply changes to the actual files.
  D. Provide Claude with a more detailed system prompt describing all 80 field names and their valid values, then have Claude apply all updates in a single pass without intermediate steps.
Why C is correct: The "plan-validate-execute" pattern produces a machine-verifiable intermediate artifact (the plan file) that can be checked by a script before any originals are touched. This catches errors — such as referencing non-existent fields or setting conflicting values — early and without destructive side effects, which is exactly the risk profile described in the scenario.
A customer support resolution agent is being built with Claude to handle complex, multi-step ticket workflows —...
  A. Increase the system prompt length by adding exhaustive descriptions of every tool's side effects, so Claude has full context before acting.
B. Instruct Claude to prefer the minimal set of steps needed to complete a task and to pause for confirmation before executing actions that cannot be undone.
  C. Switch to a smaller, faster model for intermediate steps to reduce the chance of runaway action chains in the workflow.
  D. Add output format constraints requiring Claude to return a JSON plan for all steps before any tool is called, so engineers can review it offline.
Why B is correct: In agentic systems, two complementary prompt engineering principles apply: minimal footprint (preferring the fewest steps and actions necessary) and human-in-the-loop checkpoints before irreversible actions. Instructing Claude to prefer minimal steps reduces unnecessary intermediate actions, while requiring confirmation before irreversible operations (like issuing refunds) prevents premature execution of consequential actions. These are prompt-level controls directly applicable to agentic system design.
A data engineering team is building a pipeline where Claude extracts structured field values from 80 legacy PDF...
  A. Add more detailed extraction examples to the system prompt so Claude better understands which fields are valid and required.
  B. Increase Claude's output temperature to allow it to explore a broader range of possible field mappings before committing to a final answer.
C. Insert an intermediate step where Claude produces a structured plan file (e.g., changes.json) that a validation script checks for invalid fields and conflicts before any writes occur.
  D. Switch from batch processing to processing one form at a time so that errors in a single form don't propagate across the full dataset.
Why C is correct: The "plan-validate-execute" pattern addresses this failure mode by making errors machine-verifiable before they cause irreversible changes. Claude generates a structured intermediate artifact (e.g., changes.json), a script validates it against known-good field definitions, and only after passing validation are writes applied. This catches invalid field references, missing required fields, and conflicting values early — the exact errors described in the scenario.
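A minimal sketch of the validation step is below. The plan shape, field names, and error messages are invented for illustration; the technique is simply a deterministic script checking Claude's intermediate plan against known-good definitions before any writes happen.

```python
import json

KNOWN_FIELDS = {"customer_id", "invoice_date", "total_amount"}
REQUIRED_FIELDS = {"customer_id", "total_amount"}

def validate_plan(plan_json: str) -> list[str]:
    """Deterministic check of the intermediate plan before any writes occur."""
    errors = []
    for entry in json.loads(plan_json):
        fields = {change["field"] for change in entry["changes"]}
        for f in fields - KNOWN_FIELDS:
            errors.append(f"{entry['form']}: unknown field '{f}'")
        for f in REQUIRED_FIELDS - fields:
            errors.append(f"{entry['form']}: missing required field '{f}'")
    return errors

# A plan the model might emit as changes.json (illustrative shape):
plan = json.dumps([
    {"form": "form-001.pdf", "changes": [
        {"field": "customer_id", "value": "C-42"},
        {"field": "total_amout", "value": "19.99"},  # typo, caught below
    ]},
])

errors = validate_plan(plan)
for e in errors:
    print(e)  # specific messages Claude can iterate on before execution
```

The script surfaces both the misspelled field and the resulting missing required field before a single file is touched, turning a silent failure into an explicit, early error.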
A team is using Claude Code to run a long structured data extraction session. They have invoked three skills at the...
  A. Claude re-reads skill files on each turn, so the schema validation skill file has likely drifted on disk. The fix is to update the SKILL.md file and restart the session.
B. The schema validation skill was likely dropped during auto-compaction because it was the oldest invoked skill and the combined skill budget was exhausted. The fix is to re-invoke the schema validation skill to restore its full content.
  C. Claude Code purges all invoked skills when context fills up and cannot restore them. The fix is to reduce the number of skills invoked to one so the budget is never exceeded.
  D. Skill content is limited to the first response after invocation by design. The fix is to move the schema validation instructions into the system prompt instead of a skill file.
Why B is correct: Auto-compaction re-attaches invoked skills after summarization, but fills the combined 25,000-token budget starting from the most recently invoked skill. Older skills can be dropped entirely when the budget is exhausted. Because schema validation was invoked first (oldest), it is most likely to be dropped when three skills compete for the shared budget. Re-invoking it restores the full content and brings it back to the front of the priority order.
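The priority order can be sketched as a simple budget fill. The skill names and token sizes are hypothetical; the 25,000-token combined budget and the newest-first fill order come from the explanation above.

```python
BUDGET = 25_000  # combined token budget for re-attached skills

# (name, token_size) in invocation order; schema-validation was invoked first.
invoked = [
    ("schema-validation", 11_000),
    ("pdf-parsing", 9_000),
    ("reporting", 8_000),
]

def reattach_after_compaction(invoked, budget=BUDGET):
    """Fill the budget starting from the most recently invoked skill."""
    kept, remaining = [], budget
    for name, size in reversed(invoked):  # newest first
        if size <= remaining:
            kept.append(name)
            remaining -= size
    return kept

kept = reattach_after_compaction(invoked)
print(kept)  # the oldest skill is squeezed out once newer ones exhaust the budget
```

Re-invoking the dropped skill restores its content and moves it back to the front of the priority order, which is the fix option B prescribes.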
A multi-agent research system uses a subagent that invokes Claude's `str_replace_based_edit_tool` to iteratively...
  A. The subagent will be unable to use str_replace commands on any file that exceeds 10,000 characters, because edits require the full file to be in context first.
  B. The max_characters parameter is only honored by text_editor_20250728 and later versions, so the team must verify their tool version matches the model in use or the parameter will have no effect.
  C. The subagent will only be able to view or edit files that are smaller than 10,000 characters; larger files will be rejected by the API with an error.
D. Setting max_characters to a value lower than the file length increases token efficiency for viewing but requires the subagent to request multiple views or work with partial context when reasoning about content beyond the truncation point.
Why D is correct: The max_characters parameter controls truncation when viewing large files — it does not prevent edits or reject files. Reducing it preserves context window space (the stated goal) but means the subagent only sees a portion of the file per view, introducing a tradeoff: the agent must paginate its views or operate with incomplete file context when the relevant content lies beyond the truncation boundary. This is a genuine design tradeoff the team must account for.
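The pagination tradeoff can be illustrated without the real tool. The helper below is hypothetical and only mimics the truncated-view behavior: a file longer than the limit needs multiple views before the agent has seen it all.

```python
def paged_views(text: str, max_characters: int):
    """Yield successive truncated views, the way an agent might page through
    a file whose length exceeds the per-view limit. Purely illustrative."""
    for start in range(0, len(text), max_characters):
        yield text[start:start + max_characters]

file_contents = "x" * 25_000  # stand-in for a large source file
views = list(paged_views(file_contents, max_characters=10_000))
print(len(views))                # three views needed to cover the whole file
print([len(v) for v in views])   # [10000, 10000, 5000]
```

Each view is cheaper in tokens, but any reasoning about content past the truncation point requires another view or proceeds with partial context, which is the tradeoff option D names.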
A customer support resolution agent handles two distinct workloads: (1) real-time chat sessions where customers wait...
  A. Use Claude Sonnet for all workloads to ensure consistent reasoning quality across both real-time and batch tasks, and apply prompt caching to reduce repeated context costs.
  B. Use Claude Haiku for all workloads since it is the lowest-cost option, and rely on prompt engineering to compensate for any reasoning gaps in complex ticket classification.
C. Use Claude Haiku for real-time chat sessions and route overnight ticket processing through the Batch API with Claude Sonnet, applying prompt caching for shared context across both pipelines.
  D. Use the Batch API for both real-time and overnight workloads to maximize throughput savings, and select Claude Haiku for all requests to minimize per-token costs.
Why C is correct: This approach applies all three relevant cost optimization principles from the corpus simultaneously: model selection matching task complexity (Haiku for simple real-time chat, Sonnet for complex reasoning), Batch API for non-time-sensitive overnight processing, and prompt caching for repeated context. Each strategy is matched to the workload characteristic it is designed to address.
A team is building a structured data extraction pipeline that streams tool calls from Claude to parse financial...
  A. Discard the malformed fragment entirely and return an empty string in the error response block so Claude can regenerate the output cleanly.
B. Wrap the invalid JSON string inside a valid JSON object under a descriptive key (e.g., {"INVALID_JSON": "<malformed string>"}), properly escaping any special characters, before passing it back to Claude.
  C. Base64-encode the malformed JSON string before inserting it into the error response block so that the outer JSON structure remains syntactically valid.
  D. Retry the original request with a higher max_tokens limit without returning the malformed output to Claude, since invalid JSON cannot safely be included in any response block.
Why B is correct: The corpus explicitly recommends wrapping invalid JSON in a valid JSON object with a reasonable key (e.g., {"INVALID_JSON": "..."}) when it must be passed back to the model in an error response block. This approach preserves the original malformed data for debugging while keeping the outer response structurally valid — and critically, the wrapper must itself be valid JSON, requiring proper escaping of quotes and special characters inside the string value.
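The wrapping technique is a one-liner with the standard library, since `json.dumps` handles the escaping of quotes and special characters inside the string value. The key name and the sample fragment below are illustrative.

```python
import json

def wrap_invalid_json(malformed: str) -> str:
    """Wrap a malformed fragment in a valid JSON envelope so it can be
    returned to the model in an error response block."""
    return json.dumps({"INVALID_JSON": malformed})

fragment = '{"amount": 19.99, "currency": "USD'  # truncated mid-string
wrapped = wrap_invalid_json(fragment)
print(wrapped)

# The envelope is itself parseable, and the original bytes survive intact:
assert json.loads(wrapped)["INVALID_JSON"] == fragment
```

The outer structure stays syntactically valid while the malformed payload is preserved verbatim for the model to inspect and correct.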
A team is building a multi-agent research system where multiple engineers frequently push incremental changes to...
  A. Set Review Behavior to "Once after PR creation," which catches regressions on every push and auto-resolves threads, with the tradeoff that it requires an admin to manually re-enable it between pushes.
  B. Set Review Behavior to "Manual" and have engineers comment @claude review before each merge, which catches regressions on every push and auto-resolves threads, with the tradeoff that reviews only run when explicitly requested.
C. Set Review Behavior to "After every push," which catches new issues as the PR evolves and auto-resolves threads when flagged issues are fixed, with the tradeoff that this runs the most reviews and costs the most.
  D. Set Review Behavior to "After every push," which catches new issues and auto-resolves threads, with the tradeoff that it requires engineers to comment @claude review once after each push to trigger the next run.
Why C is correct: The "After every push" trigger is the only mode that both runs automatically on each push and auto-resolves threads when previously flagged issues are fixed. The documented tradeoff is explicit: this mode "runs the most reviews and costs the most," which a high-frequency, multi-contributor branch environment must consciously accept.
A developer productivity team is integrating Claude into their daily workflow using the bash tool. A senior engineer...
  A. The persistent bash session is a convenience feature but not critical — Claude can reconstruct environment state by re-running setup commands at the start of each tool invocation, making statelessness equally viable for complex workflows.
B. The persistent bash session is critical because it maintains state — including environment variables, working directory, and results of prior commands — across multiple tool calls within a session, enabling multi-step workflows without redundant re-initialization.
  C. The persistent bash session is important primarily for data processing tasks, but for development workflows like running builds and tests, each command invocation operates independently, so session persistence has no meaningful effect.
  D. The persistent bash session matters only when Claude needs to chain commands with && or ; operators; for sequential tool calls issued separately, each call starts a fresh shell regardless of session configuration.
Why B is correct: The bash tool explicitly provides a "persistent bash session that maintains state," including access to environment variables, working directory, and command chaining capabilities. This persistence is architecturally significant — without it, every tool invocation would require re-establishing environment context (activating virtualenvs, setting variables, navigating directories), making complex multi-step developer workflows impractical. The junior engineer's position reflects the correct design principle: session persistence is what enables coherent, stateful task execution rather than isolated one-shot commands.
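The difference between stateless invocations and a persistent session can be demonstrated with plain subprocesses (assuming a POSIX shell is available). This is an analogy, not the bash tool itself: each `subprocess.run` below stands in for a tool call.

```python
import subprocess

# Two separate invocations: each starts a fresh shell, so state is lost.
subprocess.run(["bash", "-c", "export MODE=ci"], check=True)
fresh = subprocess.run(["bash", "-c", "echo ${MODE:-unset}"],
                       capture_output=True, text=True, check=True)
print(fresh.stdout.strip())  # the variable did not survive across shells

# One persistent session: state set early is visible to later commands,
# which is the property the bash tool's session provides across tool calls.
script = "export MODE=ci\ncd /tmp\necho $MODE $(pwd)"
persistent = subprocess.run(["bash", "-c", script],
                            capture_output=True, text=True, check=True)
print(persistent.stdout.strip())
```

Without persistence, every invocation would have to re-run its setup (exports, `cd`, virtualenv activation) before doing any real work, which is the senior engineer's point restated in code.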
A DevOps team is integrating Claude Code into their CI pipeline to automate test failure triage. During a long CI...
  A. Claude's computer use tool has a maximum number of screenshot calls per session; the team should batch screenshot requests to stay within the per-session tool call limit.
B. Each computer use tool call returns results that must be fed back into the context window; as the session grows, earlier observations may fall outside the active context, causing Claude to lose track of prior states. The team should summarize or checkpoint key findings periodically to keep relevant information within the active context.
  C. The computer use tool encodes screenshots as base64, which degrades in fidelity over multiple round-trips; the team should switch to text-only log parsing to avoid image compression artifacts affecting Claude's reasoning.
  D. Computer use tool results are cached server-side and not re-sent in subsequent turns, so Claude lacks access to prior observations; the team should disable prompt caching to ensure all tool results remain visible.
Why B is correct: Computer use tool results — including screenshots and action confirmations — are returned as structured content that must be included in the conversation context. As a CI session accumulates many tool interactions, the total context grows. Observations from early in the session can be pushed out of the active context window, causing Claude to lose awareness of prior states. The correct mitigation is active context management: summarizing or checkpointing key findings so that critical information remains available within the context window throughout the session.
A platform engineering team is rolling out Claude Code as part of their CI pipeline. Some engineers want to...
  A. Begin with fully autonomous CI runs immediately, since engineers will learn faster by observing Claude Code's behavior in production.
B. Start by using Claude Code for codebase Q&A and smaller tasks, review its plans and suggestions, then progressively expand to more autonomous operation as the team builds familiarity.
  C. Restrict Claude Code exclusively to read-only analysis tasks in CI indefinitely, since agentic usage in pipelines is not a supported pattern.
  D. Grant full autonomy only after engineers have completed formal certification, rather than basing the transition on practical experience with Claude Code's suggestions.
Why B is correct: The recommended approach is to start with guided, lower-stakes usage — such as codebase Q&A or small bug fixes — and have users review Claude Code's plans and provide feedback before it runs more agentically. This phased ramp builds the team's understanding of the agentic paradigm, making them more effective at supervising and eventually delegating broader autonomy. Jumping straight to full autonomy skips the trust-building and understanding that makes agentic workflows reliable.
A developer productivity team is using Claude with the computer use tool to automate repetitive GUI tasks—such as...
  A. Increase the model's context window allocation so Claude retains more historical screenshots and can reconcile visual changes over time.
B. Ensure that after each computer use action, a fresh screenshot is captured and passed to Claude before the next action is decided, so Claude always reasons from the current screen state.
  C. Pre-render all expected screen states as static images and include them in the system prompt so Claude can match observed states to known-good references.
  D. Switch from screenshot-based inputs to structured DOM extraction so Claude receives text-based context rather than visual context, eliminating stale-state ambiguity.
Why B is correct: The computer use tool requires Claude to observe the current state of the screen before deciding the next action. If actions are taken without first refreshing the screenshot, Claude operates on stale visual context and cannot reliably detect whether a prior action succeeded, a page loaded, or an element changed. The correct pattern is a tight observe-act-observe loop: capture a fresh screenshot after every action, pass it to Claude, then decide the next step. This is the core reliability principle for agentic computer use.
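The observe-act-observe loop reduces to a simple control structure. The screen states, action names, and `take_screenshot` helper below are all hypothetical stand-ins; the point is that every decision is made from a freshly captured observation.

```python
# Simulated sequence of GUI states the screen passes through as actions land.
screen_states = iter(["login_page", "dashboard", "report_form", "report_saved"])

def take_screenshot():
    """Stand-in for capturing a fresh screenshot after each action."""
    return next(screen_states)

# Hypothetical mapping from observed state to the next action to take.
plan = {"login_page": "click_login",
        "dashboard": "open_reports",
        "report_form": "click_save"}

state = take_screenshot()      # observe before the first action
actions = []
while state != "report_saved":
    action = plan[state]       # decide from the *current* screen state
    actions.append(action)     # act
    state = take_screenshot()  # observe again before deciding the next step
print(actions)
```

Skipping the re-observation step would leave the loop acting on a stale state, exactly the intermittent failure mode described in the scenario.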
A developer productivity team is embedding Claude Code into their internal IDE extension using the SDK. They need to...
  A. Use a programmatic hook in query() callbacks, because it supports in-process logic and its scoping to the main session is sufficient for subagent write protection.
B. Use a filesystem hook in settings.json, accepting that the hook logic must be expressed as a shell command, HTTP endpoint, MCP tool, prompt, or agent — rather than inline in-process code — because filesystem hooks fire in both the main agent and any subagents it spawns.
  C. Use a programmatic hook in query() callbacks and replicate it as a filesystem hook, so both the main session and subagents are covered without any tradeoffs.
  D. Use a filesystem hook with the "prompt" type, because LLM-evaluated prompts are the only hook mechanism capable of reaching subagents spawned during a session.
Why B is correct: The corpus establishes that filesystem hooks (defined in settings.json) fire in the main agent AND any subagents it spawns, making them the only hook type that can enforce constraints across the full execution tree. The tradeoff is that logic must be expressed as one of the supported command types (command, http, mcp_tool, prompt, or agent) rather than as inline in-process code. When subagent coverage is the critical constraint, the filesystem hook is the correct choice despite losing in-process integration.
A team is building a multi-agent research system using the Anthropic managed-agents API. Each sub-agent runs as a... Show
  A. Handle agent.message by logging tool invocation details, handle agent.tool_use by displaying text output to the user, and treat session.status_idle as an error condition requiring retry.
B. Handle agent.message by extracting and displaying text blocks from its content array, handle agent.tool_use by recording the tool name for observability, and break the stream loop upon receiving session.status_idle.
  C. Handle agent.message by extracting and displaying text blocks from its content array, treat agent.tool_use as an error because agents should not invoke tools autonomously, and poll the session endpoint for completion instead of relying on session.status_idle.
  D. Discard agent.message events as intermediate noise, handle agent.tool_use by extracting text output, and break the stream loop only when the stream connection closes rather than on session.status_idle.
Why B is correct: The corpus chunk explicitly shows that agent.message carries content blocks (text is extracted from the content array), agent.tool_use carries a name field indicating which tool the agent is using (suitable for logging/observability), and session.status_idle is the designated signal that the agent has finished — the loop breaks upon receiving it. This three-way distinction is the correct structured event-handling pattern for SSE streams from managed agent sessions.
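The three-way handling pattern can be sketched as follows; the event dict shapes (`type`, `content`, `name`) are simplified stand-ins for the actual SSE payloads, not the exact wire format:

```python
def handle_stream(events, display, log):
    """Consume a managed-agent event stream:
    - agent.message      -> display text blocks from the content array
    - agent.tool_use     -> record the tool name for observability
    - session.status_idle -> agent is finished; break the loop
    """
    for event in events:
        if event["type"] == "agent.message":
            for block in event.get("content", []):
                if block.get("type") == "text":
                    display(block["text"])
        elif event["type"] == "agent.tool_use":
            log(f"tool used: {event['name']}")
        elif event["type"] == "session.status_idle":
            break
```

Breaking on `session.status_idle` rather than on connection close is what lets the client exit promptly once the agent is done.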
A senior engineer is setting up an agent team in Claude Code to parallelize a large code generation task — spawning... Show
A. Switch all five teammates to Claude Sonnet, reduce the team to only the instances needed for the task, and write tightly scoped spawn prompts for each teammate.
  B. Increase the max_tokens limit per teammate session, disable CLAUDE.md loading for teammates, and run teammates sequentially instead of in parallel.
  C. Use the /usage command to cap spending per teammate, reduce the number of MCP servers registered globally, and switch to a cheaper model tier only for idle teammates.
  D. Keep all five teammates active but reduce their context windows manually, disable skill loading per teammate, and compress spawn prompts using a summarization pre-pass.
Why A is correct: The corpus identifies three primary levers for managing agent team costs: using Sonnet for teammates (capability/cost balance), keeping teams small (token usage is roughly proportional to team size), and writing focused spawn prompts (everything in the spawn prompt adds to context from the start). Choice A applies all three levers directly and correctly.
A team is building a Claude Code workflow for a large-scale application refactor. The workflow involves three... Show
  A. Subagents load their definitions from .claude/agents/ at startup, which is faster than parsing a large system prompt at runtime, reducing overall latency for each phase.
B. Specialized subagents allow each phase to carry only the domain-relevant instructions — such as rollback strategies for the database phase — without that knowledge becoming noise in the prompts for unrelated phases.
  C. Subagents run in parallel by default, so splitting the three phases across subagents guarantees all three complete simultaneously rather than sequentially.
  D. A single large system prompt exceeds Claude's context window when covering three domains, so subagents are the only architectural option that avoids truncation errors.
Why B is correct: The core principle is that each subagent can carry tailored instructions with specific expertise and constraints relevant only to its domain. Domain-specific knowledge — like SQL rollback strategies for a database migration subagent — becomes unnecessary noise if included in a general-purpose or multi-domain prompt. Specialization keeps each agent's context focused and high-signal.
A team is building a Customer Support Resolution Agent that uses a suite of MCP-connected skills, including one that... Show
  A. Use the verbose description, because more context about the tool's purpose and setup reduces the chance Claude will misuse it in a support workflow.
B. Use the concise description, because skill descriptions should assume Claude already understands common file formats and library conventions, reserving tokens for information Claude cannot infer.
  C. Use the verbose description, because MCP tool schemas require exhaustive parameter documentation to pass schema validation at registration time.
  D. Use the concise description, but only after confirming the model version being used has been trained on pdfplumber documentation specifically.
Why B is correct: The corpus explicitly contrasts a concise skill description (a few tokens) with a verbose one (~150 tokens) and identifies the concise version as correct because it "assumes Claude knows what PDFs are and how libraries work." The underlying principle is economy of context: skill descriptions should convey information Claude cannot already infer — invocation conditions, output shape, constraints — rather than re-explaining domain knowledge the model already possesses. Verbose descriptions waste context budget and add noise without improving accuracy.
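For illustration, a concise description in a skill's frontmatter might read as follows; the field names and wording are a hypothetical example, not taken from the question's corpus:

```yaml
name: pdf-table-extractor
description: Extracts tables from uploaded PDF invoices and returns rows as JSON. Use when the user provides a PDF containing tabular data.
```

Note what is absent: no explanation of what a PDF is, no tour of the parsing library's API — only the invocation conditions and output shape that Claude could not infer on its own.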
A data engineering team is building a structured extraction pipeline that spawns multiple parallel Claude agents —... Show
  A. Parallel sessions in Claude Code Desktop do not support branch prefixes, making it impossible to keep agent-created branches organized across large document batches.
B. Claude Code Desktop does not support multi-agent orchestration — agent teams are only available via the CLI or Agent SDK, meaning Desktop sessions cannot act as coordinated subagents in a pipeline.
  C. Git worktrees in Claude Code Desktop store all session data in a single shared directory by default, so parallel sessions will overwrite each other's intermediate extraction outputs.
  D. The auto-archive feature in Claude Code Desktop only triggers after a pull request merges or closes, which will leave extraction sessions open and consuming resources indefinitely.
Why B is correct: The corpus explicitly states that "agent teams: multi-agent orchestration is available via the CLI and Agent SDK, not in Desktop." A structured extraction pipeline that spawns multiple coordinated agents is a multi-agent orchestration pattern. No matter how useful parallel sessions are for human-driven parallel work, Claude Code Desktop cannot serve as the orchestration layer for a programmatic agent pipeline — this requires the CLI or Agent SDK.