When you type a prompt and hit Enter, Claude Code's Ink-based React UI captures the input through `BaseTextInput`. But there's a subtlety: keystrokes are captured *before* React even mounts. The `earlyInput` system buffers everything typed during the ~200ms startup window, so nothing is lost.
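The buffer-then-replay idea can be sketched in a few lines. This is an illustrative reconstruction, not Claude Code's actual implementation; the `EarlyInputBuffer` name and its API are invented. The essential move is that a raw stdin listener attaches on the very first tick, queues chunks until the UI mounts, then replays them in order:

```typescript
// Hypothetical sketch of early-input buffering. In the real app the
// `push` calls would come from `process.stdin.on("data", ...)` wired
// up before the React tree renders.
type KeyHandler = (chunk: string) => void;

class EarlyInputBuffer {
  private queue: string[] = [];
  private handler: KeyHandler | null = null;

  // Called for every stdin chunk, from the very first tick.
  push(chunk: string): void {
    if (this.handler) {
      this.handler(chunk); // UI is live: deliver directly
    } else {
      this.queue.push(chunk); // UI not mounted yet: buffer it
    }
  }

  // Called once the UI has mounted; replays anything typed during startup.
  attach(handler: KeyHandler): void {
    this.handler = handler;
    for (const chunk of this.queue) handler(chunk);
    this.queue = [];
  }
}
```

The key property is ordering: keystrokes buffered during the startup window are delivered before any keystroke that arrives after the handler attaches.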
Before your prompt reaches the model, Claude Code builds a rich context. It fires off *parallel* subprocesses to gather git status, branch info, and recent commits. Simultaneously, it loads every CLAUDE.md file it can find (project root, parent dirs, home directory) and any persistent memory files from `~/.claude/`. All of this runs concurrently — the `Promise.all` pattern is everywhere.
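The fan-out shape looks roughly like this. The task names and return types here are assumptions for illustration; in the real system each task would be a subprocess call (`git status`, `git log`) or a file read (CLAUDE.md, `~/.claude/` memory files). The point is the pattern: every context source is an independent async task, launched at once:

```typescript
// Illustrative only: gather many context sources concurrently.
// A failed source degrades to an empty string rather than failing
// the whole gather, since missing context shouldn't block the prompt.
type ContextTask = () => Promise<string>;

async function gatherContext(
  tasks: Record<string, ContextTask>,
): Promise<Record<string, string>> {
  const names = Object.keys(tasks);
  // Fire everything at once; one slow subprocess doesn't serialize the rest.
  const results = await Promise.all(
    names.map((name) => tasks[name]().catch(() => "")),
  );
  return Object.fromEntries(names.map((name, i) => [name, results[i]]));
}
```

With this shape, total latency is the slowest source rather than the sum of all of them, which is why the pattern shows up everywhere startup time matters.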
The system prompt isn't a single string — it's assembled from a dozen sources. Core identity instructions, tool-specific guidance, CLAUDE.md content, memory files, git context, date/time, environment info, and feature-flag-dependent sections all merge into one massive prompt. This is what shapes Claude's behavior for your specific project.
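A minimal sketch of section-based assembly, with invented section names: each source renders a block or nothing, and the non-empty blocks are concatenated into one prompt. Feature-flag-dependent sections fall out naturally by rendering an empty string:

```typescript
// Hypothetical prompt assembly: section names and shapes are assumptions,
// but the merge-many-sources-into-one-string structure matches the text.
type Section = { name: string; render: () => string };

function assembleSystemPrompt(sections: Section[]): string {
  return sections
    .map((s) => s.render().trim())
    .filter((text) => text.length > 0) // disabled or empty sections drop out
    .join("\n\n");
}
```

Because each section is an independent renderer, adding a new context source (a new memory file, a new flag-gated instruction block) is a local change rather than an edit to one giant template string.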
The assembled messages, system prompt, and tool definitions are sent to the Anthropic API as a streaming request. The `QueryEngine` handles retry logic, token counting, and cost tracking. The response streams back token by token. If the model decides to call a tool, the stream includes a `tool_use` content block — and that's where things get interesting.
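The client-side half of that loop can be sketched as a reducer over stream events. The event shapes below follow the Anthropic Messages streaming API (`content_block_start`, `content_block_delta` with `text_delta` or `input_json_delta` payloads); the reducer itself and its result shape are illustrative, not `QueryEngine`'s actual code. Text deltas accumulate into the visible reply, while a `tool_use` block start switches the client into collecting the tool's input as JSON fragments:

```typescript
// Minimal reducer over Anthropic-style stream events: accumulate text,
// and watch for a `tool_use` content block whose input arrives as
// partial-JSON deltas. (Sketch only; real handling is incremental.)
type StreamEvent =
  | { type: "content_block_start"; content_block: { type: "text" } | { type: "tool_use"; name: string } }
  | { type: "content_block_delta"; delta: { type: "text_delta"; text: string } | { type: "input_json_delta"; partial_json: string } }
  | { type: "message_stop" };

function reduceStream(events: StreamEvent[]): { text: string; toolCall: { name: string; inputJson: string } | null } {
  let text = "";
  let toolCall: { name: string; inputJson: string } | null = null;
  for (const ev of events) {
    if (ev.type === "content_block_start" && ev.content_block.type === "tool_use") {
      toolCall = { name: ev.content_block.name, inputJson: "" };
    } else if (ev.type === "content_block_delta") {
      if (ev.delta.type === "text_delta") text += ev.delta.text;
      else if (toolCall) toolCall.inputJson += ev.delta.partial_json; // tool input streams as JSON fragments
    }
  }
  return { text, toolCall };
}
```

A real client processes events as they arrive rather than after the fact, but the branching is the same: a completed `tool_use` block is what hands control from the model back to the tool-execution machinery.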
The model's response determines what happens next. Different prompts take different paths through the system. Choose a scenario: