# Prompt 09: Get the QueryEngine (Core LLM Loop) Functional

## Context

You are working in `/workspaces/claude-code`. The QueryEngine (`src/QueryEngine.ts`, ~46K lines) is the heart of the CLI — it:
- Sends messages to the Anthropic API (streaming)
- Processes streaming responses (text, thinking, tool_use blocks)
- Executes tools when the LLM requests them (tool loop)
- Handles retries, rate limits, and errors
- Tracks token usage and costs
- Manages conversation context (message history)
This is the most complex single file. The goal is to get it functional enough for a basic conversation loop.
## Key Dependencies

The QueryEngine depends on:
- `src/services/api/client.ts` — Anthropic SDK client
- `src/services/api/claude.ts` — Message API wrapper
- `src/Tool.ts` — Tool definitions
- `src/tools.ts` — Tool registry
- `src/context.ts` — System context
- `src/constants/prompts.ts` — System prompt
- Token counting utilities
- Streaming event handlers
## Task

### Part A: Map the QueryEngine architecture

Read `src/QueryEngine.ts` and create a structural map:
- Class structure — What classes/interfaces are defined?
- Public API — What method starts a query? What does it return?
- Message flow — How does a user message become an API call?
- Tool loop — How are tool calls detected, executed, and fed back?
- Streaming — How are streaming events processed?
- Retry logic — How are API errors handled?
### Part B: Trace the API call path

Follow the chain from QueryEngine → API client:
- Read `src/services/api/client.ts` — how is the Anthropic SDK client created?
- Read `src/services/api/claude.ts` — what's the message creation wrapper?
- What parameters are passed? (model, max_tokens, system prompt, tools, messages)
- How is streaming handled? (SSE? SDK streaming?)
### Part C: Identify and fix blockers
The QueryEngine will have dependencies on many subsystems. For each dependency:
- If it's essential (API client, tool execution) → make sure it works
- If it's optional (analytics, telemetry, policy limits) → stub or skip it
Common blockers:
- Missing API configuration → needs `ANTHROPIC_API_KEY` (Prompt 05)
- Policy limits service → may block execution, needs stubbing
- GrowthBook/analytics → needs stubbing or graceful failure
- Remote managed settings → needs stubbing
- Bootstrap data fetch → may need to be optional
### Part D: Create a minimal conversation test

Create `scripts/test-query.ts` that exercises the QueryEngine directly:
```ts
// scripts/test-query.ts
// Minimal test of the QueryEngine — single query, no REPL
// Usage: ANTHROPIC_API_KEY=sk-ant-... bun scripts/test-query.ts "What is 2+2?"

import './src/shims/preload.js'

async function main() {
  const query = process.argv[2] || 'What is 2+2?'

  // Import and set up minimal dependencies.
  // You'll need to figure out the exact imports and initialization
  // by reading src/QueryEngine.ts, src/query.ts, and src/replLauncher.tsx.

  // The basic flow should be:
  // 1. Create API client
  // 2. Build system prompt
  // 3. Create QueryEngine instance
  // 4. Send a query
  // 5. Print the response

  console.log(`Query: ${query}`)
  console.log('---')

  // TODO: Wire up the actual QueryEngine call.
  // This is the hardest part — document what you need to do.
}

main().catch(err => {
  console.error('Query test failed:', err)
  process.exit(1)
})
```
### Part E: Handle the streaming response
The QueryEngine likely uses the Anthropic SDK's streaming interface. Make sure:
- Text content is printed to stdout as it streams
- Thinking blocks are handled (displayed or hidden based on config)
- Tool use blocks trigger tool execution
- The tool loop feeds results back and continues
### Part F: Document what's still broken
After getting a basic query working, document:
- Which features work
- Which features are stubbed
- What would need to happen for full functionality
## Verification

- `ANTHROPIC_API_KEY=sk-ant-... bun scripts/test-query.ts "What is 2+2?"` gets a response
- Streaming output appears in real-time
- No unhandled crashes (graceful error messages are fine)
- Architecture is documented