claude-code

This commit is contained in:
ashutoshpythoncs@gmail.com
2026-03-31 18:58:05 +05:30
parent a2a44a5841
commit b564857c0b
2148 changed files with 564518 additions and 2 deletions

BIN
.DS_Store vendored Normal file

Binary file not shown.

72
CONTRIBUTING.md Normal file

@@ -0,0 +1,72 @@
# Contributing
Thanks for your interest in contributing to this repository!
## What This Is
This repo archives the **leaked source code** of Anthropic's Claude Code CLI. Contributions here are about **documentation, tooling, and exploration aids** — not modifying the original Claude Code source.
## What You Can Contribute
- **Documentation** — Improve or expand the [docs/](docs/) directory
- **MCP Server** — Enhance the exploration MCP server in [mcp-server/](mcp-server/)
- **Analysis** — Write-ups, architecture diagrams, or annotated walkthroughs
- **Tooling** — Scripts or tools that aid in studying the source code
- **Bug fixes** — Fix issues in the MCP server or supporting infrastructure
## What Not to Change
- **`src/` directory** — This is the original leaked source, preserved as-is. Don't modify it.
- The [`backup` branch](https://github.com/codeaashu/claude-code/tree/backup) contains the unmodified original.
## Getting Started
### Prerequisites
- **Node.js** 18+ (for the MCP server)
- **Git**
### Setup
```bash
git clone https://github.com/codeaashu/claude-code.git
cd claude-code
```
### MCP Server Development
```bash
cd mcp-server
npm install
npm run dev # Run with tsx (no build step)
npm run build # Compile to dist/
```
### Linting & Type Checking
```bash
# From the repo root — checks the leaked src/
npm run lint # Biome lint
npm run typecheck # TypeScript type check
```
## Code Style
For any new code (MCP server, tooling, scripts):
- TypeScript with strict mode
- ES modules
- 2-space indentation (tabs for `src/` to match Biome config)
- Descriptive variable names, minimal comments
## Submitting Changes
1. Fork the repository
2. Create a feature branch (`git checkout -b my-feature`)
3. Make your changes
4. Commit with a clear message
5. Push and open a pull request
## Questions?
Open an issue or reach out to [nichxbt](https://www.x.com/nichxbt).

45
Dockerfile Normal file

@@ -0,0 +1,45 @@
# ─────────────────────────────────────────────────────────────
# Claude Code CLI — Production Container
# ─────────────────────────────────────────────────────────────
# Multi-stage build: builds a production bundle, then copies
# only the output into a minimal runtime image.
#
# Usage:
# docker build -t claude-code .
# docker run --rm -e ANTHROPIC_API_KEY=sk-... claude-code -p "hello"
# ─────────────────────────────────────────────────────────────
# Stage 1: Build
FROM oven/bun:1-alpine AS builder
WORKDIR /app
# Copy manifests first for layer caching
COPY package.json bun.lock* ./
# Install all dependencies (including devDependencies for build)
RUN bun install --frozen-lockfile || bun install
# Copy source
COPY . .
# Build production bundle
RUN bun run build:prod
# Stage 2: Runtime
FROM oven/bun:1-alpine
WORKDIR /app
# Install OS-level runtime dependencies
RUN apk add --no-cache git ripgrep
# Copy only the bundled output from the builder
COPY --from=builder /app/dist/cli.mjs /app/cli.mjs
# Make it executable
RUN chmod +x /app/cli.mjs
ENTRYPOINT ["bun", "/app/cli.mjs"]

11
LICENSE Normal file

@@ -0,0 +1,11 @@
UNLICENSED — NOT FOR REDISTRIBUTION
This repository contains leaked proprietary source code belonging to Anthropic, PBC.
It is published here strictly for educational and research purposes.
The original software is NOT open-source. Anthropic has not released this code
under any permissive or copyleft license. Use at your own legal risk.
For the official Claude Code CLI, see: https://docs.anthropic.com/en/docs/claude-code

449
README.md

@@ -1,2 +1,447 @@
# claude-code
Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflows - all through natural language commands.
<div align="center">
# Claude Code — Leaked Source
**The full source code of Anthropic's Claude Code CLI, leaked on March 31, 2026**
[![TypeScript](https://img.shields.io/badge/TypeScript-512K%2B_lines-3178C6?logo=typescript&logoColor=white)](#tech-stack)
[![Bun](https://img.shields.io/badge/Runtime-Bun-f472b6?logo=bun&logoColor=white)](#tech-stack)
[![React + Ink](https://img.shields.io/badge/UI-React_%2B_Ink-61DAFB?logo=react&logoColor=black)](#tech-stack)
[![Files](https://img.shields.io/badge/~1,900_files-source_only-grey)](#directory-structure)
[![MCP Server](https://img.shields.io/badge/MCP-Explorer_Server-blueviolet)](#-explore-with-mcp-server)
[![npm](https://img.shields.io/npm/v/claude-code-explorer-mcp?label=npm&color=cb3837&logo=npm)](https://www.npmjs.com/package/claude-code-explorer-mcp)
> The original unmodified leaked source is preserved in the [`backup` branch](https://github.com/codeaashu/claude-code/tree/backup).
</div>
---
## Table of Contents
- [How It Leaked](#how-it-leaked)
- [What Is Claude Code?](#what-is-claude-code)
- [Documentation](#-documentation)
- [Explore with MCP Server](#-explore-with-mcp-server)
- [Directory Structure](#directory-structure)
- [Architecture](#architecture)
- [Tool System](#1-tool-system)
- [Command System](#2-command-system)
- [Service Layer](#3-service-layer)
- [Bridge System](#4-bridge-system)
- [Permission System](#5-permission-system)
- [Feature Flags](#6-feature-flags)
- [Key Files](#key-files)
- [Tech Stack](#tech-stack)
- [Design Patterns](#design-patterns)
- [GitPretty Setup](#gitpretty-setup)
- [Contributing](#contributing)
- [Disclaimer](#disclaimer)
---
## How It Leaked
[Chaofan Shou (@Fried_rice)](https://x.com/Fried_rice) discovered that the published npm package for Claude Code included a `.map` file referencing the full, unobfuscated TypeScript source — downloadable as a zip from Anthropic's R2 storage bucket.
> **"Claude code source code has been leaked via a map file in their npm registry!"**
>
> — [@Fried_rice, March 31, 2026](https://x.com/Fried_rice/status/2038894956459290963)
---
## What Is Claude Code?
Claude Code is Anthropic's official CLI tool for interacting with Claude directly from the terminal — editing files, running commands, searching codebases, managing git workflows, and more. This repository contains the leaked `src/` directory.
| | |
|---|---|
| **Leaked** | 2026-03-31 |
| **Language** | TypeScript (strict) |
| **Runtime** | [Bun](https://bun.sh) |
| **Terminal UI** | [React](https://react.dev) + [Ink](https://github.com/vadimdemedes/ink) |
| **Scale** | ~1,900 files · 512,000+ lines of code |
---
## 📚 Documentation
For in-depth guides, see the [`docs/`](docs/) directory:
| Guide | Description |
|-------|-------------|
| **[Architecture](docs/architecture.md)** | Core pipeline, startup sequence, state management, rendering, data flow |
| **[Tools Reference](docs/tools.md)** | Complete catalog of all ~40 agent tools with categories and permission model |
| **[Commands Reference](docs/commands.md)** | All ~85 slash commands organized by category |
| **[Subsystems Guide](docs/subsystems.md)** | Deep dives into Bridge, MCP, Permissions, Plugins, Skills, Tasks, Memory, Voice |
| **[Exploration Guide](docs/exploration-guide.md)** | How to navigate the codebase — study paths, grep patterns, key files |
Also see: [CONTRIBUTING.md](CONTRIBUTING.md) · [MCP Server README](mcp-server/README.md)
---
## 🔍 Explore with MCP Server
This repo ships an [MCP server](https://modelcontextprotocol.io/) that lets any MCP-compatible client (Claude Code, Claude Desktop, VS Code Copilot, Cursor) explore the full source interactively.
### Install from npm
The MCP server is published as [`claude-code-explorer-mcp`](https://www.npmjs.com/package/claude-code-explorer-mcp) on npm — no need to clone the repo:
```bash
# Claude Code
claude mcp add claude-code-explorer -- npx -y claude-code-explorer-mcp
```
### One-liner setup (from source)
```bash
git clone https://github.com/codeaashu/claude-code.git ~/claude-code \
&& cd ~/claude-code/mcp-server \
&& npm install && npm run build \
&& claude mcp add claude-code-explorer -- node ~/claude-code/mcp-server/dist/index.js
```
<details>
<summary><strong>Step-by-step setup</strong></summary>
```bash
# 1. Clone the repo
git clone https://github.com/codeaashu/claude-code.git
cd claude-code/mcp-server
# 2. Install & build
npm install && npm run build
# 3. Register with Claude Code
claude mcp add claude-code-explorer -- node /absolute/path/to/claude-code/mcp-server/dist/index.js
```
Replace `/absolute/path/to/claude-code` with your actual clone path.
</details>
<details>
<summary><strong>VS Code / Cursor / Claude Desktop config</strong></summary>
**VS Code** — add to `.vscode/mcp.json`:
```json
{
"servers": {
"claude-code-explorer": {
"type": "stdio",
"command": "node",
"args": ["${workspaceFolder}/mcp-server/dist/index.js"],
"env": { "CLAUDE_CODE_SRC_ROOT": "${workspaceFolder}/src" }
}
}
}
```
**Claude Desktop** — add to your config file:
```json
{
"mcpServers": {
"claude-code-explorer": {
"command": "node",
"args": ["/absolute/path/to/claude-code/mcp-server/dist/index.js"],
"env": { "CLAUDE_CODE_SRC_ROOT": "/absolute/path/to/claude-code/src" }
}
}
}
```
**Cursor** — add to `~/.cursor/mcp.json` (same format as Claude Desktop).
</details>
### Available tools & prompts
| Tool | Description |
|------|-------------|
| `list_tools` | List all ~40 agent tools with source files |
| `list_commands` | List all ~50 slash commands with source files |
| `get_tool_source` | Read full source of any tool (e.g. BashTool, FileEditTool) |
| `get_command_source` | Read source of any slash command (e.g. review, mcp) |
| `read_source_file` | Read any file from `src/` by path |
| `search_source` | Grep across the entire source tree |
| `list_directory` | Browse `src/` directories |
| `get_architecture` | High-level architecture overview |
| Prompt | Description |
|--------|-------------|
| `explain_tool` | Deep-dive into how a specific tool works |
| `explain_command` | Understand a slash command's implementation |
| `architecture_overview` | Guided tour of the full architecture |
| `how_does_it_work` | Explain any subsystem (permissions, MCP, bridge, etc.) |
| `compare_tools` | Side-by-side comparison of two tools |
**Try asking:** *"How does the BashTool work?"* · *"Search for where permissions are checked"* · *"Show me the /review command source"*
### Custom source path / Remove
```bash
# Custom source location
claude mcp add claude-code-explorer -e CLAUDE_CODE_SRC_ROOT=/path/to/src -- node /path/to/mcp-server/dist/index.js
# Remove
claude mcp remove claude-code-explorer
```
---
## Directory Structure
```
src/
├── main.tsx # Entrypoint — Commander.js CLI parser + React/Ink renderer
├── QueryEngine.ts # Core LLM API caller (~46K lines)
├── Tool.ts # Tool type definitions (~29K lines)
├── commands.ts # Command registry (~25K lines)
├── tools.ts # Tool registry
├── context.ts # System/user context collection
├── cost-tracker.ts # Token cost tracking
├── tools/ # Agent tool implementations (~40)
├── commands/ # Slash command implementations (~50)
├── components/ # Ink UI components (~140)
├── services/ # External service integrations
├── hooks/ # React hooks (incl. permission checks)
├── types/ # TypeScript type definitions
├── utils/ # Utility functions
├── screens/ # Full-screen UIs (Doctor, REPL, Resume)
├── bridge/ # IDE integration (VS Code, JetBrains)
├── coordinator/ # Multi-agent orchestration
├── plugins/ # Plugin system
├── skills/ # Skill system
├── server/ # Server mode
├── remote/ # Remote sessions
├── memdir/ # Persistent memory directory
├── tasks/ # Task management
├── state/ # State management
├── voice/ # Voice input
├── vim/ # Vim mode
├── keybindings/ # Keybinding configuration
├── schemas/ # Config schemas (Zod)
├── migrations/ # Config migrations
├── entrypoints/ # Initialization logic
├── query/ # Query pipeline
├── ink/ # Ink renderer wrapper
├── buddy/ # Companion sprite (Easter egg 🐣)
├── native-ts/ # Native TypeScript utils
├── outputStyles/ # Output styling
└── upstreamproxy/ # Proxy configuration
```
---
## Architecture
### 1. Tool System
> `src/tools/` — Every tool Claude can invoke is a self-contained module with its own input schema, permission model, and execution logic.
| Tool | Description |
|---|---|
| **File I/O** | |
| `FileReadTool` | Read files (images, PDFs, notebooks) |
| `FileWriteTool` | Create / overwrite files |
| `FileEditTool` | Partial modification (string replacement) |
| `NotebookEditTool` | Jupyter notebook editing |
| **Search** | |
| `GlobTool` | File pattern matching |
| `GrepTool` | ripgrep-based content search |
| `WebSearchTool` | Web search |
| `WebFetchTool` | Fetch URL content |
| **Execution** | |
| `BashTool` | Shell command execution |
| `SkillTool` | Skill execution |
| `MCPTool` | MCP server tool invocation |
| `LSPTool` | Language Server Protocol integration |
| **Agents & Teams** | |
| `AgentTool` | Sub-agent spawning |
| `SendMessageTool` | Inter-agent messaging |
| `TeamCreateTool` / `TeamDeleteTool` | Team management |
| `TaskCreateTool` / `TaskUpdateTool` | Task management |
| **Mode & State** | |
| `EnterPlanModeTool` / `ExitPlanModeTool` | Plan mode toggle |
| `EnterWorktreeTool` / `ExitWorktreeTool` | Git worktree isolation |
| `ToolSearchTool` | Deferred tool discovery |
| `SleepTool` | Proactive mode wait |
| `CronCreateTool` | Scheduled triggers |
| `RemoteTriggerTool` | Remote trigger |
| `SyntheticOutputTool` | Structured output generation |
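The tools in the table above all share one module shape. A minimal sketch of that shape, using the `buildTool` factory and callback names that appear in the leaked `Tool.ts` — the stand-in implementation below is illustrative, not Anthropic's code:

```typescript
// Simplified stand-in for the tool-module contract. The real buildTool
// accepts a Zod inputSchema and several rendering callbacks; this sketch
// keeps only the core pieces: validation, permissions, and execution.
interface ToolSpec<In, Out> {
  name: string
  description: string
  validate(input: unknown): In // stands in for the Zod inputSchema
  isReadOnly(input: In): boolean
  checkPermissions(input: In): { granted: boolean; reason?: string }
  call(input: In): Promise<{ data: Out }>
}

function buildTool<In, Out>(spec: ToolSpec<In, Out>): ToolSpec<In, Out> {
  return spec
}

// A toy read-only tool in that shape (hypothetical, for illustration only):
const EchoTool = buildTool({
  name: 'EchoTool',
  description: 'Echoes its input back',
  validate(input: unknown) {
    if (typeof input !== 'object' || input === null || typeof (input as any).text !== 'string')
      throw new Error('invalid input')
    return input as { text: string }
  },
  isReadOnly: () => true,
  checkPermissions: () => ({ granted: true }),
  async call(input) {
    return { data: input.text }
  },
})
```

Each real tool additionally ships UI renderers (`renderToolUseMessage`, `renderToolResultMessage`) and a system-prompt fragment, per the conventions documented in `Skill.md`.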
### 2. Command System
> `src/commands/` — User-facing slash commands invoked with `/` in the REPL.
| Command | Description | | Command | Description |
|---|---|---|---|---|
| `/commit` | Git commit | | `/memory` | Persistent memory |
| `/review` | Code review | | `/skills` | Skill management |
| `/compact` | Context compression | | `/tasks` | Task management |
| `/mcp` | MCP server management | | `/vim` | Vim mode toggle |
| `/config` | Settings | | `/diff` | View changes |
| `/doctor` | Environment diagnostics | | `/cost` | Check usage cost |
| `/login` / `/logout` | Auth | | `/theme` | Change theme |
| `/context` | Context visualization | | `/share` | Share session |
| `/pr_comments` | PR comments | | `/resume` | Restore session |
| `/desktop` | Desktop handoff | | `/mobile` | Mobile handoff |
### 3. Service Layer
> `src/services/` — External integrations and core infrastructure.
| Service | Description |
|---|---|
| `api/` | Anthropic API client, file API, bootstrap |
| `mcp/` | Model Context Protocol connection & management |
| `oauth/` | OAuth 2.0 authentication |
| `lsp/` | Language Server Protocol manager |
| `analytics/` | GrowthBook feature flags & analytics |
| `plugins/` | Plugin loader |
| `compact/` | Conversation context compression |
| `extractMemories/` | Automatic memory extraction |
| `teamMemorySync/` | Team memory synchronization |
| `tokenEstimation.ts` | Token count estimation |
| `policyLimits/` | Organization policy limits |
| `remoteManagedSettings/` | Remote managed settings |
### 4. Bridge System
> `src/bridge/` — Bidirectional communication layer connecting IDE extensions (VS Code, JetBrains) with the CLI.
Key files: `bridgeMain.ts` (main loop) · `bridgeMessaging.ts` (protocol) · `bridgePermissionCallbacks.ts` (permission callbacks) · `replBridge.ts` (REPL session) · `jwtUtils.ts` (JWT auth) · `sessionRunner.ts` (session execution)
### 5. Permission System
> `src/hooks/toolPermission/` — Checks permissions on every tool invocation.
Prompts the user for approval/denial or auto-resolves based on the configured permission mode: `default`, `plan`, `bypassPermissions`, `auto`, etc.
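Beyond modes, permissions can be scoped with wildcard rules such as `Bash(git *)` or `FileEdit(/src/*)`. The rule syntax comes from the leaked source's conventions; the matcher below is a simplified illustrative stand-in, not the actual implementation:

```typescript
// Sketch: match a rule like "Bash(git *)" against a tool invocation.
// '*' in the parenthesized pattern matches any substring.
function escapeRegExp(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
}

function ruleMatches(rule: string, toolName: string, input: string): boolean {
  const m = rule.match(/^(\w+)\((.*)\)$/)
  if (!m) return false
  const [, ruleTool, pattern] = m
  if (ruleTool !== toolName) return false
  // Translate the simple glob into a regex anchored at both ends.
  const re = new RegExp('^' + pattern.split('*').map(escapeRegExp).join('.*') + '$')
  return re.test(input)
}
```

So `ruleMatches('Bash(git *)', 'Bash', 'git status')` is true, while the same rule rejects `rm -rf /` or any non-Bash tool.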
### 6. Feature Flags
Dead code elimination at build time via Bun's `bun:bundle`:
```typescript
import { feature } from 'bun:bundle'
const voiceCommand = feature('VOICE_MODE')
? require('./commands/voice/index.js').default
: null
```
Notable flags: `PROACTIVE` · `KAIROS` · `BRIDGE_MODE` · `DAEMON` · `VOICE_MODE` · `AGENT_TRIGGERS` · `MONITOR_TOOL`
---
## Key Files
| File | Lines | Purpose |
|------|------:|---------|
| `QueryEngine.ts` | ~46K | Core LLM API engine — streaming, tool loops, thinking mode, retries, token counting |
| `Tool.ts` | ~29K | Base types/interfaces for all tools — input schemas, permissions, progress state |
| `commands.ts` | ~25K | Command registration & execution with conditional per-environment imports |
| `main.tsx` | — | CLI parser + React/Ink renderer; parallelizes MDM, keychain, and GrowthBook on startup |
---
## Tech Stack
| Category | Technology |
|---|---|
| Runtime | [Bun](https://bun.sh) |
| Language | TypeScript (strict) |
| Terminal UI | [React](https://react.dev) + [Ink](https://github.com/vadimdemedes/ink) |
| CLI Parsing | [Commander.js](https://github.com/tj/commander.js) (extra-typings) |
| Schema Validation | [Zod v4](https://zod.dev) |
| Code Search | [ripgrep](https://github.com/BurntSushi/ripgrep) (via GrepTool) |
| Protocols | [MCP SDK](https://modelcontextprotocol.io) · LSP |
| API | [Anthropic SDK](https://docs.anthropic.com) |
| Telemetry | OpenTelemetry + gRPC |
| Feature Flags | GrowthBook |
| Auth | OAuth 2.0 · JWT · macOS Keychain |
---
## Design Patterns
<details>
<summary><strong>Parallel Prefetch</strong> — Startup optimization</summary>
MDM settings, keychain reads, and API preconnect fire in parallel as side-effects before heavy module evaluation:
```typescript
// main.tsx
startMdmRawRead()
startKeychainPrefetch()
```
</details>
<details>
<summary><strong>Lazy Loading</strong> — Deferred heavy modules</summary>
OpenTelemetry (~400KB) and gRPC (~700KB) are loaded via dynamic `import()` only when needed.
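A minimal sketch of the pattern — memoize the dynamic `import()` so the heavy module is fetched at most once, and only on first use (`node:zlib` stands in here for a genuinely heavy dependency like OpenTelemetry):

```typescript
// The module is not evaluated at startup; the first call triggers the
// import, and every later call reuses the same in-flight promise.
let heavyModule: Promise<unknown> | null = null

function loadHeavyModule(): Promise<unknown> {
  heavyModule ??= import('node:zlib')
  return heavyModule
}
```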
</details>
<details>
<summary><strong>Agent Swarms</strong> — Multi-agent orchestration</summary>
Sub-agents spawn via `AgentTool`, with `coordinator/` handling orchestration. `TeamCreateTool` enables team-level parallel work.
</details>
<details>
<summary><strong>Skill System</strong> — Reusable workflows</summary>
Defined in `skills/` and executed through `SkillTool`. Users can add custom skills.
</details>
<details>
<summary><strong>Plugin Architecture</strong> — Extensibility</summary>
Built-in and third-party plugins loaded through the `plugins/` subsystem.
</details>
---
## GitPretty Setup
<details>
<summary>Show per-file emoji commit messages in GitHub's file UI</summary>
```bash
# Apply emoji commits
bash ./gitpretty-apply.sh .
# Optional: install hooks for future commits
bash ./gitpretty-apply.sh . --hooks
# Push as usual
git push origin main
```
</details>
---
## Contributing
Contributions to documentation, the MCP server, and exploration tooling are welcome. See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
> **Note:** The `src/` directory is the original leaked source and should not be modified.
---
## Disclaimer
This repository archives source code leaked from Anthropic's npm registry on **2026-03-31**. All original source code is the property of [Anthropic](https://www.anthropic.com). This is not an official release and is not licensed for redistribution. Contact [nichxbt](https://www.x.com/nichxbt) with any questions or concerns.

220
Skill.md Normal file

@@ -0,0 +1,220 @@
---
name: claude-code-skill
description: Development conventions and architecture guide for the Claude Code CLI repository.
---
# Claude Code — Repository Skill
## Project Overview
Claude Code is Anthropic's CLI tool for interacting with Claude from the terminal. It supports file editing, shell commands, git workflows, code review, multi-agent coordination, IDE integration (VS Code, JetBrains), and Model Context Protocol (MCP).
**Codebase:** ~1,900 files, 512,000+ lines of TypeScript under `src/`.
## Tech Stack
| Component | Technology |
|------------------|------------------------------------------------|
| Language | TypeScript (strict mode, ES modules) |
| Runtime | Bun (JSX support, `bun:bundle` feature flags) |
| Terminal UI | React + Ink (React for CLI) |
| CLI Parser | Commander.js (`@commander-js/extra-typings`) |
| API Client | `@anthropic-ai/sdk` |
| Validation | Zod v4 |
| Linter/Formatter | Biome |
| Analytics | GrowthBook (feature flags & A/B testing) |
| Protocol | Model Context Protocol (MCP) |
## Architecture
### Directory Map (`src/`)
| Directory | Purpose |
|------------------|-----------------------------------------------------------------|
| `commands/` | ~50 slash commands (`/commit`, `/review`, `/config`, etc.) |
| `tools/` | ~40 agent tools (Bash, FileRead, FileWrite, Glob, Grep, etc.) |
| `components/` | ~140 Ink/React UI components for terminal rendering |
| `services/` | External integrations (API, OAuth, MCP, LSP, analytics, plugins)|
| `bridge/` | Bidirectional IDE communication layer |
| `state/` | React context + custom store (AppState) |
| `hooks/` | React hooks (permissions, keybindings, commands, settings) |
| `types/` | TypeScript type definitions |
| `utils/` | Utilities (shell, file ops, permissions, config, git) |
| `screens/` | Full-screen UIs (Doctor, REPL, Resume, Compact) |
| `skills/` | Bundled skills + skill loader system |
| `plugins/` | Plugin system (marketplace + bundled plugins) |
| `coordinator/` | Multi-agent coordination & supervisor logic |
| `tasks/` | Task management (shell tasks, agent tasks, teammates) |
| `context/` | React context providers (notifications, stats, FPS) |
| `memdir/` | Persistent memory system (CLAUDE.md, user/project memory) |
| `entrypoints/` | Initialization logic, Agent SDK, MCP entry |
| `voice/` | Voice input/output (STT, keyterms) |
| `vim/` | Vim mode keybinding support |
| `schemas/` | Zod configuration schemas |
| `keybindings/` | Keybinding configuration & resolver |
| `migrations/` | Config migrations between versions |
| `outputStyles/` | Output formatting & theming |
| `query/` | Query pipeline & processing |
| `server/` | Server/daemon mode |
| `remote/` | Remote session handling |
### Key Files
| File | Role |
|---------------------|-----------------------------------------------------|
| `src/main.tsx` | CLI entry point (Commander parser, startup profiling)|
| `src/QueryEngine.ts`| Core LLM API caller (streaming, tool-call loops) |
| `src/Tool.ts` | Tool type definitions & `buildTool` factory |
| `src/tools.ts` | Tool registry & presets |
| `src/commands.ts` | Command registry |
| `src/context.ts` | System/user context collection (git status, memory) |
| `src/cost-tracker.ts`| Token cost tracking |
### Entry Points & Initialization Sequence
1. `src/main.tsx` — Commander CLI parser, startup profiling
2. `src/entrypoints/init.ts` — Config, telemetry, OAuth, MDM
3. `src/entrypoints/cli.tsx` — CLI session orchestration
4. `src/entrypoints/mcp.ts` — MCP server mode
5. `src/entrypoints/sdk/` — Agent SDK (programmatic API)
6. `src/replLauncher.tsx` — REPL session launcher
Startup performs parallel initialization: MDM policy reads, Keychain prefetch, feature flag checks, then core init.
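The parallel startup described above can be sketched as follows — the three independent reads fire immediately and are awaited together before core init runs. Function names here are hypothetical placeholders, not the real identifiers:

```typescript
// Stubs standing in for the real MDM / Keychain / feature-flag reads.
const readMdmPolicy = async () => 'policy'
const prefetchKeychain = async () => 'creds'
const checkFeatureFlags = async () => ['VOICE_MODE']
const coreInit = async (p: string, c: string, f: string[]) => ({ p, c, f })

async function startup() {
  // Kick off all three reads without awaiting, so they run concurrently...
  const mdm = readMdmPolicy()
  const keychain = prefetchKeychain()
  const flags = checkFeatureFlags()
  // ...then gate core init on all of them completing.
  const [policy, creds, features] = await Promise.all([mdm, keychain, flags])
  return coreInit(policy, creds, features)
}
```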
## Patterns & Conventions
### Tool Definition
Each tool lives in `src/tools/{ToolName}/` and uses `buildTool`:
```typescript
export const MyTool = buildTool({
name: 'MyTool',
aliases: ['my_tool'],
description: 'What this tool does',
inputSchema: z.object({
param: z.string(),
}),
async call(args, context, canUseTool, parentMessage, onProgress) {
// Execute and return { data: result, newMessages?: [...] }
},
async checkPermissions(input, context) { /* Permission checks */ },
isConcurrencySafe(input) { /* Can run in parallel? */ },
isReadOnly(input) { /* Non-destructive? */ },
prompt(options) { /* System prompt injection */ },
renderToolUseMessage(input, options) { /* UI for invocation */ },
renderToolResultMessage(content, progressMessages, options) { /* UI for result */ },
})
```
**Directory structure per tool:** `{ToolName}.ts` or `.tsx` (main), `UI.tsx` (rendering), `prompt.ts` (system prompt), plus utility files.
### Command Definition
Commands live in `src/commands/` and follow three types:
- **PromptCommand** — Sends a formatted prompt with injected tools (most commands)
- **LocalCommand** — Runs in-process, returns text
- **LocalJSXCommand** — Runs in-process, returns React JSX
```typescript
const command = {
type: 'prompt',
name: 'my-command',
description: 'What this command does',
progressMessage: 'working...',
allowedTools: ['Bash(git *)', 'FileRead(*)'],
source: 'builtin',
async getPromptForCommand(args, context) {
return [{ type: 'text', text: '...' }]
},
} satisfies Command
```
Commands are registered in `src/commands.ts` and invoked via `/command-name` in the REPL.
### Component Structure
- Functional React components with Ink primitives (`Box`, `Text`, `useInput()`)
- Styled with Chalk for terminal colors
- React Compiler for optimized re-renders
- Design system primitives in `src/components/design-system/`
### State Management
- `AppState` via React context + custom store (`src/state/AppStateStore.ts`)
- Mutable state object passed to tool contexts
- Selector functions for derived state
- Change observers in `src/state/onChangeAppState.ts`
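A compact sketch of that store pattern — state object, selector functions, and change observers. The shapes are hypothetical; the real store lives in `src/state/AppStateStore.ts`:

```typescript
// Minimal store with subscribe/notify, in the spirit of AppStateStore.
type AppState = { todos: string[]; verbose: boolean }
type Listener = (next: AppState) => void

function createStore(initial: AppState) {
  let state = initial
  const listeners = new Set<Listener>()
  return {
    getState: () => state,
    setState(patch: Partial<AppState>) {
      state = { ...state, ...patch }
      listeners.forEach(l => l(state)) // change observers, as in onChangeAppState.ts
    },
    subscribe(l: Listener) {
      listeners.add(l)
      return () => { listeners.delete(l) }
    },
  }
}

// Selector function for derived state:
const selectTodoCount = (s: AppState) => s.todos.length
```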
### Permission System
- **Modes:** `default` (prompt per operation), `plan` (show plan, ask once), `bypassPermissions` (auto-approve), `auto` (ML classifier)
- **Rules:** Wildcard patterns — `Bash(git *)`, `FileEdit(/src/*)`
- Tools implement `checkPermissions()` returning `{ granted: boolean, reason?, prompt? }`
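A sketch of how a `checkPermissions()` result could vary by mode — the mode names and the `{ granted, reason?, prompt? }` result shape come from the source above, but the decision logic here is an assumption for illustration:

```typescript
type PermissionMode = 'default' | 'plan' | 'bypassPermissions' | 'auto'
type PermissionResult = { granted: boolean; reason?: string; prompt?: string }

function checkPermissions(mode: PermissionMode, isReadOnly: boolean): PermissionResult {
  // Assumption: read-only operations are always allowed.
  if (isReadOnly) return { granted: true, reason: 'read-only' }
  switch (mode) {
    case 'bypassPermissions':
      return { granted: true, reason: 'auto-approved by mode' }
    case 'plan':
      return { granted: false, reason: 'plan mode forbids mutations' }
    default:
      // 'default' and 'auto' defer to the user (or classifier) via a prompt.
      return { granted: false, prompt: 'Allow this operation?' }
  }
}
```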
### Feature Flags & Build
Bun's `bun:bundle` feature flags enable dead-code elimination at build time:
```typescript
import { feature } from 'bun:bundle'
if (feature('PROACTIVE')) { /* proactive agent tools */ }
```
Notable flags: `PROACTIVE`, `KAIROS`, `BRIDGE_MODE`, `VOICE_MODE`, `COORDINATOR_MODE`, `DAEMON`, `WORKFLOW_SCRIPTS`.
Some features are also gated via `process.env.USER_TYPE === 'ant'`.
## Naming Conventions
| Element | Convention | Example |
|-------------|---------------------|----------------------------------|
| Files | PascalCase (exports) or kebab-case (commands) | `BashTool.tsx`, `commit-push-pr.ts` |
| Components | PascalCase | `App.tsx`, `PromptInput.tsx` |
| Types | PascalCase, suffix with Props/State/Context | `ToolUseContext` |
| Hooks | `use` prefix | `useCanUseTool`, `useSettings` |
| Constants | SCREAMING_SNAKE_CASE | `MAX_TOKENS`, `DEFAULT_TIMEOUT_MS`|
## Import Practices
- ES modules with `.js` extensions (Bun convention)
- Lazy imports for circular dependency breaking: `const getModule = () => require('./heavy.js')`
- Conditional imports via feature flags or `process.env`
- `biome-ignore` markers for manual import ordering where needed
## Services
| Service | Path | Purpose |
|--------------------|-------------------------------|-----------------------------------|
| API | `services/api/` | Anthropic SDK client, file uploads|
| MCP | `services/mcp/` | MCP client, tool/resource discovery|
| OAuth | `services/oauth/` | OAuth 2.0 auth flow |
| LSP | `services/lsp/` | Language Server Protocol manager |
| Analytics | `services/analytics/` | GrowthBook, telemetry, events |
| Plugins | `services/plugins/` | Plugin loader, marketplace |
| Compact | `services/compact/` | Context compression |
| Policy Limits | `services/policyLimits/` | Org rate limits, quota checking |
| Remote Settings | `services/remoteManagedSettings/` | Managed settings sync (Enterprise) |
| Token Estimation | `services/tokenEstimation.ts` | Token count estimation |
## Configuration
**Settings locations:**
- **Global:** `~/.claude/config.json`, `~/.claude/settings.json`
- **Project:** `.claude/config.json`, `.claude/settings.json`
- **System:** macOS Keychain + MDM, Windows Registry + MDM
- **Managed:** Remote sync for Enterprise users
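A hypothetical project-level `.claude/settings.json` tying the permission-rule syntax above to a settings file — the keys shown are illustrative and should be checked against the Zod schemas in `src/schemas/` before relying on them:

```json
{
  "permissions": {
    "allow": ["Bash(git *)", "FileRead(*)"],
    "deny": ["Bash(rm *)"]
  },
  "env": {
    "CLAUDE_CODE_VERBOSE": "1"
  }
}
```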
## Guidelines
1. Read relevant source files before making changes — understand existing patterns first.
2. Follow the tool/command/component patterns above when adding new ones.
3. Keep edits minimal and focused — avoid unnecessary refactoring.
4. Use Zod for all input validation at system boundaries.
5. Gate experimental features behind `bun:bundle` feature flags or env checks.
6. Respect the permission system — tools that modify state must implement `checkPermissions()`.
7. Use lazy imports when adding dependencies that could create circular references.
8. Update this file as project conventions evolve.

34
agent.md Normal file

@@ -0,0 +1,34 @@
---
name: repository-agent
description: Agent operating guide for claude-code.
---
# Agent
## Purpose
Define how an automated coding agent should operate in this repository.
## Core Rules
- Keep changes small, targeted, and easy to review.
- Preserve existing command behavior unless a task explicitly asks for a behavior change.
- Favor existing patterns in `src/commands/`, `src/tools/`, and shared utility modules.
- Avoid broad refactors while fixing localized issues.
## Workflow
1. Gather context from relevant files before editing.
2. Implement the smallest viable change.
3. Run focused validation (type checks/tests for changed areas).
4. Summarize what changed and any remaining risks.
## Code Style
- Match existing TypeScript style and naming in nearby files.
- Prefer explicit, readable logic over compact clever code.
- Add brief comments only when logic is not obvious.
## Validation
- Prefer targeted checks first, then broader checks if needed.
- If validation cannot run, clearly state what was skipped and why.
## Notes
- Repository conventions may evolve; update this file when team norms change.

49
biome.json Normal file

@@ -0,0 +1,49 @@
{
"$schema": "https://biomejs.dev/schemas/1.9.4/schema.json",
"organizeImports": {
"enabled": true
},
"linter": {
"enabled": true,
"rules": {
"recommended": true,
"complexity": {
"noExcessiveCognitiveComplexity": "warn"
},
"correctness": {
"noUnusedImports": "warn",
"noUnusedVariables": "warn"
},
"style": {
"noNonNullAssertion": "off",
"useImportType": "warn"
},
"suspicious": {
"noExplicitAny": "off"
}
}
},
"formatter": {
"enabled": true,
"indentStyle": "tab",
"indentWidth": 2,
"lineWidth": 100
},
"javascript": {
"formatter": {
"quoteStyle": "single",
"semicolons": "asNeeded"
}
},
"json": {
"formatter": {
"indentStyle": "space",
"indentWidth": 2
}
},
"files": {
"ignore": ["node_modules", "dist", "*.d.ts"]
}
}

635
bun.lock Normal file

@@ -0,0 +1,635 @@
{
"lockfileVersion": 1,
"configVersion": 0,
"workspaces": {
"": {
"name": "@anthropic-ai/claude-code",
"dependencies": {
"@anthropic-ai/sdk": "^0.39.0",
"@commander-js/extra-typings": "^13.1.0",
"@growthbook/growthbook": "^1.4.0",
"@modelcontextprotocol/sdk": "^1.12.1",
"@opentelemetry/api": "^1.9.0",
"@opentelemetry/api-logs": "^0.57.0",
"@opentelemetry/core": "^1.30.0",
"@opentelemetry/sdk-logs": "^0.57.0",
"@opentelemetry/sdk-metrics": "^1.30.0",
"@opentelemetry/sdk-trace-base": "^1.30.0",
"@xterm/addon-fit": "^0.10.0",
"@xterm/addon-search": "^0.15.0",
"@xterm/addon-unicode11": "^0.8.0",
"@xterm/addon-web-links": "^0.11.0",
"@xterm/addon-webgl": "^0.18.0",
"@xterm/xterm": "^5.5.0",
"auto-bind": "^5.0.1",
"axios": "^1.7.0",
"chalk": "^5.4.0",
"chokidar": "^4.0.0",
"cli-boxes": "^3.0.0",
"code-excerpt": "^4.0.0",
"diff": "^7.0.0",
"execa": "^9.5.0",
"figures": "^6.1.0",
"fuse.js": "^7.0.0",
"highlight.js": "^11.11.0",
"ignore": "^6.0.0",
"lodash-es": "^4.17.21",
"marked": "^15.0.0",
"node-pty": "^1.1.0",
"p-map": "^7.0.0",
"picomatch": "^4.0.0",
"proper-lockfile": "^4.1.2",
"qrcode": "^1.5.0",
"react": "^19.0.0",
"react-reconciler": "^0.31.0",
"semver": "^7.6.0",
"stack-utils": "^2.0.6",
"strip-ansi": "^7.1.0",
"supports-hyperlinks": "^3.1.0",
"tree-kill": "^1.2.2",
"type-fest": "^4.30.0",
"undici": "^7.3.0",
"usehooks-ts": "^3.1.0",
"wrap-ansi": "^9.0.0",
"ws": "^8.18.0",
"yaml": "^2.6.0",
"zod": "^3.24.0",
},
"devDependencies": {
"@biomejs/biome": "^1.9.0",
"@types/diff": "^7.0.0",
"@types/lodash-es": "^4.17.12",
"@types/node": "^22.10.0",
"@types/picomatch": "^3.0.0",
"@types/proper-lockfile": "^4.1.4",
"@types/react": "^19.0.0",
"@types/semver": "^7.5.8",
"@types/stack-utils": "^2.0.3",
"@types/ws": "^8.5.0",
"esbuild": "^0.25.0",
"typescript": "^5.7.0",
},
},
},
"packages": {
"@anthropic-ai/sdk": ["@anthropic-ai/sdk@0.39.0", "", { "dependencies": { "@types/node": "^18.11.18", "@types/node-fetch": "^2.6.4", "abort-controller": "^3.0.0", "agentkeepalive": "^4.2.1", "form-data-encoder": "1.7.2", "formdata-node": "^4.3.2", "node-fetch": "^2.6.7" } }, "sha512-eMyDIPRZbt1CCLErRCi3exlAvNkBtRe+kW5vvJyef93PmNr/clstYgHhtvmkxN82nlKgzyGPCyGxrm0JQ1ZIdg=="],
"@biomejs/biome": ["@biomejs/biome@1.9.4", "", { "optionalDependencies": { "@biomejs/cli-darwin-arm64": "1.9.4", "@biomejs/cli-darwin-x64": "1.9.4", "@biomejs/cli-linux-arm64": "1.9.4", "@biomejs/cli-linux-arm64-musl": "1.9.4", "@biomejs/cli-linux-x64": "1.9.4", "@biomejs/cli-linux-x64-musl": "1.9.4", "@biomejs/cli-win32-arm64": "1.9.4", "@biomejs/cli-win32-x64": "1.9.4" }, "bin": { "biome": "bin/biome" } }, "sha512-1rkd7G70+o9KkTn5KLmDYXihGoTaIGO9PIIN2ZB7UJxFrWw04CZHPYiMRjYsaDvVV7hP1dYNRLxSANLaBFGpog=="],
"@biomejs/cli-darwin-arm64": ["@biomejs/cli-darwin-arm64@1.9.4", "", { "os": "darwin", "cpu": "arm64" }, "sha512-bFBsPWrNvkdKrNCYeAp+xo2HecOGPAy9WyNyB/jKnnedgzl4W4Hb9ZMzYNbf8dMCGmUdSavlYHiR01QaYR58cw=="],
"@biomejs/cli-darwin-x64": ["@biomejs/cli-darwin-x64@1.9.4", "", { "os": "darwin", "cpu": "x64" }, "sha512-ngYBh/+bEedqkSevPVhLP4QfVPCpb+4BBe2p7Xs32dBgs7rh9nY2AIYUL6BgLw1JVXV8GlpKmb/hNiuIxfPfZg=="],
"@biomejs/cli-linux-arm64": ["@biomejs/cli-linux-arm64@1.9.4", "", { "os": "linux", "cpu": "arm64" }, "sha512-fJIW0+LYujdjUgJJuwesP4EjIBl/N/TcOX3IvIHJQNsAqvV2CHIogsmA94BPG6jZATS4Hi+xv4SkBBQSt1N4/g=="],
"@biomejs/cli-linux-arm64-musl": ["@biomejs/cli-linux-arm64-musl@1.9.4", "", { "os": "linux", "cpu": "arm64" }, "sha512-v665Ct9WCRjGa8+kTr0CzApU0+XXtRgwmzIf1SeKSGAv+2scAlW6JR5PMFo6FzqqZ64Po79cKODKf3/AAmECqA=="],
"@biomejs/cli-linux-x64": ["@biomejs/cli-linux-x64@1.9.4", "", { "os": "linux", "cpu": "x64" }, "sha512-lRCJv/Vi3Vlwmbd6K+oQ0KhLHMAysN8lXoCI7XeHlxaajk06u7G+UsFSO01NAs5iYuWKmVZjmiOzJ0OJmGsMwg=="],
"@biomejs/cli-linux-x64-musl": ["@biomejs/cli-linux-x64-musl@1.9.4", "", { "os": "linux", "cpu": "x64" }, "sha512-gEhi/jSBhZ2m6wjV530Yy8+fNqG8PAinM3oV7CyO+6c3CEh16Eizm21uHVsyVBEB6RIM8JHIl6AGYCv6Q6Q9Tg=="],
"@biomejs/cli-win32-arm64": ["@biomejs/cli-win32-arm64@1.9.4", "", { "os": "win32", "cpu": "arm64" }, "sha512-tlbhLk+WXZmgwoIKwHIHEBZUwxml7bRJgk0X2sPyNR3S93cdRq6XulAZRQJ17FYGGzWne0fgrXBKpl7l4M87Hg=="],
"@biomejs/cli-win32-x64": ["@biomejs/cli-win32-x64@1.9.4", "", { "os": "win32", "cpu": "x64" }, "sha512-8Y5wMhVIPaWe6jw2H+KlEm4wP/f7EW3810ZLmDlrEEy5KvBsb9ECEfu/kMWD484ijfQ8+nIi0giMgu9g1UAuuA=="],
"@commander-js/extra-typings": ["@commander-js/extra-typings@13.1.0", "", { "peerDependencies": { "commander": "~13.1.0" } }, "sha512-q5P52BYb1hwVWE6dtID7VvuJWrlfbCv4klj7BjUUOqMz4jbSZD4C9fJ9lRjL2jnBGTg+gDDlaXN51rkWcLk4fg=="],
"@esbuild/aix-ppc64": ["@esbuild/aix-ppc64@0.25.12", "", { "os": "aix", "cpu": "ppc64" }, "sha512-Hhmwd6CInZ3dwpuGTF8fJG6yoWmsToE+vYgD4nytZVxcu1ulHpUQRAB1UJ8+N1Am3Mz4+xOByoQoSZf4D+CpkA=="],
"@esbuild/android-arm": ["@esbuild/android-arm@0.25.12", "", { "os": "android", "cpu": "arm" }, "sha512-VJ+sKvNA/GE7Ccacc9Cha7bpS8nyzVv0jdVgwNDaR4gDMC/2TTRc33Ip8qrNYUcpkOHUT5OZ0bUcNNVZQ9RLlg=="],
"@esbuild/android-arm64": ["@esbuild/android-arm64@0.25.12", "", { "os": "android", "cpu": "arm64" }, "sha512-6AAmLG7zwD1Z159jCKPvAxZd4y/VTO0VkprYy+3N2FtJ8+BQWFXU+OxARIwA46c5tdD9SsKGZ/1ocqBS/gAKHg=="],
"@esbuild/android-x64": ["@esbuild/android-x64@0.25.12", "", { "os": "android", "cpu": "x64" }, "sha512-5jbb+2hhDHx5phYR2By8GTWEzn6I9UqR11Kwf22iKbNpYrsmRB18aX/9ivc5cabcUiAT/wM+YIZ6SG9QO6a8kg=="],
"@esbuild/darwin-arm64": ["@esbuild/darwin-arm64@0.25.12", "", { "os": "darwin", "cpu": "arm64" }, "sha512-N3zl+lxHCifgIlcMUP5016ESkeQjLj/959RxxNYIthIg+CQHInujFuXeWbWMgnTo4cp5XVHqFPmpyu9J65C1Yg=="],
"@esbuild/darwin-x64": ["@esbuild/darwin-x64@0.25.12", "", { "os": "darwin", "cpu": "x64" }, "sha512-HQ9ka4Kx21qHXwtlTUVbKJOAnmG1ipXhdWTmNXiPzPfWKpXqASVcWdnf2bnL73wgjNrFXAa3yYvBSd9pzfEIpA=="],
"@esbuild/freebsd-arm64": ["@esbuild/freebsd-arm64@0.25.12", "", { "os": "freebsd", "cpu": "arm64" }, "sha512-gA0Bx759+7Jve03K1S0vkOu5Lg/85dou3EseOGUes8flVOGxbhDDh/iZaoek11Y8mtyKPGF3vP8XhnkDEAmzeg=="],
"@esbuild/freebsd-x64": ["@esbuild/freebsd-x64@0.25.12", "", { "os": "freebsd", "cpu": "x64" }, "sha512-TGbO26Yw2xsHzxtbVFGEXBFH0FRAP7gtcPE7P5yP7wGy7cXK2oO7RyOhL5NLiqTlBh47XhmIUXuGciXEqYFfBQ=="],
"@esbuild/linux-arm": ["@esbuild/linux-arm@0.25.12", "", { "os": "linux", "cpu": "arm" }, "sha512-lPDGyC1JPDou8kGcywY0YILzWlhhnRjdof3UlcoqYmS9El818LLfJJc3PXXgZHrHCAKs/Z2SeZtDJr5MrkxtOw=="],
"@esbuild/linux-arm64": ["@esbuild/linux-arm64@0.25.12", "", { "os": "linux", "cpu": "arm64" }, "sha512-8bwX7a8FghIgrupcxb4aUmYDLp8pX06rGh5HqDT7bB+8Rdells6mHvrFHHW2JAOPZUbnjUpKTLg6ECyzvas2AQ=="],
"@esbuild/linux-ia32": ["@esbuild/linux-ia32@0.25.12", "", { "os": "linux", "cpu": "ia32" }, "sha512-0y9KrdVnbMM2/vG8KfU0byhUN+EFCny9+8g202gYqSSVMonbsCfLjUO+rCci7pM0WBEtz+oK/PIwHkzxkyharA=="],
"@esbuild/linux-loong64": ["@esbuild/linux-loong64@0.25.12", "", { "os": "linux", "cpu": "none" }, "sha512-h///Lr5a9rib/v1GGqXVGzjL4TMvVTv+s1DPoxQdz7l/AYv6LDSxdIwzxkrPW438oUXiDtwM10o9PmwS/6Z0Ng=="],
"@esbuild/linux-mips64el": ["@esbuild/linux-mips64el@0.25.12", "", { "os": "linux", "cpu": "none" }, "sha512-iyRrM1Pzy9GFMDLsXn1iHUm18nhKnNMWscjmp4+hpafcZjrr2WbT//d20xaGljXDBYHqRcl8HnxbX6uaA/eGVw=="],
"@esbuild/linux-ppc64": ["@esbuild/linux-ppc64@0.25.12", "", { "os": "linux", "cpu": "ppc64" }, "sha512-9meM/lRXxMi5PSUqEXRCtVjEZBGwB7P/D4yT8UG/mwIdze2aV4Vo6U5gD3+RsoHXKkHCfSxZKzmDssVlRj1QQA=="],
"@esbuild/linux-riscv64": ["@esbuild/linux-riscv64@0.25.12", "", { "os": "linux", "cpu": "none" }, "sha512-Zr7KR4hgKUpWAwb1f3o5ygT04MzqVrGEGXGLnj15YQDJErYu/BGg+wmFlIDOdJp0PmB0lLvxFIOXZgFRrdjR0w=="],
"@esbuild/linux-s390x": ["@esbuild/linux-s390x@0.25.12", "", { "os": "linux", "cpu": "s390x" }, "sha512-MsKncOcgTNvdtiISc/jZs/Zf8d0cl/t3gYWX8J9ubBnVOwlk65UIEEvgBORTiljloIWnBzLs4qhzPkJcitIzIg=="],
"@esbuild/linux-x64": ["@esbuild/linux-x64@0.25.12", "", { "os": "linux", "cpu": "x64" }, "sha512-uqZMTLr/zR/ed4jIGnwSLkaHmPjOjJvnm6TVVitAa08SLS9Z0VM8wIRx7gWbJB5/J54YuIMInDquWyYvQLZkgw=="],
"@esbuild/netbsd-arm64": ["@esbuild/netbsd-arm64@0.25.12", "", { "os": "none", "cpu": "arm64" }, "sha512-xXwcTq4GhRM7J9A8Gv5boanHhRa/Q9KLVmcyXHCTaM4wKfIpWkdXiMog/KsnxzJ0A1+nD+zoecuzqPmCRyBGjg=="],
"@esbuild/netbsd-x64": ["@esbuild/netbsd-x64@0.25.12", "", { "os": "none", "cpu": "x64" }, "sha512-Ld5pTlzPy3YwGec4OuHh1aCVCRvOXdH8DgRjfDy/oumVovmuSzWfnSJg+VtakB9Cm0gxNO9BzWkj6mtO1FMXkQ=="],
"@esbuild/openbsd-arm64": ["@esbuild/openbsd-arm64@0.25.12", "", { "os": "openbsd", "cpu": "arm64" }, "sha512-fF96T6KsBo/pkQI950FARU9apGNTSlZGsv1jZBAlcLL1MLjLNIWPBkj5NlSz8aAzYKg+eNqknrUJ24QBybeR5A=="],
"@esbuild/openbsd-x64": ["@esbuild/openbsd-x64@0.25.12", "", { "os": "openbsd", "cpu": "x64" }, "sha512-MZyXUkZHjQxUvzK7rN8DJ3SRmrVrke8ZyRusHlP+kuwqTcfWLyqMOE3sScPPyeIXN/mDJIfGXvcMqCgYKekoQw=="],
"@esbuild/openharmony-arm64": ["@esbuild/openharmony-arm64@0.25.12", "", { "os": "none", "cpu": "arm64" }, "sha512-rm0YWsqUSRrjncSXGA7Zv78Nbnw4XL6/dzr20cyrQf7ZmRcsovpcRBdhD43Nuk3y7XIoW2OxMVvwuRvk9XdASg=="],
"@esbuild/sunos-x64": ["@esbuild/sunos-x64@0.25.12", "", { "os": "sunos", "cpu": "x64" }, "sha512-3wGSCDyuTHQUzt0nV7bocDy72r2lI33QL3gkDNGkod22EsYl04sMf0qLb8luNKTOmgF/eDEDP5BFNwoBKH441w=="],
"@esbuild/win32-arm64": ["@esbuild/win32-arm64@0.25.12", "", { "os": "win32", "cpu": "arm64" }, "sha512-rMmLrur64A7+DKlnSuwqUdRKyd3UE7oPJZmnljqEptesKM8wx9J8gx5u0+9Pq0fQQW8vqeKebwNXdfOyP+8Bsg=="],
"@esbuild/win32-ia32": ["@esbuild/win32-ia32@0.25.12", "", { "os": "win32", "cpu": "ia32" }, "sha512-HkqnmmBoCbCwxUKKNPBixiWDGCpQGVsrQfJoVGYLPT41XWF8lHuE5N6WhVia2n4o5QK5M4tYr21827fNhi4byQ=="],
"@esbuild/win32-x64": ["@esbuild/win32-x64@0.25.12", "", { "os": "win32", "cpu": "x64" }, "sha512-alJC0uCZpTFrSL0CCDjcgleBXPnCrEAhTBILpeAp7M/OFgoqtAetfBzX0xM00MUsVVPpVjlPuMbREqnZCXaTnA=="],
"@growthbook/growthbook": ["@growthbook/growthbook@1.6.5", "", { "dependencies": { "dom-mutator": "^0.6.0" } }, "sha512-mUaMsgeUTpRIUOTn33EUXHRK6j7pxBjwqH4WpQyq+pukjd1AIzWlEa6w7i6bInJUcweGgP2beXZmaP6b6UPn7A=="],
"@hono/node-server": ["@hono/node-server@1.19.12", "", { "peerDependencies": { "hono": "^4" } }, "sha512-txsUW4SQ1iilgE0l9/e9VQWmELXifEFvmdA1j6WFh/aFPj99hIntrSsq/if0UWyGVkmrRPKA1wCeP+UCr1B9Uw=="],
"@modelcontextprotocol/sdk": ["@modelcontextprotocol/sdk@1.29.0", "", { "dependencies": { "@hono/node-server": "^1.19.9", "ajv": "^8.17.1", "ajv-formats": "^3.0.1", "content-type": "^1.0.5", "cors": "^2.8.5", "cross-spawn": "^7.0.5", "eventsource": "^3.0.2", "eventsource-parser": "^3.0.0", "express": "^5.2.1", "express-rate-limit": "^8.2.1", "hono": "^4.11.4", "jose": "^6.1.3", "json-schema-typed": "^8.0.2", "pkce-challenge": "^5.0.0", "raw-body": "^3.0.0", "zod": "^3.25 || ^4.0", "zod-to-json-schema": "^3.25.1" }, "peerDependencies": { "@cfworker/json-schema": "^4.1.1", "zod": "^3.25 || ^4.0" }, "optionalPeers": ["@cfworker/json-schema"] }, "sha512-zo37mZA9hJWpULgkRpowewez1y6ML5GsXJPY8FI0tBBCd77HEvza4jDqRKOXgHNn867PVGCyTdzqpz0izu5ZjQ=="],
"@opentelemetry/api": ["@opentelemetry/api@1.9.1", "", {}, "sha512-gLyJlPHPZYdAk1JENA9LeHejZe1Ti77/pTeFm/nMXmQH/HFZlcS/O2XJB+L8fkbrNSqhdtlvjBVjxwUYanNH5Q=="],
"@opentelemetry/api-logs": ["@opentelemetry/api-logs@0.57.2", "", { "dependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-uIX52NnTM0iBh84MShlpouI7UKqkZ7MrUszTmaypHBu4r7NofznSnQRfJ+uUeDtQDj6w8eFGg5KBLDAwAPz1+A=="],
"@opentelemetry/core": ["@opentelemetry/core@1.30.1", "", { "dependencies": { "@opentelemetry/semantic-conventions": "1.28.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-OOCM2C/QIURhJMuKaekP3TRBxBKxG/TWWA0TL2J6nXUtDnuCtccy49LUJF8xPFXMX+0LMcxFpCo8M9cGY1W6rQ=="],
"@opentelemetry/resources": ["@opentelemetry/resources@1.30.1", "", { "dependencies": { "@opentelemetry/core": "1.30.1", "@opentelemetry/semantic-conventions": "1.28.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-5UxZqiAgLYGFjS4s9qm5mBVo433u+dSPUFWVWXmLAD4wB65oMCoXaJP1KJa9DIYYMeHu3z4BZcStG3LC593cWA=="],
"@opentelemetry/sdk-logs": ["@opentelemetry/sdk-logs@0.57.2", "", { "dependencies": { "@opentelemetry/api-logs": "0.57.2", "@opentelemetry/core": "1.30.1", "@opentelemetry/resources": "1.30.1" }, "peerDependencies": { "@opentelemetry/api": ">=1.4.0 <1.10.0" } }, "sha512-TXFHJ5c+BKggWbdEQ/inpgIzEmS2BGQowLE9UhsMd7YYlUfBQJ4uax0VF/B5NYigdM/75OoJGhAV3upEhK+3gg=="],
"@opentelemetry/sdk-metrics": ["@opentelemetry/sdk-metrics@1.30.1", "", { "dependencies": { "@opentelemetry/core": "1.30.1", "@opentelemetry/resources": "1.30.1" }, "peerDependencies": { "@opentelemetry/api": ">=1.3.0 <1.10.0" } }, "sha512-q9zcZ0Okl8jRgmy7eNW3Ku1XSgg3sDLa5evHZpCwjspw7E8Is4K/haRPDJrBcX3YSn/Y7gUvFnByNYEKQNbNog=="],
"@opentelemetry/sdk-trace-base": ["@opentelemetry/sdk-trace-base@1.30.1", "", { "dependencies": { "@opentelemetry/core": "1.30.1", "@opentelemetry/resources": "1.30.1", "@opentelemetry/semantic-conventions": "1.28.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-jVPgBbH1gCy2Lb7X0AVQ8XAfgg0pJ4nvl8/IiQA6nxOsPvS+0zMJaFSs2ltXe0J6C8dqjcnpyqINDJmU30+uOg=="],
"@opentelemetry/semantic-conventions": ["@opentelemetry/semantic-conventions@1.28.0", "", {}, "sha512-lp4qAiMTD4sNWW4DbKLBkfiMZ4jbAboJIGOQr5DvciMRI494OapieI9qiODpOt0XBr1LjIDy1xAGAnVs5supTA=="],
"@sec-ant/readable-stream": ["@sec-ant/readable-stream@0.4.1", "", {}, "sha512-831qok9r2t8AlxLko40y2ebgSDhenenCatLVeW/uBtnHPyhHOvG0C7TvfgecV+wHzIm5KUICgzmVpWS+IMEAeg=="],
"@sindresorhus/merge-streams": ["@sindresorhus/merge-streams@4.0.0", "", {}, "sha512-tlqY9xq5ukxTUZBmoOp+m61cqwQD5pHJtFY3Mn8CA8ps6yghLH/Hw8UPdqg4OLmFW3IFlcXnQNmo/dh8HzXYIQ=="],
"@types/diff": ["@types/diff@7.0.2", "", {}, "sha512-JSWRMozjFKsGlEjiiKajUjIJVKuKdE3oVy2DNtK+fUo8q82nhFZ2CPQwicAIkXrofahDXrWJ7mjelvZphMS98Q=="],
"@types/lodash": ["@types/lodash@4.17.24", "", {}, "sha512-gIW7lQLZbue7lRSWEFql49QJJWThrTFFeIMJdp3eH4tKoxm1OvEPg02rm4wCCSHS0cL3/Fizimb35b7k8atwsQ=="],
"@types/lodash-es": ["@types/lodash-es@4.17.12", "", { "dependencies": { "@types/lodash": "*" } }, "sha512-0NgftHUcV4v34VhXm8QBSftKVXtbkBG3ViCjs6+eJ5a6y6Mi/jiFGPc1sC7QK+9BFhWrURE3EOggmWaSxL9OzQ=="],
"@types/node": ["@types/node@22.19.15", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-F0R/h2+dsy5wJAUe3tAU6oqa2qbWY5TpNfL/RGmo1y38hiyO1w3x2jPtt76wmuaJI4DQnOBu21cNXQ2STIUUWg=="],
"@types/node-fetch": ["@types/node-fetch@2.6.13", "", { "dependencies": { "@types/node": "*", "form-data": "^4.0.4" } }, "sha512-QGpRVpzSaUs30JBSGPjOg4Uveu384erbHBoT1zeONvyCfwQxIkUshLAOqN/k9EjGviPRmWTTe6aH2qySWKTVSw=="],
"@types/picomatch": ["@types/picomatch@3.0.2", "", {}, "sha512-n0i8TD3UDB7paoMMxA3Y65vUncFJXjcUf7lQY7YyKGl6031FNjfsLs6pdLFCy2GNFxItPJG8GvvpbZc2skH7WA=="],
"@types/proper-lockfile": ["@types/proper-lockfile@4.1.4", "", { "dependencies": { "@types/retry": "*" } }, "sha512-uo2ABllncSqg9F1D4nugVl9v93RmjxF6LJzQLMLDdPaXCUIDPeOJ21Gbqi43xNKzBi/WQ0Q0dICqufzQbMjipQ=="],
"@types/react": ["@types/react@19.2.14", "", { "dependencies": { "csstype": "^3.2.2" } }, "sha512-ilcTH/UniCkMdtexkoCN0bI7pMcJDvmQFPvuPvmEaYA/NSfFTAgdUSLAoVjaRJm7+6PvcM+q1zYOwS4wTYMF9w=="],
"@types/retry": ["@types/retry@0.12.5", "", {}, "sha512-3xSjTp3v03X/lSQLkczaN9UIEwJMoMCA1+Nb5HfbJEQWogdeQIyVtTvxPXDQjZ5zws8rFQfVfRdz03ARihPJgw=="],
"@types/semver": ["@types/semver@7.7.1", "", {}, "sha512-FmgJfu+MOcQ370SD0ev7EI8TlCAfKYU+B4m5T3yXc1CiRN94g/SZPtsCkk506aUDtlMnFZvasDwHHUcZUEaYuA=="],
"@types/stack-utils": ["@types/stack-utils@2.0.3", "", {}, "sha512-9aEbYZ3TbYMznPdcdr3SmIrLXwC/AKZXQeCf9Pgao5CKb8CyHuEX5jzWPTkvregvhRJHcpRO6BFoGW9ycaOkYw=="],
"@types/ws": ["@types/ws@8.18.1", "", { "dependencies": { "@types/node": "*" } }, "sha512-ThVF6DCVhA8kUGy+aazFQ4kXQ7E1Ty7A3ypFOe0IcJV8O/M511G99AW24irKrW56Wt44yG9+ij8FaqoBGkuBXg=="],
"@xterm/addon-fit": ["@xterm/addon-fit@0.10.0", "", { "peerDependencies": { "@xterm/xterm": "^5.0.0" } }, "sha512-UFYkDm4HUahf2lnEyHvio51TNGiLK66mqP2JoATy7hRZeXaGMRDr00JiSF7m63vR5WKATF605yEggJKsw0JpMQ=="],
"@xterm/addon-search": ["@xterm/addon-search@0.15.0", "", { "peerDependencies": { "@xterm/xterm": "^5.0.0" } }, "sha512-ZBZKLQ+EuKE83CqCmSSz5y1tx+aNOCUaA7dm6emgOX+8J9H1FWXZyrKfzjwzV+V14TV3xToz1goIeRhXBS5qjg=="],
"@xterm/addon-unicode11": ["@xterm/addon-unicode11@0.8.0", "", { "peerDependencies": { "@xterm/xterm": "^5.0.0" } }, "sha512-LxinXu8SC4OmVa6FhgwsVCBZbr8WoSGzBl2+vqe8WcQ6hb1r6Gj9P99qTNdPiFPh4Ceiu2pC8xukZ6+2nnh49Q=="],
"@xterm/addon-web-links": ["@xterm/addon-web-links@0.11.0", "", { "peerDependencies": { "@xterm/xterm": "^5.0.0" } }, "sha512-nIHQ38pQI+a5kXnRaTgwqSHnX7KE6+4SVoceompgHL26unAxdfP6IPqUTSYPQgSwM56hsElfoNrrW5V7BUED/Q=="],
"@xterm/addon-webgl": ["@xterm/addon-webgl@0.18.0", "", { "peerDependencies": { "@xterm/xterm": "^5.0.0" } }, "sha512-xCnfMBTI+/HKPdRnSOHaJDRqEpq2Ugy8LEj9GiY4J3zJObo3joylIFaMvzBwbYRg8zLtkO0KQaStCeSfoaI2/w=="],
"@xterm/xterm": ["@xterm/xterm@5.5.0", "", {}, "sha512-hqJHYaQb5OptNunnyAnkHyM8aCjZ1MEIDTQu1iIbbTD/xops91NB5yq1ZK/dC2JDbVWtF23zUtl9JE2NqwT87A=="],
"abort-controller": ["abort-controller@3.0.0", "", { "dependencies": { "event-target-shim": "^5.0.0" } }, "sha512-h8lQ8tacZYnR3vNQTgibj+tODHI5/+l06Au2Pcriv/Gmet0eaj4TwWH41sO9wnHDiQsEj19q0drzdWdeAHtweg=="],
"accepts": ["accepts@2.0.0", "", { "dependencies": { "mime-types": "^3.0.0", "negotiator": "^1.0.0" } }, "sha512-5cvg6CtKwfgdmVqY1WIiXKc3Q1bkRqGLi+2W/6ao+6Y7gu/RCwRuAhGEzh5B4KlszSuTLgZYuqFqo5bImjNKng=="],
"agentkeepalive": ["agentkeepalive@4.6.0", "", { "dependencies": { "humanize-ms": "^1.2.1" } }, "sha512-kja8j7PjmncONqaTsB8fQ+wE2mSU2DJ9D4XKoJ5PFWIdRMa6SLSN1ff4mOr4jCbfRSsxR4keIiySJU0N9T5hIQ=="],
"ajv": ["ajv@8.18.0", "", { "dependencies": { "fast-deep-equal": "^3.1.3", "fast-uri": "^3.0.1", "json-schema-traverse": "^1.0.0", "require-from-string": "^2.0.2" } }, "sha512-PlXPeEWMXMZ7sPYOHqmDyCJzcfNrUr3fGNKtezX14ykXOEIvyK81d+qydx89KY5O71FKMPaQ2vBfBFI5NHR63A=="],
"ajv-formats": ["ajv-formats@3.0.1", "", { "dependencies": { "ajv": "^8.0.0" }, "peerDependencies": { "ajv": "^8.0.0" } }, "sha512-8iUql50EUR+uUcdRQ3HDqa6EVyo3docL8g5WJ3FNcWmu62IbkGUue/pEyLBW8VGKKucTPgqeks4fIU1DA4yowQ=="],
"ansi-regex": ["ansi-regex@6.2.2", "", {}, "sha512-Bq3SmSpyFHaWjPk8If9yc6svM8c56dB5BAtW4Qbw5jHTwwXXcTLoRMkpDJp6VL0XzlWaCHTXrkFURMYmD0sLqg=="],
"ansi-styles": ["ansi-styles@6.2.3", "", {}, "sha512-4Dj6M28JB+oAH8kFkTLUo+a2jwOFkuqb3yucU0CANcRRUbxS0cP0nZYCGjcc3BNXwRIsUVmDGgzawme7zvJHvg=="],
"asynckit": ["asynckit@0.4.0", "", {}, "sha512-Oei9OH4tRh0YqU3GxhX79dM/mwVgvbZJaSNaRk+bshkj0S5cfHcgYakreBjrHwatXKbz+IoIdYLxrKim2MjW0Q=="],
"auto-bind": ["auto-bind@5.0.1", "", {}, "sha512-ooviqdwwgfIfNmDwo94wlshcdzfO64XV0Cg6oDsDYBJfITDz1EngD2z7DkbvCWn+XIMsIqW27sEVF6qcpJrRcg=="],
"axios": ["axios@1.14.0", "", { "dependencies": { "follow-redirects": "^1.15.11", "form-data": "^4.0.5", "proxy-from-env": "^2.1.0" } }, "sha512-3Y8yrqLSwjuzpXuZ0oIYZ/XGgLwUIBU3uLvbcpb0pidD9ctpShJd43KSlEEkVQg6DS0G9NKyzOvBfUtDKEyHvQ=="],
"body-parser": ["body-parser@2.2.2", "", { "dependencies": { "bytes": "^3.1.2", "content-type": "^1.0.5", "debug": "^4.4.3", "http-errors": "^2.0.0", "iconv-lite": "^0.7.0", "on-finished": "^2.4.1", "qs": "^6.14.1", "raw-body": "^3.0.1", "type-is": "^2.0.1" } }, "sha512-oP5VkATKlNwcgvxi0vM0p/D3n2C3EReYVX+DNYs5TjZFn/oQt2j+4sVJtSMr18pdRr8wjTcBl6LoV+FUwzPmNA=="],
"bytes": ["bytes@3.1.2", "", {}, "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg=="],
"call-bind-apply-helpers": ["call-bind-apply-helpers@1.0.2", "", { "dependencies": { "es-errors": "^1.3.0", "function-bind": "^1.1.2" } }, "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ=="],
"call-bound": ["call-bound@1.0.4", "", { "dependencies": { "call-bind-apply-helpers": "^1.0.2", "get-intrinsic": "^1.3.0" } }, "sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg=="],
"camelcase": ["camelcase@5.3.1", "", {}, "sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg=="],
"chalk": ["chalk@5.6.2", "", {}, "sha512-7NzBL0rN6fMUW+f7A6Io4h40qQlG+xGmtMxfbnH/K7TAtt8JQWVQK+6g0UXKMeVJoyV5EkkNsErQ8pVD3bLHbA=="],
"chokidar": ["chokidar@4.0.3", "", { "dependencies": { "readdirp": "^4.0.1" } }, "sha512-Qgzu8kfBvo+cA4962jnP1KkS6Dop5NS6g7R5LFYJr4b8Ub94PPQXUksCw9PvXoeXPRRddRNC5C1JQUR2SMGtnA=="],
"cli-boxes": ["cli-boxes@3.0.0", "", {}, "sha512-/lzGpEWL/8PfI0BmBOPRwp0c/wFNX1RdUML3jK/RcSBA9T8mZDdQpqYBKtCFTOfQbwPqWEOpjqW+Fnayc0969g=="],
"cliui": ["cliui@6.0.0", "", { "dependencies": { "string-width": "^4.2.0", "strip-ansi": "^6.0.0", "wrap-ansi": "^6.2.0" } }, "sha512-t6wbgtoCXvAzst7QgXxJYqPt0usEfbgQdftEPbLL/cvv6HPE5VgvqCuAIDR0NgU52ds6rFwqrgakNLrHEjCbrQ=="],
"code-excerpt": ["code-excerpt@4.0.0", "", { "dependencies": { "convert-to-spaces": "^2.0.1" } }, "sha512-xxodCmBen3iy2i0WtAK8FlFNrRzjUqjRsMfho58xT/wvZU1YTM3fCnRjcy1gJPMepaRlgm/0e6w8SpWHpn3/cA=="],
"color-convert": ["color-convert@2.0.1", "", { "dependencies": { "color-name": "~1.1.4" } }, "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ=="],
"color-name": ["color-name@1.1.4", "", {}, "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA=="],
"combined-stream": ["combined-stream@1.0.8", "", { "dependencies": { "delayed-stream": "~1.0.0" } }, "sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg=="],
"commander": ["commander@13.1.0", "", {}, "sha512-/rFeCpNJQbhSZjGVwO9RFV3xPqbnERS8MmIQzCtD/zl6gpJuV/bMLuN92oG3F7d8oDEHHRrujSXNUr8fpjntKw=="],
"content-disposition": ["content-disposition@1.0.1", "", {}, "sha512-oIXISMynqSqm241k6kcQ5UwttDILMK4BiurCfGEREw6+X9jkkpEe5T9FZaApyLGGOnFuyMWZpdolTXMtvEJ08Q=="],
"content-type": ["content-type@1.0.5", "", {}, "sha512-nTjqfcBFEipKdXCv4YDQWCfmcLZKm81ldF0pAopTvyrFGVbcR6P/VAAd5G7N+0tTr8QqiU0tFadD6FK4NtJwOA=="],
"convert-to-spaces": ["convert-to-spaces@2.0.1", "", {}, "sha512-rcQ1bsQO9799wq24uE5AM2tAILy4gXGIK/njFWcVQkGNZ96edlpY+A7bjwvzjYvLDyzmG1MmMLZhpcsb+klNMQ=="],
"cookie": ["cookie@0.7.2", "", {}, "sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w=="],
"cookie-signature": ["cookie-signature@1.2.2", "", {}, "sha512-D76uU73ulSXrD1UXF4KE2TMxVVwhsnCgfAyTg9k8P6KGZjlXKrOLe4dJQKI3Bxi5wjesZoFXJWElNWBjPZMbhg=="],
"cors": ["cors@2.8.6", "", { "dependencies": { "object-assign": "^4", "vary": "^1" } }, "sha512-tJtZBBHA6vjIAaF6EnIaq6laBBP9aq/Y3ouVJjEfoHbRBcHBAHYcMh/w8LDrk2PvIMMq8gmopa5D4V8RmbrxGw=="],
"cross-spawn": ["cross-spawn@7.0.6", "", { "dependencies": { "path-key": "^3.1.0", "shebang-command": "^2.0.0", "which": "^2.0.1" } }, "sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA=="],
"csstype": ["csstype@3.2.3", "", {}, "sha512-z1HGKcYy2xA8AGQfwrn0PAy+PB7X/GSj3UVJW9qKyn43xWa+gl5nXmU4qqLMRzWVLFC8KusUX8T/0kCiOYpAIQ=="],
"debug": ["debug@4.4.3", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA=="],
"decamelize": ["decamelize@1.2.0", "", {}, "sha512-z2S+W9X73hAUUki+N+9Za2lBlun89zigOyGrsax+KUQ6wKW4ZoWpEYBkGhQjwAjjDCkWxhY0VKEhk8wzY7F5cA=="],
"delayed-stream": ["delayed-stream@1.0.0", "", {}, "sha512-ZySD7Nf91aLB0RxL4KGrKHBXl7Eds1DAmEdcoVawXnLD7SDhpNgtuII2aAkg7a7QS41jxPSZ17p4VdGnMHk3MQ=="],
"depd": ["depd@2.0.0", "", {}, "sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw=="],
"diff": ["diff@7.0.0", "", {}, "sha512-PJWHUb1RFevKCwaFA9RlG5tCd+FO5iRh9A8HEtkmBH2Li03iJriB6m6JIN4rGz3K3JLawI7/veA1xzRKP6ISBw=="],
"dijkstrajs": ["dijkstrajs@1.0.3", "", {}, "sha512-qiSlmBq9+BCdCA/L46dw8Uy93mloxsPSbwnm5yrKn2vMPiy8KyAskTF6zuV/j5BMsmOGZDPs7KjU+mjb670kfA=="],
"dom-mutator": ["dom-mutator@0.6.0", "", {}, "sha512-iCt9o0aYfXMUkz/43ZOAUFQYotjGB+GNbYJiJdz4TgXkyToXbbRy5S6FbTp72lRBtfpUMwEc1KmpFEU4CZeoNg=="],
"dunder-proto": ["dunder-proto@1.0.1", "", { "dependencies": { "call-bind-apply-helpers": "^1.0.1", "es-errors": "^1.3.0", "gopd": "^1.2.0" } }, "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A=="],
"ee-first": ["ee-first@1.1.1", "", {}, "sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow=="],
"emoji-regex": ["emoji-regex@10.6.0", "", {}, "sha512-toUI84YS5YmxW219erniWD0CIVOo46xGKColeNQRgOzDorgBi1v4D71/OFzgD9GO2UGKIv1C3Sp8DAn0+j5w7A=="],
"encodeurl": ["encodeurl@2.0.0", "", {}, "sha512-Q0n9HRi4m6JuGIV1eFlmvJB7ZEVxu93IrMyiMsGC0lrMJMWzRgx6WGquyfQgZVb31vhGgXnfmPNNXmxnOkRBrg=="],
"es-define-property": ["es-define-property@1.0.1", "", {}, "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g=="],
"es-errors": ["es-errors@1.3.0", "", {}, "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw=="],
"es-object-atoms": ["es-object-atoms@1.1.1", "", { "dependencies": { "es-errors": "^1.3.0" } }, "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA=="],
"es-set-tostringtag": ["es-set-tostringtag@2.1.0", "", { "dependencies": { "es-errors": "^1.3.0", "get-intrinsic": "^1.2.6", "has-tostringtag": "^1.0.2", "hasown": "^2.0.2" } }, "sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA=="],
"esbuild": ["esbuild@0.25.12", "", { "optionalDependencies": { "@esbuild/aix-ppc64": "0.25.12", "@esbuild/android-arm": "0.25.12", "@esbuild/android-arm64": "0.25.12", "@esbuild/android-x64": "0.25.12", "@esbuild/darwin-arm64": "0.25.12", "@esbuild/darwin-x64": "0.25.12", "@esbuild/freebsd-arm64": "0.25.12", "@esbuild/freebsd-x64": "0.25.12", "@esbuild/linux-arm": "0.25.12", "@esbuild/linux-arm64": "0.25.12", "@esbuild/linux-ia32": "0.25.12", "@esbuild/linux-loong64": "0.25.12", "@esbuild/linux-mips64el": "0.25.12", "@esbuild/linux-ppc64": "0.25.12", "@esbuild/linux-riscv64": "0.25.12", "@esbuild/linux-s390x": "0.25.12", "@esbuild/linux-x64": "0.25.12", "@esbuild/netbsd-arm64": "0.25.12", "@esbuild/netbsd-x64": "0.25.12", "@esbuild/openbsd-arm64": "0.25.12", "@esbuild/openbsd-x64": "0.25.12", "@esbuild/openharmony-arm64": "0.25.12", "@esbuild/sunos-x64": "0.25.12", "@esbuild/win32-arm64": "0.25.12", "@esbuild/win32-ia32": "0.25.12", "@esbuild/win32-x64": "0.25.12" }, "bin": "bin/esbuild" }, "sha512-bbPBYYrtZbkt6Os6FiTLCTFxvq4tt3JKall1vRwshA3fdVztsLAatFaZobhkBC8/BrPetoa0oksYoKXoG4ryJg=="],
"escape-html": ["escape-html@1.0.3", "", {}, "sha512-NiSupZ4OeuGwr68lGIeym/ksIZMJodUGOSCZ/FSnTxcrekbvqrgdUxlJOMpijaKZVjAJrWrGs/6Jy8OMuyj9ow=="],
"escape-string-regexp": ["escape-string-regexp@2.0.0", "", {}, "sha512-UpzcLCXolUWcNu5HtVMHYdXJjArjsF9C0aNnquZYY4uW/Vu0miy5YoWvbV345HauVvcAUnpRuhMMcqTcGOY2+w=="],
"etag": ["etag@1.8.1", "", {}, "sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg=="],
"event-target-shim": ["event-target-shim@5.0.1", "", {}, "sha512-i/2XbnSz/uxRCU6+NdVJgKWDTM427+MqYbkQzD321DuCQJUqOuJKIA0IM2+W2xtYHdKOmZ4dR6fExsd4SXL+WQ=="],
"eventsource": ["eventsource@3.0.7", "", { "dependencies": { "eventsource-parser": "^3.0.1" } }, "sha512-CRT1WTyuQoD771GW56XEZFQ/ZoSfWid1alKGDYMmkt2yl8UXrVR4pspqWNEcqKvVIzg6PAltWjxcSSPrboA4iA=="],
"eventsource-parser": ["eventsource-parser@3.0.6", "", {}, "sha512-Vo1ab+QXPzZ4tCa8SwIHJFaSzy4R6SHf7BY79rFBDf0idraZWAkYrDjDj8uWaSm3S2TK+hJ7/t1CEmZ7jXw+pg=="],
"execa": ["execa@9.6.1", "", { "dependencies": { "@sindresorhus/merge-streams": "^4.0.0", "cross-spawn": "^7.0.6", "figures": "^6.1.0", "get-stream": "^9.0.0", "human-signals": "^8.0.1", "is-plain-obj": "^4.1.0", "is-stream": "^4.0.1", "npm-run-path": "^6.0.0", "pretty-ms": "^9.2.0", "signal-exit": "^4.1.0", "strip-final-newline": "^4.0.0", "yoctocolors": "^2.1.1" } }, "sha512-9Be3ZoN4LmYR90tUoVu2te2BsbzHfhJyfEiAVfz7N5/zv+jduIfLrV2xdQXOHbaD6KgpGdO9PRPM1Y4Q9QkPkA=="],
"express": ["express@5.2.1", "", { "dependencies": { "accepts": "^2.0.0", "body-parser": "^2.2.1", "content-disposition": "^1.0.0", "content-type": "^1.0.5", "cookie": "^0.7.1", "cookie-signature": "^1.2.1", "debug": "^4.4.0", "depd": "^2.0.0", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "etag": "^1.8.1", "finalhandler": "^2.1.0", "fresh": "^2.0.0", "http-errors": "^2.0.0", "merge-descriptors": "^2.0.0", "mime-types": "^3.0.0", "on-finished": "^2.4.1", "once": "^1.4.0", "parseurl": "^1.3.3", "proxy-addr": "^2.0.7", "qs": "^6.14.0", "range-parser": "^1.2.1", "router": "^2.2.0", "send": "^1.1.0", "serve-static": "^2.2.0", "statuses": "^2.0.1", "type-is": "^2.0.1", "vary": "^1.1.2" } }, "sha512-hIS4idWWai69NezIdRt2xFVofaF4j+6INOpJlVOLDO8zXGpUVEVzIYk12UUi2JzjEzWL3IOAxcTubgz9Po0yXw=="],
"express-rate-limit": ["express-rate-limit@8.3.2", "", { "dependencies": { "ip-address": "10.1.0" }, "peerDependencies": { "express": ">= 4.11" } }, "sha512-77VmFeJkO0/rvimEDuUC5H30oqUC4EyOhyGccfqoLebB0oiEYfM7nwPrsDsBL1gsTpwfzX8SFy2MT3TDyRq+bg=="],
"fast-deep-equal": ["fast-deep-equal@3.1.3", "", {}, "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q=="],
"fast-uri": ["fast-uri@3.1.0", "", {}, "sha512-iPeeDKJSWf4IEOasVVrknXpaBV0IApz/gp7S2bb7Z4Lljbl2MGJRqInZiUrQwV16cpzw/D3S5j5Julj/gT52AA=="],
"figures": ["figures@6.1.0", "", { "dependencies": { "is-unicode-supported": "^2.0.0" } }, "sha512-d+l3qxjSesT4V7v2fh+QnmFnUWv9lSpjarhShNTgBOfA0ttejbQUAlHLitbjkoRiDulW0OPoQPYIGhIC8ohejg=="],
"finalhandler": ["finalhandler@2.1.1", "", { "dependencies": { "debug": "^4.4.0", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "on-finished": "^2.4.1", "parseurl": "^1.3.3", "statuses": "^2.0.1" } }, "sha512-S8KoZgRZN+a5rNwqTxlZZePjT/4cnm0ROV70LedRHZ0p8u9fRID0hJUZQpkKLzro8LfmC8sx23bY6tVNxv8pQA=="],
"find-up": ["find-up@4.1.0", "", { "dependencies": { "locate-path": "^5.0.0", "path-exists": "^4.0.0" } }, "sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw=="],
"follow-redirects": ["follow-redirects@1.15.11", "", {}, "sha512-deG2P0JfjrTxl50XGCDyfI97ZGVCxIpfKYmfyrQ54n5FO/0gfIES8C/Psl6kWVDolizcaaxZJnTS0QSMxvnsBQ=="],
"form-data": ["form-data@4.0.5", "", { "dependencies": { "asynckit": "^0.4.0", "combined-stream": "^1.0.8", "es-set-tostringtag": "^2.1.0", "hasown": "^2.0.2", "mime-types": "^2.1.12" } }, "sha512-8RipRLol37bNs2bhoV67fiTEvdTrbMUYcFTiy3+wuuOnUog2QBHCZWXDRijWQfAkhBj2Uf5UnVaiWwA5vdd82w=="],
"form-data-encoder": ["form-data-encoder@1.7.2", "", {}, "sha512-qfqtYan3rxrnCk1VYaA4H+Ms9xdpPqvLZa6xmMgFvhO32x7/3J/ExcTd6qpxM0vH2GdMI+poehyBZvqfMTto8A=="],
"formdata-node": ["formdata-node@4.4.1", "", { "dependencies": { "node-domexception": "1.0.0", "web-streams-polyfill": "4.0.0-beta.3" } }, "sha512-0iirZp3uVDjVGt9p49aTaqjk84TrglENEDuqfdlZQ1roC9CWlPk6Avf8EEnZNcAqPonwkG35x4n3ww/1THYAeQ=="],
"forwarded": ["forwarded@0.2.0", "", {}, "sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow=="],
"fresh": ["fresh@2.0.0", "", {}, "sha512-Rx/WycZ60HOaqLKAi6cHRKKI7zxWbJ31MhntmtwMoaTeF7XFH9hhBp8vITaMidfljRQ6eYWCKkaTK+ykVJHP2A=="],
"function-bind": ["function-bind@1.1.2", "", {}, "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA=="],
"fuse.js": ["fuse.js@7.1.0", "", {}, "sha512-trLf4SzuuUxfusZADLINj+dE8clK1frKdmqiJNb1Es75fmI5oY6X2mxLVUciLLjxqw/xr72Dhy+lER6dGd02FQ=="],
"get-caller-file": ["get-caller-file@2.0.5", "", {}, "sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg=="],
"get-east-asian-width": ["get-east-asian-width@1.5.0", "", {}, "sha512-CQ+bEO+Tva/qlmw24dCejulK5pMzVnUOFOijVogd3KQs07HnRIgp8TGipvCCRT06xeYEbpbgwaCxglFyiuIcmA=="],
"get-intrinsic": ["get-intrinsic@1.3.0", "", { "dependencies": { "call-bind-apply-helpers": "^1.0.2", "es-define-property": "^1.0.1", "es-errors": "^1.3.0", "es-object-atoms": "^1.1.1", "function-bind": "^1.1.2", "get-proto": "^1.0.1", "gopd": "^1.2.0", "has-symbols": "^1.1.0", "hasown": "^2.0.2", "math-intrinsics": "^1.1.0" } }, "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ=="],
"get-proto": ["get-proto@1.0.1", "", { "dependencies": { "dunder-proto": "^1.0.1", "es-object-atoms": "^1.0.0" } }, "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g=="],
"get-stream": ["get-stream@9.0.1", "", { "dependencies": { "@sec-ant/readable-stream": "^0.4.1", "is-stream": "^4.0.1" } }, "sha512-kVCxPF3vQM/N0B1PmoqVUqgHP+EeVjmZSQn+1oCRPxd2P21P2F19lIgbR3HBosbB1PUhOAoctJnfEn2GbN2eZA=="],
"gopd": ["gopd@1.2.0", "", {}, "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg=="],
"graceful-fs": ["graceful-fs@4.2.11", "", {}, "sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ=="],
"has-flag": ["has-flag@4.0.0", "", {}, "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ=="],
"has-symbols": ["has-symbols@1.1.0", "", {}, "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ=="],
"has-tostringtag": ["has-tostringtag@1.0.2", "", { "dependencies": { "has-symbols": "^1.0.3" } }, "sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw=="],
"hasown": ["hasown@2.0.2", "", { "dependencies": { "function-bind": "^1.1.2" } }, "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ=="],
"highlight.js": ["highlight.js@11.11.1", "", {}, "sha512-Xwwo44whKBVCYoliBQwaPvtd/2tYFkRQtXDWj1nackaV2JPXx3L0+Jvd8/qCJ2p+ML0/XVkJ2q+Mr+UVdpJK5w=="],
"hono": ["hono@4.12.9", "", {}, "sha512-wy3T8Zm2bsEvxKZM5w21VdHDDcwVS1yUFFY6i8UobSsKfFceT7TOwhbhfKsDyx7tYQlmRM5FLpIuYvNFyjctiA=="],
"http-errors": ["http-errors@2.0.1", "", { "dependencies": { "depd": "~2.0.0", "inherits": "~2.0.4", "setprototypeof": "~1.2.0", "statuses": "~2.0.2", "toidentifier": "~1.0.1" } }, "sha512-4FbRdAX+bSdmo4AUFuS0WNiPz8NgFt+r8ThgNWmlrjQjt1Q7ZR9+zTlce2859x4KSXrwIsaeTqDoKQmtP8pLmQ=="],
"human-signals": ["human-signals@8.0.1", "", {}, "sha512-eKCa6bwnJhvxj14kZk5NCPc6Hb6BdsU9DZcOnmQKSnO1VKrfV0zCvtttPZUsBvjmNDn8rpcJfpwSYnHBjc95MQ=="],
"humanize-ms": ["humanize-ms@1.2.1", "", { "dependencies": { "ms": "^2.0.0" } }, "sha512-Fl70vYtsAFb/C06PTS9dZBo7ihau+Tu/DNCk/OyHhea07S+aeMWpFFkUaXRa8fI+ScZbEI8dfSxwY7gxZ9SAVQ=="],
"iconv-lite": ["iconv-lite@0.7.2", "", { "dependencies": { "safer-buffer": ">= 2.1.2 < 3.0.0" } }, "sha512-im9DjEDQ55s9fL4EYzOAv0yMqmMBSZp6G0VvFyTMPKWxiSBHUj9NW/qqLmXUwXrrM7AvqSlTCfvqRb0cM8yYqw=="],
"ignore": ["ignore@6.0.2", "", {}, "sha512-InwqeHHN2XpumIkMvpl/DCJVrAHgCsG5+cn1XlnLWGwtZBm8QJfSusItfrwx81CTp5agNZqpKU2J/ccC5nGT4A=="],
"inherits": ["inherits@2.0.4", "", {}, "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ=="],
"ip-address": ["ip-address@10.1.0", "", {}, "sha512-XXADHxXmvT9+CRxhXg56LJovE+bmWnEWB78LB83VZTprKTmaC5QfruXocxzTZ2Kl0DNwKuBdlIhjL8LeY8Sf8Q=="],
"ipaddr.js": ["ipaddr.js@1.9.1", "", {}, "sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g=="],
"is-fullwidth-code-point": ["is-fullwidth-code-point@3.0.0", "", {}, "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg=="],
"is-plain-obj": ["is-plain-obj@4.1.0", "", {}, "sha512-+Pgi+vMuUNkJyExiMBt5IlFoMyKnr5zhJ4Uspz58WOhBF5QoIZkFyNHIbBAtHwzVAgk5RtndVNsDRN61/mmDqg=="],
"is-promise": ["is-promise@4.0.0", "", {}, "sha512-hvpoI6korhJMnej285dSg6nu1+e6uxs7zG3BYAm5byqDsgJNWwxzM6z6iZiAgQR4TJ30JmBTOwqZUw3WlyH3AQ=="],
"is-stream": ["is-stream@4.0.1", "", {}, "sha512-Dnz92NInDqYckGEUJv689RbRiTSEHCQ7wOVeALbkOz999YpqT46yMRIGtSNl2iCL1waAZSx40+h59NV/EwzV/A=="],
"is-unicode-supported": ["is-unicode-supported@2.1.0", "", {}, "sha512-mE00Gnza5EEB3Ds0HfMyllZzbBrmLOX3vfWoj9A9PEnTfratQ/BcaJOuMhnkhjXvb2+FkY3VuHqtAGpTPmglFQ=="],
"isexe": ["isexe@2.0.0", "", {}, "sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw=="],
"jose": ["jose@6.2.2", "", {}, "sha512-d7kPDd34KO/YnzaDOlikGpOurfF0ByC2sEV4cANCtdqLlTfBlw2p14O/5d/zv40gJPbIQxfES3nSx1/oYNyuZQ=="],
"json-schema-traverse": ["json-schema-traverse@1.0.0", "", {}, "sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug=="],
"json-schema-typed": ["json-schema-typed@8.0.2", "", {}, "sha512-fQhoXdcvc3V28x7C7BMs4P5+kNlgUURe2jmUT1T//oBRMDrqy1QPelJimwZGo7Hg9VPV3EQV5Bnq4hbFy2vetA=="],
"locate-path": ["locate-path@5.0.0", "", { "dependencies": { "p-locate": "^4.1.0" } }, "sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g=="],
"lodash-es": ["lodash-es@4.17.23", "", {}, "sha512-kVI48u3PZr38HdYz98UmfPnXl2DXrpdctLrFLCd3kOx1xUkOmpFPx7gCWWM5MPkL/fD8zb+Ph0QzjGFs4+hHWg=="],
"lodash.debounce": ["lodash.debounce@4.0.8", "", {}, "sha512-FT1yDzDYEoYWhnSGnpE/4Kj1fLZkDFyqRb7fNt6FdYOSxlUWAtp42Eh6Wb0rGIv/m9Bgo7x4GhQbm5Ys4SG5ow=="],
"marked": ["marked@15.0.12", "", { "bin": "bin/marked.js" }, "sha512-8dD6FusOQSrpv9Z1rdNMdlSgQOIP880DHqnohobOmYLElGEqAL/JvxvuxZO16r4HtjTlfPRDC1hbvxC9dPN2nA=="],
"math-intrinsics": ["math-intrinsics@1.1.0", "", {}, "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g=="],
"media-typer": ["media-typer@1.1.0", "", {}, "sha512-aisnrDP4GNe06UcKFnV5bfMNPBUw4jsLGaWwWfnH3v02GnBuXX2MCVn5RbrWo0j3pczUilYblq7fQ7Nw2t5XKw=="],
"merge-descriptors": ["merge-descriptors@2.0.0", "", {}, "sha512-Snk314V5ayFLhp3fkUREub6WtjBfPdCPY1Ln8/8munuLuiYhsABgBVWsozAG+MWMbVEvcdcpbi9R7ww22l9Q3g=="],
"mime-db": ["mime-db@1.54.0", "", {}, "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ=="],
"mime-types": ["mime-types@3.0.2", "", { "dependencies": { "mime-db": "^1.54.0" } }, "sha512-Lbgzdk0h4juoQ9fCKXW4by0UJqj+nOOrI9MJ1sSj4nI8aI2eo1qmvQEie4VD1glsS250n15LsWsYtCugiStS5A=="],
"ms": ["ms@2.1.3", "", {}, "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="],
"negotiator": ["negotiator@1.0.0", "", {}, "sha512-8Ofs/AUQh8MaEcrlq5xOX0CQ9ypTF5dl78mjlMNfOK08fzpgTHQRQPBxcPlEtIw0yRpws+Zo/3r+5WRby7u3Gg=="],
"node-addon-api": ["node-addon-api@7.1.1", "", {}, "sha512-5m3bsyrjFWE1xf7nz7YXdN4udnVtXK6/Yfgn5qnahL6bCkf2yKt4k3nuTKAtT4r3IG8JNR2ncsIMdZuAzJjHQQ=="],
"node-domexception": ["node-domexception@1.0.0", "", {}, "sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ=="],
"node-fetch": ["node-fetch@2.7.0", "", { "dependencies": { "whatwg-url": "^5.0.0" }, "peerDependencies": { "encoding": "^0.1.0" }, "optionalPeers": ["encoding"] }, "sha512-c4FRfUm/dbcWZ7U+1Wq0AwCyFL+3nt2bEw05wfxSz+DWpWsitgmSgYmy2dQdWyKC1694ELPqMs/YzUSNozLt8A=="],
"node-pty": ["node-pty@1.1.0", "", { "dependencies": { "node-addon-api": "^7.1.0" } }, "sha512-20JqtutY6JPXTUnL0ij1uad7Qe1baT46lyolh2sSENDd4sTzKZ4nmAFkeAARDKwmlLjPx6XKRlwRUxwjOy+lUg=="],
"npm-run-path": ["npm-run-path@6.0.0", "", { "dependencies": { "path-key": "^4.0.0", "unicorn-magic": "^0.3.0" } }, "sha512-9qny7Z9DsQU8Ou39ERsPU4OZQlSTP47ShQzuKZ6PRXpYLtIFgl/DEBYEXKlvcEa+9tHVcK8CF81Y2V72qaZhWA=="],
"object-assign": ["object-assign@4.1.1", "", {}, "sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg=="],
"object-inspect": ["object-inspect@1.13.4", "", {}, "sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew=="],
"on-finished": ["on-finished@2.4.1", "", { "dependencies": { "ee-first": "1.1.1" } }, "sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg=="],
"once": ["once@1.4.0", "", { "dependencies": { "wrappy": "1" } }, "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w=="],
"p-limit": ["p-limit@2.3.0", "", { "dependencies": { "p-try": "^2.0.0" } }, "sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w=="],
"p-locate": ["p-locate@4.1.0", "", { "dependencies": { "p-limit": "^2.2.0" } }, "sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A=="],
"p-map": ["p-map@7.0.4", "", {}, "sha512-tkAQEw8ysMzmkhgw8k+1U/iPhWNhykKnSk4Rd5zLoPJCuJaGRPo6YposrZgaxHKzDHdDWWZvE/Sk7hsL2X/CpQ=="],
"p-try": ["p-try@2.2.0", "", {}, "sha512-R4nPAVTAU0B9D35/Gk3uJf/7XYbQcyohSKdvAxIRSNghFl4e71hVoGnBNQz9cWaXxO2I10KTC+3jMdvvoKw6dQ=="],
"parse-ms": ["parse-ms@4.0.0", "", {}, "sha512-TXfryirbmq34y8QBwgqCVLi+8oA3oWx2eAnSn62ITyEhEYaWRlVZ2DvMM9eZbMs/RfxPu/PK/aBLyGj4IrqMHw=="],
"parseurl": ["parseurl@1.3.3", "", {}, "sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ=="],
"path-exists": ["path-exists@4.0.0", "", {}, "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w=="],
"path-key": ["path-key@3.1.1", "", {}, "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q=="],
"path-to-regexp": ["path-to-regexp@8.4.1", "", {}, "sha512-fvU78fIjZ+SBM9YwCknCvKOUKkLVqtWDVctl0s7xIqfmfb38t2TT4ZU2gHm+Z8xGwgW+QWEU3oQSAzIbo89Ggw=="],
"picomatch": ["picomatch@4.0.4", "", {}, "sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A=="],
"pkce-challenge": ["pkce-challenge@5.0.1", "", {}, "sha512-wQ0b/W4Fr01qtpHlqSqspcj3EhBvimsdh0KlHhH8HRZnMsEa0ea2fTULOXOS9ccQr3om+GcGRk4e+isrZWV8qQ=="],
"pngjs": ["pngjs@5.0.0", "", {}, "sha512-40QW5YalBNfQo5yRYmiw7Yz6TKKVr3h6970B2YE+3fQpsWcrbj1PzJgxeJ19DRQjhMbKPIuMY8rFaXc8moolVw=="],
"pretty-ms": ["pretty-ms@9.3.0", "", { "dependencies": { "parse-ms": "^4.0.0" } }, "sha512-gjVS5hOP+M3wMm5nmNOucbIrqudzs9v/57bWRHQWLYklXqoXKrVfYW2W9+glfGsqtPgpiz5WwyEEB+ksXIx3gQ=="],
"proper-lockfile": ["proper-lockfile@4.1.2", "", { "dependencies": { "graceful-fs": "^4.2.4", "retry": "^0.12.0", "signal-exit": "^3.0.2" } }, "sha512-TjNPblN4BwAWMXU8s9AEz4JmQxnD1NNL7bNOY/AKUzyamc379FWASUhc/K1pL2noVb+XmZKLL68cjzLsiOAMaA=="],
"proxy-addr": ["proxy-addr@2.0.7", "", { "dependencies": { "forwarded": "0.2.0", "ipaddr.js": "1.9.1" } }, "sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg=="],
"proxy-from-env": ["proxy-from-env@2.1.0", "", {}, "sha512-cJ+oHTW1VAEa8cJslgmUZrc+sjRKgAKl3Zyse6+PV38hZe/V6Z14TbCuXcan9F9ghlz4QrFr2c92TNF82UkYHA=="],
"qrcode": ["qrcode@1.5.4", "", { "dependencies": { "dijkstrajs": "^1.0.1", "pngjs": "^5.0.0", "yargs": "^15.3.1" }, "bin": "bin/qrcode" }, "sha512-1ca71Zgiu6ORjHqFBDpnSMTR2ReToX4l1Au1VFLyVeBTFavzQnv5JxMFr3ukHVKpSrSA2MCk0lNJSykjUfz7Zg=="],
"qs": ["qs@6.15.0", "", { "dependencies": { "side-channel": "^1.1.0" } }, "sha512-mAZTtNCeetKMH+pSjrb76NAM8V9a05I9aBZOHztWy/UqcJdQYNsf59vrRKWnojAT9Y+GbIvoTBC++CPHqpDBhQ=="],
"range-parser": ["range-parser@1.2.1", "", {}, "sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg=="],
"raw-body": ["raw-body@3.0.2", "", { "dependencies": { "bytes": "~3.1.2", "http-errors": "~2.0.1", "iconv-lite": "~0.7.0", "unpipe": "~1.0.0" } }, "sha512-K5zQjDllxWkf7Z5xJdV0/B0WTNqx6vxG70zJE4N0kBs4LovmEYWJzQGxC9bS9RAKu3bgM40lrd5zoLJ12MQ5BA=="],
"react": ["react@19.2.4", "", {}, "sha512-9nfp2hYpCwOjAN+8TZFGhtWEwgvWHXqESH8qT89AT/lWklpLON22Lc8pEtnpsZz7VmawabSU0gCjnj8aC0euHQ=="],
"react-reconciler": ["react-reconciler@0.31.0", "", { "dependencies": { "scheduler": "^0.25.0" }, "peerDependencies": { "react": "^19.0.0" } }, "sha512-7Ob7Z+URmesIsIVRjnLoDGwBEG/tVitidU0nMsqX/eeJaLY89RISO/10ERe0MqmzuKUUB1rmY+h1itMbUHg9BQ=="],
"readdirp": ["readdirp@4.1.2", "", {}, "sha512-GDhwkLfywWL2s6vEjyhri+eXmfH6j1L7JE27WhqLeYzoh/A3DBaYGEj2H/HFZCn/kMfim73FXxEJTw06WtxQwg=="],
"require-directory": ["require-directory@2.1.1", "", {}, "sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q=="],
"require-from-string": ["require-from-string@2.0.2", "", {}, "sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw=="],
"require-main-filename": ["require-main-filename@2.0.0", "", {}, "sha512-NKN5kMDylKuldxYLSUfrbo5Tuzh4hd+2E8NPPX02mZtn1VuREQToYe/ZdlJy+J3uCpfaiGF05e7B8W0iXbQHmg=="],
"retry": ["retry@0.12.0", "", {}, "sha512-9LkiTwjUh6rT555DtE9rTX+BKByPfrMzEAtnlEtdEwr3Nkffwiihqe2bWADg+OQRjt9gl6ICdmB/ZFDCGAtSow=="],
"router": ["router@2.2.0", "", { "dependencies": { "debug": "^4.4.0", "depd": "^2.0.0", "is-promise": "^4.0.0", "parseurl": "^1.3.3", "path-to-regexp": "^8.0.0" } }, "sha512-nLTrUKm2UyiL7rlhapu/Zl45FwNgkZGaCpZbIHajDYgwlJCOzLSk+cIPAnsEqV955GjILJnKbdQC1nVPz+gAYQ=="],
"safer-buffer": ["safer-buffer@2.1.2", "", {}, "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg=="],
"scheduler": ["scheduler@0.25.0", "", {}, "sha512-xFVuu11jh+xcO7JOAGJNOXld8/TcEHK/4CituBUeUb5hqxJLj9YuemAEuvm9gQ/+pgXYfbQuqAkiYu+u7YEsNA=="],
"semver": ["semver@7.7.4", "", { "bin": "bin/semver.js" }, "sha512-vFKC2IEtQnVhpT78h1Yp8wzwrf8CM+MzKMHGJZfBtzhZNycRFnXsHk6E5TxIkkMsgNS7mdX3AGB7x2QM2di4lA=="],
"send": ["send@1.2.1", "", { "dependencies": { "debug": "^4.4.3", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "etag": "^1.8.1", "fresh": "^2.0.0", "http-errors": "^2.0.1", "mime-types": "^3.0.2", "ms": "^2.1.3", "on-finished": "^2.4.1", "range-parser": "^1.2.1", "statuses": "^2.0.2" } }, "sha512-1gnZf7DFcoIcajTjTwjwuDjzuz4PPcY2StKPlsGAQ1+YH20IRVrBaXSWmdjowTJ6u8Rc01PoYOGHXfP1mYcZNQ=="],
"serve-static": ["serve-static@2.2.1", "", { "dependencies": { "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "parseurl": "^1.3.3", "send": "^1.2.0" } }, "sha512-xRXBn0pPqQTVQiC8wyQrKs2MOlX24zQ0POGaj0kultvoOCstBQM5yvOhAVSUwOMjQtTvsPWoNCHfPGwaaQJhTw=="],
"set-blocking": ["set-blocking@2.0.0", "", {}, "sha512-KiKBS8AnWGEyLzofFfmvKwpdPzqiy16LvQfK3yv/fVH7Bj13/wl3JSR1J+rfgRE9q7xUJK4qvgS8raSOeLUehw=="],
"setprototypeof": ["setprototypeof@1.2.0", "", {}, "sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw=="],
"shebang-command": ["shebang-command@2.0.0", "", { "dependencies": { "shebang-regex": "^3.0.0" } }, "sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA=="],
"shebang-regex": ["shebang-regex@3.0.0", "", {}, "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A=="],
"side-channel": ["side-channel@1.1.0", "", { "dependencies": { "es-errors": "^1.3.0", "object-inspect": "^1.13.3", "side-channel-list": "^1.0.0", "side-channel-map": "^1.0.1", "side-channel-weakmap": "^1.0.2" } }, "sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw=="],
"side-channel-list": ["side-channel-list@1.0.0", "", { "dependencies": { "es-errors": "^1.3.0", "object-inspect": "^1.13.3" } }, "sha512-FCLHtRD/gnpCiCHEiJLOwdmFP+wzCmDEkc9y7NsYxeF4u7Btsn1ZuwgwJGxImImHicJArLP4R0yX4c2KCrMrTA=="],
"side-channel-map": ["side-channel-map@1.0.1", "", { "dependencies": { "call-bound": "^1.0.2", "es-errors": "^1.3.0", "get-intrinsic": "^1.2.5", "object-inspect": "^1.13.3" } }, "sha512-VCjCNfgMsby3tTdo02nbjtM/ewra6jPHmpThenkTYh8pG9ucZ/1P8So4u4FGBek/BjpOVsDCMoLA/iuBKIFXRA=="],
"side-channel-weakmap": ["side-channel-weakmap@1.0.2", "", { "dependencies": { "call-bound": "^1.0.2", "es-errors": "^1.3.0", "get-intrinsic": "^1.2.5", "object-inspect": "^1.13.3", "side-channel-map": "^1.0.1" } }, "sha512-WPS/HvHQTYnHisLo9McqBHOJk2FkHO/tlpvldyrnem4aeQp4hai3gythswg6p01oSoTl58rcpiFAjF2br2Ak2A=="],
"signal-exit": ["signal-exit@4.1.0", "", {}, "sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw=="],
"stack-utils": ["stack-utils@2.0.6", "", { "dependencies": { "escape-string-regexp": "^2.0.0" } }, "sha512-XlkWvfIm6RmsWtNJx+uqtKLS8eqFbxUg0ZzLXqY0caEy9l7hruX8IpiDnjsLavoBgqCCR71TqWO8MaXYheJ3RQ=="],
"statuses": ["statuses@2.0.2", "", {}, "sha512-DvEy55V3DB7uknRo+4iOGT5fP1slR8wQohVdknigZPMpMstaKJQWhwiYBACJE3Ul2pTnATihhBYnRhZQHGBiRw=="],
"string-width": ["string-width@7.2.0", "", { "dependencies": { "emoji-regex": "^10.3.0", "get-east-asian-width": "^1.0.0", "strip-ansi": "^7.1.0" } }, "sha512-tsaTIkKW9b4N+AEj+SVA+WhJzV7/zMhcSu78mLKWSk7cXMOSHsBKFWUs0fWwq8QyK3MgJBQRX6Gbi4kYbdvGkQ=="],
"strip-ansi": ["strip-ansi@7.2.0", "", { "dependencies": { "ansi-regex": "^6.2.2" } }, "sha512-yDPMNjp4WyfYBkHnjIRLfca1i6KMyGCtsVgoKe/z1+6vukgaENdgGBZt+ZmKPc4gavvEZ5OgHfHdrazhgNyG7w=="],
"strip-final-newline": ["strip-final-newline@4.0.0", "", {}, "sha512-aulFJcD6YK8V1G7iRB5tigAP4TsHBZZrOV8pjV++zdUwmeV8uzbY7yn6h9MswN62adStNZFuCIx4haBnRuMDaw=="],
"supports-color": ["supports-color@7.2.0", "", { "dependencies": { "has-flag": "^4.0.0" } }, "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw=="],
"supports-hyperlinks": ["supports-hyperlinks@3.2.0", "", { "dependencies": { "has-flag": "^4.0.0", "supports-color": "^7.0.0" } }, "sha512-zFObLMyZeEwzAoKCyu1B91U79K2t7ApXuQfo8OuxwXLDgcKxuwM+YvcbIhm6QWqz7mHUH1TVytR1PwVVjEuMig=="],
"toidentifier": ["toidentifier@1.0.1", "", {}, "sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA=="],
"tr46": ["tr46@0.0.3", "", {}, "sha512-N3WMsuqV66lT30CrXNbEjx4GEwlow3v6rr4mCcv6prnfwhS01rkgyFdjPNBYd9br7LpXV1+Emh01fHnq2Gdgrw=="],
"tree-kill": ["tree-kill@1.2.2", "", { "bin": "cli.js" }, "sha512-L0Orpi8qGpRG//Nd+H90vFB+3iHnue1zSSGmNOOCh1GLJ7rUKVwV2HvijphGQS2UmhUZewS9VgvxYIdgr+fG1A=="],
"type-fest": ["type-fest@4.41.0", "", {}, "sha512-TeTSQ6H5YHvpqVwBRcnLDCBnDOHWYu7IvGbHT6N8AOymcr9PJGjc1GTtiWZTYg0NCgYwvnYWEkVChQAr9bjfwA=="],
"type-is": ["type-is@2.0.1", "", { "dependencies": { "content-type": "^1.0.5", "media-typer": "^1.1.0", "mime-types": "^3.0.0" } }, "sha512-OZs6gsjF4vMp32qrCbiVSkrFmXtG/AZhY3t0iAMrMBiAZyV9oALtXO8hsrHbMXF9x6L3grlFuwW2oAz7cav+Gw=="],
"typescript": ["typescript@5.9.3", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw=="],
"undici": ["undici@7.24.6", "", {}, "sha512-Xi4agocCbRzt0yYMZGMA6ApD7gvtUFaxm4ZmeacWI4cZxaF6C+8I8QfofC20NAePiB/IcvZmzkJ7XPa471AEtA=="],
"undici-types": ["undici-types@6.21.0", "", {}, "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ=="],
"unicorn-magic": ["unicorn-magic@0.3.0", "", {}, "sha512-+QBBXBCvifc56fsbuxZQ6Sic3wqqc3WWaqxs58gvJrcOuN83HGTCwz3oS5phzU9LthRNE9VrJCFCLUgHeeFnfA=="],
"unpipe": ["unpipe@1.0.0", "", {}, "sha512-pjy2bYhSsufwWlKwPc+l3cN7+wuJlK6uz0YdJEOlQDbl6jo/YlPi4mb8agUkVC8BF7V8NuzeyPNqRksA3hztKQ=="],
"usehooks-ts": ["usehooks-ts@3.1.1", "", { "dependencies": { "lodash.debounce": "^4.0.8" }, "peerDependencies": { "react": "^16.8.0 || ^17 || ^18 || ^19 || ^19.0.0-rc" } }, "sha512-I4diPp9Cq6ieSUH2wu+fDAVQO43xwtulo+fKEidHUwZPnYImbtkTjzIJYcDcJqxgmX31GVqNFURodvcgHcW0pA=="],
"vary": ["vary@1.1.2", "", {}, "sha512-BNGbWLfd0eUPabhkXUVm0j8uuvREyTh5ovRa/dyow/BqAbZJyC+5fU+IzQOzmAKzYqYRAISoRhdQr3eIZ/PXqg=="],
"web-streams-polyfill": ["web-streams-polyfill@4.0.0-beta.3", "", {}, "sha512-QW95TCTaHmsYfHDybGMwO5IJIM93I/6vTRk+daHTWFPhwh+C8Cg7j7XyKrwrj8Ib6vYXe0ocYNrmzY4xAAN6ug=="],
"webidl-conversions": ["webidl-conversions@3.0.1", "", {}, "sha512-2JAn3z8AR6rjK8Sm8orRC0h/bcl/DqL7tRPdGZ4I1CjdF+EaMLmYxBHyXuKL849eucPFhvBoxMsflfOb8kxaeQ=="],
"whatwg-url": ["whatwg-url@5.0.0", "", { "dependencies": { "tr46": "~0.0.3", "webidl-conversions": "^3.0.0" } }, "sha512-saE57nupxk6v3HY35+jzBwYa0rKSy0XR8JSxZPwgLr7ys0IBzhGviA1/TUGJLmSVqs8pb9AnvICXEuOHLprYTw=="],
"which": ["which@2.0.2", "", { "dependencies": { "isexe": "^2.0.0" }, "bin": { "node-which": "bin/node-which" } }, "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA=="],
"which-module": ["which-module@2.0.1", "", {}, "sha512-iBdZ57RDvnOR9AGBhML2vFZf7h8vmBjhoaZqODJBFWHVtKkDmKuHai3cx5PgVMrX5YDNp27AofYbAwctSS+vhQ=="],
"wrap-ansi": ["wrap-ansi@9.0.2", "", { "dependencies": { "ansi-styles": "^6.2.1", "string-width": "^7.0.0", "strip-ansi": "^7.1.0" } }, "sha512-42AtmgqjV+X1VpdOfyTGOYRi0/zsoLqtXQckTmqTeybT+BDIbM/Guxo7x3pE2vtpr1ok6xRqM9OpBe+Jyoqyww=="],
"wrappy": ["wrappy@1.0.2", "", {}, "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="],
"ws": ["ws@8.20.0", "", { "peerDependencies": { "bufferutil": "^4.0.1", "utf-8-validate": ">=5.0.2" }, "optionalPeers": ["bufferutil", "utf-8-validate"] }, "sha512-sAt8BhgNbzCtgGbt2OxmpuryO63ZoDk/sqaB/znQm94T4fCEsy/yV+7CdC1kJhOU9lboAEU7R3kquuycDoibVA=="],
"y18n": ["y18n@4.0.3", "", {}, "sha512-JKhqTOwSrqNA1NY5lSztJ1GrBiUodLMmIZuLiDaMRJ+itFd+ABVE8XBjOvIWL+rSqNDC74LCSFmlb/U4UZ4hJQ=="],
"yaml": ["yaml@2.8.3", "", { "bin": "bin.mjs" }, "sha512-AvbaCLOO2Otw/lW5bmh9d/WEdcDFdQp2Z2ZUH3pX9U2ihyUY0nvLv7J6TrWowklRGPYbB/IuIMfYgxaCPg5Bpg=="],
"yargs": ["yargs@15.4.1", "", { "dependencies": { "cliui": "^6.0.0", "decamelize": "^1.2.0", "find-up": "^4.1.0", "get-caller-file": "^2.0.1", "require-directory": "^2.1.1", "require-main-filename": "^2.0.0", "set-blocking": "^2.0.0", "string-width": "^4.2.0", "which-module": "^2.0.0", "y18n": "^4.0.0", "yargs-parser": "^18.1.2" } }, "sha512-aePbxDmcYW++PaqBsJ+HYUFwCdv4LVvdnhBy78E57PIor8/OVvhMrADFFEDh8DHDFRv/O9i3lPhsENjO7QX0+A=="],
"yargs-parser": ["yargs-parser@18.1.3", "", { "dependencies": { "camelcase": "^5.0.0", "decamelize": "^1.2.0" } }, "sha512-o50j0JeToy/4K6OZcaQmW6lyXXKhq7csREXcDwk2omFPJEwUNOVtJKvmDr9EI1fAJZUyZcRF7kxGBWmRXudrCQ=="],
"yoctocolors": ["yoctocolors@2.1.2", "", {}, "sha512-CzhO+pFNo8ajLM2d2IW/R93ipy99LWjtwblvC1RsoSUMZgyLbYFr221TnSNT7GjGdYui6P459mw9JH/g/zW2ug=="],
"zod": ["zod@3.25.76", "", {}, "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ=="],
"zod-to-json-schema": ["zod-to-json-schema@3.25.2", "", { "peerDependencies": { "zod": "^3.25.28 || ^4" } }, "sha512-O/PgfnpT1xKSDeQYSCfRI5Gy3hPf91mKVDuYLUHZJMiDFptvP41MSnWofm8dnCm0256ZNfZIM7DSzuSMAFnjHA=="],
"@anthropic-ai/sdk/@types/node": ["@types/node@18.19.130", "", { "dependencies": { "undici-types": "~5.26.4" } }, "sha512-GRaXQx6jGfL8sKfaIDD6OupbIHBr9jv7Jnaml9tB7l4v068PAOXqfcujMMo5PhbIs6ggR1XODELqahT2R8v0fg=="],
"cliui/string-width": ["string-width@4.2.3", "", { "dependencies": { "emoji-regex": "^8.0.0", "is-fullwidth-code-point": "^3.0.0", "strip-ansi": "^6.0.1" } }, "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g=="],
"cliui/strip-ansi": ["strip-ansi@6.0.1", "", { "dependencies": { "ansi-regex": "^5.0.1" } }, "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A=="],
"cliui/wrap-ansi": ["wrap-ansi@6.2.0", "", { "dependencies": { "ansi-styles": "^4.0.0", "string-width": "^4.1.0", "strip-ansi": "^6.0.0" } }, "sha512-r6lPcBGxZXlIcymEu7InxDMhdW0KDxpLgoFLcguasxCaJ/SOIZwINatK9KY/tf+ZrlywOKU0UDj3ATXUBfxJXA=="],
"form-data/mime-types": ["mime-types@2.1.35", "", { "dependencies": { "mime-db": "1.52.0" } }, "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw=="],
"npm-run-path/path-key": ["path-key@4.0.0", "", {}, "sha512-haREypq7xkM7ErfgIyA0z+Bj4AGKlMSdlQE2jvJo6huWD1EdkKYV+G/T4nq0YEF2vgTT8kqMFKo1uHn950r4SQ=="],
"proper-lockfile/signal-exit": ["signal-exit@3.0.7", "", {}, "sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ=="],
"yargs/string-width": ["string-width@4.2.3", "", { "dependencies": { "emoji-regex": "^8.0.0", "is-fullwidth-code-point": "^3.0.0", "strip-ansi": "^6.0.1" } }, "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g=="],
"@anthropic-ai/sdk/@types/node/undici-types": ["undici-types@5.26.5", "", {}, "sha512-JlCMO+ehdEIKqlFxk6IfVoAUVmgz7cU7zD/h9XZ0qzeosSHmUJVOzSQvvYSYWXkFXC+IfLKSIffhv0sVZup6pA=="],
"cliui/string-width/emoji-regex": ["emoji-regex@8.0.0", "", {}, "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="],
"cliui/strip-ansi/ansi-regex": ["ansi-regex@5.0.1", "", {}, "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="],
"cliui/wrap-ansi/ansi-styles": ["ansi-styles@4.3.0", "", { "dependencies": { "color-convert": "^2.0.1" } }, "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg=="],
"form-data/mime-types/mime-db": ["mime-db@1.52.0", "", {}, "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg=="],
"yargs/string-width/emoji-regex": ["emoji-regex@8.0.0", "", {}, "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="],
"yargs/string-width/strip-ansi": ["strip-ansi@6.0.1", "", { "dependencies": { "ansi-regex": "^5.0.1" } }, "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A=="],
"yargs/string-width/strip-ansi/ansi-regex": ["ansi-regex@5.0.1", "", {}, "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="],
}
}

4
bunfig.toml Normal file

@@ -0,0 +1,4 @@
# bunfig.toml — Bun configuration for development mode
# The plugin intercepts `bun:bundle` imports → src/shims/bun-bundle.ts
preload = ["./scripts/bun-plugin-shims.ts"]

35
docker/.dockerignore Normal file

@@ -0,0 +1,35 @@
# NOTE: Docker reads .dockerignore from the build context root.
# The canonical copy lives at /.dockerignore — keep both in sync.
# Dependencies (rebuilt in container)
node_modules
# Git metadata
.git
.github
.gitignore
# Build output (rebuilt in container)
dist
# Env files — never bake secrets into the image
.env
.env.*
# Logs and debug
*.log
npm-debug.log*
bun-debug.log*
# Test artifacts
coverage
.nyc_output
# Editor / OS noise
.DS_Store
Thumbs.db
.vscode
.idea
# Docker context itself
docker

83
docker/Dockerfile Normal file

@@ -0,0 +1,83 @@
# ─────────────────────────────────────────────────────────────
# Claude Web Terminal — Production Container
# ─────────────────────────────────────────────────────────────
# Multi-stage build: compiles node-pty native module and bundles
# the Claude CLI, then copies artifacts into a slim runtime image.
#
# Usage:
# docker build -f docker/Dockerfile -t claude-web .
# docker run -p 3000:3000 -e ANTHROPIC_API_KEY=sk-ant-... claude-web
# ─────────────────────────────────────────────────────────────
# ── Stage 1: Build ────────────────────────────────────────────
FROM oven/bun:1 AS builder
WORKDIR /app
# Build tools required to compile node-pty's native C++ addon
RUN apt-get update && apt-get install -y --no-install-recommends \
python3 make g++ \
&& rm -rf /var/lib/apt/lists/*
# Copy manifests first for layer caching
COPY package.json bun.lockb* ./
# Install all deps (triggers node-pty native compilation)
RUN bun install --frozen-lockfile 2>/dev/null || bun install
# Copy source tree
COPY . .
# Bundle the Claude CLI (produces dist/cli.mjs)
RUN bun run build:prod
# ── Stage 2: Runtime ──────────────────────────────────────────
FROM oven/bun:1 AS runtime
WORKDIR /app
# curl for health checks; no build tools needed at runtime
RUN apt-get update && apt-get install -y --no-install-recommends \
curl \
&& rm -rf /var/lib/apt/lists/*
# Non-root user that PTY sessions will run under
RUN groupadd -r claude && useradd -r -g claude -m -d /home/claude claude
# Compiled node_modules (includes native node-pty binary)
COPY --from=builder /app/node_modules ./node_modules
# Bundled Claude CLI
COPY --from=builder /app/dist ./dist
# PTY server source (bun runs TypeScript natively)
COPY --from=builder /app/src/server ./src/server
# TypeScript config needed for bun's module resolution
COPY --from=builder /app/tsconfig.json ./tsconfig.json
# Thin wrapper so the PTY server can exec `claude` as a subprocess
RUN printf '#!/bin/sh\nexec bun /app/dist/cli.mjs "$@"\n' \
> /usr/local/bin/claude && chmod +x /usr/local/bin/claude
# Entrypoint script
COPY docker/entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# Allow the claude user to write its config into its home dir
RUN chown -R claude:claude /home/claude
# ── Defaults ──────────────────────────────────────────────────
ENV NODE_ENV=production \
PORT=3000 \
MAX_SESSIONS=5 \
CLAUDE_BIN=claude
EXPOSE 3000
USER claude
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
CMD curl -f http://localhost:${PORT:-3000}/health || exit 1
ENTRYPOINT ["/entrypoint.sh"]

28
docker/docker-compose.yml Normal file

@@ -0,0 +1,28 @@
services:
claude-web:
build:
context: ..
dockerfile: docker/Dockerfile
ports:
- "${PORT:-3000}:3000"
environment:
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
- AUTH_TOKEN=${AUTH_TOKEN:-}
- MAX_SESSIONS=${MAX_SESSIONS:-5}
- ALLOWED_ORIGINS=${ALLOWED_ORIGINS:-}
volumes:
# Persist Claude's config and session data across restarts
- claude-data:/home/claude/.claude
tmpfs:
# PTY processes write temp files here; no persistent storage needed
- /tmp:mode=1777
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 5s
retries: 3
start_period: 10s
volumes:
claude-data:

28
docker/entrypoint.sh Normal file

@@ -0,0 +1,28 @@
#!/bin/sh
set -e
# ── Validate required env vars ────────────────────────────────
if [ -z "$ANTHROPIC_API_KEY" ]; then
echo "ERROR: ANTHROPIC_API_KEY is not set." >&2
echo "" >&2
echo " docker run -p 3000:3000 -e ANTHROPIC_API_KEY=sk-ant-... claude-web" >&2
echo "" >&2
echo " Or via docker-compose with a .env file:" >&2
echo " ANTHROPIC_API_KEY=sk-ant-... docker-compose up" >&2
exit 1
fi
# The API key is forwarded to child PTY processes via process.env,
# so the claude CLI will pick it up automatically — no config file needed.
echo "Claude Web Terminal starting on port ${PORT:-3000}..."
if [ -n "$AUTH_TOKEN" ]; then
echo " Auth token protection: enabled"
fi
if [ -n "$ALLOWED_ORIGINS" ]; then
echo " Allowed origins: $ALLOWED_ORIGINS"
fi
echo " Max sessions: ${MAX_SESSIONS:-5}"
# Hand off to the PTY WebSocket server
exec bun /app/src/server/web/pty-server.ts

224
docs/architecture.md Normal file

@@ -0,0 +1,224 @@
# Architecture
> Deep-dive into how Claude Code is structured internally.
---
## High-Level Overview
Claude Code is a terminal-native AI coding assistant built as a single-binary CLI. The architecture follows a pipeline model:
```
User Input → CLI Parser → Query Engine → LLM API → Tool Execution Loop → Terminal UI
```
The entire UI layer is built with **React + Ink** (React for the terminal), making it a fully reactive CLI application with components, hooks, state management, and all the patterns you'd expect in a React web app — just rendered to the terminal.
---
## Core Pipeline
### 1. Entrypoint (`src/main.tsx`)
The CLI parser is built with [Commander.js](https://github.com/tj/commander.js) (`@commander-js/extra-typings`). On startup, it:
- Fires parallel prefetch side-effects (MDM settings, Keychain, API preconnect) before heavy module imports
- Parses CLI arguments and flags
- Initializes the React/Ink renderer
- Hands off to the REPL launcher (`src/replLauncher.tsx`)
### 2. Initialization (`src/entrypoints/`)
| File | Role |
|------|------|
| `cli.tsx` | CLI session orchestration — the main path from launch to REPL |
| `init.ts` | Config, telemetry, OAuth, MDM policy initialization |
| `mcp.ts` | MCP server mode entrypoint (Claude Code as an MCP server) |
| `sdk/` | Agent SDK — programmatic API for embedding Claude Code |
Startup performs parallel initialization: MDM policy reads, Keychain prefetch, feature flag checks, then core init.
### 3. Query Engine (`src/QueryEngine.ts`, ~46K lines)
The heart of Claude Code. Handles:
- **Streaming responses** from the Anthropic API
- **Tool-call loops** — when the LLM requests a tool, execute it and feed the result back
- **Thinking mode** — extended thinking with budget management
- **Retry logic** — automatic retries with backoff for transient failures
- **Token counting** — tracks input/output tokens and cost per turn
- **Context management** — manages conversation history and context windows
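The tool-call loop above can be reduced to a short sketch. This is illustrative only, not the actual `QueryEngine` code: `fakeModel`, the message format, and the `add` tool are hypothetical stand-ins for the streaming API client and real tool registry.

```typescript
// Minimal sketch of an LLM tool-call loop (illustrative, not the real QueryEngine).
type ToolCall = { tool: string; input: any };
type ModelTurn = { text?: string; toolCall?: ToolCall };

// Hypothetical stand-in for the streaming API client.
function fakeModel(history: string[]): ModelTurn {
  // First turn: the "model" requests a tool; once it sees a result, it answers.
  if (!history.some((m) => m.startsWith("tool_result:"))) {
    return { toolCall: { tool: "add", input: { a: 2, b: 3 } } };
  }
  const last = history[history.length - 1];
  return { text: `The sum is ${last.replace("tool_result:", "")}` };
}

const tools: Record<string, (input: any) => string> = {
  add: ({ a, b }) => String(a + b),
};

function runQuery(prompt: string): string {
  const history = [`user:${prompt}`];
  for (let turn = 0; turn < 10; turn++) {          // hard cap on loop iterations
    const reply = fakeModel(history);
    if (reply.toolCall) {
      const result = tools[reply.toolCall.tool](reply.toolCall.input);
      history.push(`tool_result:${result}`);       // feed the result back to the model
      continue;
    }
    return reply.text ?? "";
  }
  throw new Error("tool loop did not terminate");
}

console.log(runQuery("what is 2 + 3?")); // → "The sum is 5"
```

The real loop adds streaming, retries, token accounting, and permission prompts around the same execute-and-feed-back core.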
### 4. Tool System (`src/Tool.ts` + `src/tools/`)
Every capability Claude can invoke is a **tool**. Each tool is self-contained with:
- **Input schema** (Zod validation)
- **Permission model** (what needs user approval)
- **Execution logic** (the actual implementation)
- **UI components** (how invocation/results render in the terminal)
Tools are registered in `src/tools.ts` and discovered by the Query Engine during tool-call loops.
See [Tools Reference](tools.md) for the complete catalog.
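As a rough sketch of that shape (field names here are illustrative, not the exact `src/Tool.ts` interface, and a plain validator function stands in for the Zod schema):

```typescript
// Illustrative tool shape — field names approximate the idea, not src/Tool.ts itself.
interface Tool<I, O> {
  name: string;
  validateInput(raw: unknown): I;       // Zod schema in the real code
  needsPermission(input: I): boolean;   // permission model
  call(input: I): Promise<O>;           // execution logic
  renderResult(output: O): string;      // terminal UI (React/Ink components in the real code)
}

const echoTool: Tool<{ text: string }, string> = {
  name: "Echo",
  validateInput(raw) {
    if (typeof raw !== "object" || raw === null || typeof (raw as any).text !== "string") {
      throw new Error("Echo: expected { text: string }");
    }
    return raw as { text: string };
  },
  needsPermission: () => false,         // read-only: no user approval required
  call: async ({ text }) => text.toUpperCase(),
  renderResult: (out) => `result: ${out}`,
};

const input = echoTool.validateInput({ text: "hello" });
echoTool.call(input).then((out) => console.log(echoTool.renderResult(out))); // result: HELLO
```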
### 5. Command System (`src/commands.ts` + `src/commands/`)
User-facing slash commands (`/commit`, `/review`, `/mcp`, etc.) that can be typed in the REPL. Three types:
| Type | Description | Example |
|------|-------------|---------|
| **PromptCommand** | Sends a formatted prompt to the LLM with injected tools | `/review`, `/commit` |
| **LocalCommand** | Runs in-process, returns plain text | `/cost`, `/version` |
| **LocalJSXCommand** | Runs in-process, returns React JSX | `/doctor`, `/install` |
Commands are registered in `src/commands.ts` and invoked via `/command-name` in the REPL.
See [Commands Reference](commands.md) for the complete catalog.
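The three kinds can be sketched as a discriminated union. These type and field names are hypothetical, not the real `src/commands.ts` shapes; `LocalJSXCommand` is left out so the sketch stays runnable without React.

```typescript
// Rough sketch of the command kinds (names invented for illustration).
type PromptCommand = { type: "prompt"; name: string; getPrompt(args: string): string };
type LocalCommand  = { type: "local";  name: string; run(args: string): string };
// A LocalJSXCommand would return React JSX instead of a string.

const versionCommand: LocalCommand = {
  type: "local",
  name: "version",
  run: () => "claude-code v0.0.0 (sketch)",
};

const reviewCommand: PromptCommand = {
  type: "prompt",
  name: "review",
  getPrompt: (args) => `Review the following changes and flag issues:\n${args}`,
};

// Registry keyed by slash-command name.
const registry = new Map<string, PromptCommand | LocalCommand>();
for (const c of [versionCommand, reviewCommand] as (PromptCommand | LocalCommand)[]) {
  registry.set(c.name, c);
}

const cmd = registry.get("version")!;
if (cmd.type === "local") console.log(cmd.run("")); // LocalCommand runs in-process
```

Dispatching on `cmd.type` is what decides whether `/foo` runs in-process or becomes an LLM prompt with injected tools.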
---
## State Management
Claude Code uses a **React context + custom store** pattern:
| Component | Location | Purpose |
|-----------|----------|---------|
| `AppState` | `src/state/AppStateStore.ts` | Global mutable state object |
| Context Providers | `src/context/` | React context for notifications, stats, FPS |
| Selectors | `src/state/` | Derived state functions |
| Change Observers | `src/state/onChangeAppState.ts` | Side-effects on state changes |
The `AppState` object is passed into tool contexts, giving tools access to conversation history, settings, and runtime state.
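The store-plus-observers pattern reduces to a few lines. This is a hypothetical minimal shape, not the real `AppStateStore`; the `AppState` fields here are invented for illustration.

```typescript
// Minimal sketch of a mutable store with change observers (shape is hypothetical).
type AppState = { model: string; totalCostUSD: number };
type Observer = (next: AppState, prev: AppState) => void;

function createStore(initial: AppState) {
  let state = initial;
  const observers: Observer[] = [];
  return {
    get: () => state,
    // Analogue of onChangeAppState: side-effects fire after every update.
    subscribe: (fn: Observer) => { observers.push(fn); },
    update: (patch: Partial<AppState>) => {
      const prev = state;
      state = { ...state, ...patch };
      observers.forEach((fn) => fn(state, prev));
    },
  };
}

const store = createStore({ model: "claude", totalCostUSD: 0 });
store.subscribe((next, prev) => {
  if (next.totalCostUSD !== prev.totalCostUSD) {
    console.log(`cost changed: $${next.totalCostUSD.toFixed(2)}`);
  }
});
store.update({ totalCostUSD: 0.03 }); // cost changed: $0.03
```

In Claude Code the React layer reads this state through context providers and selectors, while observers handle non-UI side-effects.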
---
## UI Layer
### Components (`src/components/`, ~140 components)
- Functional React components using Ink primitives (`Box`, `Text`, `useInput()`)
- Styled with [Chalk](https://github.com/chalk/chalk) for terminal colors
- React Compiler enabled for optimized re-renders
- Design system primitives in `src/components/design-system/`
### Screens (`src/screens/`)
Full-screen UI modes:
| Screen | Purpose |
|--------|---------|
| `REPL.tsx` | Main interactive REPL (the default screen) |
| `Doctor.tsx` | Environment diagnostics (`/doctor`) |
| `ResumeConversation.tsx` | Session restore (`/resume`) |
### Hooks (`src/hooks/`, ~80 hooks)
Standard React hooks pattern. Notable categories:
- **Permission hooks** — `useCanUseTool`, `src/hooks/toolPermission/`
- **IDE integration** — `useIDEIntegration`, `useIdeConnectionStatus`, `useDiffInIDE`
- **Input handling** — `useTextInput`, `useVimInput`, `usePasteHandler`, `useInputBuffer`
- **Session management** — `useSessionBackgrounding`, `useRemoteSession`, `useAssistantHistory`
- **Plugin/skill hooks** — `useManagePlugins`, `useSkillsChange`
- **Notification hooks** — `src/hooks/notifs/` (rate limits, deprecation warnings, etc.)
---
## Configuration & Schemas
### Config Schemas (`src/schemas/`)
Zod v4-based schemas for all configuration:
- User settings
- Project-level settings
- Organization/enterprise policies
- Permission rules
### Migrations (`src/migrations/`)
Handles config format changes between versions — reads old configs and transforms them to the current schema.
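A versioned migration chain of this kind typically maps each schema version to a transform and applies them in order. The version numbers and key names below are hypothetical; they only illustrate the read-old-write-current flow.

```typescript
type RawConfig = Record<string, unknown> & { version: number }

const migrations: Record<number, (c: RawConfig) => RawConfig> = {
  // v1 → v2: rename a legacy key (hypothetical example)
  1: (c) => {
    const { allowTools, ...rest } = c as any
    return { ...rest, version: 2, allowedTools: allowTools ?? [] }
  },
  // v2 → v3: wrap flat permission strings into rule objects (hypothetical example)
  2: (c) => ({
    ...c,
    version: 3,
    allowedTools: ((c as any).allowedTools as string[]).map((pattern) => ({ pattern })),
  }),
}

function migrate(config: RawConfig, target: number): RawConfig {
  let current = config
  while (current.version < target) {
    const step = migrations[current.version]
    if (!step) throw new Error(`no migration from v${current.version}`)
    current = step(current)
  }
  return current
}
```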
---
## Build System
### Bun Runtime
Claude Code runs on [Bun](https://bun.sh) (not Node.js). Key implications:
- Native JSX/TSX support without a transpilation step
- `bun:bundle` feature flags for dead-code elimination
- ES modules with `.js` extensions (Bun convention)
### Feature Flags (Dead Code Elimination)
```typescript
import { feature } from 'bun:bundle'
// Code inside inactive feature flags is completely stripped at build time
if (feature('VOICE_MODE')) {
const voiceCommand = require('./commands/voice/index.js').default
}
```
Notable flags:
| Flag | Feature |
|------|---------|
| `PROACTIVE` | Proactive agent mode (autonomous actions) |
| `KAIROS` | Kairos subsystem |
| `BRIDGE_MODE` | IDE bridge integration |
| `DAEMON` | Background daemon mode |
| `VOICE_MODE` | Voice input/output |
| `AGENT_TRIGGERS` | Triggered agent actions |
| `MONITOR_TOOL` | Monitoring tool |
| `COORDINATOR_MODE` | Multi-agent coordinator |
| `WORKFLOW_SCRIPTS` | Workflow automation scripts |
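Outside a Bun bundle, a `feature()` shim can fall back to environment variables, which is how the bridge docs below describe `src/shims/bun-bundle.ts` behaving (e.g. `BRIDGE_MODE` reads `CLAUDE_CODE_BRIDGE_MODE` and defaults to false). The general `CLAUDE_CODE_${flag}` mapping is an assumption for illustration.

```typescript
type Env = Record<string, string | undefined>

// Default-off flag check: a flag is on only when its env var is explicitly truthy.
function feature(flag: string, env: Env = (globalThis as any).process?.env ?? {}): boolean {
  const value = env[`CLAUDE_CODE_${flag}`]
  return value === 'true' || value === '1'
}
```

Because the real shim is evaluated at bundle time, inactive branches are stripped entirely rather than merely skipped at runtime.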
### Lazy Loading
Heavy modules are deferred via dynamic `import()` until first use:
- OpenTelemetry (~400KB)
- gRPC (~700KB)
- Other optional dependencies
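A memoized lazy loader in this spirit fetches the module on first use and caches the promise for every later call (note that in this sketch a failed load stays cached too). The `node:path` stand-in below is illustrative; real call sites import their actual heavy dependencies.

```typescript
function lazy<T>(load: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined
  // ??= means load() runs at most once; later calls reuse the same promise.
  return () => (cached ??= load())
}

// Example: defer a (stand-in) module until first use.
const loadPath = lazy(() => import('node:path'))
```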
---
## Error Handling & Telemetry
### Telemetry (`src/services/analytics/`)
- [GrowthBook](https://www.growthbook.io/) for feature flags and A/B testing
- [OpenTelemetry](https://opentelemetry.io/) for distributed tracing and metrics
- Custom event tracking for usage analytics
### Cost Tracking (`src/cost-tracker.ts`)
Tracks token usage and estimated cost per conversation turn. Accessible via the `/cost` command.
### Diagnostics (`/doctor` command)
The `Doctor.tsx` screen runs environment checks: API connectivity, authentication, tool availability, MCP server status, and more.
---
## Concurrency Model
Claude Code uses a **single-threaded event loop** (Bun/Node.js model) with:
- Async/await for I/O operations
- React's concurrent rendering for UI updates
- Web Workers or child processes for CPU-intensive tasks (gRPC, etc.)
- Tool concurrency safety — each tool declares `isConcurrencySafe()` to indicate if it can run in parallel with other tools
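The scheduling rule implied by `isConcurrencySafe()` can be sketched as: run a batch in parallel only when every tool in it declares itself safe, otherwise fall back to serial execution. This is an illustrative reduction; the real loop in `QueryEngine.ts` is more involved.

```typescript
interface RunnableTool<T> {
  isConcurrencySafe(): boolean
  run(): Promise<T>
}

async function runBatch<T>(tools: RunnableTool<T>[]): Promise<T[]> {
  if (tools.every((t) => t.isConcurrencySafe())) {
    return Promise.all(tools.map((t) => t.run())) // e.g. parallel file reads
  }
  const results: T[] = []
  for (const t of tools) results.push(await t.run()) // e.g. writes stay serial
  return results
}
```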
---
## See Also
- [Tools Reference](tools.md) — Complete catalog of all 40 agent tools
- [Commands Reference](commands.md) — Complete catalog of all slash commands
- [Subsystems Guide](subsystems.md) — Bridge, MCP, permissions, skills, plugins, and more
- [Exploration Guide](exploration-guide.md) — How to navigate this codebase

---

**File: `docs/bridge.md`**
# Bridge Layer (VS Code / JetBrains IDE Integration)
## Architecture Overview
The bridge (`src/bridge/`, ~31 files) connects Claude Code CLI sessions to
remote IDE extensions (VS Code, JetBrains) and the claude.ai web UI. It is
gated behind `feature('BRIDGE_MODE')` which defaults to `false`.
### Protocols
The bridge uses **two transport generations**:
| Version | Read Path | Write Path | Negotiation |
|---------|-----------|------------|-------------|
| **v1 (env-based)** | WebSocket to Session-Ingress (`ws(s)://.../v1/session_ingress/ws/{sessionId}`) | HTTP POST to Session-Ingress | Environments API poll/ack/dispatch |
| **v2 (env-less)** | SSE stream via `SSETransport` | `CCRClient``/worker/*` endpoints | Direct `POST /v1/code/sessions/{id}/bridge` → worker JWT |
Both are wrapped behind the `ReplBridgeTransport` interface (`replBridgeTransport.ts`).
The v1 path: register environment → poll for work → acknowledge → spawn session.
The v2 path: create session → POST `/bridge` for JWT → SSE + CCRClient directly.
### Authentication
1. **OAuth tokens** — claude.ai subscription required (`isClaudeAISubscriber()`)
2. **JWT** — Session-Ingress tokens (`sk-ant-si-` prefixed) with `exp` claims.
`jwtUtils.ts` decodes and schedules proactive refresh before expiry.
3. **Trusted Device token**`X-Trusted-Device-Token` header for elevated
security tier sessions. Enrolled via `trustedDevice.ts`.
4. **Environment secret** — base64url-encoded `WorkSecret` containing
`session_ingress_token`, `api_base_url`, git sources, auth tokens.
Dev override: `CLAUDE_BRIDGE_OAUTH_TOKEN` and `CLAUDE_BRIDGE_BASE_URL`
(ant-only, `process.env.USER_TYPE === 'ant'`).
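The proactive-refresh math that `jwtUtils.ts` is described as doing (decode `exp`, refresh before expiry) amounts to a few lines. The safety margin and helper names are assumptions; `Buffer` assumes a Node/Bun runtime.

```typescript
// JWTs use base64url (RFC 7515): '-'/'_' instead of '+'/'/', no padding.
function base64urlDecode(s: string): string {
  const b64 = s.replace(/-/g, '+').replace(/_/g, '/')
  return Buffer.from(b64, 'base64').toString('utf8')
}

// Milliseconds until (expiry minus safety margin); <= 0 means refresh now.
function msUntilRefresh(jwt: string, marginMs = 60_000, now = Date.now()): number {
  const payload = JSON.parse(base64urlDecode(jwt.split('.')[1]))
  return payload.exp * 1000 - marginMs - now // exp is in seconds per JWT spec
}
```

A refresh scheduler then just arms a timer for `msUntilRefresh(...)` and swaps in the new token when it fires.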
### Message Flow (IDE ↔ CLI)
```
IDE / claude.ai ──WebSocket/SSE──→ Session-Ingress ──→ CLI (replBridge)
←── POST / CCRClient writes ──── Session-Ingress ←── CLI
```
**Inbound** (server → CLI):
- `user` messages (prompts from web UI) → `handleIngressMessage()` → enqueued to REPL
- `control_request` (initialize, set_model, interrupt, set_permission_mode, set_max_thinking_tokens)
- `control_response` (permission decisions from IDE)
**Outbound** (CLI → server):
- `assistant` messages (Claude's responses)
- `user` messages (echoed for sync)
- `result` messages (turn completion)
- System events, tool starts, activities
Dedup: `BoundedUUIDSet` tracks recent posted/inbound UUIDs to reject echoes
and re-deliveries.
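A bounded dedup set like this remembers the most recent N ids and evicts the oldest first. The class below is an illustrative stand-in for `BoundedUUIDSet`, relying on the fact that JavaScript `Set`s iterate in insertion order.

```typescript
class BoundedIdSet {
  private readonly ids = new Set<string>()
  constructor(private readonly capacity: number) {}

  /** Returns true if the id was new (i.e. the message should be processed). */
  add(id: string): boolean {
    if (this.ids.has(id)) return false // echo / re-delivery: drop it
    if (this.ids.size >= this.capacity) {
      const oldest = this.ids.values().next().value! // insertion order = oldest first
      this.ids.delete(oldest)
    }
    this.ids.add(id)
    return true
  }
}
```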
### Lifecycle
1. **Entitlement check**: `isBridgeEnabled()` / `isBridgeEnabledBlocking()`
GrowthBook gate `tengu_ccr_bridge` + OAuth subscriber check
2. **Session creation**: `createBridgeSession()` → POST to API
3. **Transport init**: v1 `HybridTransport` or v2 `SSETransport` + `CCRClient`
4. **Message pump**: Read inbound via transport, write outbound via batch
5. **Token refresh**: Proactive JWT refresh via `createTokenRefreshScheduler()`
6. **Teardown**: `teardown()` → flush pending → close transport → archive session
Spawn modes for `claude remote-control`:
- `single-session`: One session in cwd, bridge tears down when it ends
- `worktree`: Persistent server, each session gets an isolated git worktree
- `same-dir`: Persistent server, sessions share cwd
### Key Types
- `BridgeConfig` — Full bridge configuration (dir, auth, URLs, spawn mode, timeouts)
- `WorkSecret` — Decoded work payload (token, API URL, git sources, MCP config)
- `SessionHandle` — Running session (kill, activities, stdin, token update)
- `ReplBridgeHandle` — REPL bridge API (write messages, control requests, teardown)
- `BridgeState``'ready' | 'connected' | 'reconnecting' | 'failed'`
- `SpawnMode``'single-session' | 'worktree' | 'same-dir'`
---
## Feature Gate Analysis
### Must Work (currently works correctly)
The `feature('BRIDGE_MODE')` gate in `src/shims/bun-bundle.ts` defaults to
`false` (reads `CLAUDE_CODE_BRIDGE_MODE` env var). All critical code paths
are properly guarded:
| Location | Guard |
|----------|-------|
| `src/entrypoints/cli.tsx:112` | `feature('BRIDGE_MODE') && args[0] === 'remote-control'` |
| `src/main.tsx:2246` | `feature('BRIDGE_MODE') && remoteControlOption !== undefined` |
| `src/main.tsx:3866` | `if (feature('BRIDGE_MODE'))` (Commander subcommand) |
| `src/hooks/useReplBridge.tsx:79-88` | All `useAppState` calls gated by `feature('BRIDGE_MODE')` ternary |
| `src/hooks/useReplBridge.tsx:99` | `useEffect` body gated by `feature('BRIDGE_MODE')` |
| `src/components/PromptInput/PromptInputFooter.tsx:160` | `if (!feature('BRIDGE_MODE')) return null` |
| `src/components/Settings/Config.tsx:930` | `feature('BRIDGE_MODE') && isBridgeEnabled()` spread |
| `src/tools/BriefTool/upload.ts:99` | `if (feature('BRIDGE_MODE'))` |
| `src/tools/ConfigTool/supportedSettings.ts:153` | `feature('BRIDGE_MODE')` spread |
### Can Defer (full bridge functionality)
All of the following are behind the feature gate and inactive:
- `runBridgeLoop()` — Full bridge orchestration in `bridgeMain.ts`
- `initReplBridge()` — REPL bridge initialization
- `initBridgeCore()` / `initEnvLessBridgeCore()` — Transport negotiation
- `createBridgeApiClient()` — Environments API calls
- `BridgeUI` — Bridge status display and QR codes
- Token refresh scheduling
- Multi-session management (worktree mode)
- Permission delegation to IDE
### Won't Break
Static imports of bridge modules from outside `src/bridge/` do NOT crash because:
1. **All bridge files exist** — they're in the repo, so imports resolve.
2. **No side effects at import time** — bridge modules define functions/types
but don't execute bridge logic on import.
3. **Runtime guards** — Functions like `isBridgeEnabled()` return `false`
when `feature('BRIDGE_MODE')` is false. `getReplBridgeHandle()` returns
`null`. `useReplBridge` short-circuits via ternary operators.
Files with unguarded static imports (safe because files exist):
- `src/hooks/useReplBridge.tsx` — imports types and utils from bridge
- `src/components/Settings/Config.tsx` — imports `isBridgeEnabled` (returns false)
- `src/components/PromptInput/PromptInputFooter.tsx` — early-returns null
- `src/tools/SendMessageTool/SendMessageTool.ts``getReplBridgeHandle()` returns null
- `src/tools/BriefTool/upload.ts` — guarded at call site
- `src/commands/logout/logout.tsx``clearTrustedDeviceTokenCache` is a no-op
---
## Bridge Stub
Created `src/bridge/stub.ts` with:
- `isBridgeAvailable()` → always returns `false`
- `noopBridgeHandle` — silent no-op `ReplBridgeHandle`
- `noopBridgeLogger` — silent no-op `BridgeLogger`
Available for any future code that needs a safe fallback when bridge is off.
---
## Bridge Activation (Future Work)
To enable the bridge:
### 1. Environment Variable
```bash
export CLAUDE_CODE_BRIDGE_MODE=true
```
### 2. Authentication Requirements
- Must be logged in to claude.ai with an active subscription
(`isClaudeAISubscriber()` must return `true`)
- OAuth tokens obtained via `claude auth login` (needs `user:profile` scope)
- GrowthBook gate `tengu_ccr_bridge` must be enabled for the user's org
### 3. IDE Extension
- VS Code: Claude Code extension (connects via the bridge's Session-Ingress layer)
- JetBrains: Similar integration (same protocol)
- Web: `claude.ai/code?bridge={environmentId}` URL
### 4. Network / Ports
- **Session-Ingress**: WebSocket (`wss://`) or SSE for reads; HTTPS POST for writes
- **API base**: Production `api.claude.ai` (configured via OAuth config)
- Dev overrides: `CLAUDE_BRIDGE_BASE_URL`, localhost uses `ws://` and `/v2/` paths
- QR code displayed in terminal links to `claude.ai/code?bridge={envId}`
### 5. Running Remote Control
```bash
# Single session (tears down when session ends)
claude remote-control
# Named session
claude remote-control "my-project"
# With specific spawn mode (requires tengu_ccr_bridge_multi_session gate)
claude remote-control --spawn worktree
claude remote-control --spawn same-dir
```
### 6. Additional Flags
- `--remote-control [name]` / `--rc [name]` — Start REPL with bridge pre-enabled
- `--debug-file <path>` — Write debug log to file
- `--session-id <id>` — Resume an existing session
---
## Chrome Extension Bridge
### `--claude-in-chrome-mcp` (cli.tsx:72)
Launches a **Claude-in-Chrome MCP server** via `runClaudeInChromeMcpServer()` from
`src/utils/claudeInChrome/mcpServer.ts`. This:
- Creates a `StdioServerTransport` (MCP over stdin/stdout)
- Uses `@ant/claude-for-chrome-mcp` package to create an MCP server
- Bridges between Claude Code and the Chrome extension
- Supports both native socket (local) and WebSocket bridge (`wss://bridge.claudeusercontent.com`)
- Gated by `tengu_copper_bridge` GrowthBook flag (or `USER_TYPE=ant`)
**Not gated by `feature('BRIDGE_MODE')`** — this is a separate subsystem. It only
runs when explicitly invoked with `--claude-in-chrome-mcp` flag.
### `--chrome-native-host` (cli.tsx:79)
Launches the **Chrome Native Messaging Host** via `runChromeNativeHost()` from
`src/utils/claudeInChrome/chromeNativeHost.ts`. This:
- Implements Chrome's native messaging protocol (4-byte length prefix + JSON over stdin/stdout)
- Creates a Unix domain socket server at a secure path
- Proxies MCP messages between Chrome extension and local Claude Code instances
- Has its own debug logging to `~/.claude/debug/chrome-native-host.txt` (ant-only)
**Not gated by `feature('BRIDGE_MODE')`** — separate entry point. Only activated
when Chrome calls the registered native messaging host binary.
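The 4-byte length framing mentioned above is easy to see in code. Chrome's native messaging protocol uses the host machine's native byte order (little-endian on x86/ARM, which this sketch assumes); `Buffer` assumes a Node/Bun runtime.

```typescript
// Encode: 4-byte little-endian length prefix, then the UTF-8 JSON body.
function frame(msg: unknown): Buffer {
  const body = Buffer.from(JSON.stringify(msg), 'utf8')
  const header = Buffer.alloc(4)
  header.writeUInt32LE(body.length, 0)
  return Buffer.concat([header, body])
}

// Decode: read the length, then parse exactly that many bytes of JSON.
function deframe(buf: Buffer): unknown {
  const len = buf.readUInt32LE(0)
  return JSON.parse(buf.subarray(4, 4 + len).toString('utf8'))
}
```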
### Safety
Both Chrome paths:
- Are **dynamic imports** — only loaded when the specific flag is passed
- Return immediately after their own `await` — no side effects on normal CLI startup
- Cannot crash normal operation because they're entirely separate code paths
- Have no dependency on the bridge feature flag
---
## Verification Summary
| Check | Status |
|-------|--------|
| `feature('BRIDGE_MODE')` returns `false` by default | ✅ Verified in `src/shims/bun-bundle.ts` |
| Bridge code not executed when disabled | ✅ All call sites use `feature()` guard |
| No bridge-related errors on startup | ✅ Imports resolve (files exist), no side effects |
| CLI works in terminal-only mode | ✅ Bridge is purely additive |
| Chrome paths don't crash | ✅ Separate dynamic imports, only on explicit flags |
| Stub available for safety | ✅ Created `src/bridge/stub.ts` |

---

**File: `docs/commands.md`**
# Commands Reference
> Complete catalog of all slash commands in Claude Code.
---
## Overview
Commands are user-facing actions invoked with a `/` prefix in the REPL (e.g., `/commit`, `/review`). They live in `src/commands/` and are registered in `src/commands.ts`.
### Command Types
| Type | Description | Example |
|------|-------------|---------|
| **PromptCommand** | Sends a formatted prompt to the LLM with injected tools | `/review`, `/commit` |
| **LocalCommand** | Runs in-process, returns plain text | `/cost`, `/version` |
| **LocalJSXCommand** | Runs in-process, returns React JSX | `/install`, `/doctor` |
### Command Definition Pattern
```typescript
const command = {
type: 'prompt',
name: 'my-command',
description: 'What this command does',
progressMessage: 'working...',
allowedTools: ['Bash(git *)', 'FileRead(*)'],
source: 'builtin',
async getPromptForCommand(args, context) {
return [{ type: 'text', text: '...' }]
},
} satisfies Command
```
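Dispatch over the three command types then reduces to a discriminated union. The union below is a simplified sketch (the JSX-returning variant is omitted, and the version string is made up); the real registry and dispatch live in `src/commands.ts`.

```typescript
type Command =
  | { type: 'prompt'; name: string; getPromptForCommand(args: string): Promise<string> }
  | { type: 'local'; name: string; call(args: string): Promise<string> }

async function runCommand(cmd: Command, args: string): Promise<string> {
  switch (cmd.type) {
    case 'prompt':
      return cmd.getPromptForCommand(args) // result is sent to the LLM
    case 'local':
      return cmd.call(args) // result is rendered directly in the REPL
  }
}

// A LocalCommand-style example (output string is illustrative).
const versionCommand: Command = {
  type: 'local',
  name: 'version',
  call: async () => 'claude-code (illustrative version string)',
}
```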
---
## Git & Version Control
| Command | Source | Description |
|---------|--------|-------------|
| `/commit` | `commit.ts` | Create a git commit with an AI-generated message |
| `/commit-push-pr` | `commit-push-pr.ts` | Commit, push, and create a PR in one step |
| `/branch` | `branch/` | Create or switch git branches |
| `/diff` | `diff/` | View file changes (staged, unstaged, or against a ref) |
| `/pr_comments` | `pr_comments/` | View and address PR review comments |
| `/rewind` | `rewind/` | Revert to a previous state |
## Code Quality
| Command | Source | Description |
|---------|--------|-------------|
| `/review` | `review.ts` | AI-powered code review of staged/unstaged changes |
| `/security-review` | `security-review.ts` | Security-focused code review |
| `/advisor` | `advisor.ts` | Get architectural or design advice |
| `/bughunter` | `bughunter/` | Find potential bugs in the codebase |
## Session & Context
| Command | Source | Description |
|---------|--------|-------------|
| `/compact` | `compact/` | Compress conversation context to fit more history |
| `/context` | `context/` | Visualize current context (files, memory, etc.) |
| `/resume` | `resume/` | Restore a previous conversation session |
| `/session` | `session/` | Manage sessions (list, switch, delete) |
| `/share` | `share/` | Share a session via link |
| `/export` | `export/` | Export conversation to a file |
| `/summary` | `summary/` | Generate a summary of the current session |
| `/clear` | `clear/` | Clear the conversation history |
## Configuration & Settings
| Command | Source | Description |
|---------|--------|-------------|
| `/config` | `config/` | View or modify Claude Code settings |
| `/permissions` | `permissions/` | Manage tool permission rules |
| `/theme` | `theme/` | Change the terminal color theme |
| `/output-style` | `output-style/` | Change output formatting style |
| `/color` | `color/` | Toggle color output |
| `/keybindings` | `keybindings/` | View or customize keybindings |
| `/vim` | `vim/` | Toggle vim mode for input |
| `/effort` | `effort/` | Adjust response effort level |
| `/model` | `model/` | Switch the active model |
| `/privacy-settings` | `privacy-settings/` | Manage privacy/data settings |
| `/fast` | `fast/` | Toggle fast mode (shorter responses) |
| `/brief` | `brief.ts` | Toggle brief output mode |
## Memory & Knowledge
| Command | Source | Description |
|---------|--------|-------------|
| `/memory` | `memory/` | Manage persistent memory (CLAUDE.md files) |
| `/add-dir` | `add-dir/` | Add a directory to the project context |
| `/files` | `files/` | List files in the current context |
## MCP & Plugins
| Command | Source | Description |
|---------|--------|-------------|
| `/mcp` | `mcp/` | Manage MCP server connections |
| `/plugin` | `plugin/` | Install, remove, or manage plugins |
| `/reload-plugins` | `reload-plugins/` | Reload all installed plugins |
| `/skills` | `skills/` | View and manage skills |
## Authentication
| Command | Source | Description |
|---------|--------|-------------|
| `/login` | `login/` | Authenticate with Anthropic |
| `/logout` | `logout/` | Sign out |
| `/oauth-refresh` | `oauth-refresh/` | Refresh OAuth tokens |
## Tasks & Agents
| Command | Source | Description |
|---------|--------|-------------|
| `/tasks` | `tasks/` | Manage background tasks |
| `/agents` | `agents/` | Manage sub-agents |
| `/ultraplan` | `ultraplan.tsx` | Generate a detailed execution plan |
| `/plan` | `plan/` | Enter planning mode |
## Diagnostics & Status
| Command | Source | Description |
|---------|--------|-------------|
| `/doctor` | `doctor/` | Run environment diagnostics |
| `/status` | `status/` | Show system and session status |
| `/stats` | `stats/` | Show session statistics |
| `/cost` | `cost/` | Display token usage and estimated cost |
| `/version` | `version.ts` | Show Claude Code version |
| `/usage` | `usage/` | Show detailed API usage |
| `/extra-usage` | `extra-usage/` | Show extended usage details |
| `/rate-limit-options` | `rate-limit-options/` | View rate limit configuration |
## Installation & Setup
| Command | Source | Description |
|---------|--------|-------------|
| `/install` | `install.tsx` | Install or update Claude Code |
| `/upgrade` | `upgrade/` | Upgrade to the latest version |
| `/init` | `init.ts` | Initialize a project (create CLAUDE.md) |
| `/init-verifiers` | `init-verifiers.ts` | Set up verifier hooks |
| `/onboarding` | `onboarding/` | Run the first-time setup wizard |
| `/terminalSetup` | `terminalSetup/` | Configure terminal integration |
## IDE & Desktop Integration
| Command | Source | Description |
|---------|--------|-------------|
| `/bridge` | `bridge/` | Manage IDE bridge connections |
| `/bridge-kick` | `bridge-kick.ts` | Force-restart the IDE bridge |
| `/ide` | `ide/` | Open in IDE |
| `/desktop` | `desktop/` | Hand off to the desktop app |
| `/mobile` | `mobile/` | Hand off to the mobile app |
| `/teleport` | `teleport/` | Transfer session to another device |
## Remote & Environment
| Command | Source | Description |
|---------|--------|-------------|
| `/remote-env` | `remote-env/` | Configure remote environment |
| `/remote-setup` | `remote-setup/` | Set up remote session |
| `/env` | `env/` | View environment variables |
| `/sandbox-toggle` | `sandbox-toggle/` | Toggle sandbox mode |
## Misc
| Command | Source | Description |
|---------|--------|-------------|
| `/help` | `help/` | Show help and available commands |
| `/exit` | `exit/` | Exit Claude Code |
| `/copy` | `copy/` | Copy content to clipboard |
| `/feedback` | `feedback/` | Send feedback to Anthropic |
| `/release-notes` | `release-notes/` | View release notes |
| `/rename` | `rename/` | Rename the current session |
| `/tag` | `tag/` | Tag the current session |
| `/insights` | `insights.ts` | Show codebase insights |
| `/stickers` | `stickers/` | Easter egg — stickers |
| `/good-claude` | `good-claude/` | Easter egg — praise Claude |
| `/voice` | `voice/` | Toggle voice input mode |
| `/chrome` | `chrome/` | Chrome extension integration |
| `/issue` | `issue/` | File a GitHub issue |
| `/statusline` | `statusline.tsx` | Customize the status line |
| `/thinkback` | `thinkback/` | Replay Claude's thinking process |
| `/thinkback-play` | `thinkback-play/` | Animated thinking replay |
| `/passes` | `passes/` | Multi-pass execution |
| `/x402` | `x402/` | x402 payment protocol integration |
## Internal / Debug Commands
| Command | Source | Description |
|---------|--------|-------------|
| `/ant-trace` | `ant-trace/` | Anthropic-internal tracing |
| `/autofix-pr` | `autofix-pr/` | Auto-fix PR issues |
| `/backfill-sessions` | `backfill-sessions/` | Backfill session data |
| `/break-cache` | `break-cache/` | Invalidate caches |
| `/btw` | `btw/` | "By the way" interjection |
| `/ctx_viz` | `ctx_viz/` | Context visualization (debug) |
| `/debug-tool-call` | `debug-tool-call/` | Debug a specific tool call |
| `/heapdump` | `heapdump/` | Dump heap for memory analysis |
| `/hooks` | `hooks/` | Manage hook scripts |
| `/mock-limits` | `mock-limits/` | Mock rate limits for testing |
| `/perf-issue` | `perf-issue/` | Report performance issues |
| `/reset-limits` | `reset-limits/` | Reset rate limit counters |
---
## See Also
- [Architecture](architecture.md) — How the command system fits into the pipeline
- [Tools Reference](tools.md) — Agent tools (different from slash commands)
- [Exploration Guide](exploration-guide.md) — Finding command source code

---

**File: `docs/exploration-guide.md`**
# Exploration Guide
> How to navigate and study the Claude Code source code.
---
## Quick Start
This is a **read-only reference codebase** — there's no build system or test suite. The goal is to understand how a production AI coding assistant is built.
### Orientation
| What | Where |
|------|-------|
| CLI entrypoint | `src/main.tsx` |
| Core LLM engine | `src/QueryEngine.ts` (~46K lines) |
| Tool definitions | `src/Tool.ts` (~29K lines) |
| Command registry | `src/commands.ts` (~25K lines) |
| Tool registry | `src/tools.ts` |
| Context collection | `src/context.ts` |
| All tool implementations | `src/tools/` (40 subdirectories) |
| All command implementations | `src/commands/` (~85 subdirectories + 15 files) |
---
## Finding Things
### "How does tool X work?"
1. Go to `src/tools/{ToolName}/`
2. Main implementation is `{ToolName}.ts` or `.tsx`
3. UI rendering is in `UI.tsx`
4. System prompt contribution is in `prompt.ts`
Example — understanding BashTool:
```
src/tools/BashTool/
├── BashTool.ts ← Core execution logic
├── UI.tsx ← How bash output renders in terminal
├── prompt.ts ← What the system prompt says about bash
└── ...
```
### "How does command X work?"
1. Check `src/commands/{command-name}/` (directory) or `src/commands/{command-name}.ts` (file)
2. Look for the `getPromptForCommand()` function (PromptCommands) or direct implementation (LocalCommands)
### "How does feature X work?"
| Feature | Start Here |
|---------|-----------|
| Permissions | `src/hooks/toolPermission/` |
| IDE bridge | `src/bridge/bridgeMain.ts` |
| MCP client | `src/services/mcp/` |
| Plugin system | `src/plugins/` + `src/services/plugins/` |
| Skills | `src/skills/` |
| Voice input | `src/voice/` + `src/services/voice.ts` |
| Multi-agent | `src/coordinator/` |
| Memory | `src/memdir/` |
| Authentication | `src/services/oauth/` |
| Config schemas | `src/schemas/` |
| State management | `src/state/` |
### "How does an API call flow?"
Trace from user input to API response:
```
src/main.tsx ← CLI parsing
→ src/replLauncher.tsx ← REPL session start
→ src/QueryEngine.ts ← Core engine
→ src/services/api/ ← Anthropic SDK client
→ (Anthropic API) ← HTTP/streaming
← Tool use response
→ src/tools/{ToolName}/ ← Tool execution
← Tool result
→ (feed back to API) ← Continue the loop
```
---
## Code Patterns to Recognize
### `buildTool()` — Tool Factory
Every tool uses this pattern:
```typescript
export const MyTool = buildTool({
name: 'MyTool',
inputSchema: z.object({ ... }),
async call(args, context) { ... },
async checkPermissions(input, context) { ... },
})
```
### Feature Flag Gates
```typescript
import { feature } from 'bun:bundle'
if (feature('VOICE_MODE')) {
// This code is stripped at build time if VOICE_MODE is off
}
```
### Anthropic-Internal Gates
```typescript
if (process.env.USER_TYPE === 'ant') {
// Anthropic employee-only features
}
```
### Index Re-exports
Most directories have an `index.ts` that re-exports the public API:
```typescript
// src/tools/BashTool/index.ts
export { BashTool } from './BashTool.js'
```
### Lazy Dynamic Imports
Heavy modules are loaded only when needed:
```typescript
const { OpenTelemetry } = await import('./heavy-module.js')
```
### ESM with `.js` Extensions
Bun convention — all imports use `.js` extensions even for `.ts` files:
```typescript
import { something } from './utils.js' // Actually imports utils.ts
```
---
## Key Files by Size
The largest files contain the most logic and are worth studying:
| File | Lines | What's Inside |
|------|-------|---------------|
| `QueryEngine.ts` | ~46K | Streaming, tool loops, retries, token counting |
| `Tool.ts` | ~29K | Tool types, `buildTool`, permission models |
| `commands.ts` | ~25K | Command registry, conditional loading |
| `main.tsx` | — | CLI parser, startup optimization |
| `context.ts` | — | OS, shell, git, user context assembly |
---
## Study Paths
### Path 1: "How does a tool work end-to-end?"
1. Read `src/Tool.ts` — understand the `buildTool` interface
2. Pick a simple tool like `FileReadTool` in `src/tools/FileReadTool/`
3. Trace how `QueryEngine.ts` calls tools during the tool loop
4. See how permissions are checked in `src/hooks/toolPermission/`
### Path 2: "How does the UI work?"
1. Read `src/screens/REPL.tsx` — the main screen
2. Explore `src/components/` — pick a few components
3. See `src/hooks/useTextInput.ts` — how user input is captured
4. Check `src/ink/` — the Ink renderer wrapper
### Path 3: "How does the IDE integration work?"
1. Start at `src/bridge/bridgeMain.ts`
2. Follow `bridgeMessaging.ts` for the message protocol
3. See `bridgePermissionCallbacks.ts` for how permissions route to the IDE
4. Check `replBridge.ts` for REPL session bridging
### Path 4: "How do plugins extend Claude Code?"
1. Read `src/types/plugin.ts` — the plugin API surface
2. See `src/services/plugins/` — how plugins are loaded
3. Check `src/plugins/builtinPlugins.ts` — built-in examples
4. Look at `src/plugins/bundled/` — bundled plugin code
### Path 5: "How does MCP work?"
1. Read `src/services/mcp/` — the MCP client
2. See `src/tools/MCPTool/` — how MCP tools are invoked
3. Check `src/entrypoints/mcp.ts` — Claude Code as an MCP server
4. Look at `src/skills/mcpSkillBuilders.ts` — skills from MCP
---
## Using the MCP Server for Exploration
This repo includes a standalone MCP server (`mcp-server/`) that lets any MCP-compatible client explore the source code. See the [MCP Server README](../mcp-server/README.md) for setup.
Once connected, you can ask an AI assistant to explore the source:
- "How does the BashTool work?"
- "Search for where permissions are checked"
- "List all files in the bridge directory"
- "Read QueryEngine.ts lines 1-100"
---
## Grep Patterns
Useful grep/ripgrep patterns for finding things:
```bash
# Find all tool definitions
rg "buildTool\(" src/tools/
# Find all command definitions
rg "satisfies Command" src/commands/
# Find feature flag usage
rg "feature\(" src/
# Find Anthropic-internal gates
rg "USER_TYPE.*ant" src/
# Find all React hooks
rg "^export function use" src/hooks/
# Find all Zod schemas
rg "z\.object\(" src/schemas/
# Find all system prompt contributions
rg "prompt\(" src/tools/*/prompt.ts
# Find permission rule patterns
rg "checkPermissions" src/tools/
```
---
## See Also
- [Architecture](architecture.md) — Overall system design
- [Tools Reference](tools.md) — Complete tool catalog
- [Commands Reference](commands.md) — All slash commands
- [Subsystems Guide](subsystems.md) — Deep dives into Bridge, MCP, Permissions, etc.

---

**File: `docs/subsystems.md`**
# Subsystems Guide
> Detailed documentation of Claude Code's major subsystems.
---
## Table of Contents
- [Bridge (IDE Integration)](#bridge-ide-integration)
- [MCP (Model Context Protocol)](#mcp-model-context-protocol)
- [Permission System](#permission-system)
- [Plugin System](#plugin-system)
- [Skill System](#skill-system)
- [Task System](#task-system)
- [Memory System](#memory-system)
- [Coordinator (Multi-Agent)](#coordinator-multi-agent)
- [Voice System](#voice-system)
- [Service Layer](#service-layer)
---
## Bridge (IDE Integration)
**Location:** `src/bridge/`
The bridge is a bidirectional communication layer connecting Claude Code's CLI with IDE extensions (VS Code, JetBrains). It allows the CLI to run as a backend for IDE-based interfaces.
### Architecture
```
┌──────────────────┐ ┌──────────────────────┐
│ IDE Extension │◄───────►│ Bridge Layer │
│ (VS Code, JB) │ JWT │ (src/bridge/) │
│ │ Auth │ │
│ - UI rendering │ │ - Session mgmt │
│ - File watching │ │ - Message routing │
│ - Diff display │ │ - Permission proxy │
└──────────────────┘ └──────────┬───────────┘
┌──────────────────────┐
│ Claude Code Core │
│ (QueryEngine, Tools) │
└──────────────────────┘
```
### Key Files
| File | Purpose |
|------|---------|
| `bridgeMain.ts` | Main bridge loop — starts the bidirectional channel |
| `bridgeMessaging.ts` | Message protocol (serialize/deserialize) |
| `bridgePermissionCallbacks.ts` | Routes permission prompts to the IDE |
| `bridgeApi.ts` | API surface exposed to the IDE |
| `bridgeConfig.ts` | Bridge configuration |
| `replBridge.ts` | Connects the REPL session to the bridge |
| `jwtUtils.ts` | JWT-based authentication between CLI and IDE |
| `sessionRunner.ts` | Manages bridge session execution |
| `createSession.ts` | Creates new bridge sessions |
| `trustedDevice.ts` | Device trust verification |
| `workSecret.ts` | Workspace-scoped secrets |
| `inboundMessages.ts` | Handles messages coming from the IDE |
| `inboundAttachments.ts` | Handles file attachments from the IDE |
| `types.ts` | TypeScript types for the bridge protocol |
### Feature Flag
The bridge is gated behind the `BRIDGE_MODE` feature flag and is stripped from non-IDE builds.
---
## MCP (Model Context Protocol)
**Location:** `src/services/mcp/`
Claude Code acts as both an **MCP client** (consuming tools/resources from MCP servers) and can run as an **MCP server** (exposing its own tools via `src/entrypoints/mcp.ts`).
### Client Features
- **Tool discovery** — Enumerates tools from connected MCP servers
- **Resource browsing** — Lists and reads MCP-exposed resources
- **Dynamic tool loading** — `ToolSearchTool` discovers tools at runtime
- **Authentication** — `McpAuthTool` handles MCP server auth flows
- **Connectivity monitoring** — `useMcpConnectivityStatus` hook tracks connection health
### Server Mode
When launched via `src/entrypoints/mcp.ts`, Claude Code exposes its own tools and resources via the MCP protocol, allowing other AI agents to use Claude Code as a tool server.
### Related Tools
| Tool | Purpose |
|------|---------|
| `MCPTool` | Invoke tools on connected MCP servers |
| `ListMcpResourcesTool` | List available MCP resources |
| `ReadMcpResourceTool` | Read a specific MCP resource |
| `McpAuthTool` | Authenticate with an MCP server |
| `ToolSearchTool` | Discover deferred tools from MCP servers |
### Configuration
MCP servers are configured via `/mcp` command or settings files. The server approval flow lives in `src/services/mcpServerApproval.tsx`.
---
## Permission System
**Location:** `src/hooks/toolPermission/`
Every tool invocation passes through a centralized permission check before execution.
### Permission Modes
| Mode | Behavior |
|------|----------|
| `default` | Prompts the user for each potentially destructive operation |
| `plan` | Shows the full execution plan, asks once for batch approval |
| `bypassPermissions` | Auto-approves all operations (dangerous — for trusted environments) |
| `auto` | ML-based classifier automatically decides (experimental) |
### How It Works
1. Tool is invoked by the Query Engine
2. `checkPermissions(input, context)` is called on the tool
3. Permission handler checks against configured rules
4. If not auto-approved, user is prompted via terminal or IDE
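The four steps above can be condensed into a single gate. The result shape `{ granted, reason?, prompt? }` follows the tools reference; the `Tool` interface and `runTool` wrapper below are hypothetical simplifications, not the engine's actual code.

```typescript
interface PermissionResult {
  granted: boolean
  reason?: string
  prompt?: string
}

interface Tool<I> {
  checkPermissions(input: I, context: { mode: string }): Promise<PermissionResult>
  call(input: I): Promise<unknown>
}

async function runTool<I>(tool: Tool<I>, input: I, context: { mode: string }): Promise<unknown> {
  // bypassPermissions skips the check entirely (step 3 never runs).
  if (context.mode !== 'bypassPermissions') {
    const result = await tool.checkPermissions(input, context)
    if (!result.granted) {
      // In the real CLI this surfaces as a terminal or IDE prompt (step 4).
      throw new Error(result.reason ?? 'Permission denied')
    }
  }
  return tool.call(input)
}
```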
### Permission Rules
Rules use wildcard patterns to match tool invocations:
```
Bash(git *) # Allow all git commands without prompt
Bash(npm test) # Allow 'npm test' specifically
FileEdit(/src/*) # Allow edits to anything under src/
FileRead(*) # Allow reading any file
```
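A minimal interpretation of these rules, assuming `*` matches any run of characters; the real handlers in `src/hooks/toolPermission/handlers/` may implement different semantics.

```typescript
type Rule = { tool: string; pattern: string }

// Parse "Bash(git *)" into { tool: 'Bash', pattern: 'git *' }.
function parseRule(rule: string): Rule {
  const m = rule.match(/^(\w+)\((.*)\)$/)
  if (!m) throw new Error(`Malformed rule: ${rule}`)
  return { tool: m[1], pattern: m[2] }
}

function escapeRegex(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
}

function ruleMatches(rule: Rule, tool: string, target: string): boolean {
  if (rule.tool !== tool) return false
  // 'git *' becomes /^git .*$/; each literal chunk is escaped first.
  const re = new RegExp('^' + rule.pattern.split('*').map(escapeRegex).join('.*') + '$')
  return re.test(target)
}
```

Under this reading, `Bash(git *)` approves `git status` but not `rm -rf /`, and `FileRead(*)` approves any path.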
### Key Files
| File | Path |
|------|------|
| Permission context | `src/hooks/toolPermission/PermissionContext.ts` |
| Permission handlers | `src/hooks/toolPermission/handlers/` |
| Permission logging | `src/hooks/toolPermission/permissionLogging.ts` |
| Permission types | `src/types/permissions.ts` |
---
## Plugin System
**Location:** `src/plugins/`, `src/services/plugins/`
Claude Code supports installable plugins that can extend its capabilities.
### Structure
| Component | Location | Purpose |
|-----------|----------|---------|
| Plugin loader | `src/services/plugins/` | Discovers and loads plugins |
| Built-in plugins | `src/plugins/builtinPlugins.ts` | Plugins that ship with Claude Code |
| Bundled plugins | `src/plugins/bundled/` | Plugin code bundled into the binary |
| Plugin types | `src/types/plugin.ts` | TypeScript types for plugin API |
### Plugin Lifecycle
1. **Discovery** — Scans plugin directories and marketplace
2. **Installation** — Downloaded and registered (`/plugin` command)
3. **Loading** — Initialized at startup or on-demand
4. **Execution** — Plugins can contribute tools, commands, and prompts
5. **Auto-update** — `usePluginAutoupdateNotification` handles updates
### Related Commands
| Command | Purpose |
|---------|---------|
| `/plugin` | Install, remove, or manage plugins |
| `/reload-plugins` | Reload all installed plugins |
---
## Skill System
**Location:** `src/skills/`
Skills are reusable, named workflows that bundle prompts and tool configurations for specific tasks.
### Structure
| Component | Location | Purpose |
|-----------|----------|---------|
| Bundled skills | `src/skills/bundled/` | Skills that ship with Claude Code |
| Skill loader | `src/skills/loadSkillsDir.ts` | Loads skills from disk |
| MCP skill builders | `src/skills/mcpSkillBuilders.ts` | Creates skills from MCP resources |
| Skill registry | `src/skills/bundledSkills.ts` | Registration of all bundled skills |
### Bundled Skills
| Skill | Purpose |
|-------|---------|
| `batch` | Batch operations across multiple files |
| `claudeApi` | Direct Anthropic API interaction |
| `claudeInChrome` | Chrome extension integration |
| `debug` | Debugging workflows |
| `keybindings` | Keybinding configuration |
| `loop` | Iterative refinement loops |
| `loremIpsum` | Generate placeholder text |
| `remember` | Persist information to memory |
| `scheduleRemoteAgents` | Schedule agents for remote execution |
| `simplify` | Simplify complex code |
| `skillify` | Create new skills from workflows |
| `stuck` | Get unstuck when blocked |
| `updateConfig` | Modify configuration programmatically |
| `verify` / `verifyContent` | Verify code correctness |
### Execution
Skills are invoked via the `SkillTool` or the `/skills` command. Users can also create custom skills.
---
## Task System
**Location:** `src/tasks/`
Manages background and parallel work items — shell tasks, agent tasks, and teammate agents.
### Task Types
| Type | Location | Purpose |
|------|----------|---------|
| `LocalShellTask` | `LocalShellTask/` | Background shell command execution |
| `LocalAgentTask` | `LocalAgentTask/` | Sub-agent running locally |
| `RemoteAgentTask` | `RemoteAgentTask/` | Agent running on a remote machine |
| `InProcessTeammateTask` | `InProcessTeammateTask/` | Parallel teammate agent |
| `DreamTask` | `DreamTask/` | Background "dreaming" process |
| `LocalMainSessionTask` | `LocalMainSessionTask.ts` | Main session as a task |
### Task Tools
| Tool | Purpose |
|------|---------|
| `TaskCreateTool` | Create a new background task |
| `TaskUpdateTool` | Update task status |
| `TaskGetTool` | Retrieve task details |
| `TaskListTool` | List all tasks |
| `TaskOutputTool` | Get task output |
| `TaskStopTool` | Stop a running task |
---
## Memory System
**Location:** `src/memdir/`
Claude Code's persistent memory system, based on `CLAUDE.md` files.
### Memory Hierarchy
| Scope | Location | Purpose |
|-------|----------|---------|
| Project memory | `CLAUDE.md` in project root | Project-specific facts, conventions |
| User memory | `~/.claude/CLAUDE.md` | User preferences, cross-project |
| Extracted memories | `src/services/extractMemories/` | Auto-extracted from conversations |
| Team memory sync | `src/services/teamMemorySync/` | Shared team knowledge |
### Related
- `/memory` command for managing memories
- `remember` skill for persisting information
- `useMemoryUsage` hook for tracking memory size
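The hierarchy table implies a lookup order: project memory first, then the user-level file. The helper below is a hypothetical expression of that order; the real loading logic lives in `src/memdir/`.

```typescript
import { join } from 'path'
import { homedir } from 'os'

// Candidate memory files in precedence order (sketch only).
function memoryFileCandidates(projectRoot: string): string[] {
  return [
    join(projectRoot, 'CLAUDE.md'),          // project memory
    join(homedir(), '.claude', 'CLAUDE.md'), // user memory
  ]
}
```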
---
## Coordinator (Multi-Agent)
**Location:** `src/coordinator/`
Orchestrates multiple agents working in parallel on different aspects of a task.
### How It Works
- `coordinatorMode.ts` manages the coordinator lifecycle
- `TeamCreateTool` and `TeamDeleteTool` manage agent teams
- `SendMessageTool` enables inter-agent communication
- `AgentTool` spawns sub-agents
Gated behind the `COORDINATOR_MODE` feature flag.
---
## Voice System
**Location:** `src/voice/`
Voice input/output support for hands-free interaction.
### Components
| File | Location | Purpose |
|------|----------|---------|
| Voice service | `src/services/voice.ts` | Core voice processing |
| STT streaming | `src/services/voiceStreamSTT.ts` | Speech-to-text streaming |
| Key terms | `src/services/voiceKeyterms.ts` | Domain-specific vocabulary |
| Voice hooks | `src/hooks/useVoice.ts`, `useVoiceEnabled.ts`, `useVoiceIntegration.tsx` | React hooks |
| Voice command | `src/commands/voice/` | `/voice` slash command |
Gated behind the `VOICE_MODE` feature flag.
---
## Service Layer
**Location:** `src/services/`
External integrations and shared services.
| Service | Path | Purpose |
|---------|------|---------|
| **API** | `api/` | Anthropic SDK client, file uploads, bootstrap |
| **MCP** | `mcp/` | MCP client connections and tool discovery |
| **OAuth** | `oauth/` | OAuth 2.0 authentication flow |
| **LSP** | `lsp/` | Language Server Protocol manager |
| **Analytics** | `analytics/` | GrowthBook feature flags, telemetry |
| **Plugins** | `plugins/` | Plugin loader and marketplace |
| **Compact** | `compact/` | Conversation context compression |
| **Policy Limits** | `policyLimits/` | Organization rate limits/quota |
| **Remote Settings** | `remoteManagedSettings/` | Enterprise managed settings sync |
| **Token Estimation** | `tokenEstimation.ts` | Token count estimation |
| **Team Memory** | `teamMemorySync/` | Team knowledge synchronization |
| **Tips** | `tips/` | Contextual usage tips |
| **Agent Summary** | `AgentSummary/` | Agent work summaries |
| **Prompt Suggestion** | `PromptSuggestion/` | Suggested follow-up prompts |
| **Session Memory** | `SessionMemory/` | Session-level memory |
| **Magic Docs** | `MagicDocs/` | Documentation generation |
| **Auto Dream** | `autoDream/` | Background ideation |
| **x402** | `x402/` | x402 payment protocol |
---
## See Also
- [Architecture](architecture.md) — How subsystems connect in the core pipeline
- [Tools Reference](tools.md) — Tools related to each subsystem
- [Commands Reference](commands.md) — Commands for managing subsystems
- [Exploration Guide](exploration-guide.md) — Finding subsystem source code

docs/tools.md
# Tools Reference
> Complete catalog of all ~40 agent tools in Claude Code.
---
## Overview
Every tool lives in `src/tools/<ToolName>/` as a self-contained module. Each tool defines:
- **Input schema** — Zod-validated parameters
- **Permission model** — What requires user approval
- **Execution logic** — The tool's implementation
- **UI components** — Terminal rendering for invocation and results
- **Concurrency safety** — Whether it can run in parallel
Tools are registered in `src/tools.ts` and invoked by the Query Engine during LLM tool-call loops.
### Tool Definition Pattern
```typescript
export const MyTool = buildTool({
name: 'MyTool',
aliases: ['my_tool'],
description: 'What this tool does',
inputSchema: z.object({
param: z.string(),
}),
async call(args, context, canUseTool, parentMessage, onProgress) {
// Execute and return { data: result, newMessages?: [...] }
},
async checkPermissions(input, context) { /* Permission checks */ },
isConcurrencySafe(input) { /* Can run in parallel? */ },
isReadOnly(input) { /* Non-destructive? */ },
prompt(options) { /* System prompt injection */ },
renderToolUseMessage(input, options) { /* UI for invocation */ },
renderToolResultMessage(content, progressMessages, options) { /* UI for result */ },
})
```
**Directory structure per tool:**
```
src/tools/MyTool/
├── MyTool.ts # Main implementation
├── UI.tsx # Terminal rendering
├── prompt.ts # System prompt contribution
└── utils.ts # Tool-specific helpers
```
---
## File System Tools
| Tool | Description | Read-Only |
|------|-------------|-----------|
| **FileReadTool** | Read file contents (text, images, PDFs, notebooks). Supports line ranges | Yes |
| **FileWriteTool** | Create or overwrite files | No |
| **FileEditTool** | Partial file modification via string replacement | No |
| **GlobTool** | Find files matching glob patterns (e.g. `**/*.ts`) | Yes |
| **GrepTool** | Content search using ripgrep (regex-capable) | Yes |
| **NotebookEditTool** | Edit Jupyter notebook cells | No |
| **TodoWriteTool** | Write to a structured todo/task file | No |
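To make `FileEditTool`'s replace-by-string model concrete, here is a hypothetical sketch. That the old string must occur exactly once is an assumption made for safety in this sketch, not confirmed behavior.

```typescript
// Replace one occurrence of oldStr with newStr, refusing ambiguous edits.
function applyEdit(content: string, oldStr: string, newStr: string): string {
  const first = content.indexOf(oldStr)
  if (first === -1) throw new Error('old string not found')
  if (content.indexOf(oldStr, first + 1) !== -1) throw new Error('old string is not unique')
  return content.slice(0, first) + newStr + content.slice(first + oldStr.length)
}
```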
## Shell & Execution Tools
| Tool | Description | Read-Only |
|------|-------------|-----------|
| **BashTool** | Execute shell commands in bash | No |
| **PowerShellTool** | Execute PowerShell commands (Windows) | No |
| **REPLTool** | Run code in a REPL session (Python, Node, etc.) | No |
## Agent & Orchestration Tools
| Tool | Description | Read-Only |
|------|-------------|-----------|
| **AgentTool** | Spawn a sub-agent for complex tasks | No |
| **SendMessageTool** | Send messages between agents | No |
| **TeamCreateTool** | Create a team of parallel agents | No |
| **TeamDeleteTool** | Remove a team agent | No |
| **EnterPlanModeTool** | Switch to planning mode (no execution) | No |
| **ExitPlanModeTool** | Exit planning mode, resume execution | No |
| **EnterWorktreeTool** | Isolate work in a git worktree | No |
| **ExitWorktreeTool** | Exit worktree isolation | No |
| **SleepTool** | Pause execution (proactive mode) | Yes |
| **SyntheticOutputTool** | Generate structured output | Yes |
## Task Management Tools
| Tool | Description | Read-Only |
|------|-------------|-----------|
| **TaskCreateTool** | Create a new background task | No |
| **TaskUpdateTool** | Update a task's status or details | No |
| **TaskGetTool** | Get details of a specific task | Yes |
| **TaskListTool** | List all tasks | Yes |
| **TaskOutputTool** | Get output from a completed task | Yes |
| **TaskStopTool** | Stop a running task | No |
## Web Tools
| Tool | Description | Read-Only |
|------|-------------|-----------|
| **WebFetchTool** | Fetch content from a URL | Yes |
| **WebSearchTool** | Search the web | Yes |
## MCP (Model Context Protocol) Tools
| Tool | Description | Read-Only |
|------|-------------|-----------|
| **MCPTool** | Invoke tools on connected MCP servers | Varies |
| **ListMcpResourcesTool** | List resources exposed by MCP servers | Yes |
| **ReadMcpResourceTool** | Read a specific MCP resource | Yes |
| **McpAuthTool** | Handle MCP server authentication | No |
| **ToolSearchTool** | Discover deferred/dynamic tools from MCP servers | Yes |
## Integration Tools
| Tool | Description | Read-Only |
|------|-------------|-----------|
| **LSPTool** | Language Server Protocol operations (go-to-definition, find references, etc.) | Yes |
| **SkillTool** | Execute a registered skill | Varies |
## Scheduling & Triggers
| Tool | Description | Read-Only |
|------|-------------|-----------|
| **ScheduleCronTool** | Create a scheduled cron trigger | No |
| **RemoteTriggerTool** | Fire a remote trigger | No |
## Utility Tools
| Tool | Description | Read-Only |
|------|-------------|-----------|
| **AskUserQuestionTool** | Prompt the user for input during execution | Yes |
| **BriefTool** | Generate a brief/summary | Yes |
| **ConfigTool** | Read or modify Claude Code configuration | No |
---
## Permission Model
Every tool invocation passes through the permission system (`src/hooks/toolPermission/`). Permission modes:
| Mode | Behavior |
|------|----------|
| `default` | Prompt the user for each potentially destructive operation |
| `plan` | Show the full plan, ask once |
| `bypassPermissions` | Auto-approve everything (dangerous) |
| `auto` | ML-based classifier decides |
Permission rules use wildcard patterns:
```
Bash(git *) # Allow all git commands
FileEdit(/src/*) # Allow edits to anything in src/
FileRead(*) # Allow reading any file
```
Each tool implements `checkPermissions()` returning `{ granted: boolean, reason?, prompt? }`.
---
## Tool Presets
Tools are grouped into presets in `src/tools.ts` for different contexts (e.g. read-only tools for code review, full toolset for development).
---
## See Also
- [Architecture](architecture.md) — How tools fit into the overall pipeline
- [Subsystems Guide](subsystems.md) — MCP, permissions, and other tool-related subsystems
- [Exploration Guide](exploration-guide.md) — How to read tool source code

gitpretty-apply.sh
#!/usr/bin/env bash
set -euo pipefail
# Apply gitpretty's per-file beautification so GitHub file history shows
# readable, themed commit messages for each file.
REPO_PATH="${1:-.}"
INSTALL_HOOKS="${2:-}"
GITPRETTY_HOME="${HOME}/.gitpretty"
if ! command -v git >/dev/null 2>&1; then
echo "git is required but was not found on PATH"
exit 1
fi
if [ ! -d "${REPO_PATH}/.git" ]; then
echo "Target is not a git repository: ${REPO_PATH}"
echo "Usage: $0 [repo-path] [--hooks]"
exit 1
fi
if [ ! -d "${GITPRETTY_HOME}" ]; then
echo "Installing gitpretty into ${GITPRETTY_HOME} ..."
git clone https://github.com/codeaashu/gitpretty.git "${GITPRETTY_HOME}"
fi
chmod +x "${GITPRETTY_HOME}"/*.sh "${GITPRETTY_HOME}"/scripts/*.sh
if [ "${INSTALL_HOOKS}" = "--hooks" ]; then
echo "Installing gitpretty hooks in ${REPO_PATH} ..."
(
cd "${REPO_PATH}"
"${GITPRETTY_HOME}"/scripts/emoji-hooks.sh install
)
fi
echo "Running per-file beautify commits in ${REPO_PATH} ..."
"${GITPRETTY_HOME}"/emoji-file-commits.sh "${REPO_PATH}"
echo "Done. Review with: git -C ${REPO_PATH} log --oneline -n 20"

package-lock.json (generated; diff suppressed because it is too large)

package.json
{
"name": "@anthropic-ai/claude-code",
"version": "0.0.0-leaked",
"description": "Anthropic Claude Code CLI — leaked source (2026-03-31). Not an official release.",
"license": "UNLICENSED",
"private": true,
"type": "module",
"main": "src/entrypoints/cli.tsx",
"bin": {
"claude": "src/entrypoints/cli.tsx"
},
"scripts": {
"build": "bun scripts/build-bundle.ts",
"build:watch": "bun scripts/build-bundle.ts --watch",
"build:prod": "bun scripts/build-bundle.ts --minify",
"build:web": "bun scripts/build-web.ts",
"build:web:watch": "bun scripts/build-web.ts --watch",
"build:web:prod": "bun scripts/build-web.ts --minify",
"typecheck": "tsc --noEmit",
"lint": "biome check src/",
"lint:fix": "biome check --write src/",
"format": "biome format --write src/",
"check": "biome check src/ && tsc --noEmit"
},
"dependencies": {
"@anthropic-ai/sdk": "^0.39.0",
"@commander-js/extra-typings": "^13.1.0",
"@growthbook/growthbook": "^1.4.0",
"@modelcontextprotocol/sdk": "^1.12.1",
"@opentelemetry/api": "^1.9.0",
"@opentelemetry/api-logs": "^0.57.0",
"@opentelemetry/core": "^1.30.0",
"@opentelemetry/sdk-logs": "^0.57.0",
"@opentelemetry/sdk-metrics": "^1.30.0",
"@opentelemetry/sdk-trace-base": "^1.30.0",
"@xterm/addon-fit": "^0.10.0",
"@xterm/addon-search": "^0.15.0",
"@xterm/addon-unicode11": "^0.8.0",
"@xterm/addon-web-links": "^0.11.0",
"@xterm/addon-webgl": "^0.18.0",
"@xterm/xterm": "^5.5.0",
"auto-bind": "^5.0.1",
"axios": "^1.7.0",
"chalk": "^5.4.0",
"chokidar": "^4.0.0",
"cli-boxes": "^3.0.0",
"code-excerpt": "^4.0.0",
"diff": "^7.0.0",
"execa": "^9.5.0",
"figures": "^6.1.0",
"fuse.js": "^7.0.0",
"highlight.js": "^11.11.0",
"ignore": "^6.0.0",
"lodash-es": "^4.17.21",
"marked": "^15.0.0",
"node-pty": "^1.1.0",
"p-map": "^7.0.0",
"picomatch": "^4.0.0",
"proper-lockfile": "^4.1.2",
"qrcode": "^1.5.0",
"react": "^19.0.0",
"react-reconciler": "^0.31.0",
"semver": "^7.6.0",
"stack-utils": "^2.0.6",
"strip-ansi": "^7.1.0",
"supports-hyperlinks": "^3.1.0",
"tree-kill": "^1.2.2",
"type-fest": "^4.30.0",
"undici": "^7.3.0",
"usehooks-ts": "^3.1.0",
"wrap-ansi": "^9.0.0",
"ws": "^8.18.0",
"yaml": "^2.6.0",
"zod": "^3.24.0"
},
"devDependencies": {
"@biomejs/biome": "^1.9.0",
"@types/diff": "^7.0.0",
"@types/lodash-es": "^4.17.12",
"@types/node": "^22.10.0",
"@types/picomatch": "^3.0.0",
"@types/proper-lockfile": "^4.1.4",
"@types/react": "^19.0.0",
"@types/semver": "^7.5.8",
"@types/stack-utils": "^2.0.3",
"@types/ws": "^8.5.0",
"esbuild": "^0.25.0",
"typescript": "^5.7.0"
},
"engines": {
"bun": ">=1.1.0"
},
"packageManager": "bun@1.1.0"
}

prompts/00-overview.md
# Build-Out Prompt Index
Run these prompts **in order** in separate chat sessions. Each one is self-contained.
| # | File | What It Does | Depends On |
|---|------|-------------|------------|
| 01 | `01-install-bun-and-deps.md` | Install Bun runtime, install all dependencies | — |
| 02 | `02-runtime-shims.md` | Create `bun:bundle` runtime shim + `MACRO` globals so code runs without Bun's bundler | 01 |
| 03 | `03-build-config.md` | Create esbuild-based build system that bundles the CLI to a single runnable file | 01, 02 |
| 04 | `04-fix-mcp-server.md` | Fix TypeScript errors in `mcp-server/` and make it build | 01 |
| 05 | `05-env-and-auth.md` | Set up `.env` file, API key config, OAuth stubs | 01 |
| 06 | `06-ink-react-terminal-ui.md` | Verify and fix the Ink/React terminal rendering pipeline | 01, 02, 03 |
| 07 | `07-tool-system.md` | Audit and wire up the 40+ tool implementations (BashTool, FileEditTool, etc.) | 01-03 |
| 08 | `08-command-system.md` | Audit and wire up the 50+ slash commands (/commit, /review, etc.) | 01-03, 07 |
| 09 | `09-query-engine.md` | Get the core LLM call loop (QueryEngine) functional — streaming, tool calls, retries | 01-03, 05, 07 |
| 10 | `10-context-and-prompts.md` | Wire up system prompt construction, context gathering, memory system | 01-03 |
| 11 | `11-mcp-integration.md` | Get MCP client/server integration working — registry, tool discovery | 01-04 |
| 12 | `12-services-layer.md` | Wire up analytics, policy limits, remote settings, session memory | 01-03, 05 |
| 13 | `13-bridge-ide.md` | Stub out or implement the VS Code / JetBrains bridge layer | 01-03, 09 |
| 14 | `14-dev-runner.md` | Create `npm run dev` / `bun run dev` script that launches the CLI in dev mode | 01-03 |
| 15 | `15-production-bundle.md` | Create production build: minified bundle, platform-specific packaging | 03 |
| 16 | `16-testing.md` | Add test infrastructure (vitest), write smoke tests for core subsystems | All |
## Quick Start
1. Open a new Copilot chat
2. Paste the contents of `01-install-bun-and-deps.md`
3. Follow the instructions / let the agent run
4. Repeat for `02`, `03`, etc.
## Notes
- Prompts 07-13 can be run somewhat in **parallel** (they touch different subsystems)
- If a prompt fails, fix the issue before moving to the next one
- Each prompt is designed to be **independently verifiable** — it tells you how to confirm it worked

prompts/01-install-bun-and-deps.md
# Prompt 01: Install Bun Runtime & Dependencies
## Context
You are working in `/workspaces/claude-code`, which contains the leaked source code of Anthropic's Claude Code CLI. It's a TypeScript/TSX project that uses **Bun** as its runtime (not Node.js). The `package.json` specifies `"engines": { "bun": ">=1.1.0" }`.
There is no `bun.lockb` lockfile — it was not included in the leak.
## Task
1. **Install Bun** (if not already installed):
```
curl -fsSL https://bun.sh/install | bash
```
Then ensure `bun` is on the PATH.
2. **Run `bun install`** in the project root (`/workspaces/claude-code`) to install all dependencies. This will generate a `bun.lockb` lockfile.
3. **Verify the install** — confirm that:
- `node_modules/` exists and contains the major packages: `@anthropic-ai/sdk`, `react`, `chalk`, `@commander-js/extra-typings`, `zod`, `@modelcontextprotocol/sdk` (note: `ink` may not exist as a separate package)
- `bun --version` returns 1.1.0+
4. **Run the typecheck** to see current state:
```
bun run typecheck
```
Report any errors — don't fix them yet, just capture the output.
5. **Also install deps for the mcp-server sub-project**:
```
cd mcp-server && npm install && cd ..
```
## Verification
- `bun --version` outputs >= 1.1.0
- `ls node_modules/@anthropic-ai/sdk` succeeds
- `bun run typecheck` runs (errors are expected at this stage, just report them)

prompts/02-runtime-shims.md
# Prompt 02: Runtime Shims for `bun:bundle` Feature Flags & `MACRO` Globals
## Context
You are working in `/workspaces/claude-code`. This is the Claude Code CLI source. It was built to run under **Bun's bundler** which provides two build-time features that don't exist at runtime:
### 1. `bun:bundle` feature flags
Throughout the code you'll find:
```ts
import { feature } from 'bun:bundle'
if (feature('BRIDGE_MODE')) { ... }
```
Bun's bundler replaces `feature('X')` with `true`/`false` at build time for dead-code elimination. Without the bundler, this import fails at runtime.
**Current state**: There's a type stub at `src/types/bun-bundle.d.ts` that satisfies TypeScript, but there's no runtime module. We need a real module.
### 2. `MACRO` global object
The code references a global `MACRO` object with these properties:
- `MACRO.VERSION` — package version string (e.g., `"1.0.53"`)
- `MACRO.PACKAGE_URL` — npm package name (e.g., `"@anthropic-ai/claude-code"`)
- `MACRO.ISSUES_EXPLAINER` — feedback URL/instructions string
These are normally inlined by the bundler. Some files already guard with `typeof MACRO !== 'undefined'`, but most don't.
## Task
### Part A: Create `bun:bundle` runtime module
Create a file at `src/shims/bun-bundle.ts` that exports a `feature()` function. Feature flags should be configurable via environment variables so we can toggle them:
```ts
// src/shims/bun-bundle.ts
// Map of feature flags to their enabled state.
// In production Bun builds, these are compile-time constants.
// For our dev build, we read from env vars with sensible defaults.
const FEATURE_FLAGS: Record<string, boolean> = {
PROACTIVE: envBool('CLAUDE_CODE_PROACTIVE', false),
KAIROS: envBool('CLAUDE_CODE_KAIROS', false),
BRIDGE_MODE: envBool('CLAUDE_CODE_BRIDGE_MODE', false),
DAEMON: envBool('CLAUDE_CODE_DAEMON', false),
VOICE_MODE: envBool('CLAUDE_CODE_VOICE_MODE', false),
AGENT_TRIGGERS: envBool('CLAUDE_CODE_AGENT_TRIGGERS', false),
MONITOR_TOOL: envBool('CLAUDE_CODE_MONITOR_TOOL', false),
COORDINATOR_MODE: envBool('CLAUDE_CODE_COORDINATOR_MODE', false),
ABLATION_BASELINE: false, // always off for external builds
DUMP_SYSTEM_PROMPT: envBool('CLAUDE_CODE_DUMP_SYSTEM_PROMPT', false),
BG_SESSIONS: envBool('CLAUDE_CODE_BG_SESSIONS', false),
}
function envBool(key: string, fallback: boolean): boolean {
const v = process.env[key]
if (v === undefined) return fallback
return v === '1' || v === 'true'
}
export function feature(name: string): boolean {
return FEATURE_FLAGS[name] ?? false
}
```
### Part B: Create `MACRO` global definition
Create a file at `src/shims/macro.ts` that defines and installs the global `MACRO` object:
```ts
// src/shims/macro.ts
// Read version from package.json at startup
import { readFileSync } from 'fs'
import { resolve, dirname } from 'path'
import { fileURLToPath } from 'url'
const __filename = fileURLToPath(import.meta.url)
const pkgPath = resolve(dirname(__filename), '..', '..', 'package.json')
let version = '0.0.0-dev'
try {
const pkg = JSON.parse(readFileSync(pkgPath, 'utf-8'))
version = pkg.version || version
} catch {}
const MACRO_OBJ = {
VERSION: version,
PACKAGE_URL: '@anthropic-ai/claude-code',
ISSUES_EXPLAINER: 'report issues at https://github.com/anthropics/claude-code/issues',
}
// Install as global
;(globalThis as any).MACRO = MACRO_OBJ
export default MACRO_OBJ
```
### Part C: Create a preload/bootstrap file
Create `src/shims/preload.ts` that imports both shims so they're available before any app code runs:
```ts
// src/shims/preload.ts
// Must be loaded before any application code.
// Provides runtime equivalents of Bun bundler build-time features.
import './macro.js'
// bun:bundle is resolved via the build alias, not imported here
```
### Part D: Update tsconfig.json `paths`
The current tsconfig.json has:
```json
"paths": {
"bun:bundle": ["./src/types/bun-bundle.d.ts"]
}
```
This handles type-checking. For runtime, we'll need the build system (Prompt 03) to alias `bun:bundle` → `src/shims/bun-bundle.ts`. **Don't change tsconfig.json** — the type stub is correct for `tsc`. Just note this for the next prompt.
### Part E: Add global MACRO type declaration
Check if there's already a global type declaration for `MACRO`. If not, add one to `src/types/bun-bundle.d.ts` or a new `src/types/macro.d.ts`:
```ts
declare const MACRO: {
VERSION: string
PACKAGE_URL: string
ISSUES_EXPLAINER: string
}
```
Make sure `tsc --noEmit` still passes after your changes.
## Verification
1. `bun run typecheck` should pass (or have the same errors as before — no new errors)
2. The files `src/shims/bun-bundle.ts`, `src/shims/macro.ts`, `src/shims/preload.ts` exist
3. Running `bun -e "import { feature } from './src/shims/bun-bundle.ts'; console.log(feature('BRIDGE_MODE'))"` should print `false`
4. Running `bun -e "import './src/shims/macro.ts'; console.log(MACRO.VERSION)"` should print the version

prompts/03-build-config.md
# Prompt 03: Create esbuild-Based Build System
## Context
You are working in `/workspaces/claude-code`. This is the Claude Code CLI — a TypeScript/TSX terminal app using React + Ink. It was originally built using **Bun's bundler** with feature flags, but that build config wasn't included in the leak.
We need to create a build system that:
1. Bundles the entire `src/` tree into a runnable output
2. Aliases `bun:bundle` → our shim at `src/shims/bun-bundle.ts`
3. Injects the `MACRO` global (via `src/shims/macro.ts` preload)
4. Handles TSX/JSX (React)
5. Handles ESM `.js` extension imports (the code uses `import from './foo.js'` which maps to `./foo.ts`)
6. Produces output that can run under **Bun** (primary) or **Node.js 20+** (secondary)
## Existing Files
- `src/shims/bun-bundle.ts` — runtime `feature()` function (created in Prompt 02)
- `src/shims/macro.ts` — global `MACRO` object (created in Prompt 02)
- `src/shims/preload.ts` — preload bootstrap (created in Prompt 02)
- `src/entrypoints/cli.tsx` — main entrypoint
- `tsconfig.json` — has `"jsx": "react-jsx"`, `"module": "ESNext"`, `"moduleResolution": "bundler"`
## Task
### Part A: Install esbuild
```bash
bun add -d esbuild
```
### Part B: Create build script
Create `scripts/build-bundle.ts` (a Bun-runnable build script):
```ts
// scripts/build-bundle.ts
// Usage: bun scripts/build-bundle.ts [--watch] [--minify]
import * as esbuild from 'esbuild'
import { resolve } from 'path'
const ROOT = resolve(import.meta.dir, '..')
const watch = process.argv.includes('--watch')
const minify = process.argv.includes('--minify')
const buildOptions: esbuild.BuildOptions = {
entryPoints: [resolve(ROOT, 'src/entrypoints/cli.tsx')],
bundle: true,
platform: 'node',
target: 'node20',
format: 'esm',
outdir: resolve(ROOT, 'dist'),
outExtension: { '.js': '.mjs' },
// Inject the MACRO global before all other code
inject: [resolve(ROOT, 'src/shims/macro.ts')],
// Alias bun:bundle to our runtime shim
alias: {
'bun:bundle': resolve(ROOT, 'src/shims/bun-bundle.ts'),
},
// Don't bundle node built-ins or native packages
external: [
// Node built-ins
'fs', 'path', 'os', 'crypto', 'child_process', 'http', 'https',
'net', 'tls', 'url', 'util', 'stream', 'events', 'buffer',
'querystring', 'readline', 'zlib', 'assert', 'tty', 'worker_threads',
'perf_hooks', 'async_hooks', 'dns', 'dgram', 'cluster',
'node:*',
// Native addons that can't be bundled
'fsevents',
],
jsx: 'automatic',
// Source maps for debugging
sourcemap: true,
minify,
// Banner: shebang for CLI + preload the MACRO global
banner: {
js: '#!/usr/bin/env node\n',
},
// Handle the .js → .ts resolution that the codebase uses
resolveExtensions: ['.tsx', '.ts', '.jsx', '.js', '.json'],
logLevel: 'info',
}
async function main() {
if (watch) {
const ctx = await esbuild.context(buildOptions)
await ctx.watch()
console.log('Watching for changes...')
} else {
const result = await esbuild.build(buildOptions)
if (result.errors.length > 0) {
console.error('Build failed')
process.exit(1)
}
console.log('Build complete → dist/')
}
}
main().catch(err => {
console.error(err)
process.exit(1)
})
```
**Important**: This is a starting point. You will likely need to iterate on the externals list and alias configuration. The codebase has ~1,900 files — some imports may need special handling. When you run the build:
1. Run it: `bun scripts/build-bundle.ts`
2. Look at the errors
3. Fix them (add externals, fix aliases, etc.)
4. Repeat until it bundles successfully
Common issues you'll hit:
- **npm packages that use native modules** → add to `external`
- **Dynamic `require()` calls** behind `process.env.USER_TYPE === 'ant'` → these are Anthropic-internal, wrap them or stub them
- **Circular dependencies** → esbuild handles these but may warn
- **Re-exports from barrel files** → should work but watch for issues
### Part C: Add npm scripts
Add these to `package.json` `"scripts"`:
```json
{
"build": "bun scripts/build-bundle.ts",
"build:watch": "bun scripts/build-bundle.ts --watch",
"build:prod": "bun scripts/build-bundle.ts --minify"
}
```
### Part D: Create dist output directory
Add `dist/` to `.gitignore` (create one if it doesn't exist).
### Part E: Iterate on build errors
Run the build and fix whatever comes up. The goal is a clean `bun scripts/build-bundle.ts` that produces `dist/cli.mjs`.
**Strategy for unresolvable modules**: If modules reference Anthropic-internal packages or Bun-specific APIs (like `Bun.hash`, `Bun.file`), create minimal stubs in `src/shims/` that provide compatible fallbacks.
### Part F: Test the output
After a successful build:
```bash
node dist/cli.mjs --version
# or
bun dist/cli.mjs --version
```
This should print the version. It will likely crash after that because no API key is configured — that's fine for now.
## Verification
1. `bun scripts/build-bundle.ts` completes without errors
2. `dist/cli.mjs` exists
3. `bun dist/cli.mjs --version` or `node dist/cli.mjs --version` prints a version string
4. `package.json` has `build`, `build:watch`, `build:prod` scripts
---
# Prompt 04: Fix MCP Server Build
## Context
You are working in `/workspaces/claude-code/mcp-server/`. This is a separate sub-project that provides an MCP (Model Context Protocol) server for exploring the Claude Code source. It's a simpler, self-contained TypeScript project.
Currently `npm run build` (which runs `tsc`) fails with TypeScript errors.
## Task
1. **Run the build and capture errors**:
```bash
cd /workspaces/claude-code/mcp-server
npm run build 2>&1
```
2. **Fix all TypeScript errors** in `mcp-server/src/server.ts` and `mcp-server/src/index.ts`. Common issues include:
- Duplicate function implementations
- Missing imports
- Type mismatches with the MCP SDK types
3. **Verify the fix**:
```bash
npm run build
```
Should complete with zero errors and produce output in `mcp-server/dist/`.
4. **Test the MCP server runs**:
```bash
node dist/index.js --help 2>&1 || node dist/index.js 2>&1 | head -5
```
It may hang waiting for stdio input (that's normal for an MCP server) — just verify it starts without crashing.
## Key Files
- `mcp-server/package.json` — build script and dependencies
- `mcp-server/tsconfig.json` — TypeScript config
- `mcp-server/src/server.ts` — Main server logic (tools, resources, prompts)
- `mcp-server/src/index.ts` — Entrypoint (stdio transport)
## Verification
1. `cd mcp-server && npm run build` succeeds with zero errors
2. `ls mcp-server/dist/` shows compiled `.js` files
3. `node mcp-server/dist/index.js` starts without immediate crash
---
## Fix: GitHub Push Protection Blocked Push
`git push origin main` is rejected because commits `5e99ad6` and `7adaa5d` contain a **GitHub OAuth token** in `.mcpregistry_github_token`. The file was deleted from disk but still exists in git history. You must rewrite history to remove it.
### Step 1: Scrub secrets from history
```bash
git filter-branch --force --index-filter \
'git rm --cached --ignore-unmatch .mcpregistry_github_token .mcpregistry_registry_token mcp-server/.mcpregistry_github_token mcp-server/.mcpregistry_registry_token' \
--prune-empty HEAD~5..HEAD
```
### Step 2: Force-push
```bash
# History was rewritten, so a plain push is rejected as non-fast-forward
git push --force origin main
```
### Alternative: Interactive rebase
```bash
git rebase -i HEAD~5
# Change "pick" to "edit" for commits 5e99ad6 and 7adaa5d
# At each stop, run:
git rm --cached --ignore-unmatch .mcpregistry_github_token .mcpregistry_registry_token
git rm --cached --ignore-unmatch mcp-server/.mcpregistry_github_token mcp-server/.mcpregistry_registry_token
git commit --amend --no-edit
git rebase --continue
```
### Step 3: Prevent future leaks
```bash
echo ".mcpregistry_github_token" >> .gitignore
echo ".mcpregistry_registry_token" >> .gitignore
git add .gitignore && git commit -m "chore: gitignore token files"
```
---

*`prompts/05-env-and-auth.md`*
# Prompt 05: Environment Configuration & API Authentication
## Context
You are working in `/workspaces/claude-code`. The CLI needs an Anthropic API key to function. The auth system supports multiple backends:
- **Direct API** (`ANTHROPIC_API_KEY`) — simplest
- **OAuth** (Claude.ai subscription) — complex browser flow
- **AWS Bedrock** — `AWS_*` env vars
- **Google Vertex AI** — GCP credentials
- **Azure Foundry** — `ANTHROPIC_FOUNDRY_API_KEY`
## Task
### Part A: Create `.env` file from the existing code
Search the codebase for all environment variables used. Key files to check:
- `src/entrypoints/cli.tsx` (reads env vars at top level)
- `src/services/api/client.ts` (API client construction)
- `src/utils/auth.ts` (authentication)
- `src/utils/config.ts` (config loading)
- `src/constants/` (any hardcoded config)
- `src/entrypoints/init.ts` (initialization reads)
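A rough first pass at enumerating the variables can be mechanical. The regex below misses destructured or computed reads, so treat its output as a starting list to refine by reading the files above, not as the full inventory:

```shell
# List every literal process.env.<NAME> read in the source tree.
# 2>/dev/null keeps the pipeline quiet if src/ is missing or unreadable.
grep -rhoE 'process\.env\.[A-Z0-9_]+' src/ 2>/dev/null | sort -u
```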
Create a `.env.example` file (or update the existing one if it exists) with ALL discoverable env vars, organized by category, with documentation comments. At minimum include:
```env
# ─── Authentication ───
ANTHROPIC_API_KEY=            # Required: Your Anthropic API key (sk-ant-...)

# ─── API Configuration ───
ANTHROPIC_BASE_URL=           # Custom API endpoint (default: https://api.anthropic.com)
ANTHROPIC_MODEL=              # Override default model (e.g., claude-sonnet-4-20250514)
ANTHROPIC_SMALL_FAST_MODEL=   # Model for fast/cheap operations (e.g., claude-haiku)

# ─── Feature Flags (used by bun:bundle shim) ───
CLAUDE_CODE_PROACTIVE=false
CLAUDE_CODE_BRIDGE_MODE=false
CLAUDE_CODE_COORDINATOR_MODE=false
CLAUDE_CODE_VOICE_MODE=false

# ─── Debug ───
CLAUDE_CODE_DEBUG_LOG_LEVEL=  # debug, info, warn, error
DEBUG=false
```
### Part B: Trace the API client setup
Read `src/services/api/client.ts` to understand how the Anthropic SDK is initialized. Document:
1. What env vars it reads
2. How it selects between API backends (direct, Bedrock, Vertex, etc.)
3. Where the API key comes from (env var? keychain? OAuth token?)
Create a comment block at the top of `.env.example` explaining how auth works.
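As a working hypothesis while reading, the backend selection probably resembles the pattern below. The env var names (`CLAUDE_CODE_USE_BEDROCK`, `CLAUDE_CODE_USE_VERTEX`) are guesses to confirm or correct against the actual file:

```typescript
// Hypothetical sketch of backend selection — verify every name against
// src/services/api/client.ts before relying on any of this.
type Backend = 'direct' | 'bedrock' | 'vertex' | 'foundry'

function resolveBackend(env: Record<string, string | undefined>): Backend {
  if (env.CLAUDE_CODE_USE_BEDROCK) return 'bedrock'   // AWS credentials expected
  if (env.CLAUDE_CODE_USE_VERTEX) return 'vertex'     // GCP credentials expected
  if (env.ANTHROPIC_FOUNDRY_API_KEY) return 'foundry'
  return 'direct'                                     // ANTHROPIC_API_KEY path
}
```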
### Part C: Create a minimal auth test
Create `scripts/test-auth.ts`:
```ts
// scripts/test-auth.ts
// Quick test that the API key is configured and can reach Anthropic
// Usage: bun scripts/test-auth.ts
import Anthropic from '@anthropic-ai/sdk'

if (!process.env.ANTHROPIC_API_KEY) {
  console.error('❌ ANTHROPIC_API_KEY is not set')
  process.exit(1)
}

const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
})

async function main() {
  try {
    const msg = await client.messages.create({
      model: process.env.ANTHROPIC_MODEL || 'claude-sonnet-4-20250514',
      max_tokens: 50,
      messages: [{ role: 'user', content: 'Say "hello" and nothing else.' }],
    })
    console.log('✅ API connection successful!')
    console.log('Response:', msg.content[0].type === 'text' ? msg.content[0].text : msg.content[0])
  } catch (err: any) {
    console.error('❌ API connection failed:', err.message)
    process.exit(1)
  }
}

main()
```
### Part D: Stub OAuth for development
The OAuth flow (`src/services/oauth/`) requires browser interaction and Anthropic's OAuth endpoints. For development, we want to bypass it.
Search for where the auth decision is made (likely in `src/utils/auth.ts` or `src/entrypoints/init.ts`). Document what would need to be stubbed to skip OAuth and use only `ANTHROPIC_API_KEY`.
Don't modify source files yet — just document findings in a comment at the bottom of `.env.example`.
## Verification
1. `.env.example` exists with comprehensive env var documentation
2. `scripts/test-auth.ts` exists
3. With a valid `ANTHROPIC_API_KEY` set: `bun scripts/test-auth.ts` prints success
4. Without an API key: `bun scripts/test-auth.ts` prints a clear error
---
# Prompt 06: Verify and Fix the Ink/React Terminal UI Pipeline
## Context
You are working in `/workspaces/claude-code`. The CLI renders its UI using **React + Ink** — a framework that renders React components to the terminal (not a browser). This project includes a **custom fork of Ink** embedded directly in `src/ink/`.
Key files:
- `src/ink.ts` — Public API (re-exports `render()` and `createRoot()`, wraps with `ThemeProvider`)
- `src/ink/root.ts` — Ink's root renderer
- `src/ink/ink.tsx` — Core Ink component
- `src/ink/reconciler.ts` — React reconciler for terminal output
- `src/ink/dom.ts` — Terminal DOM implementation
- `src/ink/renderer.ts` — Renders virtual DOM to terminal strings
- `src/ink/components/` — Built-in Ink components (Box, Text, etc.)
- `src/components/` — Claude Code's ~140 custom components
## Task
### Part A: Trace the render pipeline
Read these files in order and document the rendering flow:
1. `src/ink.ts` → how `render()` and `createRoot()` work
2. `src/ink/root.ts` → how Ink creates a root and mounts React
3. `src/ink/reconciler.ts` → what React reconciler is used
4. `src/ink/renderer.ts` → how the virtual DOM becomes terminal output
5. `src/ink/dom.ts` → what the "DOM nodes" look like
Create a brief architecture doc in a comment block or README section.
### Part B: Verify Ink components compile
Check that the core Ink components are self-contained:
```
src/ink/components/
```
List them all and verify they don't have missing imports.
### Part C: Check the ThemeProvider
Read `src/components/design-system/ThemeProvider.tsx` (or wherever it lives). Verify it:
1. Exists
2. Exports a `ThemeProvider` component
3. The theme system doesn't depend on external resources
### Part D: Create a minimal render test
Create `scripts/test-ink.tsx`:
```tsx
// scripts/test-ink.tsx
// Minimal test that the Ink terminal UI renders
// Usage: bun scripts/test-ink.tsx
// We need the shims loaded first (ESM evaluates modules in import order)
import '../src/shims/preload.js'
import React from 'react'
// Now try to use Ink
import { render } from '../src/ink.js'
import { Text } from '../src/ink/components/Text.js'

// Minimal component
function Hello() {
  return <Text>Hello from Claude Code Ink UI!</Text>
}
async function main() {
  const instance = await render(<Hello />)
  // Give it a moment to render
  setTimeout(() => {
    instance.unmount()
    process.exit(0)
  }, 500)
}

main().catch(err => {
  console.error('Ink render test failed:', err)
  process.exit(1)
})
```
Adjust the imports based on what you find — the Text component path may differ.
### Part E: Fix any issues
If Ink rendering fails, the common issues are:
1. **Missing `yoga-wasm-web` or `yoga-layout`** — Ink uses Yoga for flexbox layout. Check if there's a Yoga dependency or if it's embedded.
2. **React version mismatch** — The code uses React 19. Verify the reconciler is compatible.
3. **Terminal detection** — Ink checks if stdout is a TTY. In some environments this may need to be forced.
4. **Missing chalk/ansi dependency** — Terminal colors.
Fix whatever you find to make the test render successfully.
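For issue 3 specifically, a small override helper can unblock testing in headless devcontainers. The `CLAUDE_FORCE_TTY` flag is invented here for illustration, not something the codebase defines; the real detection lives somewhere in `src/ink/`:

```typescript
// Hypothetical TTY override for headless environments.
function shouldRenderInteractive(
  stream: { isTTY?: boolean },
  forceFlag?: string, // e.g. process.env.CLAUDE_FORCE_TTY (invented name)
): boolean {
  if (forceFlag === '1') return true
  return Boolean(stream.isTTY)
}

if (!shouldRenderInteractive(process.stdout, process.env.CLAUDE_FORCE_TTY)) {
  console.warn('stdout is not a TTY; Ink may refuse to render interactively')
}
```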
### Part F: Verify component imports
Check that `src/components/` components can import from the Ink system without errors. Pick 3-5 key components:
- `src/components/MessageResponse.tsx` (or similar — the main chat message renderer)
- `src/components/ToolUseResult.tsx` (or similar — tool output display)
- `src/components/PermissionRequest.tsx` (or similar — permission modal)
Read their imports and verify nothing is missing.
## Verification
1. `scripts/test-ink.tsx` renders "Hello from Claude Code Ink UI!" to the terminal
2. No new TypeScript errors introduced
3. You've documented the render pipeline flow
---

*`prompts/07-tool-system.md`*
# Prompt 07: Audit and Wire Up the Tool System
## Context
You are working in `/workspaces/claude-code`. The Claude Code CLI has ~40 tools that the LLM can invoke during conversations. Each tool is in `src/tools/<ToolName>/` and follows a consistent pattern.
Key files:
- `src/Tool.ts` (~29K lines) — Tool type definitions, `ToolUseContext`, `PermissionResult`, etc.
- `src/tools.ts` — Tool registry (`getTools()` function that returns all available tools)
- `src/tools/` — Individual tool directories
## Task
### Part A: Understand the Tool interface
Read `src/Tool.ts` and document the `Tool` interface. Key questions:
1. What fields does a `Tool` have? (name, description, inputSchema, execute, etc.)
2. What is `ToolUseContext`? What does it provide to tool execution?
3. How do tool permissions work? (`PermissionResult`, `needsPermission`)
4. How do tools declare their input schema? (JSON Schema / Zod)
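It helps to have a rough shape in mind before reading. The sketch below is a guess inferred from how the questions above are phrased; every field name needs confirming against the real `src/Tool.ts`:

```typescript
// Guessed shapes only — confirm each field against src/Tool.ts.
interface ToolUseContextSketch {
  abortController: AbortController
  options: Record<string, unknown>
}

interface ToolSketch<Input = unknown, Output = unknown> {
  name: string
  description: string | (() => Promise<string>)
  inputSchema: unknown // likely a Zod schema
  needsPermissions?: (input: Input) => boolean
  call(input: Input, ctx: ToolUseContextSketch): AsyncGenerator<Output>
}

// A conforming dummy tool, to sanity-check the shape:
const echoTool: ToolSketch<{ text: string }, string> = {
  name: 'Echo',
  description: 'Repeats its input',
  inputSchema: undefined,
  async *call(input) {
    yield input.text
  },
}
```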
### Part B: Audit the tool registry
Read `src/tools.ts` fully. It dynamically imports tools behind feature flags and env checks:
```ts
const REPLTool = process.env.USER_TYPE === 'ant' ? ... : null
const SleepTool = feature('PROACTIVE') || feature('KAIROS') ? ... : null
```
Create a complete inventory of:
1. **Always-available tools** — imported unconditionally
2. **Feature-gated tools** — which feature flag enables them
3. **Ant-only tools** — gated behind `USER_TYPE === 'ant'` (Anthropic internal)
4. **Broken/missing tools** — any tools referenced but not found
### Part C: Verify each tool compiles
For each tool directory in `src/tools/`, check:
1. Does it have an `index.ts` or main file?
2. Does it export a tool definition matching the `Tool` interface?
3. Are its imports resolvable?
Focus on the **core 10 tools** that are essential for basic operation:
- `BashTool` — shell command execution
- `FileReadTool` — read files
- `FileWriteTool` — write files
- `FileEditTool` — edit files (search & replace)
- `GlobTool` — find files by pattern
- `GrepTool` — search file contents
- `AgentTool` — spawn sub-agent
- `WebFetchTool` — HTTP requests
- `AskUserQuestionTool` — ask the user for input
- `TodoWriteTool` — todo list management
### Part D: Fix import issues
The tool registry (`src/tools.ts`) uses dynamic imports with `bun:bundle` feature flags. With our runtime shim, these should work — but verify:
1. Feature-gated imports resolve when the flag is `false` (should be skipped)
2. Feature-gated imports resolve when the flag is `true` (should load)
3. Ant-only tools gracefully handle `process.env.USER_TYPE !== 'ant'`
Fix any import resolution errors.
### Part E: Create a tool smoke test
Create `scripts/test-tools.ts`:
```ts
// scripts/test-tools.ts
// Verify all tools load without errors
// Usage: bun scripts/test-tools.ts
import '../src/shims/preload.js'

async function main() {
  const { getTools } = await import('../src/tools.js')
  // getTools() may need arguments — check its signature
  const tools = getTools(/* ... */)
  console.log(`Loaded ${tools.length} tools:\n`)
  for (const tool of tools) {
    console.log(`  ${tool.name}`)
  }
}

main().catch(err => {
  console.error('Tool loading failed:', err)
  process.exit(1)
})
```
Adapt the script to match the actual `getTools()` signature.
### Part F: Stub Anthropic-internal tools
Any tools gated behind `USER_TYPE === 'ant'` should be cleanly excluded. Verify the null checks work and don't cause runtime errors when these tools are missing from the registry.
## Verification
1. `scripts/test-tools.ts` runs and lists all available tools without errors
2. The core 10 tools listed above are all present
3. No TypeScript errors in `src/tools/` or `src/tools.ts`
4. Ant-only tools are cleanly excluded (no crashes)
---
# Prompt 08: Audit and Wire Up the Command System
## Context
You are working in `/workspaces/claude-code`. The CLI has ~50 slash commands (e.g., `/commit`, `/review`, `/init`, `/config`). These are registered in `src/commands.ts` and implemented in `src/commands/`.
Key files:
- `src/commands.ts` (~25K lines) — Command registry (`getCommands()`)
- `src/commands/` — Individual command implementations
- `src/types/command.ts` — Command type definition
## Task
### Part A: Understand the Command interface
Read `src/types/command.ts` and the top of `src/commands.ts`. Document:
1. The `Command` type (name, description, execute, args, etc.)
2. How commands are registered
3. How command execution is triggered (from the REPL? from CLI args?)
### Part B: Audit the command registry
Read `src/commands.ts` fully. Create a complete inventory of all commands, organized by category:
**Essential commands** (needed for basic operation):
- `/help` — show help
- `/config` — view/edit configuration
- `/init` — initialize a project
- `/commit` — git commit
- `/review` — code review
**Feature-gated commands** (behind feature flags or USER_TYPE):
- List which flag enables each
**Potentially broken commands** (reference missing imports or services):
- List any that can't resolve their imports
### Part C: Verify core commands compile
For the essential commands listed above, read their implementations and check:
1. All imports resolve
2. They don't depend on unavailable services
3. The function signatures match the Command type
### Part D: Fix import issues
Similar to the tool system, commands may have:
- Feature-gated imports that need the `bun:bundle` shim
- Ant-only code paths
- Dynamic imports that need correct paths
Fix whatever is broken.
### Part E: Handle "moved to plugin" commands
There's a file `src/commands/createMovedToPluginCommand.ts`. Read it — some commands have been migrated to the plugin system. These should gracefully tell the user the command has moved, not crash.
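One plausible implementation of such a factory, for comparison with what the file actually does. The message text and field names here are made up:

```typescript
// Hypothetical version of createMovedToPluginCommand — compare with the real
// src/commands/createMovedToPluginCommand.ts rather than assuming this is it.
function createMovedToPluginCommandSketch(name: string, pluginName: string) {
  return {
    name,
    description: `(moved to the ${pluginName} plugin)`,
    async call(): Promise<string> {
      // Friendly notice instead of a crash when the command body is gone.
      return `/${name} has moved to the "${pluginName}" plugin. Install the plugin to keep using it.`
    },
  }
}
```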
### Part F: Create a command smoke test
Create `scripts/test-commands.ts`:
```ts
// scripts/test-commands.ts
// Verify all commands load without errors
// Usage: bun scripts/test-commands.ts
import '../src/shims/preload.js'

async function main() {
  const { getCommands } = await import('../src/commands.js')
  const commands = getCommands(/* check signature */)
  console.log(`Loaded ${commands.length} commands:\n`)
  for (const cmd of commands) {
    console.log(`  /${cmd.name} - ${cmd.description || '(no description)'}`)
  }
}

main().catch(err => {
  console.error('Command loading failed:', err)
  process.exit(1)
})
```
## Verification
1. `scripts/test-commands.ts` lists all available commands
2. Core commands (`/help`, `/config`, `/init`, `/commit`) are present
3. No runtime crashes from missing imports
4. Moved-to-plugin commands show a friendly message instead of crashing
---

*`prompts/09-query-engine.md`*
# Prompt 09: Get the QueryEngine (Core LLM Loop) Functional
## Context
You are working in `/workspaces/claude-code`. The `QueryEngine` (`src/QueryEngine.ts`, ~46K lines) is the heart of the CLI — it:
1. Sends messages to the Anthropic API (streaming)
2. Processes streaming responses (text, thinking, tool_use blocks)
3. Executes tools when the LLM requests them (tool loop)
4. Handles retries, rate limits, and errors
5. Tracks token usage and costs
6. Manages conversation context (message history)
This is the most complex single file. The goal is to get it functional enough for a basic conversation loop.
## Key Dependencies
The QueryEngine depends on:
- `src/services/api/client.ts` — Anthropic SDK client
- `src/services/api/claude.ts` — Message API wrapper
- `src/Tool.ts` — Tool definitions
- `src/tools.ts` — Tool registry
- `src/context.ts` — System context
- `src/constants/prompts.ts` — System prompt
- Token counting utilities
- Streaming event handlers
## Task
### Part A: Map the QueryEngine architecture
Read `src/QueryEngine.ts` and create a structural map:
1. **Class structure** — What classes/interfaces are defined?
2. **Public API** — What method starts a query? What does it return?
3. **Message flow** — How does a user message become an API call?
4. **Tool loop** — How are tool calls detected, executed, and fed back?
5. **Streaming** — How are streaming events processed?
6. **Retry logic** — How are API errors handled?
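Item 4 is the standard Anthropic agentic tool loop. A minimal non-streaming sketch (with an injected client so it runs without an API key) shows the control flow to look for; the real QueryEngine layers streaming, retries, cancellation, and cost tracking on top of this skeleton:

```typescript
// Minimal sketch of the tool loop — illustrative, not the QueryEngine's code.
type Block =
  | { type: 'text'; text: string }
  | { type: 'tool_use'; id: string; name: string; input: unknown }
type Msg = { role: 'user' | 'assistant'; content: unknown }

async function toolLoop(
  createMessage: (messages: Msg[]) => Promise<{ content: Block[] }>,
  runTool: (name: string, input: unknown) => Promise<string>,
  messages: Msg[],
): Promise<string> {
  while (true) {
    const resp = await createMessage(messages)
    const toolUses = resp.content.filter(
      (b): b is Extract<Block, { type: 'tool_use' }> => b.type === 'tool_use',
    )
    if (toolUses.length === 0) {
      // No tool calls left: return the concatenated text blocks.
      return resp.content.filter(b => b.type === 'text').map(b => (b as any).text).join('')
    }
    messages.push({ role: 'assistant', content: resp.content })
    const results: Array<{ type: 'tool_result'; tool_use_id: string; content: string }> = []
    for (const tu of toolUses) {
      results.push({ type: 'tool_result', tool_use_id: tu.id, content: await runTool(tu.name, tu.input) })
    }
    // Tool results go back as a user message, and the loop continues.
    messages.push({ role: 'user', content: results })
  }
}
```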
### Part B: Trace the API call path
Follow the chain from QueryEngine → API client:
1. Read `src/services/api/client.ts` — how is the Anthropic SDK client created?
2. Read `src/services/api/claude.ts` — what's the message creation wrapper?
3. What parameters are passed? (model, max_tokens, system prompt, tools, messages)
4. How is streaming handled? (SSE? SDK streaming?)
### Part C: Identify and fix blockers
The QueryEngine will have dependencies on many subsystems. For each dependency:
- **If it's essential** (API client, tool execution) → make sure it works
- **If it's optional** (analytics, telemetry, policy limits) → stub or skip it
Common blockers:
1. **Missing API configuration** → needs `ANTHROPIC_API_KEY` (Prompt 05)
2. **Policy limits service** → may block execution, needs stubbing
3. **GrowthBook/analytics** → needs stubbing or graceful failure
4. **Remote managed settings** → needs stubbing
5. **Bootstrap data fetch** → may need to be optional
### Part D: Create a minimal conversation test
Create `scripts/test-query.ts` that exercises the QueryEngine directly:
```ts
// scripts/test-query.ts
// Minimal test of the QueryEngine — single query, no REPL
// Usage: ANTHROPIC_API_KEY=sk-ant-... bun scripts/test-query.ts "What is 2+2?"
import '../src/shims/preload.js'

async function main() {
  const query = process.argv[2] || 'What is 2+2?'

  // Import and set up minimal dependencies.
  // You'll need to figure out the exact imports and initialization
  // by reading src/QueryEngine.ts, src/query.ts, and src/replLauncher.tsx.

  // The basic flow should be:
  // 1. Create API client
  // 2. Build system prompt
  // 3. Create QueryEngine instance
  // 4. Send a query
  // 5. Print the response

  console.log(`Query: ${query}`)
  console.log('---')

  // TODO: Wire up the actual QueryEngine call
  // This is the hardest part — document what you need to do
}

main().catch(err => {
  console.error('Query test failed:', err)
  process.exit(1)
})
```
### Part E: Handle the streaming response
The QueryEngine likely uses the Anthropic SDK's streaming interface. Make sure:
1. Text content is printed to stdout as it streams
2. Thinking blocks are handled (displayed or hidden based on config)
3. Tool use blocks trigger tool execution
4. The tool loop feeds results back and continues
### Part F: Document what's still broken
After getting a basic query working, document:
1. Which features work
2. Which features are stubbed
3. What would need to happen for full functionality
## Verification
1. `ANTHROPIC_API_KEY=sk-ant-... bun scripts/test-query.ts "What is 2+2?"` gets a response
2. Streaming output appears in real-time
3. No unhandled crashes (graceful error messages are fine)
4. Architecture is documented
---
# Prompt 10: Wire Up System Prompt, Context Gathering & Memory System
## Context
You are working in `/workspaces/claude-code`. The CLI constructs a detailed system prompt before each conversation. This prompt includes:
1. **Static instructions** — core behavior rules (from `src/constants/prompts.ts`)
2. **Dynamic context** — OS, shell, git status, working directory (from `src/context.ts`)
3. **Tool descriptions** — auto-generated from tool schemas
4. **Memory** — persistent `.claude.md` files (from `src/memdir/`)
5. **User context** — config, preferences, project settings
## Key Files
- `src/constants/prompts.ts` — System prompt construction
- `src/constants/system.ts` — System identity strings
- `src/context.ts` — OS/shell/git context collection
- `src/context/` — Additional context modules
- `src/memdir/` — Memory directory system (reads `.claude.md`, `CLAUDE.md` files)
- `src/utils/messages.ts` — Message construction helpers
## Task
### Part A: Trace the system prompt construction
Read `src/constants/prompts.ts` and map:
1. What is `getSystemPrompt()`'s signature and return type?
2. What sections does the system prompt contain?
3. How are tools described in the prompt?
4. What model-specific variations exist?
5. Where does the `MACRO.ISSUES_EXPLAINER` reference resolve to?
### Part B: Fix the context gathering
Read `src/context.ts` and:
1. Understand `getSystemContext()` and `getUserContext()`
2. These collect OS info, shell version, git status, etc.
3. Verify they work on Linux (this codebase was likely developed on macOS, so some paths may be macOS-specific)
4. Fix any platform-specific issues
### Part C: Wire up the memory system
Read `src/memdir/` directory:
1. How does it find `.claude.md` / `CLAUDE.md` files?
2. How is memory content injected into the system prompt?
3. Does it support project-level, user-level, and session-level memory?
Verify it works by:
1. Creating a test `CLAUDE.md` in the project root
2. Running the system prompt builder
3. Checking the memory appears in the output
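A guessed sketch of the discovery walk, useful as a mental model while reading `src/memdir/`. The filenames, scopes, and precedence order are assumptions to verify:

```typescript
// Hypothetical upward walk for memory files — the real rules (which filenames,
// which scopes, what precedence) must come from src/memdir/.
import { existsSync } from 'node:fs'
import { dirname, join } from 'node:path'

function findMemoryFiles(startDir: string): string[] {
  const found: string[] = []
  let dir = startDir
  while (true) {
    for (const name of ['CLAUDE.md', '.claude.md']) {
      const candidate = join(dir, name)
      if (existsSync(candidate)) found.push(candidate)
    }
    const parent = dirname(dir)
    if (parent === dir) break // reached filesystem root
    dir = parent
  }
  return found
}
```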
### Part D: Create a prompt inspection script
Create `scripts/test-prompt.ts`:
```ts
// scripts/test-prompt.ts
// Dump the full system prompt that would be sent to the API
// Usage: bun scripts/test-prompt.ts
import '../src/shims/preload.js'

async function main() {
  // Import the prompt builder
  const { getSystemPrompt } = await import('../src/constants/prompts.js')
  // May need to pass tools list and model name — check the function signature
  const prompt = await getSystemPrompt([], 'claude-sonnet-4-20250514')
  console.log('=== SYSTEM PROMPT ===')
  console.log(prompt.join('\n'))
  console.log('=== END ===')
  console.log(`\nTotal length: ${prompt.join('\n').length} characters`)
}

main().catch(err => {
  console.error('Prompt test failed:', err)
  process.exit(1)
})
```
### Part E: Fix MACRO references in prompts
The prompt system references `MACRO.ISSUES_EXPLAINER`. Make sure our `MACRO` global (from `src/shims/macro.ts`) provides this value. If the prompt references other `MACRO` fields, add them too.
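If the shim doesn't already exist, a minimal placeholder could look like this. The field values are guesses at intent, not the strings Anthropic ships; the point is that templates never interpolate `undefined`:

```typescript
// src/shims/macro.ts (hypothetical) — placeholder MACRO values for dev builds.
const macroDefaults = {
  ISSUES_EXPLAINER: 'report the issue at https://github.com/anthropics/claude-code/issues', // guessed wording
  VERSION: '0.0.0-dev',
}

// Merge so values set elsewhere win over these placeholders.
;(globalThis as any).MACRO = { ...macroDefaults, ...(globalThis as any).MACRO }
```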
### Part F: Context module audit
Check `src/context/` for additional context modules:
- Project detection (language, framework)
- Git integration (branch, status, recent commits)
- Environment detection (CI, container, SSH)
Verify these work in our dev environment.
## Verification
1. `bun scripts/test-prompt.ts` dumps a complete system prompt
2. The prompt includes: tool descriptions, OS context, memory content
3. No `undefined` or `MACRO.` references in the output
4. Memory system reads `.claude.md` from the project root
---
# Prompt 11: MCP Client/Server Integration
## Context
You are working in `/workspaces/claude-code`. The CLI has built-in MCP (Model Context Protocol) support:
- **MCP Client** — connects to external MCP servers (tools, resources)
- **MCP Server** — exposes Claude Code itself as an MCP server
MCP lets the CLI use tools provided by external servers and lets other clients use Claude Code as a tool provider.
## Key Files
- `src/services/mcp/` — MCP client implementation
- `src/services/mcp/types.ts` — MCP config types
- `src/entrypoints/mcp.ts` — MCP server mode entrypoint
- `src/tools/MCPTool/` — Tool that calls MCP servers
- `src/tools/ListMcpResourcesTool/` — Lists MCP resources
- `src/tools/ReadMcpResourceTool/` — Reads MCP resources
- `src/tools/McpAuthTool/` — MCP server authentication
- `mcp-server/` — Standalone MCP server sub-project (from Prompt 04)
## Task
### Part A: Understand MCP client architecture
Read `src/services/mcp/` directory:
1. How are MCP servers discovered? (`.mcp.json` config file?)
2. How are MCP server connections established? (stdio, HTTP, SSE?)
3. How are MCP tools registered and made available?
4. What is the `ScopedMcpServerConfig` type?
### Part B: Understand MCP config format
Search for `.mcp.json` or MCP config loading code. Document:
1. Where does the config file live? (`~/.claude/.mcp.json`? project root?)
2. What's the config schema? (server name, command, args, env?)
3. How are multiple servers configured?
Example config you might find:
```json
{
  "mcpServers": {
    "my-server": {
      "command": "node",
      "args": ["path/to/server.js"],
      "env": {}
    }
  }
}
```
### Part C: Verify MCP SDK integration
The project uses `@modelcontextprotocol/sdk` (^1.12.1). Check:
1. Is it installed in `node_modules/`?
2. Does the import work: `import { Client } from '@modelcontextprotocol/sdk/client/index.js'`
3. Are there version compatibility issues?
### Part D: Test MCP client with our own server
Create a test that:
1. Starts the `mcp-server/` we fixed in Prompt 04 as a child process
2. Connects to it via stdio using the MCP client from `src/services/mcp/`
3. Lists available tools
4. Calls one tool (e.g., `list_files` or `search_code`)
Create `scripts/test-mcp.ts`:
```ts
// scripts/test-mcp.ts
// Test MCP client/server roundtrip
// Usage: bun scripts/test-mcp.ts
import '../src/shims/preload.js'
// TODO:
// 1. Spawn mcp-server as a child process (stdio transport)
// 2. Create MCP client from src/services/mcp/
// 3. Connect client to server
// 4. List tools
// 5. Call a tool
// 6. Print results
```
### Part E: Test MCP server mode
The CLI can run as an MCP server itself (`src/entrypoints/mcp.ts`). Read this file and verify:
1. What tools does it expose?
2. What resources does it provide?
3. Can it be started with `bun src/entrypoints/mcp.ts`?
### Part F: Create sample MCP config
Create a `.mcp.json` in the project root (or wherever the app looks for it) that configures the local MCP server:
```json
{
  "mcpServers": {
    "claude-code-explorer": {
      "command": "node",
      "args": ["mcp-server/dist/index.js"],
      "env": {
        "CLAUDE_CODE_SRC_ROOT": "./src"
      }
    }
  }
}
```
## Verification
1. MCP client code in `src/services/mcp/` loads without errors
2. MCP server mode (`src/entrypoints/mcp.ts`) starts without crashing
3. A roundtrip test (client → server → response) works
4. `.mcp.json` config file is created and parseable
---
# Prompt 12: Wire Up Services Layer (Analytics, Policy, Settings, Sessions)
## Context
You are working in `/workspaces/claude-code`. The CLI has several background services that run during operation:
- **Analytics/Telemetry** — GrowthBook feature flags, OpenTelemetry traces
- **Policy Limits** — rate limiting, quota enforcement from Anthropic backend
- **Remote Managed Settings** — server-pushed configuration
- **Session Memory** — persistent conversation history across invocations
- **Bootstrap Data** — initial config fetched from API on startup
Most of these talk to Anthropic's backend servers and will fail in our dev build. The goal is to make them fail gracefully (not crash the app) or provide stubs.
## Key Files
- `src/services/analytics/growthbook.ts` — GrowthBook feature flag client
- `src/services/analytics/` — Telemetry, event logging
- `src/services/policyLimits/` — Rate limit enforcement
- `src/services/remoteManagedSettings/` — Server-pushed settings
- `src/services/SessionMemory/` — Conversation persistence
- `src/services/api/bootstrap.ts` — Initial data fetch
- `src/entrypoints/init.ts` — Where most services are initialized
- `src/cost-tracker.ts` — Token usage and cost tracking
## Task
### Part A: Map the initialization sequence
Read `src/entrypoints/init.ts` carefully. Document:
1. What services are initialized, in what order?
2. Which are blocking (must complete before app starts)?
3. Which are fire-and-forget (async, can fail silently)?
4. What happens if each one fails?
### Part B: Make GrowthBook optional
Read `src/services/analytics/growthbook.ts`:
1. How is GrowthBook initialized?
2. Where is it called from? (feature flag checks throughout the codebase)
3. What happens if initialization fails?
**Goal**: Make GrowthBook fail silently — all feature flag checks should return `false` (default) if GrowthBook is unavailable. This may already be handled, but verify it.
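The defensive pattern to verify (or retrofit) is roughly this. `isOn` mirrors the GrowthBook JS SDK's naming, but treat the exact API surface in this codebase as something to confirm:

```typescript
// Fail-silent flag check: if GrowthBook never initialized (or throws),
// every flag reads as false and the app keeps running.
let growthbook: { isOn(flag: string): boolean } | null = null

export function feature(flag: string): boolean {
  try {
    return growthbook?.isOn(flag) ?? false
  } catch {
    return false // a broken flag client must never crash the app
  }
}

// Called from init; a null client leaves everything at defaults.
export function initFeatureFlags(client: { isOn(flag: string): boolean } | null): void {
  growthbook = client
}
```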
### Part C: Stub policy limits
Read `src/services/policyLimits/`:
1. What limits does it enforce? (messages per minute, tokens per day, etc.)
2. What happens when a limit is hit?
3. Where is `loadPolicyLimits()` called?
**Goal**: Make the app work without policy limits. Either:
- Stub the service to return "no limits" (allow everything)
- Or catch and ignore errors from the API call
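A no-op stub in the spirit of the first option could be as small as this. The return shape is invented; mirror whatever `loadPolicyLimits()` actually returns:

```typescript
// Hypothetical "no limits" stub — adapt the fields to the real return type
// of src/services/policyLimits/loadPolicyLimits().
export async function loadPolicyLimitsStub() {
  return {
    unavailable: false,
    limits: [] as Array<{ kind: string; max: number }>, // empty = nothing enforced
  }
}
```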
### Part D: Make remote settings optional
Read `src/services/remoteManagedSettings/`:
1. What settings does it manage?
2. What's the fallback when the server is unreachable?
**Goal**: Ensure the app works with default settings when the remote endpoint fails.
### Part E: Handle bootstrap data
Read `src/services/api/bootstrap.ts`:
1. What data does it fetch?
2. What uses this data?
3. What happens if the fetch fails?
**Goal**: Provide sensible defaults when bootstrap fails (no API key = no bootstrap).
### Part F: Verify session memory
Read `src/services/SessionMemory/`:
1. Where is session data stored? (filesystem path)
2. How are sessions identified?
3. Does it work with the local filesystem?
**Goal**: Session memory should work out of the box since it's local filesystem.
### Part G: Wire up cost tracking
Read `src/cost-tracker.ts`:
1. How are costs calculated?
2. Where is usage reported?
3. Does it persist across sessions?
**Goal**: Cost tracking should work locally (just display, no remote reporting needed).
### Part H: Create a services smoke test
Create `scripts/test-services.ts`:
```ts
// scripts/test-services.ts
// Test that all services initialize without crashing
// Usage: bun scripts/test-services.ts
import './src/shims/preload.js'
async function main() {
console.log('Testing service initialization...')
// Try to run the init sequence
try {
const { init } = await import('../src/entrypoints/init.js')
await init()
console.log('✅ Services initialized')
} catch (err: any) {
console.error('❌ Init failed:', err.message)
// Document which service failed and why
}
}
main()
```
## Verification
1. `bun scripts/test-services.ts` completes without crashing (warnings are fine)
2. Missing remote services log warnings, not crashes
3. Session memory reads/writes to the local filesystem
4. Cost tracking displays locally
5. The app can start even when Anthropic's backend is unreachable (with just an API key)

---

**`prompts/13-bridge-ide.md`**
# Prompt 13: Bridge Layer (VS Code / JetBrains IDE Integration)
## Context
You are working in `/workspaces/claude-code`. The "Bridge" is the subsystem that connects Claude Code to IDE extensions (VS Code, JetBrains). It enables:
- Remote control of Claude Code from an IDE
- Sharing file context between IDE and CLI
- Permission approvals from the IDE UI
- Session management across IDE and terminal
The Bridge is **gated behind `feature('BRIDGE_MODE')`** and is the most complex optional subsystem (~30 files in `src/bridge/`).
## Key Files
- `src/bridge/bridgeMain.ts` — Main bridge orchestration
- `src/bridge/bridgeApi.ts` — Bridge API endpoints
- `src/bridge/bridgeMessaging.ts` — WebSocket/HTTP messaging
- `src/bridge/bridgeConfig.ts` — Bridge configuration
- `src/bridge/bridgeUI.ts` — Bridge UI rendering
- `src/bridge/jwtUtils.ts` — JWT authentication for bridge connections
- `src/bridge/types.ts` — Bridge types
- `src/bridge/initReplBridge.ts` — REPL integration
- `src/bridge/replBridge.ts` — REPL bridge handle
## Task
### Part A: Understand the bridge architecture
Read `src/bridge/types.ts` and `src/bridge/bridgeMain.ts` (first 100 lines). Document:
1. What protocols does the bridge use? (WebSocket, HTTP polling, etc.)
2. How does authentication work? (JWT)
3. What messages flow between IDE and CLI?
4. How is the bridge lifecycle managed?
### Part B: Assess what's needed vs. what can be deferred
The bridge is a **nice-to-have** for initial build-out. Categorize:
1. **Must work**: Feature flag gate (`feature('BRIDGE_MODE')` returns `false` → bridge code is skipped)
2. **Can defer**: Full bridge functionality
3. **Might break**: Code paths that assume bridge is available even when disabled
### Part C: Verify the feature gate works
Ensure that when `CLAUDE_CODE_BRIDGE_MODE=false` (or unset):
1. Bridge code is not imported
2. Bridge initialization is skipped
3. No bridge-related errors appear
4. The CLI works normally in terminal-only mode
### Part D: Stub the bridge for safety
If any code paths reference bridge functionality outside the feature gate:
1. Create `src/bridge/stub.ts` with no-op implementations
2. Make sure imports from `src/bridge/` resolve without crashing
3. Ensure the REPL works without bridge
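A no-op stub might look like the sketch below. The `BridgeHandle` interface here is hypothetical; match the method names to the real interface in `src/bridge/types.ts`.

```ts
// Hypothetical no-op bridge stub — every method succeeds and does nothing,
// so callers outside the feature gate can't crash.
interface BridgeHandle {
  start(): Promise<void>
  send(message: unknown): void
  stop(): Promise<void>
}

export function createBridgeStub(): BridgeHandle {
  return {
    async start() {}, // bridge disabled: nothing to connect
    send() {},        // drop outgoing messages
    async stop() {},
  }
}

const bridge = createBridgeStub()
await bridge.start()
bridge.send({ type: 'ping' })
await bridge.stop()
console.log('stub bridge ran without side effects')
```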
### Part E: Document bridge activation
For future work, document what would be needed to enable the bridge:
1. Set `CLAUDE_CODE_BRIDGE_MODE=true`
2. What IDE extension is needed?
3. What authentication setup is required?
4. What ports/sockets does it use?
### Part F: Check the Chrome extension bridge
There's a `--claude-in-chrome-mcp` and `--chrome-native-host` mode referenced in `src/entrypoints/cli.tsx`. Read these paths and document what they do. These can be deferred — just make sure they don't crash when not in use.
## Verification
1. CLI works normally with bridge disabled (default)
2. No bridge-related errors in stdout/stderr
3. `feature('BRIDGE_MODE')` correctly returns `false`
4. Bridge architecture is documented for future enablement
5. No dangling imports that crash when bridge is off

---

**`prompts/14-dev-runner.md`**
# Prompt 14: Create Development Runner
## Context
You are working in `/workspaces/claude-code`. By now you should have:
- Bun installed (Prompt 01)
- Runtime shims for `bun:bundle` and `MACRO` (Prompt 02)
- A build system (Prompt 03)
- Environment config (Prompt 05)
Now we need a way to **run the CLI in development mode** — quickly launching it without a full production build.
## Task
### Part A: Create `bun run dev` script
Bun can run TypeScript directly without compilation. Create a development launcher.
**Option 1: Direct Bun execution** (preferred)
Create `scripts/dev.ts`:
```ts
// scripts/dev.ts
// Development launcher — runs the CLI directly via Bun
// Usage: bun scripts/dev.ts [args...]
// Or: bun run dev [args...]
// Load shims first
import '../src/shims/preload.js'
// Register the bun:bundle module resolver.
// Bun does not recognize `bun:bundle` at runtime (outside the bundler),
// so our shim may need to be registered explicitly. Check if this is needed.
// Launch the CLI
await import('../src/entrypoints/cli.js')
```
**Option 2: Bun with preload**
Use Bun's `--preload` flag:
```bash
bun --preload ./src/shims/preload.ts src/entrypoints/cli.tsx
```
**Investigate which approach works** with the `bun:bundle` import. The tricky part is that `bun:bundle` is a special Bun module name — at runtime (without the bundler), Bun may not recognize it. You'll need to either:
1. Use Bun's `bunfig.toml` to create a module alias
2. Use a loader/plugin to intercept the import
3. Use a pre-transform step to rewrite imports
### Part B: Handle the `bun:bundle` import at runtime
This is the critical challenge. Options to investigate:
**Option A: `bunfig.toml` alias**
```toml
[resolve]
alias = { "bun:bundle" = "./src/shims/bun-bundle.ts" }
```
**Option B: Bun plugin**
Create a Bun plugin that intercepts `bun:bundle`:
```ts
// scripts/bun-plugin-shims.ts
import { plugin } from 'bun'
plugin({
name: 'bun-bundle-shim',
setup(build) {
build.onResolve({ filter: /^bun:bundle$/ }, () => ({
path: resolve(import.meta.dir, '../src/shims/bun-bundle.ts'),
}))
},
})
```
Then reference it in `bunfig.toml`:
```toml
preload = ["./scripts/bun-plugin-shims.ts"]
```
**Option C: Patch at build time**
If runtime aliasing doesn't work, use a quick pre-build transform that replaces `from 'bun:bundle'` with `from '../shims/bun-bundle.js'` across all files, outputting to a temp directory.
**Try the options in order** and go with whichever works.
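If you end up at Option C, the core of the pre-transform is a plain string rewrite. This sketch handles single- and double-quoted specifiers; a real transform would also walk `src/` and write results to a temp directory.

```ts
// Rewrite `from 'bun:bundle'` imports to point at the shim. The default
// shim specifier is an assumption — adjust to the real relative path.
function rewriteBunBundleImports(
  source: string,
  shimSpecifier = '../shims/bun-bundle.js',
): string {
  return source.replace(/from\s+(['"])bun:bundle\1/g, `from '${shimSpecifier}'`)
}

const input = "import { feature, MACRO } from 'bun:bundle'"
console.log(rewriteBunBundleImports(input))
// import { feature, MACRO } from '../shims/bun-bundle.js'
```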
### Part C: Add npm scripts
Add to `package.json`:
```json
{
"scripts": {
"dev": "bun scripts/dev.ts",
"dev:repl": "bun scripts/dev.ts --repl",
"start": "bun scripts/dev.ts"
}
}
```
### Part D: Create a `.env` loader
If the dev script doesn't automatically load `.env`, add dotenv support:
```bash
bun add -d dotenv-cli
```
Then wrap the dev command:
```json
"dev": "dotenv -e .env -- bun scripts/dev.ts"
```
Or use Bun's built-in `.env` loading (Bun automatically reads `.env` files).
### Part E: Test the development runner
1. Set `ANTHROPIC_API_KEY` in `.env`
2. Run `bun run dev --version` → should print version
3. Run `bun run dev --help` → should print help text
4. Run `bun run dev` → should start the interactive REPL (will need working Ink UI)
5. Run `ANTHROPIC_API_KEY=sk-ant-... bun run dev -p "say hello"` → should make one API call and print response
### Part F: Add debug mode
Add a debug script that enables verbose logging:
```json
{
"scripts": {
"dev:debug": "CLAUDE_CODE_DEBUG_LOG_LEVEL=debug bun scripts/dev.ts"
}
}
```
## Verification
1. `bun run dev --version` prints the version
2. `bun run dev --help` prints help without errors
3. The `bun:bundle` import resolves correctly at runtime
4. `.env` variables are loaded
5. No module resolution errors on startup

---
# Prompt 15: Production Bundle & Packaging
## Context
You are working in `/workspaces/claude-code`. By now you should have a working development runner (Prompt 14) and build system (Prompt 03). This prompt focuses on creating a production-quality bundle.
## Task
### Part A: Optimize the esbuild configuration
Update `scripts/build-bundle.ts` for production:
1. **Tree shaking** — esbuild does this by default, but verify:
- Feature-gated code with `if (feature('X'))` where X is `false` should be eliminated
- `process.env.USER_TYPE === 'ant'` branches should be eliminated (set `define` to replace with `false`)
2. **Define replacements** — Inline constants at build time:
```ts
define: {
'process.env.USER_TYPE': '"external"', // Not 'ant' (Anthropic internal)
'process.env.NODE_ENV': '"production"',
}
```
3. **Minification** — Enable for production (`--minify` flag)
4. **Source maps** — External source maps for production debugging
5. **Target** — Ensure compatibility with both Bun 1.1+ and Node.js 20+
### Part B: Handle chunking/splitting
The full bundle will be large (~2-5 MB minified). Consider:
1. **Single file** — Simplest, works everywhere (recommended for CLI tools)
2. **Code splitting** — Multiple chunks, only useful if we want lazy loading
Go with single file unless it causes issues.
### Part C: Create the executable
After bundling to `dist/cli.mjs`:
1. **Add shebang** — `#!/usr/bin/env node` (already in banner)
2. **Make executable** — `chmod +x dist/cli.mjs`
3. **Test it runs** — `./dist/cli.mjs --version`
### Part D: Platform packaging
Create packaging scripts for distribution:
**npm package** (`scripts/package-npm.ts`):
```ts
// Generate a publishable npm package in dist/npm/
// - package.json with bin, main, version
// - The bundled CLI file
// - README.md
```
**Standalone binary** (optional, via Bun):
```bash
bun build --compile src/entrypoints/cli.tsx --outfile dist/claude
```
This creates a single binary with Bun runtime embedded. Not all features will work, but it's worth testing.
### Part E: Docker build
Update the existing `Dockerfile` to produce a runnable container:
```dockerfile
FROM oven/bun:1-alpine AS builder
WORKDIR /app
COPY package.json bun.lockb* ./
RUN bun install --frozen-lockfile || bun install
COPY . .
RUN bun run build:prod
FROM oven/bun:1-alpine
WORKDIR /app
COPY --from=builder /app/dist/cli.mjs /app/
RUN apk add --no-cache git ripgrep
ENTRYPOINT ["bun", "/app/cli.mjs"]
```
### Part F: Verify production build
1. `bun run build:prod` succeeds
2. `ls -lh dist/cli.mjs` — check file size
3. `node dist/cli.mjs --version` — works with Node.js
4. `bun dist/cli.mjs --version` — works with Bun
5. `ANTHROPIC_API_KEY=... node dist/cli.mjs -p "hello"` — end-to-end works
### Part G: CI build script
Create `scripts/ci-build.sh`:
```bash
#!/bin/bash
set -euo pipefail
echo "=== Installing dependencies ==="
bun install
echo "=== Type checking ==="
bun run typecheck
echo "=== Linting ==="
bun run lint
echo "=== Building ==="
bun run build:prod
echo "=== Verifying build ==="
node dist/cli.mjs --version
echo "=== Done ==="
```
## Verification
1. `bun run build:prod` produces `dist/cli.mjs`
2. The bundle is < 10 MB (ideally < 5 MB)
3. `node dist/cli.mjs --version` works
4. `docker build .` succeeds (if Docker is available)
5. CI script runs end-to-end without errors

---

**`prompts/16-testing.md`**
# Prompt 16: Add Test Infrastructure & Smoke Tests
## Context
You are working in `/workspaces/claude-code`. The leaked source does not include any test files or test configuration (they were presumably in a separate directory or repo). We need to add a test framework and write smoke tests for core subsystems.
## Task
### Part A: Set up Vitest
```bash
bun add -d vitest @types/node
```
Create `vitest.config.ts`:
```ts
import { defineConfig } from 'vitest/config'
import { resolve } from 'path'
export default defineConfig({
test: {
globals: true,
environment: 'node',
include: ['tests/**/*.test.ts'],
setupFiles: ['tests/setup.ts'],
testTimeout: 30000,
},
resolve: {
alias: {
'bun:bundle': resolve(__dirname, 'src/shims/bun-bundle.ts'),
},
},
})
```
Create `tests/setup.ts`:
```ts
// Global test setup
import '../src/shims/preload.js'
```
Add to `package.json`:
```json
{
"scripts": {
"test": "vitest run",
"test:watch": "vitest"
}
}
```
### Part B: Write unit tests for shims
`tests/shims/bun-bundle.test.ts`:
- Test `feature()` returns `false` for unknown flags
- Test `feature()` returns `false` for disabled flags
- Test `feature()` returns `true` when env var is set
- Test `feature('ABLATION_BASELINE')` always returns `false`
`tests/shims/macro.test.ts`:
- Test `MACRO.VERSION` is a string
- Test `MACRO.PACKAGE_URL` is set
- Test `MACRO.ISSUES_EXPLAINER` is set
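Framework aside, the Part B assertions boil down to checks like these. The `CLAUDE_CODE_<FLAG>` env-var naming is an assumption for the sketch; match whatever the real shim in `src/shims/bun-bundle.ts` uses.

```ts
// Stand-in feature() mirroring the behavior under test — not the real shim.
function feature(flag: string): boolean {
  if (flag === 'ABLATION_BASELINE') return false // always off
  return process.env[`CLAUDE_CODE_${flag}`] === 'true'
}

console.assert(feature('UNKNOWN_FLAG') === false, 'unknown flags default off')
process.env.CLAUDE_CODE_BRIDGE_MODE = 'true'
console.assert(feature('BRIDGE_MODE') === true, 'env var enables a flag')
process.env.CLAUDE_CODE_ABLATION_BASELINE = 'true'
console.assert(feature('ABLATION_BASELINE') === false, 'baseline stays off')
console.log('shim behavior checks passed')
```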
### Part C: Write smoke tests for core modules
`tests/smoke/tools.test.ts`:
- Test that `getTools()` returns an array
- Test that each tool has: name, description, inputSchema
- Test that BashTool, FileReadTool, FileWriteTool are present
`tests/smoke/commands.test.ts`:
- Test that `getCommands()` returns an array
- Test that each command has: name, execute function
- Test that /help and /config commands exist
`tests/smoke/context.test.ts`:
- Test that `getSystemContext()` returns OS info
- Test that git status can be collected
- Test that platform detection works on Linux
`tests/smoke/prompt.test.ts`:
- Test that `getSystemPrompt()` returns a non-empty array
- Test that the prompt includes tool descriptions
- Test that MACRO references are resolved (no `undefined`)
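The shape checks above reduce to a validator like this sketch. The `Tool` interface is an assumption standing in for the real one in `src/tools/`.

```ts
// Framework-agnostic sketch of the tool-shape checks from Part C.
interface Tool {
  name: string
  description: string
  inputSchema: unknown
}

function validateTools(tools: Tool[]): string[] {
  const problems: string[] = []
  if (tools.length === 0) problems.push('no tools loaded')
  for (const t of tools) {
    if (!t.name) problems.push('tool missing name')
    if (!t.description) problems.push(`${t.name}: missing description`)
    if (t.inputSchema == null) problems.push(`${t.name}: missing inputSchema`)
  }
  return problems
}

const fakeTools: Tool[] = [
  { name: 'BashTool', description: 'run shell commands', inputSchema: {} },
]
console.log(validateTools(fakeTools)) // []
```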
### Part D: Write integration tests (if API key available)
`tests/integration/api.test.ts`:
- Skip if `ANTHROPIC_API_KEY` is not set
- Test API client creation
- Test a simple message (hello world)
- Test streaming works
- Test tool use (calculator-style tool call)
`tests/integration/mcp.test.ts`:
- Test MCP server starts
- Test MCP client connects
- Test tool listing
- Test tool execution roundtrip
### Part E: Write build tests
`tests/build/bundle.test.ts`:
- Test that `dist/cli.mjs` exists after build
- Test that it has a shebang
- Test that it's not empty
- Test that `node dist/cli.mjs --version` exits cleanly
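The static checks above can be sketched with only node built-ins; the demo below runs against a throwaway file, and the real test would point `checkBundle` at `dist/cli.mjs`.

```ts
// Sketch of the Part E bundle checks (existence, shebang, non-empty).
import { existsSync, mkdtempSync, readFileSync, writeFileSync } from 'node:fs'
import { tmpdir } from 'node:os'
import { join } from 'node:path'

function checkBundle(path: string): string[] {
  if (!existsSync(path)) return [`${path} does not exist`]
  const text = readFileSync(path, 'utf-8')
  const problems: string[] = []
  if (text.length === 0) problems.push('bundle is empty')
  if (!text.startsWith('#!/usr/bin/env node')) problems.push('missing shebang')
  return problems
}

// Demo against a fake bundle in a temp directory
const dir = mkdtempSync(join(tmpdir(), 'bundle-test-'))
const fake = join(dir, 'cli.mjs')
writeFileSync(fake, '#!/usr/bin/env node\nconsole.log("0.0.0-dev")\n')
console.log(checkBundle(fake)) // []
```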
### Part F: Add pre-commit hook (optional)
If the project uses git hooks, add:
```bash
# In package.json or a git hook
bun run typecheck && bun run test
```
## Verification
1. `bun run test` runs all tests
2. Shim tests pass
3. Smoke tests pass (tools, commands, context, prompts load)
4. Integration tests are skipped when no API key is set
5. Integration tests pass when API key is available
6. Test output is clear and readable

---

**`scripts/build-bundle.ts`**
// scripts/build-bundle.ts
// Usage: bun scripts/build-bundle.ts [--watch] [--minify] [--no-sourcemap]
//
// Production build: bun scripts/build-bundle.ts --minify
// Dev build: bun scripts/build-bundle.ts
// Watch mode: bun scripts/build-bundle.ts --watch
import * as esbuild from 'esbuild'
import { resolve, dirname } from 'path'
import { chmodSync, readFileSync, existsSync } from 'fs'
import { fileURLToPath } from 'url'
// Bun: import.meta.dir — Node 21+: import.meta.dirname — fallback
const __dir: string =
(import.meta as any).dir ??
(import.meta as any).dirname ??
dirname(fileURLToPath(import.meta.url))
const ROOT = resolve(__dir, '..')
const watch = process.argv.includes('--watch')
const minify = process.argv.includes('--minify')
const noSourcemap = process.argv.includes('--no-sourcemap')
// Read version from package.json for MACRO injection
const pkg = JSON.parse(readFileSync(resolve(ROOT, 'package.json'), 'utf-8'))
const version = pkg.version || '0.0.0-dev'
// ── Plugin: resolve bare 'src/' imports (tsconfig baseUrl: ".") ──
// The codebase uses `import ... from 'src/foo/bar.js'` which relies on
// TypeScript's baseUrl resolution. This plugin maps those to real TS files.
const srcResolverPlugin: esbuild.Plugin = {
name: 'src-resolver',
setup(build) {
build.onResolve({ filter: /^src\// }, (args) => {
const basePath = resolve(ROOT, args.path)
// Already exists as-is
if (existsSync(basePath)) {
return { path: basePath }
}
// Strip .js/.jsx and try TypeScript extensions
const withoutExt = basePath.replace(/\.(js|jsx)$/, '')
for (const ext of ['.ts', '.tsx', '.js', '.jsx']) {
const candidate = withoutExt + ext
if (existsSync(candidate)) {
return { path: candidate }
}
}
// Try as directory with index file
const dirPath = basePath.replace(/\.(js|jsx)$/, '')
for (const ext of ['.ts', '.tsx', '.js', '.jsx']) {
const candidate = resolve(dirPath, 'index' + ext)
if (existsSync(candidate)) {
return { path: candidate }
}
}
// Let esbuild handle it (will error if truly missing)
return undefined
})
},
}
const buildOptions: esbuild.BuildOptions = {
entryPoints: [resolve(ROOT, 'src/entrypoints/cli.tsx')],
bundle: true,
platform: 'node',
target: ['node20', 'es2022'],
format: 'esm',
outdir: resolve(ROOT, 'dist'),
outExtension: { '.js': '.mjs' },
// Single-file output — no code splitting for CLI tools
splitting: false,
plugins: [srcResolverPlugin],
// Use tsconfig for baseUrl / paths resolution (complements plugin above)
tsconfig: resolve(ROOT, 'tsconfig.json'),
// Alias bun:bundle to our runtime shim
alias: {
'bun:bundle': resolve(ROOT, 'src/shims/bun-bundle.ts'),
},
// Don't bundle node built-ins or problematic native packages
external: [
// Node built-ins (with and without node: prefix)
'fs', 'path', 'os', 'crypto', 'child_process', 'http', 'https',
'net', 'tls', 'url', 'util', 'stream', 'events', 'buffer',
'querystring', 'readline', 'zlib', 'assert', 'tty', 'worker_threads',
'perf_hooks', 'async_hooks', 'dns', 'dgram', 'cluster',
'string_decoder', 'module', 'vm', 'constants', 'domain',
'console', 'process', 'v8', 'inspector',
'node:*',
// Native addons that can't be bundled
'fsevents',
'sharp',
'image-processor-napi',
// Anthropic-internal packages (not published externally)
'@anthropic-ai/sandbox-runtime',
'@anthropic-ai/claude-agent-sdk',
// Anthropic-internal (@ant/) packages — gated behind USER_TYPE === 'ant'
'@ant/*',
],
jsx: 'automatic',
// Source maps for production debugging (external .map files)
sourcemap: noSourcemap ? false : 'external',
// Minification for production
minify,
// Tree shaking (on by default, explicit for clarity)
treeShaking: true,
// Define replacements — inline constants at build time
// MACRO.* — originally inlined by Bun's bundler at compile time
// process.env.USER_TYPE — eliminates 'ant' (Anthropic-internal) code branches
define: {
'MACRO.VERSION': JSON.stringify(version),
'MACRO.PACKAGE_URL': JSON.stringify('@anthropic-ai/claude-code'),
'MACRO.ISSUES_EXPLAINER': JSON.stringify(
'report issues at https://github.com/anthropics/claude-code/issues'
),
'process.env.USER_TYPE': '"external"',
'process.env.NODE_ENV': minify ? '"production"' : '"development"',
},
// Banner: shebang for direct CLI execution
banner: {
js: '#!/usr/bin/env node\n',
},
// Handle the .js → .ts resolution that the codebase uses
resolveExtensions: ['.tsx', '.ts', '.jsx', '.js', '.json'],
logLevel: 'info',
// Metafile for bundle analysis
metafile: true,
}
async function main() {
if (watch) {
const ctx = await esbuild.context(buildOptions)
await ctx.watch()
console.log('Watching for changes...')
} else {
const startTime = Date.now()
const result = await esbuild.build(buildOptions)
if (result.errors.length > 0) {
console.error('Build failed')
process.exit(1)
}
// Make the output executable
const outPath = resolve(ROOT, 'dist/cli.mjs')
try {
chmodSync(outPath, 0o755)
} catch {
// chmod may fail on some platforms, non-fatal
}
const elapsed = Date.now() - startTime
// Print bundle size info
if (result.metafile) {
const outFiles = Object.entries(result.metafile.outputs)
for (const [file, info] of outFiles) {
if (file.endsWith('.mjs')) {
const sizeMB = ((info as { bytes: number }).bytes / 1024 / 1024).toFixed(2)
console.log(`\n ${file}: ${sizeMB} MB`)
}
}
console.log(`\nBuild complete in ${elapsed}ms → dist/`)
// Write metafile for further analysis
const { writeFileSync } = await import('fs')
writeFileSync(
resolve(ROOT, 'dist/meta.json'),
JSON.stringify(result.metafile),
)
console.log(' Metafile written to dist/meta.json')
}
}
}
main().catch(err => {
console.error(err)
process.exit(1)
})

---

**`scripts/build-web.ts`**
// scripts/build-web.ts
// Bundles the browser-side terminal frontend.
//
// Usage:
// bun scripts/build-web.ts # dev build
// bun scripts/build-web.ts --watch # watch mode
// bun scripts/build-web.ts --minify # production (minified)
import * as esbuild from 'esbuild'
import { resolve, dirname } from 'path'
import { fileURLToPath } from 'url'
const __dir: string =
(import.meta as any).dir ??
(import.meta as any).dirname ??
dirname(fileURLToPath(import.meta.url))
const ROOT = resolve(__dir, '..')
const ENTRY = resolve(ROOT, 'src/server/web/terminal.ts')
const OUT_DIR = resolve(ROOT, 'src/server/web/public')
const watch = process.argv.includes('--watch')
const minify = process.argv.includes('--minify')
const buildOptions: esbuild.BuildOptions = {
entryPoints: [ENTRY],
bundle: true,
platform: 'browser',
target: ['es2020', 'chrome90', 'firefox90', 'safari14'],
format: 'esm',
outdir: OUT_DIR,
// CSS imported from JS is auto-emitted alongside the JS output
loader: { '.css': 'css' },
minify,
sourcemap: minify ? false : 'inline',
tsconfig: resolve(ROOT, 'src/server/web/tsconfig.json'),
logLevel: 'info',
}
async function main() {
if (watch) {
const ctx = await esbuild.context(buildOptions)
await ctx.watch()
console.log('Watching src/server/web/terminal.ts...')
} else {
const start = Date.now()
const result = await esbuild.build(buildOptions)
if (result.errors.length > 0) {
process.exit(1)
}
console.log(`Web build complete in ${Date.now() - start}ms → ${OUT_DIR}`)
}
}
main().catch(err => {
console.error(err)
process.exit(1)
})

---

**`scripts/build.sh`**
#!/usr/bin/env bash
# ─────────────────────────────────────────────────────────────
# build.sh — Minimal build / check script for the leaked source
# ─────────────────────────────────────────────────────────────
# Usage:
# ./scripts/build.sh # install + typecheck + lint
# ./scripts/build.sh install # install deps only
# ./scripts/build.sh check # typecheck + lint only
# ─────────────────────────────────────────────────────────────
set -euo pipefail
STEP="${1:-all}"
install_deps() {
echo "── Installing dependencies ──"
if command -v bun &>/dev/null; then
bun install
elif command -v npm &>/dev/null; then
npm install
else
echo "Error: neither bun nor npm found on PATH" >&2
exit 1
fi
}
typecheck() {
echo "── Running TypeScript type-check ──"
npx tsc --noEmit
}
lint() {
echo "── Running Biome lint ──"
npx @biomejs/biome check src/
}
case "$STEP" in
install)
install_deps
;;
check)
typecheck
lint
;;
all)
install_deps
typecheck
lint
;;
*)
echo "Unknown step: $STEP"
echo "Usage: $0 [install|check|all]"
exit 1
;;
esac
echo "── Done ──"

---
// scripts/bun-plugin-shims.ts
// Bun preload plugin — intercepts `bun:bundle` imports at runtime
// and resolves them to our local shim so the CLI can run without
// the production Bun bundler pass.
import { plugin } from 'bun'
import { resolve } from 'path'
plugin({
name: 'bun-bundle-shim',
setup(build) {
const shimPath = resolve(import.meta.dir, '../src/shims/bun-bundle.ts')
build.onResolve({ filter: /^bun:bundle$/ }, () => ({
path: shimPath,
}))
},
})

---

**`scripts/ci-build.sh`**
#!/bin/bash
# ─────────────────────────────────────────────────────────────
# ci-build.sh — CI/CD build pipeline
# ─────────────────────────────────────────────────────────────
# Runs the full build pipeline: install, typecheck, lint, build,
# and verify the output. Intended for CI environments.
#
# Usage:
# ./scripts/ci-build.sh
# ─────────────────────────────────────────────────────────────
set -euo pipefail
echo "=== Installing dependencies ==="
bun install
echo "=== Type checking ==="
bun run typecheck
echo "=== Linting ==="
bun run lint
echo "=== Building production bundle ==="
bun run build:prod
echo "=== Verifying build output ==="
# Check that the bundle was produced
if [ ! -f dist/cli.mjs ]; then
echo "ERROR: dist/cli.mjs not found"
exit 1
fi
# Print bundle size
SIZE=$(ls -lh dist/cli.mjs | awk '{print $5}')
echo " Bundle size: $SIZE"
# Verify the bundle runs with Node.js
if command -v node &>/dev/null; then
VERSION=$(node dist/cli.mjs --version 2>&1 || true)
echo " node dist/cli.mjs --version → $VERSION"
fi
# Verify the bundle runs with Bun
if command -v bun &>/dev/null; then
VERSION=$(bun dist/cli.mjs --version 2>&1 || true)
echo " bun dist/cli.mjs --version → $VERSION"
fi
echo "=== Done ==="

---

**`scripts/dev.ts`**
// scripts/dev.ts
// Development launcher — runs the CLI directly via Bun's TS runtime.
//
// Usage:
// bun scripts/dev.ts [args...]
// bun run dev [args...]
//
// The bun:bundle shim is loaded automatically via bunfig.toml preload.
// Bun automatically reads .env files from the project root.
// Load MACRO global (version, package url, etc.) before any app code
import '../src/shims/macro.js'
// Launch the CLI entrypoint
await import('../src/entrypoints/cli.js')

---

**`scripts/package-npm.ts`**
// scripts/package-npm.ts
// Generate a publishable npm package in dist/npm/
//
// Usage: bun scripts/package-npm.ts
//
// Prerequisites: run `bun run build:prod` first to generate dist/cli.mjs
import { readFileSync, writeFileSync, mkdirSync, copyFileSync, existsSync, chmodSync } from 'fs'
import { resolve } from 'path'
// Bun: import.meta.dir — Node 21+: import.meta.dirname — fallback
const __dir: string =
(import.meta as ImportMeta & { dir?: string; dirname?: string }).dir ??
(import.meta as ImportMeta & { dir?: string; dirname?: string }).dirname ??
new URL('.', import.meta.url).pathname
const ROOT = resolve(__dir, '..')
const DIST = resolve(ROOT, 'dist')
const NPM_DIR = resolve(DIST, 'npm')
const CLI_BUNDLE = resolve(DIST, 'cli.mjs')
function main() {
// Verify the bundle exists
if (!existsSync(CLI_BUNDLE)) {
console.error('Error: dist/cli.mjs not found. Run `bun run build:prod` first.')
process.exit(1)
}
// Read source package.json
const srcPkg = JSON.parse(readFileSync(resolve(ROOT, 'package.json'), 'utf-8'))
// Create npm output directory
mkdirSync(NPM_DIR, { recursive: true })
// Copy the bundled CLI
copyFileSync(CLI_BUNDLE, resolve(NPM_DIR, 'cli.mjs'))
chmodSync(resolve(NPM_DIR, 'cli.mjs'), 0o755)
// Copy source map if it exists
const sourceMap = resolve(DIST, 'cli.mjs.map')
if (existsSync(sourceMap)) {
copyFileSync(sourceMap, resolve(NPM_DIR, 'cli.mjs.map'))
}
// Generate a publishable package.json
const npmPkg = {
name: srcPkg.name || '@anthropic-ai/claude-code',
version: srcPkg.version || '0.0.0',
description: srcPkg.description || 'Anthropic Claude Code CLI',
license: 'MIT',
type: 'module',
main: './cli.mjs',
bin: {
claude: './cli.mjs',
},
engines: {
node: '>=20.0.0',
},
os: ['darwin', 'linux', 'win32'],
files: [
'cli.mjs',
'cli.mjs.map',
'README.md',
],
}
writeFileSync(
resolve(NPM_DIR, 'package.json'),
JSON.stringify(npmPkg, null, 2) + '\n',
)
// Copy README if it exists
const readme = resolve(ROOT, 'README.md')
if (existsSync(readme)) {
copyFileSync(readme, resolve(NPM_DIR, 'README.md'))
}
// Summary
const bundleSize = readFileSync(CLI_BUNDLE).byteLength
const sizeMB = (bundleSize / 1024 / 1024).toFixed(2)
console.log('npm package generated in dist/npm/')
console.log(` package: ${npmPkg.name}@${npmPkg.version}`)
console.log(` bundle: cli.mjs (${sizeMB} MB)`)
console.log(` bin: claude → ./cli.mjs`)
console.log('')
console.log('To publish:')
console.log(' cd dist/npm && npm publish')
}
main()

---

**`scripts/test-auth.ts`**
// scripts/test-auth.ts
// Quick test that the API key is configured and can reach Anthropic
// Usage: bun scripts/test-auth.ts
import Anthropic from '@anthropic-ai/sdk'
const client = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY,
})
async function main() {
try {
const msg = await client.messages.create({
model: process.env.ANTHROPIC_MODEL || 'claude-sonnet-4-20250514',
max_tokens: 50,
messages: [{ role: 'user', content: 'Say "hello" and nothing else.' }],
})
console.log('✅ API connection successful!')
console.log('Response:', msg.content[0].type === 'text' ? msg.content[0].text : msg.content[0])
} catch (err: any) {
console.error('❌ API connection failed:', err.message)
process.exit(1)
}
}
main()

---

**`scripts/test-commands.ts`**
// scripts/test-commands.ts
// Verify all commands load without errors
// Usage: bun scripts/test-commands.ts
//
// The bun:bundle shim is loaded automatically via bunfig.toml preload.
// Load MACRO global before any app code
import '../src/shims/macro.js'
async function main() {
const { getCommands } = await import('../src/commands.js')
const cwd = process.cwd()
const commands = await getCommands(cwd)
console.log(`Loaded ${commands.length} commands:\n`)
// Group commands by type for readability
const byType: Record<string, typeof commands> = {}
for (const cmd of commands) {
const t = cmd.type
if (!byType[t]) byType[t] = []
byType[t]!.push(cmd)
}
for (const [type, cmds] of Object.entries(byType)) {
console.log(` [${type}] (${cmds.length} commands)`)
for (const cmd of cmds) {
const aliases = cmd.aliases?.length ? ` (aliases: ${cmd.aliases.join(', ')})` : ''
const hidden = cmd.isHidden ? ' [hidden]' : ''
const source = cmd.type === 'prompt' ? ` (source: ${cmd.source})` : ''
console.log(`    /${cmd.name}: ${cmd.description || '(no description)'}${aliases}${hidden}${source}`)
}
console.log()
}
// Verify essential commands are present
const essential = ['help', 'config', 'init', 'commit', 'review']
const commandNames = new Set(commands.map(c => c.name))
const missing = essential.filter(n => !commandNames.has(n))
if (missing.length > 0) {
console.error(`❌ Missing essential commands: ${missing.join(', ')}`)
process.exit(1)
}
console.log(`✅ All ${essential.length} essential commands present: ${essential.join(', ')}`)
// Check moved-to-plugin commands
const movedToPlugin = commands.filter(
c => c.type === 'prompt' && c.description && c.name
).filter(c => ['security-review', 'pr-comments'].includes(c.name))
if (movedToPlugin.length > 0) {
console.log(`✅ Moved-to-plugin commands present and loadable: ${movedToPlugin.map(c => c.name).join(', ')}`)
}
console.log(`\n✅ Command system loaded successfully (${commands.length} commands)`)
}
main().catch(err => {
console.error('❌ Command loading failed:', err)
process.exit(1)
})

---

**`scripts/test-mcp.ts`**
#!/usr/bin/env -S npx tsx
/**
* scripts/test-mcp.ts
* Test MCP client/server roundtrip using the standalone mcp-server sub-project.
*
* Usage:
* cd mcp-server && npm install && npm run build && cd ..
* npx tsx scripts/test-mcp.ts
*
* What it does:
* 1. Spawns mcp-server/dist/index.js as a child process (stdio transport)
* 2. Creates an MCP client using @modelcontextprotocol/sdk
* 3. Connects client to server
* 4. Lists available tools
* 5. Calls list_tools and read_source_file tools
* 6. Lists resources and reads one
* 7. Prints results and exits
*/
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { resolve, dirname } from "node:path";
import { fileURLToPath } from "node:url";
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const PROJECT_ROOT = resolve(__dirname, "..");
// ── Helpers ───────────────────────────────────────────────────────────────
function section(title: string) {
console.log(`\n${"─".repeat(60)}`);
console.log(` ${title}`);
console.log(`${"─".repeat(60)}`);
}
function jsonPretty(obj: unknown): string {
return JSON.stringify(obj, null, 2);
}
// ── Main ──────────────────────────────────────────────────────────────────
async function main() {
const serverScript = resolve(PROJECT_ROOT, "mcp-server", "dist", "index.js");
const srcRoot = resolve(PROJECT_ROOT, "src");
section("1. Spawning MCP server (stdio transport)");
console.log(` Server: ${serverScript}`);
console.log(` SRC_ROOT: ${srcRoot}`);
const transport = new StdioClientTransport({
command: "node",
args: [serverScript],
env: {
...process.env,
CLAUDE_CODE_SRC_ROOT: srcRoot,
} as Record<string, string>,
stderr: "pipe",
});
// Log stderr from the server process
if (transport.stderr) {
transport.stderr.on("data", (data: Buffer) => {
const msg = data.toString().trim();
if (msg) console.log(` [server stderr] ${msg}`);
});
}
section("2. Creating MCP client");
const client = new Client(
{
name: "test-mcp-client",
version: "1.0.0",
},
{
capabilities: {},
}
);
section("3. Connecting client → server");
await client.connect(transport);
console.log(" ✓ Connected successfully");
// ── List Tools ──────────────────────────────────────────────────────────
section("4. Listing available tools");
const toolsResult = await client.listTools();
console.log(` Found ${toolsResult.tools.length} tool(s):`);
for (const tool of toolsResult.tools) {
console.log(`  - ${tool.name}: ${tool.description?.slice(0, 80) ?? ''}`)
}
// ── Call list_tools ─────────────────────────────────────────────────────
section("5. Calling tool: list_tools");
const listToolsResult = await client.callTool({
name: "list_tools",
arguments: {},
});
const listToolsContent = listToolsResult.content as Array<{
type: string;
text: string;
}>;
const listToolsText = listToolsContent
.filter((c) => c.type === "text")
.map((c) => c.text)
.join("\n");
// Show first 500 chars
console.log(
` Result (first 500 chars):\n${listToolsText.slice(0, 500)}${listToolsText.length > 500 ? "\n …(truncated)" : ""}`
);
// ── Call read_source_file ───────────────────────────────────────────────
section("6. Calling tool: read_source_file (path: 'main.tsx', lines 1-20)");
const readResult = await client.callTool({
name: "read_source_file",
arguments: { path: "main.tsx", startLine: 1, endLine: 20 },
});
const readContent = readResult.content as Array<{
type: string;
text: string;
}>;
const readText = readContent
.filter((c) => c.type === "text")
.map((c) => c.text)
.join("\n");
console.log(` Result:\n${readText.slice(0, 600)}`);
// ── List Resources ──────────────────────────────────────────────────────
section("7. Listing resources");
try {
const resourcesResult = await client.listResources();
console.log(` Found ${resourcesResult.resources.length} resource(s):`);
for (const res of resourcesResult.resources.slice(0, 10)) {
console.log(`  - ${res.name} (${res.uri})`)
}
if (resourcesResult.resources.length > 10) {
console.log(
` …and ${resourcesResult.resources.length - 10} more`
);
}
// Read the first resource
if (resourcesResult.resources.length > 0) {
const firstRes = resourcesResult.resources[0]!;
section(`8. Reading resource: ${firstRes.name}`);
const resContent = await client.readResource({ uri: firstRes.uri });
const resText = resContent.contents
.filter((c): c is { uri: string; text: string; mimeType?: string } => "text" in c)
.map((c) => c.text)
.join("\n");
console.log(
` Content (first 400 chars):\n${resText.slice(0, 400)}${resText.length > 400 ? "\n …(truncated)" : ""}`
);
}
} catch (err) {
console.log(` Resources not supported or error: ${err}`);
}
// ── List Prompts ────────────────────────────────────────────────────────
section("9. Listing prompts");
try {
const promptsResult = await client.listPrompts();
console.log(` Found ${promptsResult.prompts.length} prompt(s):`);
for (const p of promptsResult.prompts) {
console.log(`  - ${p.name}: ${p.description?.slice(0, 80) ?? ''}`)
}
} catch (err) {
console.log(` Prompts not supported or error: ${err}`);
}
// ── Cleanup ─────────────────────────────────────────────────────────────
section("Done ✓");
console.log(" All tests passed. Closing connection.");
await client.close();
process.exit(0);
}
main().catch((err) => {
console.error("\n✗ Test failed:", err);
process.exit(1);
});
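The script above repeats a truncate-and-print pattern in steps 5, 6, and 8 (`.slice(0, N)` followed by an "…(truncated)" marker). That pattern can be factored into a small helper; a minimal sketch, where `truncate` is a hypothetical name not present in the original file:

```typescript
// Sketch of a reusable truncate-and-annotate helper for the repeated
// `.slice(0, N)` + "…(truncated)" pattern in test-mcp.ts.
// `truncate` is a hypothetical name, not part of the original script.
function truncate(text: string, max: number): string {
  return text.length > max ? `${text.slice(0, max)}\n  …(truncated)` : text
}

// A 600-char payload gets cut at 500 chars and annotated;
// short strings pass through unchanged.
const sample = "x".repeat(600)
const shown = truncate(sample, 500)
```

Each of the three call sites could then become a single `console.log(truncate(text, 500))`.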

scripts/test-services.ts Normal file

@@ -0,0 +1,233 @@
// scripts/test-services.ts
// Test that all services initialize without crashing
// Usage: bun scripts/test-services.ts
import '../src/shims/preload.js'
// Ensure we don't accidentally talk to real servers
process.env.NODE_ENV = process.env.NODE_ENV || 'test'
type TestResult = { name: string; status: 'pass' | 'fail' | 'skip'; detail?: string }
const results: TestResult[] = []
function pass(name: string, detail?: string) {
results.push({ name, status: 'pass', detail })
console.log(`  ✅ ${name}${detail ? `: ${detail}` : ''}`)
}
function fail(name: string, detail: string) {
results.push({ name, status: 'fail', detail })
console.log(`  ❌ ${name}: ${detail}`)
}
function skip(name: string, detail: string) {
results.push({ name, status: 'skip', detail })
console.log(`  ⏭️ ${name}: ${detail}`)
}
async function testGrowthBook() {
console.log('\n--- GrowthBook (Feature Flags) ---')
try {
const gb = await import('../src/services/analytics/growthbook.js')
// Test cached feature value returns default when GrowthBook is unavailable
const boolResult = gb.getFeatureValue_CACHED_MAY_BE_STALE('nonexistent_feature', false)
if (boolResult === false) {
pass('getFeatureValue_CACHED_MAY_BE_STALE (bool)', 'returns default false')
} else {
fail('getFeatureValue_CACHED_MAY_BE_STALE (bool)', `expected false, got ${boolResult}`)
}
const strResult = gb.getFeatureValue_CACHED_MAY_BE_STALE('nonexistent_str', 'default_val')
if (strResult === 'default_val') {
pass('getFeatureValue_CACHED_MAY_BE_STALE (str)', 'returns default string')
} else {
fail('getFeatureValue_CACHED_MAY_BE_STALE (str)', `expected "default_val", got "${strResult}"`)
}
// Test Statsig gate check returns false
const gateResult = gb.checkStatsigFeatureGate_CACHED_MAY_BE_STALE('nonexistent_gate')
if (gateResult === false) {
pass('checkStatsigFeatureGate_CACHED_MAY_BE_STALE', 'returns false for unknown gate')
} else {
fail('checkStatsigFeatureGate_CACHED_MAY_BE_STALE', `expected false, got ${gateResult}`)
}
} catch (err: any) {
fail('GrowthBook import', err.message)
}
}
async function testAnalyticsSink() {
console.log('\n--- Analytics Sink ---')
try {
const analytics = await import('../src/services/analytics/index.js')
// logEvent should queue without crashing when no sink is attached
analytics.logEvent('test_event', { test_key: 1 })
pass('logEvent (no sink)', 'queues without crash')
await analytics.logEventAsync('test_async_event', { test_key: 2 })
pass('logEventAsync (no sink)', 'queues without crash')
} catch (err: any) {
fail('Analytics sink', err.message)
}
}
async function testPolicyLimits() {
console.log('\n--- Policy Limits ---')
try {
const pl = await import('../src/services/policyLimits/index.js')
// isPolicyAllowed should return true (fail open) when no restrictions loaded
const result = pl.isPolicyAllowed('allow_remote_sessions')
if (result === true) {
pass('isPolicyAllowed (no cache)', 'fails open — returns true')
} else {
fail('isPolicyAllowed (no cache)', `expected true (fail open), got ${result}`)
}
// isPolicyLimitsEligible should return false without valid auth
const eligible = pl.isPolicyLimitsEligible()
pass('isPolicyLimitsEligible', `returns ${eligible} (expected false in test env)`)
} catch (err: any) {
fail('Policy limits', err.message)
}
}
async function testRemoteManagedSettings() {
console.log('\n--- Remote Managed Settings ---')
try {
const rms = await import('../src/services/remoteManagedSettings/index.js')
// isEligibleForRemoteManagedSettings should return false without auth
const eligible = rms.isEligibleForRemoteManagedSettings()
pass('isEligibleForRemoteManagedSettings', `returns ${eligible} (expected false in test env)`)
// waitForRemoteManagedSettingsToLoad should resolve immediately if not eligible
await rms.waitForRemoteManagedSettingsToLoad()
pass('waitForRemoteManagedSettingsToLoad', 'resolves immediately when not eligible')
} catch (err: any) {
fail('Remote managed settings', err.message)
}
}
async function testBootstrapData() {
console.log('\n--- Bootstrap Data ---')
try {
const bootstrap = await import('../src/services/api/bootstrap.js')
// fetchBootstrapData should not crash — just skip when no auth
await bootstrap.fetchBootstrapData()
pass('fetchBootstrapData', 'completes without crash (skips when no auth)')
} catch (err: any) {
// fetchBootstrapData catches its own errors, so this means an import-level issue
fail('Bootstrap data', err.message)
}
}
async function testSessionMemoryUtils() {
console.log('\n--- Session Memory ---')
try {
const smUtils = await import('../src/services/SessionMemory/sessionMemoryUtils.js')
// Default config should be sensible
const config = smUtils.DEFAULT_SESSION_MEMORY_CONFIG
if (config.minimumMessageTokensToInit > 0 && config.minimumTokensBetweenUpdate > 0) {
pass('DEFAULT_SESSION_MEMORY_CONFIG', `init=${config.minimumMessageTokensToInit} tokens, update=${config.minimumTokensBetweenUpdate} tokens`)
} else {
fail('DEFAULT_SESSION_MEMORY_CONFIG', 'unexpected config values')
}
// getLastSummarizedMessageId should return undefined initially
const lastId = smUtils.getLastSummarizedMessageId()
if (lastId === undefined) {
pass('getLastSummarizedMessageId', 'returns undefined initially')
} else {
fail('getLastSummarizedMessageId', `expected undefined, got ${lastId}`)
}
} catch (err: any) {
fail('Session memory utils', err.message)
}
}
async function testCostTracker() {
console.log('\n--- Cost Tracking ---')
try {
const ct = await import('../src/cost-tracker.js')
// Total cost should start at 0
const cost = ct.getTotalCost()
if (cost === 0) {
pass('getTotalCost', 'starts at $0.00')
} else {
pass('getTotalCost', `current: $${cost.toFixed(4)} (non-zero means restored session)`)
}
// Duration should be available
const duration = ct.getTotalDuration()
pass('getTotalDuration', `${duration}ms`)
// Token counters should be available
const inputTokens = ct.getTotalInputTokens()
const outputTokens = ct.getTotalOutputTokens()
pass('Token counters', `input=${inputTokens}, output=${outputTokens}`)
// Lines changed
const added = ct.getTotalLinesAdded()
const removed = ct.getTotalLinesRemoved()
pass('Lines changed', `+${added} -${removed}`)
} catch (err: any) {
fail('Cost tracker', err.message)
}
}
async function testInit() {
console.log('\n--- Init (entrypoint) ---')
try {
const { init } = await import('../src/entrypoints/init.js')
await init()
pass('init()', 'completed successfully')
} catch (err: any) {
fail('init()', err.message)
}
}
async function main() {
console.log('=== Services Layer Smoke Test ===')
console.log(`Environment: NODE_ENV=${process.env.NODE_ENV}`)
console.log(`Auth: ANTHROPIC_API_KEY=${process.env.ANTHROPIC_API_KEY ? '(set)' : '(not set)'}`)
// Test individual services first (order: least-dependent → most-dependent)
await testAnalyticsSink()
await testGrowthBook()
await testPolicyLimits()
await testRemoteManagedSettings()
await testBootstrapData()
await testSessionMemoryUtils()
await testCostTracker()
// Then test the full init sequence
await testInit()
// Summary
console.log('\n=== Summary ===')
const passed = results.filter(r => r.status === 'pass').length
const failed = results.filter(r => r.status === 'fail').length
const skipped = results.filter(r => r.status === 'skip').length
console.log(` ${passed} passed, ${failed} failed, ${skipped} skipped`)
if (failed > 0) {
console.log('\nFailed tests:')
for (const r of results.filter(r => r.status === 'fail')) {
console.log(`  - ${r.name}: ${r.detail}`)
}
process.exit(1)
}
console.log('\n✅ All services handle graceful degradation correctly')
}
main().catch(err => {
console.error('Fatal error in smoke test:', err)
process.exit(1)
})
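The summary logic in `main()` above tallies the `results` array inline. The same tally can be expressed as a pure function over the script's `TestResult` records, which is easier to unit-test in isolation; a sketch, where `summarize` is a hypothetical helper not in the original file:

```typescript
// Pure tally over the script's TestResult records.
// `summarize` is a hypothetical helper, not part of test-services.ts.
type TestResult = { name: string; status: 'pass' | 'fail' | 'skip'; detail?: string }

function summarize(results: TestResult[]): { passed: number; failed: number; skipped: number } {
  return {
    passed: results.filter(r => r.status === 'pass').length,
    failed: results.filter(r => r.status === 'fail').length,
    skipped: results.filter(r => r.status === 'skip').length,
  }
}

// Example: two passes and one failure, so the script would exit(1).
const tally = summarize([
  { name: 'a', status: 'pass' },
  { name: 'b', status: 'pass' },
  { name: 'c', status: 'fail', detail: 'boom' },
])
```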

scripts/tsconfig.json Normal file

@@ -0,0 +1,15 @@
{
"compilerOptions": {
"target": "ESNext",
"module": "ESNext",
"moduleResolution": "bundler",
"esModuleInterop": true,
"strict": true,
"skipLibCheck": true,
"noEmit": true,
"resolveJsonModule": true,
"isolatedModules": true,
"types": ["node"]
},
"include": ["./**/*.ts", "./types.d.ts"]
}

scripts/types.d.ts vendored Normal file

@@ -0,0 +1,86 @@
// Local type declarations for scripts/ — avoids depending on installed packages
// for type checking in build scripts.
// ── esbuild (minimal surface used by build-bundle.ts) ──
declare module 'esbuild' {
export interface Plugin {
name: string
setup(build: PluginBuild): void
}
export interface PluginBuild {
onResolve(
options: { filter: RegExp },
callback: (args: OnResolveArgs) => OnResolveResult | undefined | null,
): void
}
export interface OnResolveArgs {
path: string
importer: string
namespace: string
resolveDir: string
kind: string
pluginData: unknown
}
export interface OnResolveResult {
path?: string
external?: boolean
namespace?: string
pluginData?: unknown
}
export interface BuildOptions {
entryPoints?: string[]
bundle?: boolean
platform?: string
target?: string[]
format?: string
outdir?: string
outExtension?: Record<string, string>
splitting?: boolean
plugins?: Plugin[]
tsconfig?: string
alias?: Record<string, string>
external?: string[]
jsx?: string
sourcemap?: boolean | string
minify?: boolean
treeShaking?: boolean
define?: Record<string, string>
banner?: Record<string, string>
resolveExtensions?: string[]
logLevel?: string
metafile?: boolean
[key: string]: unknown
}
export interface Metafile {
inputs: Record<string, { bytes: number; imports: unknown[] }>
outputs: Record<string, { bytes: number; inputs: unknown[]; exports: string[] }>
}
export interface BuildResult {
errors: { text: string }[]
warnings: { text: string }[]
metafile?: Metafile
}
export interface BuildContext {
watch(): Promise<void>
serve(options?: unknown): Promise<unknown>
rebuild(): Promise<BuildResult>
dispose(): Promise<void>
}
export function build(options: BuildOptions): Promise<BuildResult>
export function context(options: BuildOptions): Promise<BuildContext>
export function analyzeMetafile(metafile: Metafile, options?: { verbose?: boolean }): Promise<string>
}
// ── Bun's ImportMeta extensions ──
interface ImportMeta {
dir: string
dirname: string
}

server.json Normal file

@@ -0,0 +1,24 @@
{
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
"name": "io.github.codeaashu/claude-code-explorer-mcp",
"title": "Claude Code Explorer MCP",
"description": "Explore the Claude Code CLI source — browse tools, commands, search the 512K-line codebase.",
"repository": {
"url": "https://github.com/codeaashu/claude-code",
"source": "github",
"subfolder": "mcp-server"
},
"version": "1.1.0",
"packages": [
{
"registryType": "npm",
"registryBaseUrl": "https://registry.npmjs.org",
"identifier": "claude-code-explorer-mcp",
"version": "1.1.0",
"transport": {
"type": "stdio"
},
"runtimeHint": "node"
}
]
}
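Note that `server.json` carries the version string twice: once at the top level and once inside the npm package entry. A release that bumps only one of the two silently publishes a mismatched manifest. A small consistency check could guard against that drift; a sketch, where `checkVersionSync` is a hypothetical helper not shipped with the repo:

```typescript
// Guard against the top-level `version` drifting from the per-package
// `version` entries in server.json. `checkVersionSync` is a hypothetical
// helper, not part of the repository.
type ServerManifest = {
  version: string
  packages: { identifier: string; version: string }[]
}

function checkVersionSync(manifest: ServerManifest): boolean {
  return manifest.packages.every(p => p.version === manifest.version)
}

// Mirrors the manifest above: both versions are 1.1.0, so the check passes.
const manifest: ServerManifest = {
  version: '1.1.0',
  packages: [{ identifier: 'claude-code-explorer-mcp', version: '1.1.0' }],
}
```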

src/QueryEngine.ts Normal file

File diff suppressed because it is too large

src/Task.ts Normal file

@@ -0,0 +1,127 @@
import { randomBytes } from 'crypto'
import type { AppState } from './state/AppState.js'
import type { AgentId } from './types/ids.js'
import { getTaskOutputPath } from './utils/task/diskOutput.js'
export type TaskType =
| 'local_bash'
| 'local_agent'
| 'remote_agent'
| 'in_process_teammate'
| 'local_workflow'
| 'monitor_mcp'
| 'dream'
export type TaskStatus =
| 'pending'
| 'running'
| 'completed'
| 'failed'
| 'killed'
/**
* True when a task is in a terminal state and will not transition further.
* Used to guard against injecting messages into dead teammates, evicting
* finished tasks from AppState, and orphan-cleanup paths.
*/
export function isTerminalTaskStatus(status: TaskStatus): boolean {
return status === 'completed' || status === 'failed' || status === 'killed'
}
export type TaskHandle = {
taskId: string
cleanup?: () => void
}
export type SetAppState = (f: (prev: AppState) => AppState) => void
export type TaskContext = {
abortController: AbortController
getAppState: () => AppState
setAppState: SetAppState
}
// Base fields shared by all task states
export type TaskStateBase = {
id: string
type: TaskType
status: TaskStatus
description: string
toolUseId?: string
startTime: number
endTime?: number
totalPausedMs?: number
outputFile: string
outputOffset: number
notified: boolean
}
export type LocalShellSpawnInput = {
command: string
description: string
timeout?: number
toolUseId?: string
agentId?: AgentId
/** UI display variant: description-as-label, dialog title, status bar pill. */
kind?: 'bash' | 'monitor'
}
// What getTaskByType dispatches for: kill. spawn/render were never
// called polymorphically (removed in #22546). All six kill implementations
// use only setAppState — getAppState/abortController were dead weight.
export type Task = {
name: string
type: TaskType
kill(taskId: string, setAppState: SetAppState): Promise<void>
}
// Task ID prefixes
const TASK_ID_PREFIXES: Record<string, string> = {
local_bash: 'b', // Keep as 'b' for backward compatibility
local_agent: 'a',
remote_agent: 'r',
in_process_teammate: 't',
local_workflow: 'w',
monitor_mcp: 'm',
dream: 'd',
}
// Get task ID prefix
function getTaskIdPrefix(type: TaskType): string {
return TASK_ID_PREFIXES[type] ?? 'x'
}
// Case-insensitive-safe alphabet (digits + lowercase) for task IDs.
// 36^8 ≈ 2.8 trillion combinations, sufficient to resist brute-force symlink attacks.
const TASK_ID_ALPHABET = '0123456789abcdefghijklmnopqrstuvwxyz'
export function generateTaskId(type: TaskType): string {
const prefix = getTaskIdPrefix(type)
const bytes = randomBytes(8)
let id = prefix
for (let i = 0; i < 8; i++) {
id += TASK_ID_ALPHABET[bytes[i]! % TASK_ID_ALPHABET.length]
}
return id
}
export function createTaskStateBase(
id: string,
type: TaskType,
description: string,
toolUseId?: string,
): TaskStateBase {
return {
id,
type,
status: 'pending',
description,
toolUseId,
startTime: Date.now(),
outputFile: getTaskOutputPath(id),
outputOffset: 0,
notified: false,
}
}
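The ID scheme in `generateTaskId` above (one type-prefix character followed by 8 symbols drawn from a digits-plus-lowercase alphabet) can be exercised standalone; a minimal sketch assuming only Node's `crypto`, with `sketchTaskId` as a hypothetical stand-in name:

```typescript
import { randomBytes } from 'node:crypto'

// Standalone sketch of the Task.ts ID scheme: one type-prefix character
// followed by 8 symbols from a 36-char alphabet (36^8 ≈ 2.8 trillion IDs).
// `sketchTaskId` is a hypothetical name, not the original function.
const ALPHABET = '0123456789abcdefghijklmnopqrstuvwxyz'

function sketchTaskId(prefix: string): string {
  const bytes = randomBytes(8)
  let id = prefix
  for (let i = 0; i < 8; i++) {
    // Note: `% 36` carries a slight modulo bias (256 % 36 !== 0); the
    // original accepts this since the IDs only need collision resistance,
    // not uniform distribution.
    id += ALPHABET[bytes[i]! % ALPHABET.length]
  }
  return id
}

// Example: a local_bash-style ID, e.g. "b3k9x02mq".
const id = sketchTaskId('b')
```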

src/Tool.ts Normal file

@@ -0,0 +1,794 @@
import type {
ToolResultBlockParam,
ToolUseBlockParam,
} from '@anthropic-ai/sdk/resources/index.mjs'
import type {
ElicitRequestURLParams,
ElicitResult,
} from '@modelcontextprotocol/sdk/types.js'
import type { UUID } from 'crypto'
import type { z } from 'zod/v4'
import type { Command } from './commands.js'
import type { CanUseToolFn } from './hooks/useCanUseTool.js'
import type { ThinkingConfig } from './utils/thinking.js'
export type ToolInputJSONSchema = {
[x: string]: unknown
type: 'object'
properties?: {
[x: string]: unknown
}
}
import type { Notification } from './context/notifications.js'
import type {
MCPServerConnection,
ServerResource,
} from './services/mcp/types.js'
import type {
AgentDefinition,
AgentDefinitionsResult,
} from './tools/AgentTool/loadAgentsDir.js'
import type {
AssistantMessage,
AttachmentMessage,
Message,
ProgressMessage,
SystemLocalCommandMessage,
SystemMessage,
UserMessage,
} from './types/message.js'
// Import permission types from centralized location to break import cycles
// Import PermissionResult from centralized location to break import cycles
import type {
AdditionalWorkingDirectory,
PermissionMode,
PermissionResult,
} from './types/permissions.js'
// Import tool progress types from centralized location to break import cycles
import type {
AgentToolProgress,
BashProgress,
MCPProgress,
REPLToolProgress,
SkillToolProgress,
TaskOutputProgress,
ToolProgressData,
WebSearchProgress,
} from './types/tools.js'
import type { FileStateCache } from './utils/fileStateCache.js'
import type { DenialTrackingState } from './utils/permissions/denialTracking.js'
import type { SystemPrompt } from './utils/systemPromptType.js'
import type { ContentReplacementState } from './utils/toolResultStorage.js'
// Re-export progress types for backwards compatibility
export type {
AgentToolProgress,
BashProgress,
MCPProgress,
REPLToolProgress,
SkillToolProgress,
TaskOutputProgress,
WebSearchProgress,
}
import type { SpinnerMode } from './components/Spinner.js'
import type { QuerySource } from './constants/querySource.js'
import type { SDKStatus } from './entrypoints/agentSdkTypes.js'
import type { AppState } from './state/AppState.js'
import type {
HookProgress,
PromptRequest,
PromptResponse,
} from './types/hooks.js'
import type { AgentId } from './types/ids.js'
import type { DeepImmutable } from './types/utils.js'
import type { AttributionState } from './utils/commitAttribution.js'
import type { FileHistoryState } from './utils/fileHistory.js'
import type { Theme, ThemeName } from './utils/theme.js'
export type QueryChainTracking = {
chainId: string
depth: number
}
export type ValidationResult =
| { result: true }
| {
result: false
message: string
errorCode: number
}
export type SetToolJSXFn = (
args: {
jsx: React.ReactNode | null
shouldHidePromptInput: boolean
shouldContinueAnimation?: true
showSpinner?: boolean
isLocalJSXCommand?: boolean
isImmediate?: boolean
/** Set to true to clear a local JSX command (e.g., from its onDone callback) */
clearLocalJSX?: boolean
} | null,
) => void
// Import tool permission types from centralized location to break import cycles
import type { ToolPermissionRulesBySource } from './types/permissions.js'
// Re-export for backwards compatibility
export type { ToolPermissionRulesBySource }
// Apply DeepImmutable to the imported type
export type ToolPermissionContext = DeepImmutable<{
mode: PermissionMode
additionalWorkingDirectories: Map<string, AdditionalWorkingDirectory>
alwaysAllowRules: ToolPermissionRulesBySource
alwaysDenyRules: ToolPermissionRulesBySource
alwaysAskRules: ToolPermissionRulesBySource
isBypassPermissionsModeAvailable: boolean
isAutoModeAvailable?: boolean
strippedDangerousRules?: ToolPermissionRulesBySource
/** When true, permission prompts are auto-denied (e.g., background agents that can't show UI) */
shouldAvoidPermissionPrompts?: boolean
/** When true, automated checks (classifier, hooks) are awaited before showing the permission dialog (coordinator workers) */
awaitAutomatedChecksBeforeDialog?: boolean
/** Stores the permission mode before model-initiated plan mode entry, so it can be restored on exit */
prePlanMode?: PermissionMode
}>
export const getEmptyToolPermissionContext: () => ToolPermissionContext =
() => ({
mode: 'default',
additionalWorkingDirectories: new Map(),
alwaysAllowRules: {},
alwaysDenyRules: {},
alwaysAskRules: {},
isBypassPermissionsModeAvailable: false,
})
export type CompactProgressEvent =
| {
type: 'hooks_start'
hookType: 'pre_compact' | 'post_compact' | 'session_start'
}
| { type: 'compact_start' }
| { type: 'compact_end' }
export type ToolUseContext = {
options: {
commands: Command[]
debug: boolean
mainLoopModel: string
tools: Tools
verbose: boolean
thinkingConfig: ThinkingConfig
mcpClients: MCPServerConnection[]
mcpResources: Record<string, ServerResource[]>
isNonInteractiveSession: boolean
agentDefinitions: AgentDefinitionsResult
maxBudgetUsd?: number
/** Custom system prompt that replaces the default system prompt */
customSystemPrompt?: string
/** Additional system prompt appended after the main system prompt */
appendSystemPrompt?: string
/** Override querySource for analytics tracking */
querySource?: QuerySource
/** Optional callback to get the latest tools (e.g., after MCP servers connect mid-query) */
refreshTools?: () => Tools
}
abortController: AbortController
readFileState: FileStateCache
getAppState(): AppState
setAppState(f: (prev: AppState) => AppState): void
/**
* Always-shared setAppState for session-scoped infrastructure (background
* tasks, session hooks). Unlike setAppState, which is no-op for async agents
* (see createSubagentContext), this always reaches the root store so agents
* at any nesting depth can register/clean up infrastructure that outlives
* a single turn. Only set by createSubagentContext; main-thread contexts
* fall back to setAppState.
*/
setAppStateForTasks?: (f: (prev: AppState) => AppState) => void
/**
* Optional handler for URL elicitations triggered by tool call errors (-32042).
* In print/SDK mode, this delegates to structuredIO.handleElicitation.
* In REPL mode, this is undefined and the queue-based UI path is used.
*/
handleElicitation?: (
serverName: string,
params: ElicitRequestURLParams,
signal: AbortSignal,
) => Promise<ElicitResult>
setToolJSX?: SetToolJSXFn
addNotification?: (notif: Notification) => void
/** Append a UI-only system message to the REPL message list. Stripped at the
* normalizeMessagesForAPI boundary — the Exclude<> makes that type-enforced. */
appendSystemMessage?: (
msg: Exclude<SystemMessage, SystemLocalCommandMessage>,
) => void
/** Send an OS-level notification (iTerm2, Kitty, Ghostty, bell, etc.) */
sendOSNotification?: (opts: {
message: string
notificationType: string
}) => void
nestedMemoryAttachmentTriggers?: Set<string>
/**
* CLAUDE.md paths already injected as nested_memory attachments this
* session. Dedup for memoryFilesToAttachments — readFileState is an LRU
* that evicts entries in busy sessions, so its .has() check alone can
* re-inject the same CLAUDE.md dozens of times.
*/
loadedNestedMemoryPaths?: Set<string>
dynamicSkillDirTriggers?: Set<string>
/** Skill names surfaced via skill_discovery this session. Telemetry only (feeds was_discovered). */
discoveredSkillNames?: Set<string>
userModified?: boolean
setInProgressToolUseIDs: (f: (prev: Set<string>) => Set<string>) => void
/** Only wired in interactive (REPL) contexts; SDK/QueryEngine don't set this. */
setHasInterruptibleToolInProgress?: (v: boolean) => void
setResponseLength: (f: (prev: number) => number) => void
/** Ant-only: push a new API metrics entry for OTPS tracking.
* Called by subagent streaming when a new API request starts. */
pushApiMetricsEntry?: (ttftMs: number) => void
setStreamMode?: (mode: SpinnerMode) => void
onCompactProgress?: (event: CompactProgressEvent) => void
setSDKStatus?: (status: SDKStatus) => void
openMessageSelector?: () => void
updateFileHistoryState: (
updater: (prev: FileHistoryState) => FileHistoryState,
) => void
updateAttributionState: (
updater: (prev: AttributionState) => AttributionState,
) => void
setConversationId?: (id: UUID) => void
agentId?: AgentId // Only set for subagents; use getSessionId() for session ID. Hooks use this to distinguish subagent calls.
agentType?: string // Subagent type name. For the main thread's --agent type, hooks fall back to getMainThreadAgentType().
/** When true, canUseTool must always be called even when hooks auto-approve.
* Used by speculation for overlay file path rewriting. */
requireCanUseTool?: boolean
messages: Message[]
fileReadingLimits?: {
maxTokens?: number
maxSizeBytes?: number
}
globLimits?: {
maxResults?: number
}
toolDecisions?: Map<
string,
{
source: string
decision: 'accept' | 'reject'
timestamp: number
}
>
queryTracking?: QueryChainTracking
/** Callback factory for requesting interactive prompts from the user.
* Returns a prompt callback bound to the given source name.
* Only available in interactive (REPL) contexts. */
requestPrompt?: (
sourceName: string,
toolInputSummary?: string | null,
) => (request: PromptRequest) => Promise<PromptResponse>
toolUseId?: string
criticalSystemReminder_EXPERIMENTAL?: string
/** When true, preserve toolUseResult on messages even for subagents.
* Used by in-process teammates whose transcripts are viewable by the user. */
preserveToolUseResults?: boolean
/** Local denial tracking state for async subagents whose setAppState is a
* no-op. Without this, the denial counter never accumulates and the
* fallback-to-prompting threshold is never reached. Mutable — the
* permissions code updates it in place. */
localDenialTracking?: DenialTrackingState
/**
* Per-conversation-thread content replacement state for the tool result
* budget. When present, query.ts applies the aggregate tool result budget.
* Main thread: REPL provisions once (never resets — stale UUID keys
* are inert). Subagents: createSubagentContext clones the parent's state
* by default (cache-sharing forks need identical decisions), or
* resumeAgentBackground threads one reconstructed from sidechain records.
*/
contentReplacementState?: ContentReplacementState
/**
* Parent's rendered system prompt bytes, frozen at turn start.
* Used by fork subagents to share the parent's prompt cache — re-calling
* getSystemPrompt() at fork-spawn time can diverge (GrowthBook cold→warm)
* and bust the cache. See forkSubagent.ts.
*/
renderedSystemPrompt?: SystemPrompt
}
// Re-export ToolProgressData from centralized location
export type { ToolProgressData }
export type Progress = ToolProgressData | HookProgress
export type ToolProgress<P extends ToolProgressData> = {
toolUseID: string
data: P
}
export function filterToolProgressMessages(
progressMessagesForMessage: ProgressMessage[],
): ProgressMessage<ToolProgressData>[] {
return progressMessagesForMessage.filter(
(msg): msg is ProgressMessage<ToolProgressData> =>
msg.data?.type !== 'hook_progress',
)
}
export type ToolResult<T> = {
data: T
newMessages?: (
| UserMessage
| AssistantMessage
| AttachmentMessage
| SystemMessage
)[]
// contextModifier is only honored for tools that aren't concurrency safe.
contextModifier?: (context: ToolUseContext) => ToolUseContext
/** MCP protocol metadata (structuredContent, _meta) to pass through to SDK consumers */
mcpMeta?: {
_meta?: Record<string, unknown>
structuredContent?: Record<string, unknown>
}
}
export type ToolCallProgress<P extends ToolProgressData = ToolProgressData> = (
progress: ToolProgress<P>,
) => void
// Type for any schema that outputs an object with string keys
export type AnyObject = z.ZodType<{ [key: string]: unknown }>
/**
* Checks if a tool matches the given name (primary name or alias).
*/
export function toolMatchesName(
tool: { name: string; aliases?: string[] },
name: string,
): boolean {
return tool.name === name || (tool.aliases?.includes(name) ?? false)
}
/**
* Finds a tool by name or alias from a list of tools.
*/
export function findToolByName(tools: Tools, name: string): Tool | undefined {
return tools.find(t => toolMatchesName(t, name))
}
export type Tool<
Input extends AnyObject = AnyObject,
Output = unknown,
P extends ToolProgressData = ToolProgressData,
> = {
/**
* Optional aliases for backwards compatibility when a tool is renamed.
* The tool can be looked up by any of these names in addition to its primary name.
*/
aliases?: string[]
/**
* One-line capability phrase used by ToolSearch for keyword matching.
* Helps the model find this tool via keyword search when it's deferred.
* 3–10 words, no trailing period.
* Prefer terms not already in the tool name (e.g. 'jupyter' for NotebookEdit).
*/
searchHint?: string
call(
args: z.infer<Input>,
context: ToolUseContext,
canUseTool: CanUseToolFn,
parentMessage: AssistantMessage,
onProgress?: ToolCallProgress<P>,
): Promise<ToolResult<Output>>
description(
input: z.infer<Input>,
options: {
isNonInteractiveSession: boolean
toolPermissionContext: ToolPermissionContext
tools: Tools
},
): Promise<string>
readonly inputSchema: Input
// Type for MCP tools that can specify their input schema directly in JSON Schema format
// rather than converting from Zod schema
readonly inputJSONSchema?: ToolInputJSONSchema
// Optional because TungstenTool doesn't define this. TODO: Make it required.
// When we do that, we can also go through and make this a bit more type-safe.
outputSchema?: z.ZodType<unknown>
inputsEquivalent?(a: z.infer<Input>, b: z.infer<Input>): boolean
isConcurrencySafe(input: z.infer<Input>): boolean
isEnabled(): boolean
isReadOnly(input: z.infer<Input>): boolean
/** Defaults to false. Only set when the tool performs irreversible operations (delete, overwrite, send). */
isDestructive?(input: z.infer<Input>): boolean
/**
* What should happen when the user submits a new message while this tool
* is running.
*
* - `'cancel'` — stop the tool and discard its result
* - `'block'` — keep running; the new message waits
*
* Defaults to `'block'` when not implemented.
*/
interruptBehavior?(): 'cancel' | 'block'
/**
* Returns information about whether this tool use is a search or read operation
* that should be collapsed into a condensed display in the UI. Examples include
* file searching (Grep, Glob), file reading (Read), and bash commands like find,
* grep, wc, etc.
*
* Returns an object indicating whether the operation is a search or read operation:
* - `isSearch: true` for search operations (grep, find, glob patterns)
* - `isRead: true` for read operations (cat, head, tail, file read)
* - `isList: true` for directory-listing operations (ls, tree, du)
* - All can be false if the operation shouldn't be collapsed
*/
isSearchOrReadCommand?(input: z.infer<Input>): {
isSearch: boolean
isRead: boolean
isList?: boolean
}
isOpenWorld?(input: z.infer<Input>): boolean
requiresUserInteraction?(): boolean
isMcp?: boolean
isLsp?: boolean
/**
* When true, this tool is deferred (sent with defer_loading: true) and requires
* ToolSearch to be used before it can be called.
*/
readonly shouldDefer?: boolean
/**
* When true, this tool is never deferred — its full schema appears in the
* initial prompt even when ToolSearch is enabled. For MCP tools, set via
* `_meta['anthropic/alwaysLoad']`. Use for tools the model must see on
* turn 1 without a ToolSearch round-trip.
*/
readonly alwaysLoad?: boolean
/**
* For MCP tools: the server and tool names as received from the MCP server (unnormalized).
* Present on all MCP tools regardless of whether `name` is prefixed (mcp__server__tool)
* or unprefixed (CLAUDE_AGENT_SDK_MCP_NO_PREFIX mode).
*/
mcpInfo?: { serverName: string; toolName: string }
readonly name: string
/**
* Maximum size in characters for tool result before it gets persisted to disk.
* When exceeded, the result is saved to a file and Claude receives a preview
* with the file path instead of the full content.
*
* Set to Infinity for tools whose output must never be persisted (e.g. Read,
* where persisting creates a circular Read→file→Read loop and the tool
* already self-bounds via its own limits).
*/
maxResultSizeChars: number
/**
* When true, enables strict mode for this tool, which causes the API to
* more strictly adhere to tool instructions and parameter schemas.
 * Only applied when the tengu_tool_pear gate is enabled.
*/
readonly strict?: boolean
/**
* Called on copies of tool_use input before observers see it (SDK stream,
* transcript, canUseTool, PreToolUse/PostToolUse hooks). Mutate in place
* to add legacy/derived fields. Must be idempotent. The original API-bound
* input is never mutated (preserves prompt cache). Not re-applied when a
* hook/permission returns a fresh updatedInput — those own their shape.
*/
backfillObservableInput?(input: Record<string, unknown>): void
/**
* Determines if this tool is allowed to run with this input in the current context.
* It informs the model of why the tool use failed, and does not directly display any UI.
* @param input
* @param context
*/
validateInput?(
input: z.infer<Input>,
context: ToolUseContext,
): Promise<ValidationResult>
/**
* Determines if the user is asked for permission. Only called after validateInput() passes.
* General permission logic is in permissions.ts. This method contains tool-specific logic.
* @param input
* @param context
*/
checkPermissions(
input: z.infer<Input>,
context: ToolUseContext,
): Promise<PermissionResult>
// Optional method for tools that operate on a file path
getPath?(input: z.infer<Input>): string
/**
* Prepare a matcher for hook `if` conditions (permission-rule patterns like
* "git *" from "Bash(git *)"). Called once per hook-input pair; any
* expensive parsing happens here. Returns a closure that is called per
* hook pattern. If not implemented, only tool-name-level matching works.
*/
preparePermissionMatcher?(
input: z.infer<Input>,
): Promise<(pattern: string) => boolean>
prompt(options: {
getToolPermissionContext: () => Promise<ToolPermissionContext>
tools: Tools
agents: AgentDefinition[]
allowedAgentTypes?: string[]
}): Promise<string>
userFacingName(input: Partial<z.infer<Input>> | undefined): string
userFacingNameBackgroundColor?(
input: Partial<z.infer<Input>> | undefined,
): keyof Theme | undefined
/**
* Transparent wrappers (e.g. REPL) delegate all rendering to their progress
* handler, which emits native-looking blocks for each inner tool call.
* The wrapper itself shows nothing.
*/
isTransparentWrapper?(): boolean
/**
* Returns a short string summary of this tool use for display in compact views.
* @param input The tool input
* @returns A short string summary, or null to not display
*/
getToolUseSummary?(input: Partial<z.infer<Input>> | undefined): string | null
/**
* Returns a human-readable present-tense activity description for spinner display.
* Example: "Reading src/foo.ts", "Running bun test", "Searching for pattern"
* @param input The tool input
* @returns Activity description string, or null to fall back to tool name
*/
getActivityDescription?(
input: Partial<z.infer<Input>> | undefined,
): string | null
/**
* Returns a compact representation of this tool use for the auto-mode
* security classifier. Examples: `ls -la` for Bash, `/tmp/x: new content`
* for Edit. Return '' to skip this tool in the classifier transcript
* (e.g. tools with no security relevance). May return an object to avoid
* double-encoding when the caller JSON-wraps the value.
*/
toAutoClassifierInput(input: z.infer<Input>): unknown
mapToolResultToToolResultBlockParam(
content: Output,
toolUseID: string,
): ToolResultBlockParam
/**
* Optional. When omitted, the tool result renders nothing (same as returning
* null). Omit for tools whose results are surfaced elsewhere (e.g., TodoWrite
* updates the todo panel, not the transcript).
*/
renderToolResultMessage?(
content: Output,
progressMessagesForMessage: ProgressMessage<P>[],
options: {
style?: 'condensed'
theme: ThemeName
tools: Tools
verbose: boolean
isTranscriptMode?: boolean
isBriefOnly?: boolean
/** Original tool_use input, when available. Useful for compact result
* summaries that reference what was requested (e.g. "Sent to #foo"). */
input?: unknown
},
): React.ReactNode
/**
* Flattened text of what renderToolResultMessage shows IN TRANSCRIPT
* MODE (verbose=true, isTranscriptMode=true). For transcript search
* indexing: the index counts occurrences in this string, the highlight
* overlay scans the actual screen buffer. For count ≡ highlight, this
* must return the text that ends up visible — not the model-facing
* serialization from mapToolResultToToolResultBlockParam (which adds
* system-reminders, persisted-output wrappers).
*
* Chrome can be skipped (under-count is fine). "Found 3 files in 12ms"
* isn't worth indexing. Phantoms are not fine — text that's claimed
* here but doesn't render is a count≠highlight bug.
*
* Optional: omitted → field-name heuristic in transcriptSearch.ts.
* Drift caught by test/utils/transcriptSearch.renderFidelity.test.tsx
* which renders sample outputs and flags text that's indexed-but-not-
* rendered (phantom) or rendered-but-not-indexed (under-count warning).
*/
extractSearchText?(out: Output): string
/**
* Render the tool use message. Note that `input` is partial because we render
* the message as soon as possible, possibly before tool parameters have fully
* streamed in.
*/
renderToolUseMessage(
input: Partial<z.infer<Input>>,
options: { theme: ThemeName; verbose: boolean; commands?: Command[] },
): React.ReactNode
/**
* Returns true when the non-verbose rendering of this output is truncated
* (i.e., clicking to expand would reveal more content). Gates
* click-to-expand in fullscreen — only messages where verbose actually
* shows more get a hover/click affordance. Unset means never truncated.
*/
isResultTruncated?(output: Output): boolean
/**
* Renders an optional tag to display after the tool use message.
* Used for additional metadata like timeout, model, resume ID, etc.
* Returns null to not display anything.
*/
renderToolUseTag?(input: Partial<z.infer<Input>>): React.ReactNode
/**
* Optional. When omitted, no progress UI is shown while the tool runs.
*/
renderToolUseProgressMessage?(
progressMessagesForMessage: ProgressMessage<P>[],
options: {
tools: Tools
verbose: boolean
terminalSize?: { columns: number; rows: number }
inProgressToolCallCount?: number
isTranscriptMode?: boolean
},
): React.ReactNode
renderToolUseQueuedMessage?(): React.ReactNode
/**
* Optional. When omitted, falls back to <FallbackToolUseRejectedMessage />.
* Only define this for tools that need custom rejection UI (e.g., file edits
* that show the rejected diff).
*/
renderToolUseRejectedMessage?(
input: z.infer<Input>,
options: {
columns: number
messages: Message[]
style?: 'condensed'
theme: ThemeName
tools: Tools
verbose: boolean
progressMessagesForMessage: ProgressMessage<P>[]
isTranscriptMode?: boolean
},
): React.ReactNode
/**
* Optional. When omitted, falls back to <FallbackToolUseErrorMessage />.
* Only define this for tools that need custom error UI (e.g., search tools
* that show "File not found" instead of the raw error).
*/
renderToolUseErrorMessage?(
result: ToolResultBlockParam['content'],
options: {
progressMessagesForMessage: ProgressMessage<P>[]
tools: Tools
verbose: boolean
isTranscriptMode?: boolean
},
): React.ReactNode
/**
* Renders multiple tool uses as a group (non-verbose mode only).
* In verbose mode, individual tool uses render at their original positions.
* @returns React node to render, or null to fall back to individual rendering
*/
renderGroupedToolUse?(
toolUses: Array<{
param: ToolUseBlockParam
isResolved: boolean
isError: boolean
isInProgress: boolean
progressMessages: ProgressMessage<P>[]
result?: {
param: ToolResultBlockParam
output: unknown
}
}>,
options: {
shouldAnimate: boolean
tools: Tools
},
): React.ReactNode | null
}
/**
* A collection of tools. Use this type instead of `Tool[]` to make it easier
* to track where tool sets are assembled, passed, and filtered across the codebase.
*/
export type Tools = readonly Tool[]
/**
* Methods that `buildTool` supplies a default for. A `ToolDef` may omit these;
* the resulting `Tool` always has them.
*/
type DefaultableToolKeys =
| 'isEnabled'
| 'isConcurrencySafe'
| 'isReadOnly'
| 'isDestructive'
| 'checkPermissions'
| 'toAutoClassifierInput'
| 'userFacingName'
/**
* Tool definition accepted by `buildTool`. Same shape as `Tool` but with the
* defaultable methods optional — `buildTool` fills them in so callers always
* see a complete `Tool`.
*/
export type ToolDef<
Input extends AnyObject = AnyObject,
Output = unknown,
P extends ToolProgressData = ToolProgressData,
> = Omit<Tool<Input, Output, P>, DefaultableToolKeys> &
Partial<Pick<Tool<Input, Output, P>, DefaultableToolKeys>>
/**
* Type-level spread mirroring `{ ...TOOL_DEFAULTS, ...def }`. For each
* defaultable key: if D provides it (required), D's type wins; if D omits
* it or has it optional (inherited from Partial<> in the constraint), the
* default fills in. All other keys come from D verbatim — preserving arity,
* optional presence, and literal types exactly as `satisfies Tool` did.
*/
type BuiltTool<D> = Omit<D, DefaultableToolKeys> & {
[K in DefaultableToolKeys]-?: K extends keyof D
? undefined extends D[K]
? ToolDefaults[K]
: D[K]
: ToolDefaults[K]
}
/**
* Build a complete `Tool` from a partial definition, filling in safe defaults
* for the commonly-stubbed methods. All tool exports should go through this so
* that defaults live in one place and callers never need `?.() ?? default`.
*
* Defaults (fail-closed where it matters):
* - `isEnabled` → `true`
* - `isConcurrencySafe` → `false` (assume not safe)
* - `isReadOnly` → `false` (assume writes)
* - `isDestructive` → `false`
* - `checkPermissions` → `{ behavior: 'allow', updatedInput }` (defer to general permission system)
* - `toAutoClassifierInput` → `''` (skip classifier — security-relevant tools must override)
* - `userFacingName` → `name`
*/
const TOOL_DEFAULTS = {
isEnabled: () => true,
isConcurrencySafe: (_input?: unknown) => false,
isReadOnly: (_input?: unknown) => false,
isDestructive: (_input?: unknown) => false,
checkPermissions: (
input: { [key: string]: unknown },
_ctx?: ToolUseContext,
): Promise<PermissionResult> =>
Promise.resolve({ behavior: 'allow', updatedInput: input }),
toAutoClassifierInput: (_input?: unknown) => '',
userFacingName: (_input?: unknown) => '',
}
// The defaults type is the ACTUAL shape of TOOL_DEFAULTS (optional params so
// both 0-arg and full-arg call sites type-check — stubs varied in arity and
// tests relied on that), not the interface's strict signatures.
type ToolDefaults = typeof TOOL_DEFAULTS
// D infers the concrete object-literal type from the call site. The
// constraint provides contextual typing for method parameters; `any` in
// constraint position is structural and never leaks into the return type.
// BuiltTool<D> mirrors runtime `{...TOOL_DEFAULTS, ...def}` at the type level.
// eslint-disable-next-line @typescript-eslint/no-explicit-any
type AnyToolDef = ToolDef<any, any, any>
export function buildTool<D extends AnyToolDef>(def: D): BuiltTool<D> {
// The runtime spread is straightforward; the `as` bridges the gap between
// the structural-any constraint and the precise BuiltTool<D> return. The
// type semantics are proven by the 0-error typecheck across all 60+ tools.
return {
...TOOL_DEFAULTS,
userFacingName: () => def.name,
...def,
} as BuiltTool<D>
}
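The runtime half of this pattern is small enough to study in isolation. Below is a hedged, self-contained miniature of the defaults-spread that `buildTool` performs, using hypothetical names (`DEFAULTS`, `miniBuild`, a two-method interface) instead of the real seven-key `Tool` machinery:

```typescript
// Miniature of the buildTool pattern (hypothetical names, two defaults
// instead of the real seven). Defaults are spread first so any method
// the definition provides wins; omitted methods fall back.
const DEFAULTS = {
  isEnabled: () => true, // enabled unless the def says otherwise
  isReadOnly: (_input?: unknown) => false, // fail closed: assume writes
}
type Defaults = typeof DEFAULTS

type MiniDef = { name: string } & Partial<Defaults>

// Type-level mirror of `{ ...DEFAULTS, ...def }`: a provided method keeps
// its own (possibly narrower) type; an omitted one takes the default's.
type MiniBuilt<D extends MiniDef> = Omit<D, keyof Defaults> & {
  [K in keyof Defaults]-?: K extends keyof D
    ? undefined extends D[K]
      ? Defaults[K]
      : D[K]
    : Defaults[K]
}

function miniBuild<D extends MiniDef>(def: D): MiniBuilt<D> {
  return { ...DEFAULTS, ...def } as MiniBuilt<D>
}
```

A def that only overrides `isReadOnly` still gets a working `isEnabled`; the cast mirrors the `as BuiltTool<D>` above and is justified the same way, by the runtime spread.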


@@ -0,0 +1,89 @@
import axios from 'axios'
import { getOauthConfig } from '../constants/oauth.js'
import type { SDKMessage } from '../entrypoints/agentSdkTypes.js'
import { logForDebugging } from '../utils/debug.js'
import { getOAuthHeaders, prepareApiRequest } from '../utils/teleport/api.js'
export const HISTORY_PAGE_SIZE = 100
export type HistoryPage = {
/** Chronological order within the page. */
events: SDKMessage[]
/** Oldest event ID in this page → before_id cursor for next-older page. */
firstId: string | null
/** true = older events exist. */
hasMore: boolean
}
type SessionEventsResponse = {
data: SDKMessage[]
has_more: boolean
first_id: string | null
last_id: string | null
}
export type HistoryAuthCtx = {
baseUrl: string
headers: Record<string, string>
}
/** Prepare auth + headers + base URL once, reuse across pages. */
export async function createHistoryAuthCtx(
sessionId: string,
): Promise<HistoryAuthCtx> {
const { accessToken, orgUUID } = await prepareApiRequest()
return {
baseUrl: `${getOauthConfig().BASE_API_URL}/v1/sessions/${sessionId}/events`,
headers: {
...getOAuthHeaders(accessToken),
'anthropic-beta': 'ccr-byoc-2025-07-29',
'x-organization-uuid': orgUUID,
},
}
}
async function fetchPage(
ctx: HistoryAuthCtx,
params: Record<string, string | number | boolean>,
label: string,
): Promise<HistoryPage | null> {
const resp = await axios
.get<SessionEventsResponse>(ctx.baseUrl, {
headers: ctx.headers,
params,
timeout: 15000,
validateStatus: () => true,
})
.catch(() => null)
if (!resp || resp.status !== 200) {
logForDebugging(`[${label}] HTTP ${resp?.status ?? 'error'}`)
return null
}
return {
events: Array.isArray(resp.data.data) ? resp.data.data : [],
firstId: resp.data.first_id,
hasMore: resp.data.has_more,
}
}
/**
* Newest page: last `limit` events, chronological, via anchor_to_latest.
* has_more=true means older events exist.
*/
export async function fetchLatestEvents(
ctx: HistoryAuthCtx,
limit = HISTORY_PAGE_SIZE,
): Promise<HistoryPage | null> {
return fetchPage(ctx, { limit, anchor_to_latest: true }, 'fetchLatestEvents')
}
/** Older page: events immediately before `beforeId` cursor. */
export async function fetchOlderEvents(
ctx: HistoryAuthCtx,
beforeId: string,
limit = HISTORY_PAGE_SIZE,
): Promise<HistoryPage | null> {
return fetchPage(ctx, { limit, before_id: beforeId }, 'fetchOlderEvents')
}
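The cursor contract above (anchor_to_latest for the newest page, before_id to walk older) composes into a simple oldest-ward loop. A hedged sketch, with hypothetical fetchers of the same `HistoryPage` shape and string events for brevity:

```typescript
// Hypothetical page shape and fetchers mirroring HistoryPage /
// fetchLatestEvents / fetchOlderEvents; events are strings for brevity.
type Page = { events: string[]; firstId: string | null; hasMore: boolean }

async function collectFullHistory(
  fetchLatest: () => Promise<Page | null>,
  fetchOlder: (beforeId: string) => Promise<Page | null>,
): Promise<string[]> {
  const chunks: string[][] = []
  let page = await fetchLatest()
  while (page) {
    chunks.unshift(page.events) // each fetched page is older: prepend
    if (!page.hasMore || page.firstId === null) break
    page = await fetchOlder(page.firstId) // firstId = oldest id in page
  }
  return chunks.flat() // chronological, oldest to newest
}
```

A null page (network/HTTP failure) simply stops the walk with whatever was collected so far, matching the fetchers' null-on-error contract.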

src/bootstrap/state.ts (new file, 1760 lines)

File diff suppressed because it is too large.

src/bridge/bridgeApi.ts (new file, 541 lines)

@@ -0,0 +1,541 @@
import axios from 'axios'
import { debugBody, extractErrorDetail } from './debugUtils.js'
import {
BRIDGE_LOGIN_INSTRUCTION,
type BridgeApiClient,
type BridgeConfig,
type PermissionResponseEvent,
type WorkResponse,
} from './types.js'
type BridgeApiDeps = {
baseUrl: string
getAccessToken: () => string | undefined
runnerVersion: string
onDebug?: (msg: string) => void
/**
* Called on 401 to attempt OAuth token refresh. Returns true if refreshed,
* in which case the request is retried once. Injected because
* handleOAuth401Error from utils/auth.ts transitively pulls in config.ts →
* file.ts → permissions/filesystem.ts → sessionStorage.ts → commands.ts
* (~1300 modules). Daemon callers using env-var tokens omit this — their
* tokens don't refresh, so 401 goes straight to BridgeFatalError.
*/
onAuth401?: (staleAccessToken: string) => Promise<boolean>
/**
* Returns the trusted device token to send as X-Trusted-Device-Token on
* bridge API calls. Bridge sessions have SecurityTier=ELEVATED on the
* server (CCR v2); when the server's enforcement flag is on,
* ConnectBridgeWorker requires a trusted device at JWT-issuance.
* Optional — when absent or returning undefined, the header is omitted
* and the server falls through to its flag-off/no-op path. The CLI-side
* gate is tengu_sessions_elevated_auth_enforcement (see trustedDevice.ts).
*/
getTrustedDeviceToken?: () => string | undefined
}
const BETA_HEADER = 'environments-2025-11-01'
/** Allowlist pattern for server-provided IDs used in URL path segments. */
const SAFE_ID_PATTERN = /^[a-zA-Z0-9_-]+$/
/**
* Validate that a server-provided ID is safe to interpolate into a URL path.
* Prevents path traversal (e.g. `../../admin`) and injection via IDs that
* contain slashes, dots, or other special characters.
*/
export function validateBridgeId(id: string, label: string): string {
if (!id || !SAFE_ID_PATTERN.test(id)) {
throw new Error(`Invalid ${label}: contains unsafe characters`)
}
return id
}
/** Fatal bridge errors that should not be retried (e.g. auth failures). */
export class BridgeFatalError extends Error {
readonly status: number
/** Server-provided error type, e.g. "environment_expired". */
readonly errorType: string | undefined
constructor(message: string, status: number, errorType?: string) {
super(message)
this.name = 'BridgeFatalError'
this.status = status
this.errorType = errorType
}
}
export function createBridgeApiClient(deps: BridgeApiDeps): BridgeApiClient {
function debug(msg: string): void {
deps.onDebug?.(msg)
}
let consecutiveEmptyPolls = 0
const EMPTY_POLL_LOG_INTERVAL = 100
function getHeaders(accessToken: string): Record<string, string> {
const headers: Record<string, string> = {
Authorization: `Bearer ${accessToken}`,
'Content-Type': 'application/json',
'anthropic-version': '2023-06-01',
'anthropic-beta': BETA_HEADER,
'x-environment-runner-version': deps.runnerVersion,
}
const deviceToken = deps.getTrustedDeviceToken?.()
if (deviceToken) {
headers['X-Trusted-Device-Token'] = deviceToken
}
return headers
}
function resolveAuth(): string {
const accessToken = deps.getAccessToken()
if (!accessToken) {
throw new Error(BRIDGE_LOGIN_INSTRUCTION)
}
return accessToken
}
/**
* Execute an OAuth-authenticated request with a single retry on 401.
* On 401, attempts token refresh via handleOAuth401Error (same pattern as
* withRetry.ts for v1/messages). If refresh succeeds, retries the request
* once with the new token. If refresh fails or the retry also returns 401,
* the 401 response is returned for handleErrorStatus to throw BridgeFatalError.
*/
async function withOAuthRetry<T>(
fn: (accessToken: string) => Promise<{ status: number; data: T }>,
context: string,
): Promise<{ status: number; data: T }> {
const accessToken = resolveAuth()
const response = await fn(accessToken)
if (response.status !== 401) {
return response
}
if (!deps.onAuth401) {
debug(`[bridge:api] ${context}: 401 received, no refresh handler`)
return response
}
// Attempt token refresh — matches the pattern in withRetry.ts
debug(`[bridge:api] ${context}: 401 received, attempting token refresh`)
const refreshed = await deps.onAuth401(accessToken)
if (refreshed) {
debug(`[bridge:api] ${context}: Token refreshed, retrying request`)
const newToken = resolveAuth()
const retryResponse = await fn(newToken)
if (retryResponse.status !== 401) {
return retryResponse
}
debug(`[bridge:api] ${context}: Retry after refresh also got 401`)
} else {
debug(`[bridge:api] ${context}: Token refresh failed`)
}
// Refresh failed — return 401 for handleErrorStatus to throw
return response
}
return {
async registerBridgeEnvironment(
config: BridgeConfig,
): Promise<{ environment_id: string; environment_secret: string }> {
debug(
`[bridge:api] POST /v1/environments/bridge bridgeId=${config.bridgeId}`,
)
const response = await withOAuthRetry(
(token: string) =>
axios.post<{
environment_id: string
environment_secret: string
}>(
`${deps.baseUrl}/v1/environments/bridge`,
{
machine_name: config.machineName,
directory: config.dir,
branch: config.branch,
git_repo_url: config.gitRepoUrl,
// Advertise session capacity so claude.ai/code can show
// "2/4 sessions" badges and only block the picker when
// actually at capacity. Backends that don't yet accept
// this field will silently ignore it.
max_sessions: config.maxSessions,
// worker_type lets claude.ai filter environments by origin
// (e.g. assistant picker only shows assistant-mode workers).
// Desktop cowork app sends "cowork"; we send a distinct value.
metadata: { worker_type: config.workerType },
// Idempotent re-registration: if we have a backend-issued
// environment_id from a prior session (--session-id resume),
// send it back so the backend reattaches instead of creating
// a new env. The backend may still hand back a fresh ID if
// the old one expired — callers must compare the response.
...(config.reuseEnvironmentId && {
environment_id: config.reuseEnvironmentId,
}),
},
{
headers: getHeaders(token),
timeout: 15_000,
validateStatus: status => status < 500,
},
),
'Registration',
)
handleErrorStatus(response.status, response.data, 'Registration')
debug(
`[bridge:api] POST /v1/environments/bridge -> ${response.status} environment_id=${response.data.environment_id}`,
)
debug(
`[bridge:api] >>> ${debugBody({ machine_name: config.machineName, directory: config.dir, branch: config.branch, git_repo_url: config.gitRepoUrl, max_sessions: config.maxSessions, metadata: { worker_type: config.workerType } })}`,
)
debug(`[bridge:api] <<< ${debugBody(response.data)}`)
return response.data
},
async pollForWork(
environmentId: string,
environmentSecret: string,
signal?: AbortSignal,
reclaimOlderThanMs?: number,
): Promise<WorkResponse | null> {
validateBridgeId(environmentId, 'environmentId')
// Save and reset so errors break the "consecutive empty" streak.
// Restored below when the response is truly empty.
const prevEmptyPolls = consecutiveEmptyPolls
consecutiveEmptyPolls = 0
const response = await axios.get<WorkResponse | null>(
`${deps.baseUrl}/v1/environments/${environmentId}/work/poll`,
{
headers: getHeaders(environmentSecret),
params:
reclaimOlderThanMs !== undefined
? { reclaim_older_than_ms: reclaimOlderThanMs }
: undefined,
timeout: 10_000,
signal,
validateStatus: status => status < 500,
},
)
handleErrorStatus(response.status, response.data, 'Poll')
// Empty body or null = no work available
if (!response.data) {
consecutiveEmptyPolls = prevEmptyPolls + 1
if (
consecutiveEmptyPolls === 1 ||
consecutiveEmptyPolls % EMPTY_POLL_LOG_INTERVAL === 0
) {
debug(
`[bridge:api] GET .../work/poll -> ${response.status} (no work, ${consecutiveEmptyPolls} consecutive empty polls)`,
)
}
return null
}
debug(
`[bridge:api] GET .../work/poll -> ${response.status} workId=${response.data.id} type=${response.data.data?.type}${response.data.data?.id ? ` sessionId=${response.data.data.id}` : ''}`,
)
debug(`[bridge:api] <<< ${debugBody(response.data)}`)
return response.data
},
async acknowledgeWork(
environmentId: string,
workId: string,
sessionToken: string,
): Promise<void> {
validateBridgeId(environmentId, 'environmentId')
validateBridgeId(workId, 'workId')
debug(`[bridge:api] POST .../work/${workId}/ack`)
const response = await axios.post(
`${deps.baseUrl}/v1/environments/${environmentId}/work/${workId}/ack`,
{},
{
headers: getHeaders(sessionToken),
timeout: 10_000,
validateStatus: s => s < 500,
},
)
handleErrorStatus(response.status, response.data, 'Acknowledge')
debug(`[bridge:api] POST .../work/${workId}/ack -> ${response.status}`)
},
async stopWork(
environmentId: string,
workId: string,
force: boolean,
): Promise<void> {
validateBridgeId(environmentId, 'environmentId')
validateBridgeId(workId, 'workId')
debug(`[bridge:api] POST .../work/${workId}/stop force=${force}`)
const response = await withOAuthRetry(
(token: string) =>
axios.post(
`${deps.baseUrl}/v1/environments/${environmentId}/work/${workId}/stop`,
{ force },
{
headers: getHeaders(token),
timeout: 10_000,
validateStatus: s => s < 500,
},
),
'StopWork',
)
handleErrorStatus(response.status, response.data, 'StopWork')
debug(`[bridge:api] POST .../work/${workId}/stop -> ${response.status}`)
},
async deregisterEnvironment(environmentId: string): Promise<void> {
validateBridgeId(environmentId, 'environmentId')
debug(`[bridge:api] DELETE /v1/environments/bridge/${environmentId}`)
const response = await withOAuthRetry(
(token: string) =>
axios.delete(
`${deps.baseUrl}/v1/environments/bridge/${environmentId}`,
{
headers: getHeaders(token),
timeout: 10_000,
validateStatus: s => s < 500,
},
),
'Deregister',
)
handleErrorStatus(response.status, response.data, 'Deregister')
debug(
`[bridge:api] DELETE /v1/environments/bridge/${environmentId} -> ${response.status}`,
)
},
async archiveSession(sessionId: string): Promise<void> {
validateBridgeId(sessionId, 'sessionId')
debug(`[bridge:api] POST /v1/sessions/${sessionId}/archive`)
const response = await withOAuthRetry(
(token: string) =>
axios.post(
`${deps.baseUrl}/v1/sessions/${sessionId}/archive`,
{},
{
headers: getHeaders(token),
timeout: 10_000,
validateStatus: s => s < 500,
},
),
'ArchiveSession',
)
// 409 = already archived (idempotent, not an error)
if (response.status === 409) {
debug(
`[bridge:api] POST /v1/sessions/${sessionId}/archive -> 409 (already archived)`,
)
return
}
handleErrorStatus(response.status, response.data, 'ArchiveSession')
debug(
`[bridge:api] POST /v1/sessions/${sessionId}/archive -> ${response.status}`,
)
},
async reconnectSession(
environmentId: string,
sessionId: string,
): Promise<void> {
validateBridgeId(environmentId, 'environmentId')
validateBridgeId(sessionId, 'sessionId')
debug(
`[bridge:api] POST /v1/environments/${environmentId}/bridge/reconnect session_id=${sessionId}`,
)
const response = await withOAuthRetry(
(token: string) =>
axios.post(
`${deps.baseUrl}/v1/environments/${environmentId}/bridge/reconnect`,
{ session_id: sessionId },
{
headers: getHeaders(token),
timeout: 10_000,
validateStatus: s => s < 500,
},
),
'ReconnectSession',
)
handleErrorStatus(response.status, response.data, 'ReconnectSession')
debug(`[bridge:api] POST .../bridge/reconnect -> ${response.status}`)
},
async heartbeatWork(
environmentId: string,
workId: string,
sessionToken: string,
): Promise<{ lease_extended: boolean; state: string }> {
validateBridgeId(environmentId, 'environmentId')
validateBridgeId(workId, 'workId')
debug(`[bridge:api] POST .../work/${workId}/heartbeat`)
const response = await axios.post<{
lease_extended: boolean
state: string
last_heartbeat: string
ttl_seconds: number
}>(
`${deps.baseUrl}/v1/environments/${environmentId}/work/${workId}/heartbeat`,
{},
{
headers: getHeaders(sessionToken),
timeout: 10_000,
validateStatus: s => s < 500,
},
)
handleErrorStatus(response.status, response.data, 'Heartbeat')
debug(
`[bridge:api] POST .../work/${workId}/heartbeat -> ${response.status} lease_extended=${response.data.lease_extended} state=${response.data.state}`,
)
return response.data
},
async sendPermissionResponseEvent(
sessionId: string,
event: PermissionResponseEvent,
sessionToken: string,
): Promise<void> {
validateBridgeId(sessionId, 'sessionId')
debug(
`[bridge:api] POST /v1/sessions/${sessionId}/events type=${event.type}`,
)
const response = await axios.post(
`${deps.baseUrl}/v1/sessions/${sessionId}/events`,
{ events: [event] },
{
headers: getHeaders(sessionToken),
timeout: 10_000,
validateStatus: s => s < 500,
},
)
handleErrorStatus(
response.status,
response.data,
'SendPermissionResponseEvent',
)
debug(
`[bridge:api] POST /v1/sessions/${sessionId}/events -> ${response.status}`,
)
debug(`[bridge:api] >>> ${debugBody({ events: [event] })}`)
debug(`[bridge:api] <<< ${debugBody(response.data)}`)
},
}
}
function handleErrorStatus(
status: number,
data: unknown,
context: string,
): void {
if (status === 200 || status === 204) {
return
}
const detail = extractErrorDetail(data)
const errorType = extractErrorTypeFromData(data)
switch (status) {
case 401:
throw new BridgeFatalError(
`${context}: Authentication failed (401)${detail ? `: ${detail}` : ''}. ${BRIDGE_LOGIN_INSTRUCTION}`,
401,
errorType,
)
case 403:
throw new BridgeFatalError(
isExpiredErrorType(errorType)
? 'Remote Control session has expired. Please restart with `claude remote-control` or /remote-control.'
: `${context}: Access denied (403)${detail ? `: ${detail}` : ''}. Check your organization permissions.`,
403,
errorType,
)
case 404:
throw new BridgeFatalError(
detail ??
`${context}: Not found (404). Remote Control may not be available for this organization.`,
404,
errorType,
)
case 410:
throw new BridgeFatalError(
detail ??
'Remote Control session has expired. Please restart with `claude remote-control` or /remote-control.',
410,
errorType ?? 'environment_expired',
)
case 429:
throw new Error(`${context}: Rate limited (429). Polling too frequently.`)
default:
throw new Error(
`${context}: Failed with status ${status}${detail ? `: ${detail}` : ''}`,
)
}
}
/** Check whether an error type string indicates a session/environment expiry. */
export function isExpiredErrorType(errorType: string | undefined): boolean {
if (!errorType) {
return false
}
return errorType.includes('expired') || errorType.includes('lifetime')
}
/**
* Check whether a BridgeFatalError is a suppressible 403 permission error.
* These are 403 errors for scopes like 'external_poll_sessions' or operations
* like StopWork that fail because the user's role lacks 'environments:manage'.
* They don't affect core functionality and shouldn't be shown to users.
*/
export function isSuppressible403(err: BridgeFatalError): boolean {
if (err.status !== 403) {
return false
}
return (
err.message.includes('external_poll_sessions') ||
err.message.includes('environments:manage')
)
}
function extractErrorTypeFromData(data: unknown): string | undefined {
if (data && typeof data === 'object') {
if (
'error' in data &&
data.error &&
typeof data.error === 'object' &&
'type' in data.error &&
typeof data.error.type === 'string'
) {
return data.error.type
}
}
return undefined
}
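The single-retry-on-401 shape used by `withOAuthRetry` is worth isolating. A hedged, dependency-free sketch (hypothetical names; the real version also threads debug logging and token resolution through `resolveAuth`):

```typescript
// One retry on 401 after a token refresh; any second 401, or a failed
// refresh, surfaces the original response for the caller's error
// mapping (here, the equivalent of handleErrorStatus).
type Resp<T> = { status: number; data: T }

async function retryOnce401<T>(
  request: (token: string) => Promise<Resp<T>>,
  getToken: () => string,
  refresh: () => Promise<boolean>, // true = store now holds a fresh token
): Promise<Resp<T>> {
  const first = await request(getToken())
  if (first.status !== 401) return first
  if (!(await refresh())) return first // refresh failed: let the 401 propagate
  const second = await request(getToken())
  return second // even a repeat 401 is returned, never retried again
}
```

The key property is at most two requests per call: the retry is gated on a successful refresh, so a permanently bad credential cannot loop.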


@@ -0,0 +1,50 @@
/**
* Shared bridge auth/URL resolution. Consolidates the ant-only
* CLAUDE_BRIDGE_* dev overrides that were previously copy-pasted across
* a dozen files — inboundAttachments, BriefTool/upload, bridgeMain,
* initReplBridge, remoteBridgeCore, daemon workers, /rename,
* /remote-control.
*
* Two layers: *Override() returns the ant-only env var (or undefined);
* the non-Override versions fall through to the real OAuth store/config.
* Callers that compose with a different auth source (e.g. daemon workers
* using IPC auth) use the Override getters directly.
*/
import { getOauthConfig } from '../constants/oauth.js'
import { getClaudeAIOAuthTokens } from '../utils/auth.js'
/** Ant-only dev override: CLAUDE_BRIDGE_OAUTH_TOKEN, else undefined. */
export function getBridgeTokenOverride(): string | undefined {
return (
(process.env.USER_TYPE === 'ant' &&
process.env.CLAUDE_BRIDGE_OAUTH_TOKEN) ||
undefined
)
}
/** Ant-only dev override: CLAUDE_BRIDGE_BASE_URL, else undefined. */
export function getBridgeBaseUrlOverride(): string | undefined {
return (
(process.env.USER_TYPE === 'ant' && process.env.CLAUDE_BRIDGE_BASE_URL) ||
undefined
)
}
/**
* Access token for bridge API calls: dev override first, then the OAuth
* keychain. Undefined means "not logged in".
*/
export function getBridgeAccessToken(): string | undefined {
return getBridgeTokenOverride() ?? getClaudeAIOAuthTokens()?.accessToken
}
/**
* Base URL for bridge API calls: dev override first, then the production
* OAuth config. Always returns a URL.
*/
export function getBridgeBaseUrl(): string {
return getBridgeBaseUrlOverride() ?? getOauthConfig().BASE_API_URL
}

src/bridge/bridgeDebug.ts Normal file
@@ -0,0 +1,137 @@
import { logForDebugging } from '../utils/debug.js'
import { BridgeFatalError } from './bridgeApi.js'
import type { BridgeApiClient } from './types.js'
/**
* Ant-only fault injection for manually testing bridge recovery paths.
*
* Real failure modes this targets (BQ 2026-03-12, 7-day window):
* poll 404 not_found_error — 147K sessions/week, dead onEnvironmentLost gate
* ws_closed 1002/1006 — 22K sessions/week, zombie poll after close
* register transient failure — residual: network blips during doReconnect
*
* Usage: /bridge-kick <subcommand> from the REPL while Remote Control is
* connected, then tail debug.log to watch the recovery machinery react.
*
* Module-level state is intentional here: one bridge per REPL process, the
* /bridge-kick slash command has no other way to reach into initBridgeCore's
* closures, and teardown clears the slot.
*/
/** One-shot fault to inject on the next matching api call. */
type BridgeFault = {
method:
| 'pollForWork'
| 'registerBridgeEnvironment'
| 'reconnectSession'
| 'heartbeatWork'
/** Fatal errors go through handleErrorStatus → BridgeFatalError. Transient
* errors surface as plain axios rejections (5xx / network). Recovery code
* distinguishes the two: fatal → teardown, transient → retry/backoff. */
kind: 'fatal' | 'transient'
status: number
errorType?: string
/** Remaining injections. Decremented on consume; removed at 0. */
count: number
}
export type BridgeDebugHandle = {
/** Invoke the transport's permanent-close handler directly. Tests the
* ws_closed → reconnectEnvironmentWithSession escalation (#22148). */
fireClose: (code: number) => void
/** Call reconnectEnvironmentWithSession() — same as SIGUSR2 but
* reachable from the slash command. */
forceReconnect: () => void
/** Queue a fault for the next N calls to the named api method. */
injectFault: (fault: BridgeFault) => void
/** Abort the at-capacity sleep so an injected poll fault lands
* immediately instead of up to 10min later. */
wakePollLoop: () => void
/** env/session IDs for the debug.log grep. */
describe: () => string
}
let debugHandle: BridgeDebugHandle | null = null
const faultQueue: BridgeFault[] = []
export function registerBridgeDebugHandle(h: BridgeDebugHandle): void {
debugHandle = h
}
export function clearBridgeDebugHandle(): void {
debugHandle = null
faultQueue.length = 0
}
export function getBridgeDebugHandle(): BridgeDebugHandle | null {
return debugHandle
}
export function injectBridgeFault(fault: BridgeFault): void {
faultQueue.push(fault)
logForDebugging(
`[bridge:debug] Queued fault: ${fault.method} ${fault.kind}/${fault.status}${fault.errorType ? `/${fault.errorType}` : ''} ×${fault.count}`,
)
}
/**
* Wrap a BridgeApiClient so each call first checks the fault queue. If a
* matching fault is queued, throw the specified error instead of calling
* through. Delegates everything else to the real client.
*
* Only called when USER_TYPE === 'ant' — zero overhead in external builds.
*/
export function wrapApiForFaultInjection(
api: BridgeApiClient,
): BridgeApiClient {
function consume(method: BridgeFault['method']): BridgeFault | null {
const idx = faultQueue.findIndex(f => f.method === method)
if (idx === -1) return null
const fault = faultQueue[idx]!
fault.count--
if (fault.count <= 0) faultQueue.splice(idx, 1)
return fault
}
function throwFault(fault: BridgeFault, context: string): never {
logForDebugging(
`[bridge:debug] Injecting ${fault.kind} fault into ${context}: status=${fault.status} errorType=${fault.errorType ?? 'none'}`,
)
if (fault.kind === 'fatal') {
throw new BridgeFatalError(
`[injected] ${context} ${fault.status}`,
fault.status,
fault.errorType,
)
}
// Transient: mimic an axios rejection (5xx / network). No .status on
// the error itself — that's how the catch blocks distinguish.
throw new Error(`[injected transient] ${context} ${fault.status}`)
}
return {
...api,
async pollForWork(envId, secret, signal, reclaimMs) {
const f = consume('pollForWork')
if (f) throwFault(f, 'Poll')
return api.pollForWork(envId, secret, signal, reclaimMs)
},
async registerBridgeEnvironment(config) {
const f = consume('registerBridgeEnvironment')
if (f) throwFault(f, 'Registration')
return api.registerBridgeEnvironment(config)
},
async reconnectSession(envId, sessionId) {
const f = consume('reconnectSession')
if (f) throwFault(f, 'ReconnectSession')
return api.reconnectSession(envId, sessionId)
},
async heartbeatWork(envId, workId, token) {
const f = consume('heartbeatWork')
if (f) throwFault(f, 'Heartbeat')
return api.heartbeatWork(envId, workId, token)
},
}
}

src/bridge/bridgeEnabled.ts Normal file
@@ -0,0 +1,204 @@
import { feature } from 'bun:bundle'
import {
checkGate_CACHED_OR_BLOCKING,
getDynamicConfig_CACHED_MAY_BE_STALE,
getFeatureValue_CACHED_MAY_BE_STALE,
} from '../services/analytics/growthbook.js'
// Namespace import breaks the bridgeEnabled → auth → config → bridgeEnabled
// cycle — authModule.foo is a live binding, so by the time the helpers below
// call it, auth.js is fully loaded. Previously used require() for the same
// deferral, but require() hits a CJS cache that diverges from the ESM
// namespace after mock.module() (daemon/auth.test.ts), breaking spyOn.
import * as authModule from '../utils/auth.js'
import { isEnvTruthy } from '../utils/envUtils.js'
import { lt } from '../utils/semver.js'
/**
* Runtime check for bridge mode entitlement.
*
* Remote Control requires a claude.ai subscription (the bridge auths to CCR
* with the claude.ai OAuth token). isClaudeAISubscriber() excludes
* Bedrock/Vertex/Foundry, apiKeyHelper/gateway deployments, env-var API keys,
* and Console API logins — none of which have the OAuth token CCR needs.
* See github.com/deshaw/anthropic-issues/issues/24.
*
* The `feature('BRIDGE_MODE')` guard ensures the GrowthBook string literal
* is only referenced when bridge mode is enabled at build time.
*/
export function isBridgeEnabled(): boolean {
// Positive ternary pattern — see docs/feature-gating.md.
// Negative pattern (if (!feature(...)) return) does not eliminate
// inline string literals from external builds.
return feature('BRIDGE_MODE')
? isClaudeAISubscriber() &&
getFeatureValue_CACHED_MAY_BE_STALE('tengu_ccr_bridge', false)
: false
}
/**
* Blocking entitlement check for Remote Control.
*
* Returns cached `true` immediately (fast path). If the disk cache says
* `false` or is missing, awaits GrowthBook init and fetches the fresh
* server value (slow path, max ~5s), then writes it to disk.
*
* Use at entitlement gates where a stale `false` would unfairly block access.
* For user-facing error paths, prefer `getBridgeDisabledReason()` which gives
* a specific diagnostic. For render-body UI visibility checks, use
* `isBridgeEnabled()` instead.
*/
export async function isBridgeEnabledBlocking(): Promise<boolean> {
return feature('BRIDGE_MODE')
? isClaudeAISubscriber() &&
(await checkGate_CACHED_OR_BLOCKING('tengu_ccr_bridge'))
: false
}
/**
* Diagnostic message for why Remote Control is unavailable, or null if
* it's enabled. Call this instead of a bare `isBridgeEnabledBlocking()`
* check when you need to show the user an actionable error.
*
* The GrowthBook gate targets on organizationUUID, which comes from
* config.oauthAccount — populated by /api/oauth/profile during login.
* That endpoint requires the user:profile scope. Tokens without it
* (setup-token, CLAUDE_CODE_OAUTH_TOKEN env var, or pre-scope-expansion
* logins) leave oauthAccount unpopulated, so the gate falls back to
* false and users see a dead-end "not enabled" message with no hint
* that re-login would fix it. See CC-1165 / gh-33105.
*/
export async function getBridgeDisabledReason(): Promise<string | null> {
if (feature('BRIDGE_MODE')) {
if (!isClaudeAISubscriber()) {
return 'Remote Control requires a claude.ai subscription. Run `claude auth login` to sign in with your claude.ai account.'
}
if (!hasProfileScope()) {
return 'Remote Control requires a full-scope login token. Long-lived tokens (from `claude setup-token` or CLAUDE_CODE_OAUTH_TOKEN) are limited to inference-only for security reasons. Run `claude auth login` to use Remote Control.'
}
if (!getOauthAccountInfo()?.organizationUuid) {
return 'Unable to determine your organization for Remote Control eligibility. Run `claude auth login` to refresh your account information.'
}
if (!(await checkGate_CACHED_OR_BLOCKING('tengu_ccr_bridge'))) {
return 'Remote Control is not yet enabled for your account.'
}
return null
}
return 'Remote Control is not available in this build.'
}
// try/catch: main.tsx:5698 calls isBridgeEnabled() while defining the Commander
// program, before enableConfigs() runs. isClaudeAISubscriber() → getGlobalConfig()
// throws "Config accessed before allowed" there. Pre-config, no OAuth token can
// exist anyway — false is correct. Same swallow getFeatureValue_CACHED_MAY_BE_STALE
// already does at growthbook.ts:775-780.
function isClaudeAISubscriber(): boolean {
try {
return authModule.isClaudeAISubscriber()
} catch {
return false
}
}
function hasProfileScope(): boolean {
try {
return authModule.hasProfileScope()
} catch {
return false
}
}
function getOauthAccountInfo(): ReturnType<
typeof authModule.getOauthAccountInfo
> {
try {
return authModule.getOauthAccountInfo()
} catch {
return undefined
}
}
/**
* Runtime check for the env-less (v2) REPL bridge path.
* Returns true when the GrowthBook flag `tengu_bridge_repl_v2` is enabled.
*
* This gates which implementation initReplBridge uses — NOT whether bridge
* is available at all (see isBridgeEnabled above). Daemon/print paths stay
* on the env-based implementation regardless of this gate.
*/
export function isEnvLessBridgeEnabled(): boolean {
return feature('BRIDGE_MODE')
? getFeatureValue_CACHED_MAY_BE_STALE('tengu_bridge_repl_v2', false)
: false
}
/**
* Kill-switch for the `cse_*` → `session_*` client-side retag shim.
*
* The shim exists because compat/convert.go:27 validates TagSession and the
* claude.ai frontend routes on `session_*`, while v2 worker endpoints hand out
* `cse_*`. Once the server tags by environment_kind and the frontend accepts
* `cse_*` directly, flip this to false to make toCompatSessionId a no-op.
* Defaults to true — the shim stays active until explicitly disabled.
*/
export function isCseShimEnabled(): boolean {
return feature('BRIDGE_MODE')
? getFeatureValue_CACHED_MAY_BE_STALE(
'tengu_bridge_repl_v2_cse_shim_enabled',
true,
)
: true
}
/**
* Returns an error message if the current CLI version is below the
* minimum required for the v1 (env-based) Remote Control path, or null if the
* version is fine. The v2 (env-less) path uses checkEnvLessBridgeMinVersion()
* in envLessBridgeConfig.ts instead — the two implementations have independent
* version floors.
*
* Uses cached (non-blocking) GrowthBook config. If GrowthBook hasn't
* loaded yet, the default '0.0.0' means the check passes — a safe fallback.
*/
export function checkBridgeMinVersion(): string | null {
// Positive pattern — see docs/feature-gating.md.
// Negative pattern (if (!feature(...)) return) does not eliminate
// inline string literals from external builds.
if (feature('BRIDGE_MODE')) {
const config = getDynamicConfig_CACHED_MAY_BE_STALE<{
minVersion: string
}>('tengu_bridge_min_version', { minVersion: '0.0.0' })
if (config.minVersion && lt(MACRO.VERSION, config.minVersion)) {
return `Your version of Claude Code (${MACRO.VERSION}) is too old for Remote Control.\nVersion ${config.minVersion} or higher is required. Run \`claude update\` to update.`
}
}
return null
}
/**
* Default for remoteControlAtStartup when the user hasn't explicitly set it.
* When the CCR_AUTO_CONNECT build flag is present (ant-only) and the
* tengu_cobalt_harbor GrowthBook gate is on, all sessions connect to CCR by
* default — the user can still opt out by setting remoteControlAtStartup=false
* in config (explicit settings always win over this default).
*
* Defined here rather than in config.ts to avoid a direct
* config.ts → growthbook.ts import cycle (growthbook.ts → user.ts → config.ts).
*/
export function getCcrAutoConnectDefault(): boolean {
return feature('CCR_AUTO_CONNECT')
? getFeatureValue_CACHED_MAY_BE_STALE('tengu_cobalt_harbor', false)
: false
}
/**
* Opt-in CCR mirror mode — every local session spawns an outbound-only
* Remote Control session that receives forwarded events. Separate from
* getCcrAutoConnectDefault (bidirectional Remote Control). Env var wins for
* local opt-in; GrowthBook controls rollout.
*/
export function isCcrMirrorEnabled(): boolean {
return feature('CCR_MIRROR')
? isEnvTruthy(process.env.CLAUDE_CODE_CCR_MIRROR) ||
getFeatureValue_CACHED_MAY_BE_STALE('tengu_ccr_mirror', false)
: false
}

3001
src/bridge/bridgeMain.ts Normal file

File diff suppressed because it is too large

@@ -0,0 +1,463 @@
/**
* Shared transport-layer helpers for bridge message handling.
*
* Extracted from replBridge.ts so both the env-based core (initBridgeCore)
* and the env-less core (initEnvLessBridgeCore) can use the same ingress
* parsing, control-request handling, and echo-dedup machinery.
*
* Everything here is pure — no closure over bridge-specific state. All
* collaborators (transport, sessionId, UUID sets, callbacks) are passed
* as params.
*/
import { randomUUID } from 'crypto'
import type { SDKMessage } from '../entrypoints/agentSdkTypes.js'
import type {
SDKControlRequest,
SDKControlResponse,
} from '../entrypoints/sdk/controlTypes.js'
import type { SDKResultSuccess } from '../entrypoints/sdk/coreTypes.js'
import { logEvent } from '../services/analytics/index.js'
import { EMPTY_USAGE } from '../services/api/emptyUsage.js'
import type { Message } from '../types/message.js'
import { normalizeControlMessageKeys } from '../utils/controlMessageCompat.js'
import { logForDebugging } from '../utils/debug.js'
import { stripDisplayTagsAllowEmpty } from '../utils/displayTags.js'
import { errorMessage } from '../utils/errors.js'
import type { PermissionMode } from '../utils/permissions/PermissionMode.js'
import { jsonParse } from '../utils/slowOperations.js'
import type { ReplBridgeTransport } from './replBridgeTransport.js'
// ─── Type guards ─────────────────────────────────────────────────────────────
/** Type predicate for parsed WebSocket messages. SDKMessage is a
* discriminated union on `type` — validating the discriminant is
* sufficient for the predicate; callers narrow further via the union. */
export function isSDKMessage(value: unknown): value is SDKMessage {
return (
value !== null &&
typeof value === 'object' &&
'type' in value &&
typeof value.type === 'string'
)
}
/** Type predicate for control_response messages from the server. */
export function isSDKControlResponse(
value: unknown,
): value is SDKControlResponse {
return (
value !== null &&
typeof value === 'object' &&
'type' in value &&
value.type === 'control_response' &&
'response' in value
)
}
/** Type predicate for control_request messages from the server. */
export function isSDKControlRequest(
value: unknown,
): value is SDKControlRequest {
return (
value !== null &&
typeof value === 'object' &&
'type' in value &&
value.type === 'control_request' &&
'request_id' in value &&
'request' in value
)
}
/**
* True for message types that should be forwarded to the bridge transport.
* The server only wants user/assistant turns and slash-command system events;
* everything else (tool_result, progress, etc.) is internal REPL chatter.
*/
export function isEligibleBridgeMessage(m: Message): boolean {
// Virtual messages (REPL inner calls) are display-only — bridge/SDK
// consumers see the REPL tool_use/result which summarizes the work.
if ((m.type === 'user' || m.type === 'assistant') && m.isVirtual) {
return false
}
return (
m.type === 'user' ||
m.type === 'assistant' ||
(m.type === 'system' && m.subtype === 'local_command')
)
}
/**
* Extract title-worthy text from a Message for onUserMessage. Returns
* undefined for messages that shouldn't title the session: non-user, meta
* (nudges), tool results, compact summaries, non-human origins (task
* notifications, channel messages), or pure display-tag content
* (<ide_opened_file>, <session-start-hook>, etc.).
*
* Synthetic interrupts ([Request interrupted by user]) are NOT filtered here —
* isSyntheticMessage lives in messages.ts (heavy import, pulls command
* registry). The initialMessages path in initReplBridge checks it; the
* writeMessages path reaching an interrupt as the *first* message is
* implausible (an interrupt implies a prior prompt already flowed through).
*/
export function extractTitleText(m: Message): string | undefined {
if (m.type !== 'user' || m.isMeta || m.toolUseResult || m.isCompactSummary)
return undefined
if (m.origin && m.origin.kind !== 'human') return undefined
const content = m.message.content
let raw: string | undefined
if (typeof content === 'string') {
raw = content
} else {
for (const block of content) {
if (block.type === 'text') {
raw = block.text
break
}
}
}
if (!raw) return undefined
const clean = stripDisplayTagsAllowEmpty(raw)
return clean || undefined
}
// ─── Ingress routing ─────────────────────────────────────────────────────────
/**
* Parse an ingress WebSocket message and route it to the appropriate handler.
* Ignores messages whose UUID is in recentPostedUUIDs (echoes of what we sent)
* or in recentInboundUUIDs (re-deliveries we've already forwarded — e.g.
* server replayed history after a transport swap lost the seq-num cursor).
*/
export function handleIngressMessage(
data: string,
recentPostedUUIDs: BoundedUUIDSet,
recentInboundUUIDs: BoundedUUIDSet,
onInboundMessage: ((msg: SDKMessage) => void | Promise<void>) | undefined,
onPermissionResponse?: ((response: SDKControlResponse) => void) | undefined,
onControlRequest?: ((request: SDKControlRequest) => void) | undefined,
): void {
try {
const parsed: unknown = normalizeControlMessageKeys(jsonParse(data))
// control_response is not an SDKMessage — check before the type guard
if (isSDKControlResponse(parsed)) {
logForDebugging('[bridge:repl] Ingress message type=control_response')
onPermissionResponse?.(parsed)
return
}
// control_request from the server (initialize, set_model, can_use_tool).
// Must respond promptly or the server kills the WS (~10-14s timeout).
if (isSDKControlRequest(parsed)) {
logForDebugging(
`[bridge:repl] Inbound control_request subtype=${parsed.request.subtype}`,
)
onControlRequest?.(parsed)
return
}
if (!isSDKMessage(parsed)) return
// Check for UUID to detect echoes of our own messages
const uuid =
'uuid' in parsed && typeof parsed.uuid === 'string'
? parsed.uuid
: undefined
if (uuid && recentPostedUUIDs.has(uuid)) {
logForDebugging(
`[bridge:repl] Ignoring echo: type=${parsed.type} uuid=${uuid}`,
)
return
}
// Defensive dedup: drop inbound prompts we've already forwarded. The
// SSE seq-num carryover (lastTransportSequenceNum) is the primary fix
// for history-replay; this catches edge cases where that negotiation
// fails (server ignores from_sequence_num, transport died before
// receiving any frames, etc).
if (uuid && recentInboundUUIDs.has(uuid)) {
logForDebugging(
`[bridge:repl] Ignoring re-delivered inbound: type=${parsed.type} uuid=${uuid}`,
)
return
}
logForDebugging(
`[bridge:repl] Ingress message type=${parsed.type}${uuid ? ` uuid=${uuid}` : ''}`,
)
if (parsed.type === 'user') {
if (uuid) recentInboundUUIDs.add(uuid)
logEvent('tengu_bridge_message_received', {
is_repl: true,
})
// Fire-and-forget — handler may be async (attachment resolution).
void onInboundMessage?.(parsed)
} else {
logForDebugging(
`[bridge:repl] Ignoring non-user inbound message: type=${parsed.type}`,
)
}
} catch (err) {
logForDebugging(
`[bridge:repl] Failed to parse ingress message: ${errorMessage(err)}`,
)
}
}
// ─── Server-initiated control requests ───────────────────────────────────────
export type ServerControlRequestHandlers = {
transport: ReplBridgeTransport | null
sessionId: string
/**
* When true, all mutable requests (interrupt, set_model, set_permission_mode,
* set_max_thinking_tokens) reply with an error instead of false-success.
* initialize still replies success — the server kills the connection otherwise.
* Used by the outbound-only bridge mode and the SDK's /bridge subpath so claude.ai sees a
* proper error instead of "action succeeded but nothing happened locally".
*/
outboundOnly?: boolean
onInterrupt?: () => void
onSetModel?: (model: string | undefined) => void
onSetMaxThinkingTokens?: (maxTokens: number | null) => void
onSetPermissionMode?: (
mode: PermissionMode,
) => { ok: true } | { ok: false; error: string }
}
const OUTBOUND_ONLY_ERROR =
'This session is outbound-only. Enable Remote Control locally to allow inbound control.'
/**
* Respond to inbound control_request messages from the server. The server
* sends these for session lifecycle events (initialize, set_model) and
* for turn-level coordination (interrupt, set_max_thinking_tokens). If we
* don't respond, the server hangs and kills the WS after ~10-14s.
*
* Previously a closure inside initBridgeCore's onWorkReceived; now takes
* collaborators as params so both cores can use it.
*/
export function handleServerControlRequest(
request: SDKControlRequest,
handlers: ServerControlRequestHandlers,
): void {
const {
transport,
sessionId,
outboundOnly,
onInterrupt,
onSetModel,
onSetMaxThinkingTokens,
onSetPermissionMode,
} = handlers
if (!transport) {
logForDebugging(
'[bridge:repl] Cannot respond to control_request: transport not configured',
)
return
}
let response: SDKControlResponse
// Outbound-only: reply error for mutable requests so claude.ai doesn't show
// false success. initialize must still succeed (server kills the connection
// if it doesn't — see comment above).
if (outboundOnly && request.request.subtype !== 'initialize') {
response = {
type: 'control_response',
response: {
subtype: 'error',
request_id: request.request_id,
error: OUTBOUND_ONLY_ERROR,
},
}
const event = { ...response, session_id: sessionId }
void transport.write(event)
logForDebugging(
`[bridge:repl] Rejected ${request.request.subtype} (outbound-only) request_id=${request.request_id}`,
)
return
}
switch (request.request.subtype) {
case 'initialize':
// Respond with minimal capabilities — the REPL handles
// commands, models, and account info itself.
response = {
type: 'control_response',
response: {
subtype: 'success',
request_id: request.request_id,
response: {
commands: [],
output_style: 'normal',
available_output_styles: ['normal'],
models: [],
account: {},
pid: process.pid,
},
},
}
break
case 'set_model':
onSetModel?.(request.request.model)
response = {
type: 'control_response',
response: {
subtype: 'success',
request_id: request.request_id,
},
}
break
case 'set_max_thinking_tokens':
onSetMaxThinkingTokens?.(request.request.max_thinking_tokens)
response = {
type: 'control_response',
response: {
subtype: 'success',
request_id: request.request_id,
},
}
break
case 'set_permission_mode': {
// The callback returns a policy verdict so we can send an error
// control_response without importing isAutoModeGateEnabled /
// isBypassPermissionsModeDisabled here (bootstrap-isolation). If no
// callback is registered (daemon context, which doesn't wire this —
// see daemonBridge.ts), return an error verdict rather than a silent
// false-success: the mode is never actually applied in that context,
// so success would lie to the client.
const verdict = onSetPermissionMode?.(request.request.mode) ?? {
ok: false,
error:
'set_permission_mode is not supported in this context (onSetPermissionMode callback not registered)',
}
if (verdict.ok) {
response = {
type: 'control_response',
response: {
subtype: 'success',
request_id: request.request_id,
},
}
} else {
response = {
type: 'control_response',
response: {
subtype: 'error',
request_id: request.request_id,
error: verdict.error,
},
}
}
break
}
case 'interrupt':
onInterrupt?.()
response = {
type: 'control_response',
response: {
subtype: 'success',
request_id: request.request_id,
},
}
break
default:
// Unknown subtype — respond with error so the server doesn't
// hang waiting for a reply that never comes.
response = {
type: 'control_response',
response: {
subtype: 'error',
request_id: request.request_id,
error: `REPL bridge does not handle control_request subtype: ${request.request.subtype}`,
},
}
}
const event = { ...response, session_id: sessionId }
void transport.write(event)
logForDebugging(
`[bridge:repl] Sent control_response for ${request.request.subtype} request_id=${request.request_id} result=${response.response.subtype}`,
)
}
// ─── Result message (for session archival on teardown) ───────────────────────
/**
* Build a minimal `SDKResultSuccess` message for session archival.
* The server needs this event before a WS close to trigger archival.
*/
export function makeResultMessage(sessionId: string): SDKResultSuccess {
return {
type: 'result',
subtype: 'success',
duration_ms: 0,
duration_api_ms: 0,
is_error: false,
num_turns: 0,
result: '',
stop_reason: null,
total_cost_usd: 0,
usage: { ...EMPTY_USAGE },
modelUsage: {},
permission_denials: [],
session_id: sessionId,
uuid: randomUUID(),
}
}
// ─── BoundedUUIDSet (echo-dedup ring buffer) ─────────────────────────────────
/**
* FIFO-bounded set backed by a circular buffer. Evicts the oldest entry
* when capacity is reached, keeping memory usage constant at O(capacity).
*
* Messages are added in chronological order, so evicted entries are always
* the oldest. The caller relies on external ordering (the hook's
* lastWrittenIndexRef) as the primary dedup — this set is a secondary
* safety net for echo filtering and race-condition dedup.
*/
export class BoundedUUIDSet {
private readonly capacity: number
private readonly ring: (string | undefined)[]
private readonly set = new Set<string>()
private writeIdx = 0
constructor(capacity: number) {
this.capacity = capacity
this.ring = new Array<string | undefined>(capacity)
}
add(uuid: string): void {
if (this.set.has(uuid)) return
// Evict the entry at the current write position (if occupied)
const evicted = this.ring[this.writeIdx]
if (evicted !== undefined) {
this.set.delete(evicted)
}
this.ring[this.writeIdx] = uuid
this.set.add(uuid)
this.writeIdx = (this.writeIdx + 1) % this.capacity
}
has(uuid: string): boolean {
return this.set.has(uuid)
}
clear(): void {
this.set.clear()
this.ring.fill(undefined)
this.writeIdx = 0
}
}


@@ -0,0 +1,45 @@
import type { PermissionUpdate } from '../utils/permissions/PermissionUpdateSchema.js'
type BridgePermissionResponse = {
behavior: 'allow' | 'deny'
updatedInput?: Record<string, unknown>
updatedPermissions?: PermissionUpdate[]
message?: string
}
type BridgePermissionCallbacks = {
sendRequest(
requestId: string,
toolName: string,
input: Record<string, unknown>,
toolUseId: string,
description: string,
permissionSuggestions?: PermissionUpdate[],
blockedPath?: string,
): void
sendResponse(requestId: string, response: BridgePermissionResponse): void
/** Cancel a pending control_request so the web app can dismiss its prompt. */
cancelRequest(requestId: string): void
onResponse(
requestId: string,
handler: (response: BridgePermissionResponse) => void,
): () => void // returns unsubscribe
}
/** Type predicate for validating a parsed control_response payload
* as a BridgePermissionResponse. Checks the required `behavior`
* discriminant rather than using an unsafe `as` cast. */
function isBridgePermissionResponse(
value: unknown,
): value is BridgePermissionResponse {
if (!value || typeof value !== 'object') return false
return (
'behavior' in value &&
(value.behavior === 'allow' || value.behavior === 'deny')
)
}
export { isBridgePermissionResponse }
export type { BridgePermissionCallbacks, BridgePermissionResponse }

src/bridge/bridgePointer.ts Normal file

@@ -0,0 +1,212 @@
import { mkdir, readFile, stat, unlink, writeFile } from 'fs/promises'
import { dirname, join } from 'path'
import { z } from 'zod/v4'
import { logForDebugging } from '../utils/debug.js'
import { isENOENT } from '../utils/errors.js'
import { getWorktreePathsPortable } from '../utils/getWorktreePathsPortable.js'
import { lazySchema } from '../utils/lazySchema.js'
import {
getProjectsDir,
sanitizePath,
} from '../utils/sessionStoragePortable.js'
import { jsonParse, jsonStringify } from '../utils/slowOperations.js'
/**
* Upper bound on worktree fanout. git worktree list is naturally bounded
* (50 is a LOT), but this caps the parallel stat() burst and guards against
* pathological setups. Above this, --continue falls back to current-dir-only.
*/
const MAX_WORKTREE_FANOUT = 50
/**
* Crash-recovery pointer for Remote Control sessions.
*
* Written immediately after a bridge session is created, periodically
* refreshed during the session, and cleared on clean shutdown. If the
* process dies unclean (crash, kill -9, terminal closed), the pointer
* persists. On next startup, `claude remote-control` detects it and offers
* to resume via the --session-id flow from #20460.
*
* Staleness is checked against the file's mtime (not an embedded timestamp)
* so that a periodic re-write with the same content serves as a refresh —
* matches the backend's rolling BRIDGE_LAST_POLL_TTL (4h) semantics. A
* bridge that's been polling for 5+ hours and then crashes still has a
* fresh pointer as long as the refresh ran within the window.
*
* Scoped per working directory (alongside transcript JSONL files) so two
* concurrent bridges in different repos don't clobber each other.
*/
export const BRIDGE_POINTER_TTL_MS = 4 * 60 * 60 * 1000
const BridgePointerSchema = lazySchema(() =>
z.object({
sessionId: z.string(),
environmentId: z.string(),
source: z.enum(['standalone', 'repl']),
}),
)
export type BridgePointer = z.infer<ReturnType<typeof BridgePointerSchema>>
export function getBridgePointerPath(dir: string): string {
return join(getProjectsDir(), sanitizePath(dir), 'bridge-pointer.json')
}
/**
* Write the pointer. Also used to refresh mtime during long sessions —
* calling with the same IDs is a cheap no-content-change write that bumps
* the staleness clock. Best-effort — a crash-recovery file must never
* itself cause a crash. Logs and swallows on error.
*/
export async function writeBridgePointer(
dir: string,
pointer: BridgePointer,
): Promise<void> {
const path = getBridgePointerPath(dir)
try {
await mkdir(dirname(path), { recursive: true })
await writeFile(path, jsonStringify(pointer), 'utf8')
logForDebugging(`[bridge:pointer] wrote ${path}`)
} catch (err: unknown) {
logForDebugging(`[bridge:pointer] write failed: ${err}`, { level: 'warn' })
}
}
/**
* Read the pointer and its age (ms since last write). Operates directly
* and handles errors — no existence check (CLAUDE.md TOCTOU rule). Returns
* null on any failure: missing file, corrupted JSON, schema mismatch, or
* stale (mtime > 4h ago). Stale/invalid pointers are deleted so they don't
* keep re-prompting after the backend has already GC'd the env.
*/
export async function readBridgePointer(
dir: string,
): Promise<(BridgePointer & { ageMs: number }) | null> {
const path = getBridgePointerPath(dir)
let raw: string
let mtimeMs: number
try {
// stat for mtime (staleness anchor), then read. Two syscalls, but both
// are needed — mtime IS the data we return, not a TOCTOU guard.
mtimeMs = (await stat(path)).mtimeMs
raw = await readFile(path, 'utf8')
} catch {
return null
}
  // jsonParse throws on corrupted JSON; swallow so safeParse fails and the
  // caller's clear-and-return-null path runs (only jsonParse is imported).
  let json: unknown
  try {
    json = jsonParse(raw)
  } catch {
    json = undefined
  }
  const parsed = BridgePointerSchema().safeParse(json)
if (!parsed.success) {
logForDebugging(`[bridge:pointer] invalid schema, clearing: ${path}`)
await clearBridgePointer(dir)
return null
}
const ageMs = Math.max(0, Date.now() - mtimeMs)
if (ageMs > BRIDGE_POINTER_TTL_MS) {
logForDebugging(`[bridge:pointer] stale (>4h mtime), clearing: ${path}`)
await clearBridgePointer(dir)
return null
}
return { ...parsed.data, ageMs }
}
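The staleness rule above anchors on the file's mtime rather than its content. A standalone sketch of the check (the 4-hour TTL value here is an assumption inferred from the ">4h" log message; the real BRIDGE_POINTER_TTL_MS constant is defined elsewhere in the source):

```typescript
// Assumed TTL; the source's BRIDGE_POINTER_TTL_MS lives in another module.
const BRIDGE_POINTER_TTL_MS = 4 * 60 * 60 * 1000

function isStale(mtimeMs: number, nowMs: number): boolean {
  // Clamp at 0: clock skew can put mtime in the "future".
  const ageMs = Math.max(0, nowMs - mtimeMs)
  return ageMs > BRIDGE_POINTER_TTL_MS
}

const now = Date.now()
console.log(isStale(now - 5 * 60 * 60 * 1000, now)) // 5h-old pointer → true
console.log(isStale(now - 60 * 1000, now))          // 1min-old pointer → false
```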
/**
* Worktree-aware read for `--continue`. The REPL bridge writes its pointer
* to `getOriginalCwd()` which EnterWorktreeTool/activeWorktreeSession can
* mutate to a worktree path — but `claude remote-control --continue` runs
* with `resolve('.')` = shell CWD. This fans out across git worktree
* siblings to find the freshest pointer, matching /resume's semantics.
*
* Fast path: checks `dir` first. Only shells out to `git worktree list` if
* that misses — the common case (pointer in launch dir) is one stat, zero
* exec. Fanout reads run in parallel; capped at MAX_WORKTREE_FANOUT.
*
* Returns the pointer AND the dir it was found in, so the caller can clear
* the right file on resume failure.
*/
export async function readBridgePointerAcrossWorktrees(
dir: string,
): Promise<{ pointer: BridgePointer & { ageMs: number }; dir: string } | null> {
// Fast path: current dir. Covers standalone bridge (always matches) and
// REPL bridge when no worktree mutation happened.
const here = await readBridgePointer(dir)
if (here) {
return { pointer: here, dir }
}
// Fanout: scan worktree siblings. getWorktreePathsPortable has a 5s
// timeout and returns [] on any error (not a git repo, git not installed).
const worktrees = await getWorktreePathsPortable(dir)
if (worktrees.length <= 1) return null
if (worktrees.length > MAX_WORKTREE_FANOUT) {
logForDebugging(
`[bridge:pointer] ${worktrees.length} worktrees exceeds fanout cap ${MAX_WORKTREE_FANOUT}, skipping`,
)
return null
}
// Dedupe against `dir` so we don't re-stat it. sanitizePath normalizes
// case/separators so worktree-list output matches our fast-path key even
// on Windows where git may emit C:/ vs stored c:/.
const dirKey = sanitizePath(dir)
const candidates = worktrees.filter(wt => sanitizePath(wt) !== dirKey)
// Parallel stat+read. Each readBridgePointer is a stat() that ENOENTs
// for worktrees with no pointer (cheap) plus a ~100-byte read for the
// rare ones that have one. Promise.all → latency ≈ slowest single stat.
const results = await Promise.all(
candidates.map(async wt => {
const p = await readBridgePointer(wt)
return p ? { pointer: p, dir: wt } : null
}),
)
// Pick freshest (lowest ageMs). The pointer stores environmentId so
// resume reconnects to the right env regardless of which worktree
// --continue was invoked from.
let freshest: {
pointer: BridgePointer & { ageMs: number }
dir: string
} | null = null
for (const r of results) {
if (r && (!freshest || r.pointer.ageMs < freshest.pointer.ageMs)) {
freshest = r
}
}
if (freshest) {
logForDebugging(
`[bridge:pointer] fanout found pointer in worktree ${freshest.dir} (ageMs=${freshest.pointer.ageMs})`,
)
}
return freshest
}
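The final selection in the fanout above is a min-by-ageMs scan over nullable per-worktree results; isolated here for a runnable illustration:

```typescript
// Same shape as the fanout results: null for worktrees with no pointer.
type FanoutResult = { ageMs: number; dir: string } | null

function pickFreshest(results: FanoutResult[]): FanoutResult {
  let freshest: FanoutResult = null
  for (const r of results) {
    if (r && (!freshest || r.ageMs < freshest.ageMs)) freshest = r
  }
  return freshest
}

console.log(pickFreshest([null, { ageMs: 900, dir: 'b' }, { ageMs: 30, dir: 'c' }]))
// lowest ageMs wins: { ageMs: 30, dir: 'c' }
```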
/**
 * Delete the pointer. Idempotent — ENOENT is expected when the process
 * shut down cleanly the previous time.
*/
export async function clearBridgePointer(dir: string): Promise<void> {
const path = getBridgePointerPath(dir)
try {
await unlink(path)
logForDebugging(`[bridge:pointer] cleared ${path}`)
} catch (err: unknown) {
if (!isENOENT(err)) {
logForDebugging(`[bridge:pointer] clear failed: ${err}`, {
level: 'warn',
})
}
}
}
function safeJsonParse(raw: string): unknown {
try {
return jsonParse(raw)
} catch {
return null
}
}

import {
getClaudeAiBaseUrl,
getRemoteSessionUrl,
} from '../constants/product.js'
import { stringWidth } from '../ink/stringWidth.js'
import { formatDuration, truncateToWidth } from '../utils/format.js'
import { getGraphemeSegmenter } from '../utils/intl.js'
/** Bridge status state machine states. */
export type StatusState =
| 'idle'
| 'attached'
| 'titled'
| 'reconnecting'
| 'failed'
/** How long a tool activity line stays visible after last tool_start (ms). */
export const TOOL_DISPLAY_EXPIRY_MS = 30_000
/** Interval for the shimmer animation tick (ms). */
export const SHIMMER_INTERVAL_MS = 150
export function timestamp(): string {
const now = new Date()
const h = String(now.getHours()).padStart(2, '0')
const m = String(now.getMinutes()).padStart(2, '0')
const s = String(now.getSeconds()).padStart(2, '0')
return `${h}:${m}:${s}`
}
export { formatDuration, truncateToWidth as truncatePrompt }
/** Abbreviate a tool activity summary for the trail display. */
export function abbreviateActivity(summary: string): string {
return truncateToWidth(summary, 30)
}
/** Build the connect URL shown when the bridge is idle. */
export function buildBridgeConnectUrl(
environmentId: string,
ingressUrl?: string,
): string {
const baseUrl = getClaudeAiBaseUrl(undefined, ingressUrl)
return `${baseUrl}/code?bridge=${environmentId}`
}
/**
* Build the session URL shown when a session is attached. Delegates to
* getRemoteSessionUrl for the cse_→session_ prefix translation, then appends
* the v1-specific ?bridge={environmentId} query.
*/
export function buildBridgeSessionUrl(
sessionId: string,
environmentId: string,
ingressUrl?: string,
): string {
return `${getRemoteSessionUrl(sessionId, ingressUrl)}?bridge=${environmentId}`
}
/** Compute the glimmer index for a reverse-sweep shimmer animation. */
export function computeGlimmerIndex(
tick: number,
messageWidth: number,
): number {
const cycleLength = messageWidth + 20
return messageWidth + 10 - (tick % cycleLength)
}
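computeGlimmerIndex can be exercised standalone (the function body is copied verbatim below so the check is runnable) to see the reverse sweep and its wraparound:

```typescript
// Copied from above for a self-contained run: the glimmer column starts past
// the right edge (messageWidth + 10) and walks left one column per tick,
// wrapping after messageWidth + 20 ticks.
function computeGlimmerIndex(tick: number, messageWidth: number): number {
  const cycleLength = messageWidth + 20
  return messageWidth + 10 - (tick % cycleLength)
}

const sweep = [0, 1, 2, 29, 30].map(t => computeGlimmerIndex(t, 10))
console.log(sweep) // [20, 19, 18, -9, 20] — sweeps left, then wraps at tick 30
```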
/**
* Split text into three segments by visual column position for shimmer rendering.
*
* Uses grapheme segmentation and `stringWidth` so the split is correct for
* multi-byte characters, emoji, and CJK glyphs.
*
* Returns `{ before, shimmer, after }` strings. Both renderers (chalk in
* bridgeUI.ts and React/Ink in bridge.tsx) apply their own coloring to
* these segments.
*/
export function computeShimmerSegments(
text: string,
glimmerIndex: number,
): { before: string; shimmer: string; after: string } {
const messageWidth = stringWidth(text)
const shimmerStart = glimmerIndex - 1
const shimmerEnd = glimmerIndex + 1
// When shimmer is offscreen, return all text as "before"
if (shimmerStart >= messageWidth || shimmerEnd < 0) {
return { before: text, shimmer: '', after: '' }
}
// Split into at most 3 segments by visual column position
const clampedStart = Math.max(0, shimmerStart)
let colPos = 0
let before = ''
let shimmer = ''
let after = ''
for (const { segment } of getGraphemeSegmenter().segment(text)) {
const segWidth = stringWidth(segment)
if (colPos + segWidth <= clampedStart) {
before += segment
} else if (colPos > shimmerEnd) {
after += segment
} else {
shimmer += segment
}
colPos += segWidth
}
return { before, shimmer, after }
}
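A simplified standalone version of the three-way split for ASCII-only text, where every character is one column wide (the real function above uses stringWidth and a grapheme segmenter to handle emoji and CJK, and has an offscreen early-return that is elided here):

```typescript
// ASCII-only sketch: one char = one column, so the column walk degenerates
// to an index comparison against the 3-column shimmer window.
function splitForShimmer(
  text: string,
  glimmerIndex: number,
): { before: string; shimmer: string; after: string } {
  const start = Math.max(0, glimmerIndex - 1)
  const end = glimmerIndex + 1
  let before = ''
  let shimmer = ''
  let after = ''
  for (let col = 0; col < text.length; col++) {
    const ch = text[col]!
    if (col < start) before += ch
    else if (col > end) after += ch
    else shimmer += ch
  }
  return { before, shimmer, after }
}

console.log(splitForShimmer('Connecting', 4))
// { before: 'Con', shimmer: 'nec', after: 'ting' }
```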
/** Computed bridge status label and color from connection state. */
export type BridgeStatusInfo = {
label:
| 'Remote Control failed'
| 'Remote Control reconnecting'
| 'Remote Control active'
| 'Remote Control connecting\u2026'
color: 'error' | 'warning' | 'success'
}
/** Derive a status label and color from the bridge connection state. */
export function getBridgeStatus({
error,
connected,
sessionActive,
reconnecting,
}: {
error: string | undefined
connected: boolean
sessionActive: boolean
reconnecting: boolean
}): BridgeStatusInfo {
if (error) return { label: 'Remote Control failed', color: 'error' }
if (reconnecting)
return { label: 'Remote Control reconnecting', color: 'warning' }
if (sessionActive || connected)
return { label: 'Remote Control active', color: 'success' }
return { label: 'Remote Control connecting\u2026', color: 'warning' }
}
/** Footer text shown when bridge is idle (Ready state). */
export function buildIdleFooterText(url: string): string {
return `Code everywhere with the Claude app or ${url}`
}
/** Footer text shown when a session is active (Connected state). */
export function buildActiveFooterText(url: string): string {
return `Continue coding in the Claude app or ${url}`
}
/** Footer text shown when the bridge has failed. */
export const FAILED_FOOTER_TEXT = 'Something went wrong, please try again'
/**
* Wrap text in an OSC 8 terminal hyperlink. Zero visual width for layout purposes.
* strip-ansi (used by stringWidth) correctly strips these sequences, so
* countVisualLines in bridgeUI.ts remains accurate.
*/
export function wrapWithOsc8Link(text: string, url: string): string {
return `\x1b]8;;${url}\x07${text}\x1b]8;;\x07`
}
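The "zero visual width" claim holds because width measurement strips the escape runs before counting columns; a quick standalone check (the strip regex below is illustrative only — the real code relies on strip-ansi):

```typescript
// OSC 8 hyperlink layout: ESC ] 8 ; ; URL BEL text ESC ] 8 ; ; BEL.
function osc8(text: string, url: string): string {
  return `\x1b]8;;${url}\x07${text}\x1b]8;;\x07`
}

// Illustrative stripper: removing both OSC 8 runs leaves only the visible
// text, which is why row/width accounting stays accurate.
function stripOsc8(s: string): string {
  return s.replace(/\x1b\]8;;[^\x07]*\x07/g, '')
}

console.log(stripOsc8(osc8('open session', 'https://example.com/s/1')))
// 'open session'
```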

// src/bridge/bridgeUI.ts
import chalk from 'chalk'
import { toString as qrToString } from 'qrcode'
import {
BRIDGE_FAILED_INDICATOR,
BRIDGE_READY_INDICATOR,
BRIDGE_SPINNER_FRAMES,
} from '../constants/figures.js'
import { stringWidth } from '../ink/stringWidth.js'
import { logForDebugging } from '../utils/debug.js'
import {
buildActiveFooterText,
buildBridgeConnectUrl,
buildBridgeSessionUrl,
buildIdleFooterText,
FAILED_FOOTER_TEXT,
formatDuration,
type StatusState,
TOOL_DISPLAY_EXPIRY_MS,
timestamp,
truncatePrompt,
wrapWithOsc8Link,
} from './bridgeStatusUtil.js'
import type {
BridgeConfig,
BridgeLogger,
SessionActivity,
SpawnMode,
} from './types.js'
const QR_OPTIONS = {
type: 'utf8' as const,
errorCorrectionLevel: 'L' as const,
small: true,
}
/** Generate a QR code and return its lines. */
async function generateQr(url: string): Promise<string[]> {
const qr = await qrToString(url, QR_OPTIONS)
return qr.split('\n').filter((line: string) => line.length > 0)
}
export function createBridgeLogger(options: {
verbose: boolean
write?: (s: string) => void
}): BridgeLogger {
const write = options.write ?? ((s: string) => process.stdout.write(s))
const verbose = options.verbose
// Track how many status lines are currently displayed at the bottom
let statusLineCount = 0
// Status state machine
let currentState: StatusState = 'idle'
let currentStateText = 'Ready'
let repoName = ''
let branch = ''
let debugLogPath = ''
// Connect URL (built in printBanner with correct base for staging/prod)
let connectUrl = ''
let cachedIngressUrl = ''
let cachedEnvironmentId = ''
let activeSessionUrl: string | null = null
// QR code lines for the current URL
let qrLines: string[] = []
let qrVisible = false
// Tool activity for the second status line
let lastToolSummary: string | null = null
let lastToolTime = 0
// Session count indicator (shown when multi-session mode is enabled)
let sessionActive = 0
let sessionMax = 1
// Spawn mode shown in the session-count line + gates the `w` hint
let spawnModeDisplay: 'same-dir' | 'worktree' | null = null
let spawnMode: SpawnMode = 'single-session'
// Per-session display info for the multi-session bullet list (keyed by compat sessionId)
const sessionDisplayInfo = new Map<
string,
{ title?: string; url: string; activity?: SessionActivity }
>()
// Connecting spinner state
let connectingTimer: ReturnType<typeof setInterval> | null = null
let connectingTick = 0
/**
* Count how many visual terminal rows a string occupies, accounting for
* line wrapping. Each `\n` is one row, and content wider than the terminal
* wraps to additional rows.
*/
function countVisualLines(text: string): number {
// eslint-disable-next-line custom-rules/prefer-use-terminal-size
const cols = process.stdout.columns || 80 // non-React CLI context
let count = 0
// Split on newlines to get logical lines
for (const logical of text.split('\n')) {
if (logical.length === 0) {
// Empty segment between consecutive \n — counts as 1 row
count++
continue
}
const width = stringWidth(logical)
count += Math.max(1, Math.ceil(width / cols))
}
// The trailing \n in "line\n" produces an empty last element — don't count it
// because the cursor sits at the start of the next line, not a new visual row.
if (text.endsWith('\n')) {
count--
}
return count
}
/** Write a status line and track its visual line count. */
function writeStatus(text: string): void {
write(text)
statusLineCount += countVisualLines(text)
}
/** Clear any currently displayed status lines. */
function clearStatusLines(): void {
if (statusLineCount <= 0) return
logForDebugging(`[bridge:ui] clearStatusLines count=${statusLineCount}`)
// Move cursor up to the start of the status block, then erase everything below
write(`\x1b[${statusLineCount}A`) // cursor up N lines
write('\x1b[J') // erase from cursor to end of screen
statusLineCount = 0
}
/** Print a permanent log line, clearing status first and restoring after. */
function printLog(line: string): void {
clearStatusLines()
write(line)
}
/** Regenerate the QR code with the given URL. */
function regenerateQr(url: string): void {
generateQr(url)
.then(lines => {
qrLines = lines
renderStatusLine()
})
.catch(e => {
logForDebugging(`QR code generation failed: ${e}`, { level: 'error' })
})
}
/** Render the connecting spinner line (shown before first updateIdleStatus). */
function renderConnectingLine(): void {
clearStatusLines()
const frame =
BRIDGE_SPINNER_FRAMES[connectingTick % BRIDGE_SPINNER_FRAMES.length]!
let suffix = ''
if (repoName) {
suffix += chalk.dim(' \u00b7 ') + chalk.dim(repoName)
}
if (branch) {
suffix += chalk.dim(' \u00b7 ') + chalk.dim(branch)
}
writeStatus(
`${chalk.yellow(frame)} ${chalk.yellow('Connecting')}${suffix}\n`,
)
}
/** Start the connecting spinner. Stopped by first updateIdleStatus(). */
function startConnecting(): void {
stopConnecting()
renderConnectingLine()
connectingTimer = setInterval(() => {
connectingTick++
renderConnectingLine()
}, 150)
}
/** Stop the connecting spinner. */
function stopConnecting(): void {
if (connectingTimer) {
clearInterval(connectingTimer)
connectingTimer = null
}
}
/** Render and write the current status lines based on state. */
function renderStatusLine(): void {
if (currentState === 'reconnecting' || currentState === 'failed') {
// These states are handled separately (updateReconnectingStatus /
// updateFailedStatus). Return before clearing so callers like toggleQr
// and setSpawnModeDisplay don't blank the display during these states.
return
}
clearStatusLines()
const isIdle = currentState === 'idle'
// QR code above the status line
if (qrVisible) {
for (const line of qrLines) {
writeStatus(`${chalk.dim(line)}\n`)
}
}
// Determine indicator and colors based on state
const indicator = BRIDGE_READY_INDICATOR
const indicatorColor = isIdle ? chalk.green : chalk.cyan
const baseColor = isIdle ? chalk.green : chalk.cyan
const stateText = baseColor(currentStateText)
// Build the suffix with repo and branch
let suffix = ''
if (repoName) {
suffix += chalk.dim(' \u00b7 ') + chalk.dim(repoName)
}
// In worktree mode each session gets its own branch, so showing the
// bridge's branch would be misleading.
if (branch && spawnMode !== 'worktree') {
suffix += chalk.dim(' \u00b7 ') + chalk.dim(branch)
}
if (process.env.USER_TYPE === 'ant' && debugLogPath) {
writeStatus(
`${chalk.yellow('[ANT-ONLY] Logs:')} ${chalk.dim(debugLogPath)}\n`,
)
}
writeStatus(`${indicatorColor(indicator)} ${stateText}${suffix}\n`)
// Session count and per-session list (multi-session mode only)
if (sessionMax > 1) {
const modeHint =
spawnMode === 'worktree'
? 'New sessions will be created in an isolated worktree'
: 'New sessions will be created in the current directory'
writeStatus(
` ${chalk.dim(`Capacity: ${sessionActive}/${sessionMax} \u00b7 ${modeHint}`)}\n`,
)
for (const [, info] of sessionDisplayInfo) {
const titleText = info.title
? truncatePrompt(info.title, 35)
: chalk.dim('Attached')
const titleLinked = wrapWithOsc8Link(titleText, info.url)
const act = info.activity
const showAct = act && act.type !== 'result' && act.type !== 'error'
const actText = showAct
? chalk.dim(` ${truncatePrompt(act.summary, 40)}`)
: ''
writeStatus(` ${titleLinked}${actText}\n`)
}
}
// Mode line for spawn modes with a single slot (or true single-session mode)
if (sessionMax === 1) {
const modeText =
spawnMode === 'single-session'
? 'Single session \u00b7 exits when complete'
: spawnMode === 'worktree'
? `Capacity: ${sessionActive}/1 \u00b7 New sessions will be created in an isolated worktree`
: `Capacity: ${sessionActive}/1 \u00b7 New sessions will be created in the current directory`
writeStatus(` ${chalk.dim(modeText)}\n`)
}
// Tool activity line for single-session mode
if (
sessionMax === 1 &&
!isIdle &&
lastToolSummary &&
Date.now() - lastToolTime < TOOL_DISPLAY_EXPIRY_MS
) {
writeStatus(` ${chalk.dim(truncatePrompt(lastToolSummary, 60))}\n`)
}
// Blank line separator before footer
const url = activeSessionUrl ?? connectUrl
if (url) {
writeStatus('\n')
const footerText = isIdle
? buildIdleFooterText(url)
: buildActiveFooterText(url)
const qrHint = qrVisible
? chalk.dim.italic('space to hide QR code')
: chalk.dim.italic('space to show QR code')
const toggleHint = spawnModeDisplay
? chalk.dim.italic(' \u00b7 w to toggle spawn mode')
: ''
writeStatus(`${chalk.dim(footerText)}\n`)
writeStatus(`${qrHint}${toggleHint}\n`)
}
}
return {
printBanner(config: BridgeConfig, environmentId: string): void {
cachedIngressUrl = config.sessionIngressUrl
cachedEnvironmentId = environmentId
connectUrl = buildBridgeConnectUrl(environmentId, cachedIngressUrl)
regenerateQr(connectUrl)
if (verbose) {
write(chalk.dim(`Remote Control`) + ` v${MACRO.VERSION}\n`)
}
if (verbose) {
if (config.spawnMode !== 'single-session') {
write(chalk.dim(`Spawn mode: `) + `${config.spawnMode}\n`)
write(
chalk.dim(`Max concurrent sessions: `) + `${config.maxSessions}\n`,
)
}
write(chalk.dim(`Environment ID: `) + `${environmentId}\n`)
}
if (config.sandbox) {
write(chalk.dim(`Sandbox: `) + `${chalk.green('Enabled')}\n`)
}
write('\n')
// Start connecting spinner — first updateIdleStatus() will stop it
startConnecting()
},
logSessionStart(sessionId: string, prompt: string): void {
if (verbose) {
const short = truncatePrompt(prompt, 80)
printLog(
chalk.dim(`[${timestamp()}]`) +
` Session started: ${chalk.white(`"${short}"`)} (${chalk.dim(sessionId)})\n`,
)
}
},
logSessionComplete(sessionId: string, durationMs: number): void {
printLog(
chalk.dim(`[${timestamp()}]`) +
` Session ${chalk.green('completed')} (${formatDuration(durationMs)}) ${chalk.dim(sessionId)}\n`,
)
},
logSessionFailed(sessionId: string, error: string): void {
printLog(
chalk.dim(`[${timestamp()}]`) +
` Session ${chalk.red('failed')}: ${error} ${chalk.dim(sessionId)}\n`,
)
},
logStatus(message: string): void {
printLog(chalk.dim(`[${timestamp()}]`) + ` ${message}\n`)
},
logVerbose(message: string): void {
if (verbose) {
printLog(chalk.dim(`[${timestamp()}] ${message}`) + '\n')
}
},
logError(message: string): void {
printLog(chalk.red(`[${timestamp()}] Error: ${message}`) + '\n')
},
logReconnected(disconnectedMs: number): void {
printLog(
chalk.dim(`[${timestamp()}]`) +
` ${chalk.green('Reconnected')} after ${formatDuration(disconnectedMs)}\n`,
)
},
setRepoInfo(repo: string, branchName: string): void {
repoName = repo
branch = branchName
},
setDebugLogPath(path: string): void {
debugLogPath = path
},
updateIdleStatus(): void {
stopConnecting()
currentState = 'idle'
currentStateText = 'Ready'
lastToolSummary = null
lastToolTime = 0
activeSessionUrl = null
regenerateQr(connectUrl)
renderStatusLine()
},
setAttached(sessionId: string): void {
stopConnecting()
currentState = 'attached'
currentStateText = 'Connected'
lastToolSummary = null
lastToolTime = 0
// Multi-session: keep footer/QR on the environment connect URL so users
// can spawn more sessions. Per-session links are in the bullet list.
if (sessionMax <= 1) {
activeSessionUrl = buildBridgeSessionUrl(
sessionId,
cachedEnvironmentId,
cachedIngressUrl,
)
regenerateQr(activeSessionUrl)
}
renderStatusLine()
},
updateReconnectingStatus(delayStr: string, elapsedStr: string): void {
stopConnecting()
clearStatusLines()
currentState = 'reconnecting'
// QR code above the status line
if (qrVisible) {
for (const line of qrLines) {
writeStatus(`${chalk.dim(line)}\n`)
}
}
const frame =
BRIDGE_SPINNER_FRAMES[connectingTick % BRIDGE_SPINNER_FRAMES.length]!
connectingTick++
writeStatus(
`${chalk.yellow(frame)} ${chalk.yellow('Reconnecting')} ${chalk.dim('\u00b7')} ${chalk.dim(`retrying in ${delayStr}`)} ${chalk.dim('\u00b7')} ${chalk.dim(`disconnected ${elapsedStr}`)}\n`,
)
},
updateFailedStatus(error: string): void {
stopConnecting()
clearStatusLines()
currentState = 'failed'
let suffix = ''
if (repoName) {
suffix += chalk.dim(' \u00b7 ') + chalk.dim(repoName)
}
if (branch) {
suffix += chalk.dim(' \u00b7 ') + chalk.dim(branch)
}
writeStatus(
`${chalk.red(BRIDGE_FAILED_INDICATOR)} ${chalk.red('Remote Control Failed')}${suffix}\n`,
)
writeStatus(`${chalk.dim(FAILED_FOOTER_TEXT)}\n`)
if (error) {
writeStatus(`${chalk.red(error)}\n`)
}
},
updateSessionStatus(
_sessionId: string,
_elapsed: string,
activity: SessionActivity,
_trail: string[],
): void {
// Cache tool activity for the second status line
if (activity.type === 'tool_start') {
lastToolSummary = activity.summary
lastToolTime = Date.now()
}
renderStatusLine()
},
clearStatus(): void {
stopConnecting()
clearStatusLines()
},
toggleQr(): void {
qrVisible = !qrVisible
renderStatusLine()
},
updateSessionCount(active: number, max: number, mode: SpawnMode): void {
if (sessionActive === active && sessionMax === max && spawnMode === mode)
return
sessionActive = active
sessionMax = max
spawnMode = mode
// Don't re-render here — the status ticker calls renderStatusLine
// on its own cadence, and the next tick will pick up the new values.
},
setSpawnModeDisplay(mode: 'same-dir' | 'worktree' | null): void {
if (spawnModeDisplay === mode) return
spawnModeDisplay = mode
// Also sync the #21118-added spawnMode so the next render shows correct
// mode hint + branch visibility. Don't render here — matches
// updateSessionCount: called before printBanner (initial setup) and
// again from the `w` handler (which follows with refreshDisplay).
if (mode) spawnMode = mode
},
addSession(sessionId: string, url: string): void {
sessionDisplayInfo.set(sessionId, { url })
},
updateSessionActivity(sessionId: string, activity: SessionActivity): void {
const info = sessionDisplayInfo.get(sessionId)
if (!info) return
info.activity = activity
},
setSessionTitle(sessionId: string, title: string): void {
const info = sessionDisplayInfo.get(sessionId)
if (!info) return
info.title = title
// Guard against reconnecting/failed — renderStatusLine clears then returns
// early for those states, which would erase the spinner/error.
if (currentState === 'reconnecting' || currentState === 'failed') return
if (sessionMax === 1) {
// Single-session: show title in the main status line too.
currentState = 'titled'
currentStateText = truncatePrompt(title, 40)
}
renderStatusLine()
},
removeSession(sessionId: string): void {
sessionDisplayInfo.delete(sessionId)
},
refreshDisplay(): void {
// Skip during reconnecting/failed — renderStatusLine clears then returns
// early for those states, which would erase the spinner/error.
if (currentState === 'reconnecting' || currentState === 'failed') return
renderStatusLine()
},
}
}

/**
* Shared capacity-wake primitive for bridge poll loops.
*
* Both replBridge.ts and bridgeMain.ts need to sleep while "at capacity"
* but wake early when either (a) the outer loop signal aborts (shutdown),
* or (b) capacity frees up (session done / transport lost). This module
* encapsulates the mutable wake-controller + two-signal merger that both
* poll loops previously duplicated byte-for-byte.
*/
export type CapacitySignal = { signal: AbortSignal; cleanup: () => void }
export type CapacityWake = {
/**
* Create a signal that aborts when either the outer loop signal or the
* capacity-wake controller fires. Returns the merged signal and a cleanup
* function that removes listeners when the sleep resolves normally
* (without abort).
*/
signal(): CapacitySignal
/**
* Abort the current at-capacity sleep and arm a fresh controller so the
* poll loop immediately re-checks for new work.
*/
wake(): void
}
export function createCapacityWake(outerSignal: AbortSignal): CapacityWake {
let wakeController = new AbortController()
function wake(): void {
wakeController.abort()
wakeController = new AbortController()
}
function signal(): CapacitySignal {
const merged = new AbortController()
const abort = (): void => merged.abort()
if (outerSignal.aborted || wakeController.signal.aborted) {
merged.abort()
return { signal: merged.signal, cleanup: () => {} }
}
outerSignal.addEventListener('abort', abort, { once: true })
const capSig = wakeController.signal
capSig.addEventListener('abort', abort, { once: true })
return {
signal: merged.signal,
cleanup: () => {
outerSignal.removeEventListener('abort', abort)
capSig.removeEventListener('abort', abort)
},
}
}
return { signal, wake }
}
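The two-signal merge at the heart of createCapacityWake can be demonstrated in isolation — either input signal aborting aborts the merged one:

```typescript
// Minimal standalone version of the merge in signal() above.
function mergeSignals(
  a: AbortSignal,
  b: AbortSignal,
): { signal: AbortSignal; cleanup: () => void } {
  const merged = new AbortController()
  const abort = (): void => merged.abort()
  if (a.aborted || b.aborted) {
    merged.abort()
    return { signal: merged.signal, cleanup: () => {} }
  }
  a.addEventListener('abort', abort, { once: true })
  b.addEventListener('abort', abort, { once: true })
  return {
    signal: merged.signal,
    cleanup: () => {
      a.removeEventListener('abort', abort)
      b.removeEventListener('abort', abort)
    },
  }
}

const outer = new AbortController()
const wake = new AbortController()
const { signal } = mergeSignals(outer.signal, wake.signal)
wake.abort()
console.log(signal.aborted) // true — a capacity wake ends the merged sleep
```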

/**
* Thin HTTP wrappers for the CCR v2 code-session API.
*
* Separate file from remoteBridgeCore.ts so the SDK /bridge subpath can
* export createCodeSession + fetchRemoteCredentials without bundling the
* heavy CLI tree (analytics, transport, etc.). Callers supply explicit
* accessToken + baseUrl — no implicit auth or config reads.
*/
import axios from 'axios'
import { logForDebugging } from '../utils/debug.js'
import { errorMessage } from '../utils/errors.js'
import { jsonStringify } from '../utils/slowOperations.js'
import { extractErrorDetail } from './debugUtils.js'
const ANTHROPIC_VERSION = '2023-06-01'
function oauthHeaders(accessToken: string): Record<string, string> {
return {
Authorization: `Bearer ${accessToken}`,
'Content-Type': 'application/json',
'anthropic-version': ANTHROPIC_VERSION,
}
}
export async function createCodeSession(
baseUrl: string,
accessToken: string,
title: string,
timeoutMs: number,
tags?: string[],
): Promise<string | null> {
const url = `${baseUrl}/v1/code/sessions`
let response
try {
response = await axios.post(
url,
// bridge: {} is the positive signal for the oneof runner — omitting it
// (or sending environment_id: "") now 400s. BridgeRunner is an empty
// message today; it's a placeholder for future bridge-specific options.
{ title, bridge: {}, ...(tags?.length ? { tags } : {}) },
{
headers: oauthHeaders(accessToken),
timeout: timeoutMs,
validateStatus: s => s < 500,
},
)
} catch (err: unknown) {
logForDebugging(
`[code-session] Session create request failed: ${errorMessage(err)}`,
)
return null
}
if (response.status !== 200 && response.status !== 201) {
const detail = extractErrorDetail(response.data)
logForDebugging(
`[code-session] Session create failed ${response.status}${detail ? `: ${detail}` : ''}`,
)
return null
}
const data: unknown = response.data
if (
!data ||
typeof data !== 'object' ||
!('session' in data) ||
!data.session ||
typeof data.session !== 'object' ||
!('id' in data.session) ||
typeof data.session.id !== 'string' ||
!data.session.id.startsWith('cse_')
) {
logForDebugging(
`[code-session] No session.id (cse_*) in response: ${jsonStringify(data).slice(0, 200)}`,
)
return null
}
return data.session.id
}
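The field-by-field narrowing of the response body above can be factored into a standalone guard; extractSessionId is a hypothetical helper name (not in the source) sketching the same checks:

```typescript
// Hypothetical guard mirroring the inline narrowing: nested object checks,
// then a prefix check on the id, with null for any shape mismatch.
function extractSessionId(data: unknown): string | null {
  if (!data || typeof data !== 'object' || !('session' in data)) return null
  const session = (data as { session: unknown }).session
  if (!session || typeof session !== 'object' || !('id' in session)) return null
  const id = (session as { id: unknown }).id
  return typeof id === 'string' && id.startsWith('cse_') ? id : null
}

console.log(extractSessionId({ session: { id: 'cse_abc' } })) // 'cse_abc'
console.log(extractSessionId({ session: { id: 'sess_abc' } })) // null — wrong prefix
```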
/**
* Credentials from POST /bridge. JWT is opaque — do not decode.
* Each /bridge call bumps worker_epoch server-side (it IS the register).
*/
export type RemoteCredentials = {
worker_jwt: string
api_base_url: string
expires_in: number
worker_epoch: number
}
export async function fetchRemoteCredentials(
sessionId: string,
baseUrl: string,
accessToken: string,
timeoutMs: number,
trustedDeviceToken?: string,
): Promise<RemoteCredentials | null> {
const url = `${baseUrl}/v1/code/sessions/${sessionId}/bridge`
const headers = oauthHeaders(accessToken)
if (trustedDeviceToken) {
headers['X-Trusted-Device-Token'] = trustedDeviceToken
}
let response
try {
response = await axios.post(
url,
{},
{
headers,
timeout: timeoutMs,
validateStatus: s => s < 500,
},
)
} catch (err: unknown) {
logForDebugging(
`[code-session] /bridge request failed: ${errorMessage(err)}`,
)
return null
}
if (response.status !== 200) {
const detail = extractErrorDetail(response.data)
logForDebugging(
`[code-session] /bridge failed ${response.status}${detail ? `: ${detail}` : ''}`,
)
return null
}
const data: unknown = response.data
if (
data === null ||
typeof data !== 'object' ||
!('worker_jwt' in data) ||
typeof data.worker_jwt !== 'string' ||
!('expires_in' in data) ||
typeof data.expires_in !== 'number' ||
!('api_base_url' in data) ||
typeof data.api_base_url !== 'string' ||
!('worker_epoch' in data)
) {
logForDebugging(
`[code-session] /bridge response malformed (need worker_jwt, expires_in, api_base_url, worker_epoch): ${jsonStringify(data).slice(0, 200)}`,
)
return null
}
// protojson serializes int64 as a string to avoid JS precision loss;
// Go may also return a number depending on encoder settings.
const rawEpoch = data.worker_epoch
const epoch = typeof rawEpoch === 'string' ? Number(rawEpoch) : rawEpoch
if (
typeof epoch !== 'number' ||
!Number.isFinite(epoch) ||
!Number.isSafeInteger(epoch)
) {
logForDebugging(
`[code-session] /bridge worker_epoch invalid: ${jsonStringify(rawEpoch)}`,
)
return null
}
return {
worker_jwt: data.worker_jwt,
api_base_url: data.api_base_url,
expires_in: data.expires_in,
worker_epoch: epoch,
}
}
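The epoch coercion above accepts protojson's string-encoded int64 as well as a plain number; a standalone sketch of the same normalization (normalizeEpoch is a hypothetical name for illustration):

```typescript
// String-or-number int64 normalization: coerce, then reject NaN, Infinity,
// non-integers, and anything outside the safe-integer range (|n| > 2^53 - 1).
function normalizeEpoch(raw: unknown): number | null {
  const epoch = typeof raw === 'string' ? Number(raw) : raw
  if (
    typeof epoch !== 'number' ||
    !Number.isFinite(epoch) ||
    !Number.isSafeInteger(epoch)
  ) {
    return null
  }
  return epoch
}

console.log(normalizeEpoch('42')) // 42
console.log(normalizeEpoch(7))    // 7
console.log(normalizeEpoch('9007199254740993')) // null — above 2^53 - 1
```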

// src/bridge/createSession.ts
import type { SDKMessage } from '../entrypoints/agentSdkTypes.js'
import { logForDebugging } from '../utils/debug.js'
import { errorMessage } from '../utils/errors.js'
import { extractErrorDetail } from './debugUtils.js'
import { toCompatSessionId } from './sessionIdCompat.js'
type GitSource = {
type: 'git_repository'
url: string
revision?: string
}
type GitOutcome = {
type: 'git_repository'
git_info: { type: 'github'; repo: string; branches: string[] }
}
// Events must be wrapped in { type: 'event', data: <sdk_message> } for the
// POST /v1/sessions endpoint (discriminated union format).
type SessionEvent = {
type: 'event'
data: SDKMessage
}
/**
* Create a session on a bridge environment via POST /v1/sessions.
*
* Used by both `claude remote-control` (empty session so the user has somewhere to
* type immediately) and `/remote-control` (session pre-populated with conversation
* history).
*
* Returns the session ID on success, or null if creation fails (non-fatal).
*/
export async function createBridgeSession({
environmentId,
title,
events,
gitRepoUrl,
branch,
signal,
baseUrl: baseUrlOverride,
getAccessToken,
permissionMode,
}: {
environmentId: string
title?: string
events: SessionEvent[]
gitRepoUrl: string | null
branch: string
signal: AbortSignal
baseUrl?: string
getAccessToken?: () => string | undefined
permissionMode?: string
}): Promise<string | null> {
const { getClaudeAIOAuthTokens } = await import('../utils/auth.js')
const { getOrganizationUUID } = await import('../services/oauth/client.js')
const { getOauthConfig } = await import('../constants/oauth.js')
const { getOAuthHeaders } = await import('../utils/teleport/api.js')
const { parseGitHubRepository } = await import('../utils/detectRepository.js')
const { getDefaultBranch } = await import('../utils/git.js')
const { getMainLoopModel } = await import('../utils/model/model.js')
const { default: axios } = await import('axios')
const accessToken =
getAccessToken?.() ?? getClaudeAIOAuthTokens()?.accessToken
if (!accessToken) {
logForDebugging('[bridge] No access token for session creation')
return null
}
const orgUUID = await getOrganizationUUID()
if (!orgUUID) {
logForDebugging('[bridge] No org UUID for session creation')
return null
}
// Build git source and outcome context
let gitSource: GitSource | null = null
let gitOutcome: GitOutcome | null = null
if (gitRepoUrl) {
const { parseGitRemote } = await import('../utils/detectRepository.js')
const parsed = parseGitRemote(gitRepoUrl)
if (parsed) {
const { host, owner, name } = parsed
const revision = branch || (await getDefaultBranch()) || undefined
gitSource = {
type: 'git_repository',
url: `https://${host}/${owner}/${name}`,
revision,
}
gitOutcome = {
type: 'git_repository',
git_info: {
type: 'github',
repo: `${owner}/${name}`,
branches: [`claude/${branch || 'task'}`],
},
}
} else {
// Fallback: try parseGitHubRepository for owner/repo format
const ownerRepo = parseGitHubRepository(gitRepoUrl)
if (ownerRepo) {
const [owner, name] = ownerRepo.split('/')
if (owner && name) {
const revision = branch || (await getDefaultBranch()) || undefined
gitSource = {
type: 'git_repository',
url: `https://github.com/${owner}/${name}`,
revision,
}
gitOutcome = {
type: 'git_repository',
git_info: {
type: 'github',
repo: `${owner}/${name}`,
branches: [`claude/${branch || 'task'}`],
},
}
}
}
}
}
const requestBody = {
...(title !== undefined && { title }),
events,
session_context: {
sources: gitSource ? [gitSource] : [],
outcomes: gitOutcome ? [gitOutcome] : [],
model: getMainLoopModel(),
},
environment_id: environmentId,
source: 'remote-control',
...(permissionMode && { permission_mode: permissionMode }),
}
const headers = {
...getOAuthHeaders(accessToken),
'anthropic-beta': 'ccr-byoc-2025-07-29',
'x-organization-uuid': orgUUID,
}
const url = `${baseUrlOverride ?? getOauthConfig().BASE_API_URL}/v1/sessions`
let response
try {
response = await axios.post(url, requestBody, {
headers,
signal,
validateStatus: s => s < 500,
})
} catch (err: unknown) {
logForDebugging(
`[bridge] Session creation request failed: ${errorMessage(err)}`,
)
return null
}
const isSuccess = response.status === 200 || response.status === 201
if (!isSuccess) {
const detail = extractErrorDetail(response.data)
logForDebugging(
`[bridge] Session creation failed with status ${response.status}${detail ? `: ${detail}` : ''}`,
)
return null
}
const sessionData: unknown = response.data
if (
!sessionData ||
typeof sessionData !== 'object' ||
!('id' in sessionData) ||
typeof sessionData.id !== 'string'
) {
logForDebugging('[bridge] No session ID in response')
return null
}
return sessionData.id
}
/**
* Fetch a bridge session via GET /v1/sessions/{id}.
*
* Returns the session's environment_id (for `--session-id` resume) and title.
* Uses the same org-scoped headers as create/archive — the environments-level
* client in bridgeApi.ts uses a different beta header and no org UUID, which
* makes the Sessions API return 404.
*/
export async function getBridgeSession(
sessionId: string,
opts?: { baseUrl?: string; getAccessToken?: () => string | undefined },
): Promise<{ environment_id?: string; title?: string } | null> {
const { getClaudeAIOAuthTokens } = await import('../utils/auth.js')
const { getOrganizationUUID } = await import('../services/oauth/client.js')
const { getOauthConfig } = await import('../constants/oauth.js')
const { getOAuthHeaders } = await import('../utils/teleport/api.js')
const { default: axios } = await import('axios')
const accessToken =
opts?.getAccessToken?.() ?? getClaudeAIOAuthTokens()?.accessToken
if (!accessToken) {
logForDebugging('[bridge] No access token for session fetch')
return null
}
const orgUUID = await getOrganizationUUID()
if (!orgUUID) {
logForDebugging('[bridge] No org UUID for session fetch')
return null
}
const headers = {
...getOAuthHeaders(accessToken),
'anthropic-beta': 'ccr-byoc-2025-07-29',
'x-organization-uuid': orgUUID,
}
const url = `${opts?.baseUrl ?? getOauthConfig().BASE_API_URL}/v1/sessions/${sessionId}`
logForDebugging(`[bridge] Fetching session ${sessionId}`)
let response
try {
response = await axios.get<{ environment_id?: string; title?: string }>(
url,
{ headers, timeout: 10_000, validateStatus: s => s < 500 },
)
} catch (err: unknown) {
logForDebugging(
`[bridge] Session fetch request failed: ${errorMessage(err)}`,
)
return null
}
if (response.status !== 200) {
const detail = extractErrorDetail(response.data)
logForDebugging(
`[bridge] Session fetch failed with status ${response.status}${detail ? `: ${detail}` : ''}`,
)
return null
}
return response.data
}
/**
* Archive a bridge session via POST /v1/sessions/{id}/archive.
*
* The CCR server never auto-archives sessions — archival is always an
* explicit client action. Both `claude remote-control` (standalone bridge) and the
* always-on `/remote-control` REPL bridge call this during shutdown to archive any
* sessions that are still alive.
*
* The archive endpoint accepts sessions in any status (running, idle,
* requires_action, pending) and returns 409 if already archived, making
* it safe to call even if the server-side runner already archived the
* session.
*
* Callers must handle errors — this function has no try/catch; 5xx,
* timeouts, and network errors throw. Archival is best-effort during
* cleanup; call sites wrap with .catch().
*/
export async function archiveBridgeSession(
sessionId: string,
opts?: {
baseUrl?: string
getAccessToken?: () => string | undefined
timeoutMs?: number
},
): Promise<void> {
const { getClaudeAIOAuthTokens } = await import('../utils/auth.js')
const { getOrganizationUUID } = await import('../services/oauth/client.js')
const { getOauthConfig } = await import('../constants/oauth.js')
const { getOAuthHeaders } = await import('../utils/teleport/api.js')
const { default: axios } = await import('axios')
const accessToken =
opts?.getAccessToken?.() ?? getClaudeAIOAuthTokens()?.accessToken
if (!accessToken) {
logForDebugging('[bridge] No access token for session archive')
return
}
const orgUUID = await getOrganizationUUID()
if (!orgUUID) {
logForDebugging('[bridge] No org UUID for session archive')
return
}
const headers = {
...getOAuthHeaders(accessToken),
'anthropic-beta': 'ccr-byoc-2025-07-29',
'x-organization-uuid': orgUUID,
}
const url = `${opts?.baseUrl ?? getOauthConfig().BASE_API_URL}/v1/sessions/${sessionId}/archive`
logForDebugging(`[bridge] Archiving session ${sessionId}`)
const response = await axios.post(
url,
{},
{
headers,
timeout: opts?.timeoutMs ?? 10_000,
validateStatus: s => s < 500,
},
)
if (response.status === 200) {
logForDebugging(`[bridge] Session ${sessionId} archived successfully`)
} else {
const detail = extractErrorDetail(response.data)
logForDebugging(
`[bridge] Session archive failed with status ${response.status}${detail ? `: ${detail}` : ''}`,
)
}
}
/**
* Update the title of a bridge session via PATCH /v1/sessions/{id}.
*
* Called when the user renames a session via /rename while a bridge
* connection is active, so the title stays in sync on claude.ai/code.
*
* Errors are swallowed — title sync is best-effort.
*/
export async function updateBridgeSessionTitle(
sessionId: string,
title: string,
opts?: { baseUrl?: string; getAccessToken?: () => string | undefined },
): Promise<void> {
const { getClaudeAIOAuthTokens } = await import('../utils/auth.js')
const { getOrganizationUUID } = await import('../services/oauth/client.js')
const { getOauthConfig } = await import('../constants/oauth.js')
const { getOAuthHeaders } = await import('../utils/teleport/api.js')
const { default: axios } = await import('axios')
const accessToken =
opts?.getAccessToken?.() ?? getClaudeAIOAuthTokens()?.accessToken
if (!accessToken) {
logForDebugging('[bridge] No access token for session title update')
return
}
const orgUUID = await getOrganizationUUID()
if (!orgUUID) {
logForDebugging('[bridge] No org UUID for session title update')
return
}
const headers = {
...getOAuthHeaders(accessToken),
'anthropic-beta': 'ccr-byoc-2025-07-29',
'x-organization-uuid': orgUUID,
}
// Compat gateway only accepts session_* (compat/convert.go:27). v2 callers
// pass raw cse_*; retag here so all callers can pass whatever they hold.
// Idempotent for v1's session_* and bridgeMain's pre-converted compatSessionId.
const compatId = toCompatSessionId(sessionId)
const url = `${opts?.baseUrl ?? getOauthConfig().BASE_API_URL}/v1/sessions/${compatId}`
logForDebugging(`[bridge] Updating session title: ${compatId} → ${title}`)
try {
const response = await axios.patch(
url,
{ title },
{ headers, timeout: 10_000, validateStatus: s => s < 500 },
)
if (response.status === 200) {
logForDebugging(`[bridge] Session title updated successfully`)
} else {
const detail = extractErrorDetail(response.data)
logForDebugging(
`[bridge] Session title update failed with status ${response.status}${detail ? `: ${detail}` : ''}`,
)
}
} catch (err: unknown) {
logForDebugging(
`[bridge] Session title update request failed: ${errorMessage(err)}`,
)
}
}

src/bridge/debugUtils.ts

@@ -0,0 +1,143 @@
import {
type AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
logEvent,
} from '../services/analytics/index.js'
import { logForDebugging } from '../utils/debug.js'
import { errorMessage } from '../utils/errors.js'
import { jsonStringify } from '../utils/slowOperations.js'
const DEBUG_MSG_LIMIT = 2000
const SECRET_FIELD_NAMES = [
'session_ingress_token',
'environment_secret',
'access_token',
'secret',
'token',
]
const SECRET_PATTERN = new RegExp(
`"(${SECRET_FIELD_NAMES.join('|')})"\\s*:\\s*"([^"]*)"`,
'g',
)
const REDACT_MIN_LENGTH = 16
export function redactSecrets(s: string): string {
return s.replace(SECRET_PATTERN, (_match, field: string, value: string) => {
if (value.length < REDACT_MIN_LENGTH) {
return `"${field}":"[REDACTED]"`
}
const redacted = `${value.slice(0, 8)}...${value.slice(-4)}`
return `"${field}":"${redacted}"`
})
}
/** Truncate a string for debug logging, collapsing newlines. */
export function debugTruncate(s: string): string {
const flat = s.replace(/\n/g, '\\n')
if (flat.length <= DEBUG_MSG_LIMIT) {
return flat
}
return flat.slice(0, DEBUG_MSG_LIMIT) + `... (${flat.length} chars)`
}
/** Truncate a JSON-serializable value for debug logging. */
export function debugBody(data: unknown): string {
const raw = typeof data === 'string' ? data : jsonStringify(data)
const s = redactSecrets(raw)
if (s.length <= DEBUG_MSG_LIMIT) {
return s
}
return s.slice(0, DEBUG_MSG_LIMIT) + `... (${s.length} chars)`
}
/**
* Extract a descriptive error message from an axios error (or any error).
* For HTTP errors, appends the server's response body message if available,
* since axios's default message only includes the status code.
*/
export function describeAxiosError(err: unknown): string {
const msg = errorMessage(err)
if (err && typeof err === 'object' && 'response' in err) {
const response = (err as { response?: { data?: unknown } }).response
if (response?.data && typeof response.data === 'object') {
const data = response.data as Record<string, unknown>
const detail =
typeof data.message === 'string'
? data.message
: typeof data.error === 'object' &&
data.error &&
'message' in data.error &&
typeof (data.error as Record<string, unknown>).message ===
'string'
? (data.error as Record<string, unknown>).message
: undefined
if (detail) {
return `${msg}: ${detail}`
}
}
}
return msg
}
/**
* Extract the HTTP status code from an axios error, if present.
* Returns undefined for non-HTTP errors (e.g. network failures).
*/
export function extractHttpStatus(err: unknown): number | undefined {
if (
err &&
typeof err === 'object' &&
'response' in err &&
(err as { response?: { status?: unknown } }).response &&
typeof (err as { response: { status?: unknown } }).response.status ===
'number'
) {
return (err as { response: { status: number } }).response.status
}
return undefined
}
/**
* Pull a human-readable message out of an API error response body.
* Checks `data.message` first, then `data.error.message`.
*/
export function extractErrorDetail(data: unknown): string | undefined {
if (!data || typeof data !== 'object') return undefined
if ('message' in data && typeof data.message === 'string') {
return data.message
}
if (
'error' in data &&
data.error !== null &&
typeof data.error === 'object' &&
'message' in data.error &&
typeof data.error.message === 'string'
) {
return data.error.message
}
return undefined
}
/**
* Log a bridge init skip — debug message + `tengu_bridge_repl_skipped`
* analytics event. Centralizes the event name and the AnalyticsMetadata
* cast so call sites don't each repeat the 5-line boilerplate.
*/
export function logBridgeSkip(
reason: string,
debugMsg?: string,
v2?: boolean,
): void {
if (debugMsg) {
logForDebugging(debugMsg)
}
logEvent('tengu_bridge_repl_skipped', {
reason:
reason as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
...(v2 !== undefined && { v2 }),
})
}


@@ -0,0 +1,167 @@
import { z } from 'zod/v4'
import { getFeatureValue_DEPRECATED } from '../services/analytics/growthbook.js'
import { lazySchema } from '../utils/lazySchema.js'
import { lt } from '../utils/semver.js'
import { isEnvLessBridgeEnabled } from './bridgeEnabled.js'
export type EnvLessBridgeConfig = {
// withRetry — init-phase backoff (createSession, POST /bridge, recovery /bridge)
init_retry_max_attempts: number
init_retry_base_delay_ms: number
init_retry_jitter_fraction: number
init_retry_max_delay_ms: number
// axios timeout for POST /sessions, POST /bridge, POST /archive
http_timeout_ms: number
// BoundedUUIDSet ring size (echo + re-delivery dedup)
uuid_dedup_buffer_size: number
// CCRClient worker heartbeat cadence. Server TTL is 60s — 20s gives 3× margin.
heartbeat_interval_ms: number
// ±fraction of interval — per-beat jitter to spread fleet load.
heartbeat_jitter_fraction: number
// Fire proactive JWT refresh this long before expires_in. Larger buffer =
// more frequent refresh (refresh cadence ≈ expires_in - buffer).
token_refresh_buffer_ms: number
// Archive POST timeout in teardown(). Distinct from http_timeout_ms because
// gracefulShutdown races runCleanupFunctions() against a 2s cap — a 10s
// axios timeout on a slow/stalled archive burns the whole budget on a
// request that forceExit will kill anyway.
teardown_archive_timeout_ms: number
// Deadline for onConnect after transport.connect(). If neither onConnect
// nor onClose fires before this, emit tengu_bridge_repl_connect_timeout
// — the only telemetry for the ~1% of sessions that emit `started` then
// go silent (no error, no event, just nothing).
connect_timeout_ms: number
// Semver floor for the env-less bridge path. Separate from the v1
// tengu_bridge_min_version config so a v2-specific bug can force upgrades
// without blocking v1 (env-based) clients, and vice versa.
min_version: string
// When true, tell users their claude.ai app may be too old to see v2
// sessions — lets us roll the v2 bridge before the app ships the new
// session-list query.
should_show_app_upgrade_message: boolean
}
export const DEFAULT_ENV_LESS_BRIDGE_CONFIG: EnvLessBridgeConfig = {
init_retry_max_attempts: 3,
init_retry_base_delay_ms: 500,
init_retry_jitter_fraction: 0.25,
init_retry_max_delay_ms: 4000,
http_timeout_ms: 10_000,
uuid_dedup_buffer_size: 2000,
heartbeat_interval_ms: 20_000,
heartbeat_jitter_fraction: 0.1,
token_refresh_buffer_ms: 300_000,
teardown_archive_timeout_ms: 1500,
connect_timeout_ms: 15_000,
min_version: '0.0.0',
should_show_app_upgrade_message: false,
}
// Floors reject the whole object on violation (fall back to DEFAULT) rather
// than partially trusting — same defense-in-depth as pollConfig.ts.
const envLessBridgeConfigSchema = lazySchema(() =>
z.object({
init_retry_max_attempts: z.number().int().min(1).max(10).default(3),
init_retry_base_delay_ms: z.number().int().min(100).default(500),
init_retry_jitter_fraction: z.number().min(0).max(1).default(0.25),
init_retry_max_delay_ms: z.number().int().min(500).default(4000),
http_timeout_ms: z.number().int().min(2000).default(10_000),
uuid_dedup_buffer_size: z.number().int().min(100).max(50_000).default(2000),
// Server TTL is 60s. Floor 5s prevents thrash; cap 30s keeps ≥2× margin.
heartbeat_interval_ms: z
.number()
.int()
.min(5000)
.max(30_000)
.default(20_000),
// ±fraction per beat. Cap 0.5: at max interval (30s) × 1.5 = 45s worst case,
// still under the 60s TTL.
heartbeat_jitter_fraction: z.number().min(0).max(0.5).default(0.1),
// Floor 30s prevents tight-looping. Cap 30min rejects buffer-vs-delay
// semantic inversion: ops entering expires_in-5min (the *delay until
// refresh*) instead of 5min (the *buffer before expiry*) yields
// delayMs = expires_in - buffer ≈ 5min instead of ≈4h. Both are positive
// durations so .min() alone can't distinguish; .max() catches the
// inverted value since buffer ≥ 30min is nonsensical for a multi-hour JWT.
token_refresh_buffer_ms: z
.number()
.int()
.min(30_000)
.max(1_800_000)
.default(300_000),
// Cap 2000 keeps this under gracefulShutdown's 2s cleanup race — a higher
// timeout just lies to axios since forceExit kills the socket regardless.
teardown_archive_timeout_ms: z
.number()
.int()
.min(500)
.max(2000)
.default(1500),
// Observed p99 connect is ~2-3s; 15s is ~5× headroom. Floor 5s bounds
// false-positive rate under transient slowness; cap 60s bounds how long
// a truly-stalled session stays dark.
connect_timeout_ms: z.number().int().min(5_000).max(60_000).default(15_000),
min_version: z
.string()
.refine(v => {
try {
lt(v, '0.0.0')
return true
} catch {
return false
}
})
.default('0.0.0'),
should_show_app_upgrade_message: z.boolean().default(false),
}),
)
/**
* Fetch the env-less bridge timing config from GrowthBook. Read once per
* initEnvLessBridgeCore call — config is fixed for the lifetime of a bridge
* session.
*
* Uses the blocking getter (not _CACHED_MAY_BE_STALE) because /remote-control
* runs well after GrowthBook init — initializeGrowthBook() resolves instantly,
* so there's no startup penalty, and we get the fresh in-memory remoteEval
* value instead of the stale-on-first-read disk cache. The _DEPRECATED suffix
* warns against startup-path usage, which this isn't.
*/
export async function getEnvLessBridgeConfig(): Promise<EnvLessBridgeConfig> {
const raw = await getFeatureValue_DEPRECATED<unknown>(
'tengu_bridge_repl_v2_config',
DEFAULT_ENV_LESS_BRIDGE_CONFIG,
)
const parsed = envLessBridgeConfigSchema().safeParse(raw)
return parsed.success ? parsed.data : DEFAULT_ENV_LESS_BRIDGE_CONFIG
}
/**
* Returns an error message if the current CLI version is below the minimum
* required for the env-less (v2) bridge path, or null if the version is fine.
*
* v2 analogue of checkBridgeMinVersion() — reads from tengu_bridge_repl_v2_config
* instead of tengu_bridge_min_version so the two implementations can enforce
* independent floors.
*/
export async function checkEnvLessBridgeMinVersion(): Promise<string | null> {
const cfg = await getEnvLessBridgeConfig()
if (cfg.min_version && lt(MACRO.VERSION, cfg.min_version)) {
return `Your version of Claude Code (${MACRO.VERSION}) is too old for Remote Control.\nVersion ${cfg.min_version} or higher is required. Run \`claude update\` to update.`
}
return null
}
/**
* Whether to nudge users toward upgrading their claude.ai app when a
* Remote Control session starts. True only when the v2 bridge is active
* AND the should_show_app_upgrade_message config bit is set — lets us
* roll the v2 bridge before the app ships the new session-list query.
*/
export async function shouldShowAppUpgradeMessage(): Promise<boolean> {
if (!isEnvLessBridgeEnabled()) return false
const cfg = await getEnvLessBridgeConfig()
return cfg.should_show_app_upgrade_message
}

src/bridge/flushGate.ts

@@ -0,0 +1,73 @@
/**
* State machine for gating message writes during an initial flush.
*
* When a bridge session starts, historical messages are flushed to the
* server via a single HTTP POST. During that flush, new messages must
* be queued to prevent them from arriving at the server interleaved
* with the historical messages.
*
* Lifecycle:
* start() → enqueue() returns true, items are queued
* end() → returns queued items for draining, enqueue() returns false
* drop() → discards queued items (permanent transport close)
* deactivate() → clears active flag without dropping items
* (transport replacement — new transport will drain)
*/
export class FlushGate<T> {
private _active = false
private _pending: T[] = []
get active(): boolean {
return this._active
}
get pendingCount(): number {
return this._pending.length
}
/** Mark flush as in-progress. enqueue() will start queuing items. */
start(): void {
this._active = true
}
/**
* End the flush and return any queued items for draining.
* Caller is responsible for sending the returned items.
*/
end(): T[] {
this._active = false
return this._pending.splice(0)
}
/**
* If flush is active, queue the items and return true.
* If flush is not active, return false (caller should send directly).
*/
enqueue(...items: T[]): boolean {
if (!this._active) return false
this._pending.push(...items)
return true
}
/**
* Discard all queued items (permanent transport close).
* Returns the number of items dropped.
*/
drop(): number {
this._active = false
const count = this._pending.length
this._pending.length = 0
return count
}
/**
* Clear the active flag without dropping queued items.
* Used when the transport is replaced (onWorkReceived) — the new
* transport's flush will drain the pending items.
*/
deactivate(): void {
this._active = false
}
}


@@ -0,0 +1,177 @@
/**
* Resolve file_uuid attachments on inbound bridge user messages.
*
* Web composer uploads via cookie-authed /api/{org}/upload, sends file_uuid
* alongside the message. Here we fetch each via GET /api/oauth/files/{uuid}/content
* (oauth-authed, same store), write to ~/.claude/uploads/{sessionId}/, and
* return @path refs to prepend. Claude's Read tool takes it from there.
*
* Best-effort: any failure (no token, network, non-2xx, disk) logs debug and
* skips that attachment. The message still reaches Claude, just without @path.
*/
import type { ContentBlockParam } from '@anthropic-ai/sdk/resources/messages.mjs'
import axios from 'axios'
import { randomUUID } from 'crypto'
import { mkdir, writeFile } from 'fs/promises'
import { basename, join } from 'path'
import { z } from 'zod/v4'
import { getSessionId } from '../bootstrap/state.js'
import { logForDebugging } from '../utils/debug.js'
import { getClaudeConfigHomeDir } from '../utils/envUtils.js'
import { lazySchema } from '../utils/lazySchema.js'
import { getBridgeAccessToken, getBridgeBaseUrl } from './bridgeConfig.js'
const DOWNLOAD_TIMEOUT_MS = 30_000
function debug(msg: string): void {
logForDebugging(`[bridge:inbound-attach] ${msg}`)
}
const attachmentSchema = lazySchema(() =>
z.object({
file_uuid: z.string(),
file_name: z.string(),
}),
)
const attachmentsArraySchema = lazySchema(() => z.array(attachmentSchema()))
export type InboundAttachment = z.infer<ReturnType<typeof attachmentSchema>>
/** Pull file_attachments off a loosely-typed inbound message. */
export function extractInboundAttachments(msg: unknown): InboundAttachment[] {
if (typeof msg !== 'object' || msg === null || !('file_attachments' in msg)) {
return []
}
const parsed = attachmentsArraySchema().safeParse(msg.file_attachments)
return parsed.success ? parsed.data : []
}
/**
* Strip path components and keep only filename-safe chars. file_name comes
* from the network (web composer), so treat it as untrusted even though the
* composer controls it.
*/
function sanitizeFileName(name: string): string {
const base = basename(name).replace(/[^a-zA-Z0-9._-]/g, '_')
return base || 'attachment'
}
function uploadsDir(): string {
return join(getClaudeConfigHomeDir(), 'uploads', getSessionId())
}
/**
* Fetch + write one attachment. Returns the absolute path on success,
* undefined on any failure.
*/
async function resolveOne(att: InboundAttachment): Promise<string | undefined> {
const token = getBridgeAccessToken()
if (!token) {
debug('skip: no oauth token')
return undefined
}
let data: Buffer
try {
// getOauthConfig() (via getBridgeBaseUrl) throws on a non-allowlisted
// CLAUDE_CODE_CUSTOM_OAUTH_URL — keep it inside the try so a bad
// FedStart URL degrades to "no @path" instead of crashing print.ts's
// reader loop (which has no catch around the await).
const url = `${getBridgeBaseUrl()}/api/oauth/files/${encodeURIComponent(att.file_uuid)}/content`
const response = await axios.get(url, {
headers: { Authorization: `Bearer ${token}` },
responseType: 'arraybuffer',
timeout: DOWNLOAD_TIMEOUT_MS,
validateStatus: () => true,
})
if (response.status !== 200) {
debug(`fetch ${att.file_uuid} failed: status=${response.status}`)
return undefined
}
data = Buffer.from(response.data)
} catch (e) {
debug(`fetch ${att.file_uuid} threw: ${e}`)
return undefined
}
// uuid-prefix makes collisions impossible across messages and within one
// (same filename, different files). 8 chars is enough — this isn't security.
const safeName = sanitizeFileName(att.file_name)
const prefix = (
att.file_uuid.slice(0, 8) || randomUUID().slice(0, 8)
).replace(/[^a-zA-Z0-9_-]/g, '_')
const dir = uploadsDir()
const outPath = join(dir, `${prefix}-${safeName}`)
try {
await mkdir(dir, { recursive: true })
await writeFile(outPath, data)
} catch (e) {
debug(`write ${outPath} failed: ${e}`)
return undefined
}
debug(`resolved ${att.file_uuid} → ${outPath} (${data.length} bytes)`)
return outPath
}
/**
* Resolve all attachments on an inbound message to a prefix string of
* @path refs. Empty string if none resolved.
*/
export async function resolveInboundAttachments(
attachments: InboundAttachment[],
): Promise<string> {
if (attachments.length === 0) return ''
debug(`resolving ${attachments.length} attachment(s)`)
const paths = await Promise.all(attachments.map(resolveOne))
const ok = paths.filter((p): p is string => p !== undefined)
if (ok.length === 0) return ''
// Quoted form — extractAtMentionedFiles truncates unquoted @refs at the
// first space, which breaks any home dir with spaces (/Users/John Smith/).
return ok.map(p => `@"${p}"`).join(' ') + ' '
}
/**
* Prepend @path refs to content, whichever form it's in.
* Targets the LAST text block — processUserInputBase reads inputString
* from processedBlocks[processedBlocks.length - 1], so putting refs in
* block[0] means they're silently ignored for [text, image] content.
*/
export function prependPathRefs(
content: string | Array<ContentBlockParam>,
prefix: string,
): string | Array<ContentBlockParam> {
if (!prefix) return content
if (typeof content === 'string') return prefix + content
const i = content.findLastIndex(b => b.type === 'text')
if (i !== -1) {
const b = content[i]!
if (b.type === 'text') {
return [
...content.slice(0, i),
{ ...b, text: prefix + b.text },
...content.slice(i + 1),
]
}
}
// No text block — append one at the end so it's last.
return [...content, { type: 'text', text: prefix.trimEnd() }]
}
/**
* Convenience: extract + resolve + prepend. No-op when the message has no
* file_attachments field (fast path — no network, returns same reference).
*/
export async function resolveAndPrepend(
msg: unknown,
content: string | Array<ContentBlockParam>,
): Promise<string | Array<ContentBlockParam>> {
const attachments = extractInboundAttachments(msg)
if (attachments.length === 0) return content
const prefix = await resolveInboundAttachments(attachments)
return prependPathRefs(content, prefix)
}


@@ -0,0 +1,82 @@
import type {
Base64ImageSource,
ContentBlockParam,
ImageBlockParam,
} from '@anthropic-ai/sdk/resources/messages.mjs'
import type { UUID } from 'crypto'
import type { SDKMessage } from '../entrypoints/agentSdkTypes.js'
import { detectImageFormatFromBase64 } from '../utils/imageResizer.js'
/**
* Process an inbound user message from the bridge, extracting content
* and UUID for enqueueing. Supports both string content and
* ContentBlockParam[] (e.g. messages containing images).
*
* Normalizes image blocks from bridge clients that may use camelCase
* `mediaType` instead of snake_case `media_type` (mobile-apps#5825).
*
* Returns the extracted fields, or undefined if the message should be
* skipped (non-user type, missing/empty content).
*/
export function extractInboundMessageFields(
msg: SDKMessage,
):
| { content: string | Array<ContentBlockParam>; uuid: UUID | undefined }
| undefined {
if (msg.type !== 'user') return undefined
const content = msg.message?.content
if (!content) return undefined
if (Array.isArray(content) && content.length === 0) return undefined
const uuid =
'uuid' in msg && typeof msg.uuid === 'string'
? (msg.uuid as UUID)
: undefined
return {
content: Array.isArray(content) ? normalizeImageBlocks(content) : content,
uuid,
}
}
/**
* Normalize image content blocks from bridge clients. iOS/web clients may
* send `mediaType` (camelCase) instead of `media_type` (snake_case), or
* omit the field entirely. Without normalization, the bad block poisons
* the session — every subsequent API call fails with
* "media_type: Field required".
*
* Fast-path scan returns the original array reference when no
* normalization is needed (zero allocation on the happy path).
*/
export function normalizeImageBlocks(
blocks: Array<ContentBlockParam>,
): Array<ContentBlockParam> {
if (!blocks.some(isMalformedBase64Image)) return blocks
return blocks.map(block => {
if (!isMalformedBase64Image(block)) return block
const src = block.source as unknown as Record<string, unknown>
const mediaType =
typeof src.mediaType === 'string' && src.mediaType
? src.mediaType
: detectImageFormatFromBase64(block.source.data)
return {
...block,
source: {
type: 'base64' as const,
media_type: mediaType as Base64ImageSource['media_type'],
data: block.source.data,
},
}
})
}
function isMalformedBase64Image(
block: ContentBlockParam,
): block is ImageBlockParam & { source: Base64ImageSource } {
if (block.type !== 'image' || block.source?.type !== 'base64') return false
return !(block.source as unknown as Record<string, unknown>).media_type
}


@@ -0,0 +1,571 @@
/**
* REPL-specific wrapper around initBridgeCore. Owns the parts that read
* bootstrap state — gates, cwd, session ID, git context, OAuth, title
* derivation — then delegates to the bootstrap-free core.
*
* Split out of replBridge.ts because the sessionStorage import
* (getCurrentSessionTitle) transitively pulls in src/commands.ts → the
* entire slash command + React component tree (~1300 modules). Keeping
* initBridgeCore in a file that doesn't touch sessionStorage lets
* daemonBridge.ts import the core without bloating the Agent SDK bundle.
*
* Called via dynamic import by useReplBridge (auto-start) and print.ts
* (SDK -p mode via query.enableRemoteControl).
*/
import { feature } from 'bun:bundle'
import { hostname } from 'os'
import { getOriginalCwd, getSessionId } from '../bootstrap/state.js'
import type { SDKMessage } from '../entrypoints/agentSdkTypes.js'
import type { SDKControlResponse } from '../entrypoints/sdk/controlTypes.js'
import { getFeatureValue_CACHED_WITH_REFRESH } from '../services/analytics/growthbook.js'
import { getOrganizationUUID } from '../services/oauth/client.js'
import {
isPolicyAllowed,
waitForPolicyLimitsToLoad,
} from '../services/policyLimits/index.js'
import type { Message } from '../types/message.js'
import {
checkAndRefreshOAuthTokenIfNeeded,
getClaudeAIOAuthTokens,
handleOAuth401Error,
} from '../utils/auth.js'
import { getGlobalConfig, saveGlobalConfig } from '../utils/config.js'
import { logForDebugging } from '../utils/debug.js'
import { stripDisplayTagsAllowEmpty } from '../utils/displayTags.js'
import { errorMessage } from '../utils/errors.js'
import { getBranch, getRemoteUrl } from '../utils/git.js'
import { toSDKMessages } from '../utils/messages/mappers.js'
import {
getContentText,
getMessagesAfterCompactBoundary,
isSyntheticMessage,
} from '../utils/messages.js'
import type { PermissionMode } from '../utils/permissions/PermissionMode.js'
import { getCurrentSessionTitle } from '../utils/sessionStorage.js'
import {
extractConversationText,
generateSessionTitle,
} from '../utils/sessionTitle.js'
import { generateShortWordSlug } from '../utils/words.js'
import {
getBridgeAccessToken,
getBridgeBaseUrl,
getBridgeTokenOverride,
} from './bridgeConfig.js'
import {
checkBridgeMinVersion,
isBridgeEnabledBlocking,
isCseShimEnabled,
isEnvLessBridgeEnabled,
} from './bridgeEnabled.js'
import {
archiveBridgeSession,
createBridgeSession,
updateBridgeSessionTitle,
} from './createSession.js'
import { logBridgeSkip } from './debugUtils.js'
import { checkEnvLessBridgeMinVersion } from './envLessBridgeConfig.js'
import { getPollIntervalConfig } from './pollConfig.js'
import type { BridgeState, ReplBridgeHandle } from './replBridge.js'
import { initBridgeCore } from './replBridge.js'
import { setCseShimGate } from './sessionIdCompat.js'
import type { BridgeWorkerType } from './types.js'
export type InitBridgeOptions = {
onInboundMessage?: (msg: SDKMessage) => void | Promise<void>
onPermissionResponse?: (response: SDKControlResponse) => void
onInterrupt?: () => void
onSetModel?: (model: string | undefined) => void
onSetMaxThinkingTokens?: (maxTokens: number | null) => void
onSetPermissionMode?: (
mode: PermissionMode,
) => { ok: true } | { ok: false; error: string }
onStateChange?: (state: BridgeState, detail?: string) => void
initialMessages?: Message[]
// Explicit session name from `/remote-control <name>`. When set, overrides
// the title derived from the conversation or /rename.
initialName?: string
// Fresh view of the full conversation at call time. Used by onUserMessage's
// count-3 derivation to call generateSessionTitle over the full conversation.
// Optional — print.ts's SDK enableRemoteControl path has no REPL message
// array; count-3 falls back to the single message text when absent.
getMessages?: () => Message[]
// UUIDs already flushed in a prior bridge session. Messages with these
// UUIDs are excluded from the initial flush to avoid poisoning the
// server (duplicate UUIDs across sessions cause the WS to be killed).
// Mutated in place — newly flushed UUIDs are added after each flush.
previouslyFlushedUUIDs?: Set<string>
/** See BridgeCoreParams.perpetual. */
perpetual?: boolean
/**
* When true, the bridge only forwards events outbound (no SSE inbound
* stream). Used by CCR mirror mode — local sessions visible on claude.ai
* without enabling inbound control.
*/
outboundOnly?: boolean
tags?: string[]
}
export async function initReplBridge(
options?: InitBridgeOptions,
): Promise<ReplBridgeHandle | null> {
const {
onInboundMessage,
onPermissionResponse,
onInterrupt,
onSetModel,
onSetMaxThinkingTokens,
onSetPermissionMode,
onStateChange,
initialMessages,
getMessages,
previouslyFlushedUUIDs,
initialName,
perpetual,
outboundOnly,
tags,
} = options ?? {}
// Wire the cse_ shim kill switch so toCompatSessionId respects the
// GrowthBook gate. Daemon/SDK paths skip this — shim defaults to active.
setCseShimGate(isCseShimEnabled)
// 1. Runtime gate
if (!(await isBridgeEnabledBlocking())) {
logBridgeSkip('not_enabled', '[bridge:repl] Skipping: bridge not enabled')
return null
}
// 1b. Minimum version check — deferred to after the v1/v2 branch below,
// since each implementation has its own floor (tengu_bridge_min_version
// for v1, tengu_bridge_repl_v2_config.min_version for v2).
// 2. Check OAuth — must be signed in with claude.ai. Runs before the
// policy check so console-auth users get the actionable "/login" hint
// instead of a misleading policy error from a stale/wrong-org cache.
if (!getBridgeAccessToken()) {
logBridgeSkip('no_oauth', '[bridge:repl] Skipping: no OAuth tokens')
onStateChange?.('failed', '/login')
return null
}
// 3. Check organization policy — remote control may be disabled
await waitForPolicyLimitsToLoad()
if (!isPolicyAllowed('allow_remote_control')) {
logBridgeSkip(
'policy_denied',
'[bridge:repl] Skipping: allow_remote_control policy not allowed',
)
onStateChange?.('failed', "disabled by your organization's policy")
return null
}
// When CLAUDE_BRIDGE_OAUTH_TOKEN is set (ant-only local dev), the bridge
// uses that token directly via getBridgeAccessToken() — keychain state is
// irrelevant. Skip 2b/2c to preserve that decoupling: an expired keychain
// token shouldn't block a bridge connection that doesn't use it.
if (!getBridgeTokenOverride()) {
// 2a. Cross-process backoff. If N prior processes already saw this exact
// dead token (matched by expiresAt), skip silently — no event, no refresh
// attempt. The count threshold tolerates transient refresh failures (auth
// server 5xx, lockfile errors per auth.ts:1437/1444/1485): each process
// independently retries until 3 consecutive failures prove the token dead.
// Mirrors useReplBridge's MAX_CONSECUTIVE_INIT_FAILURES for in-process.
// The expiresAt key is content-addressed: /login → new token → new expiresAt
// → this stops matching without any explicit clear.
const cfg = getGlobalConfig()
if (
cfg.bridgeOauthDeadExpiresAt != null &&
(cfg.bridgeOauthDeadFailCount ?? 0) >= 3 &&
getClaudeAIOAuthTokens()?.expiresAt === cfg.bridgeOauthDeadExpiresAt
) {
logForDebugging(
`[bridge:repl] Skipping: cross-process backoff (dead token seen ${cfg.bridgeOauthDeadFailCount} times)`,
)
return null
}
// 2b. Proactively refresh if expired. Mirrors bridgeMain.ts:2096 — the REPL
// bridge fires at useEffect mount BEFORE any v1/messages call, making this
// usually the first OAuth request of the session. Without this, ~9% of
// registrations hit the server with a >8h-expired token → 401 → withOAuthRetry
// recovers, but the server logs a 401 we can avoid. VPN egress IPs observed
// at 30:1 401:200 when many unrelated users cluster at the 8h TTL boundary.
//
// Fresh-token cost: one memoized read + one Date.now() comparison (~µs).
// checkAndRefreshOAuthTokenIfNeeded clears its own cache in every path that
// touches the keychain (refresh success, lockfile race, throw), so no
// explicit clearOAuthTokenCache() here — that would force a blocking
// keychain spawn on the 91%+ fresh-token path.
await checkAndRefreshOAuthTokenIfNeeded()
// 2c. Skip if token is still expired post-refresh-attempt. Env-var / FD
// tokens (auth.ts:894-917) have expiresAt=null → never trip this. But a
// keychain token whose refresh token is dead (password change, org left,
// token GC'd) has expiresAt<now AND refresh just failed — the client would
// otherwise loop 401 forever: withOAuthRetry → handleOAuth401Error →
// refresh fails again → retry with same stale token → 401 again.
// Datadog 2026-03-08: single IPs generating 2,879 such 401s/day. Skip the
// guaranteed-fail API call; useReplBridge surfaces the failure.
//
// Intentionally NOT using isOAuthTokenExpired here — that has a 5-minute
// proactive-refresh buffer, which is the right heuristic for "should
// refresh soon" but wrong for "provably unusable". A token with 3min left
// + transient refresh endpoint blip (5xx/timeout/wifi-reconnect) would
// falsely trip a buffered check; the still-valid token would connect fine.
// Check actual expiry instead: past-expiry AND refresh-failed → truly dead.
const tokens = getClaudeAIOAuthTokens()
if (tokens && tokens.expiresAt !== null && tokens.expiresAt <= Date.now()) {
logBridgeSkip(
'oauth_expired_unrefreshable',
'[bridge:repl] Skipping: OAuth token expired and refresh failed (re-login required)',
)
onStateChange?.('failed', '/login')
// Persist for the next process. Increments failCount when re-discovering
// the same dead token (matched by expiresAt); resets to 1 for a different
// token. Once count reaches 3, step 2a's early-return fires and this path
// is never reached again — writes are capped at 3 per dead token.
// Local const captures the narrowed type (closure loses !==null narrowing).
const deadExpiresAt = tokens.expiresAt
saveGlobalConfig(c => ({
...c,
bridgeOauthDeadExpiresAt: deadExpiresAt,
bridgeOauthDeadFailCount:
c.bridgeOauthDeadExpiresAt === deadExpiresAt
? (c.bridgeOauthDeadFailCount ?? 0) + 1
: 1,
}))
return null
}
}
// 4. Compute baseUrl — needed by both v1 (env-based) and v2 (env-less)
// paths. Hoisted above the v2 gate so both can use it.
const baseUrl = getBridgeBaseUrl()
// 5. Derive session title. Precedence: explicit initialName → /rename
// (session storage) → last meaningful user message → generated slug.
// Cosmetic only (claude.ai session list); the model never sees it.
// Two flags: `hasExplicitTitle` (initialName or /rename — never auto-
// overwrite) vs. `hasTitle` (any title, including auto-derived — blocks
// the count-1 re-derivation but not count-3). The onUserMessage callback
// (wired to both v1 and v2 below) derives from the 1st prompt and again
// from the 3rd so mobile/web show a title that reflects more context.
// The slug fallback (e.g. "remote-control-graceful-unicorn") makes
// auto-started sessions distinguishable in the claude.ai list before the
// first prompt.
let title = `remote-control-${generateShortWordSlug()}`
let hasTitle = false
let hasExplicitTitle = false
if (initialName) {
title = initialName
hasTitle = true
hasExplicitTitle = true
} else {
const sessionId = getSessionId()
const customTitle = sessionId
? getCurrentSessionTitle(sessionId)
: undefined
if (customTitle) {
title = customTitle
hasTitle = true
hasExplicitTitle = true
} else if (initialMessages && initialMessages.length > 0) {
// Find the last user message that has meaningful content. Skip meta
// (nudges), tool results, compact summaries ("This session is being
// continued…"), non-human origins (task notifications, channel pushes),
// and synthetic interrupts ([Request interrupted by user]) — none are
// human-authored. Same filter as extractTitleText + isSyntheticMessage.
for (let i = initialMessages.length - 1; i >= 0; i--) {
const msg = initialMessages[i]!
if (
msg.type !== 'user' ||
msg.isMeta ||
msg.toolUseResult ||
msg.isCompactSummary ||
(msg.origin && msg.origin.kind !== 'human') ||
isSyntheticMessage(msg)
)
continue
const rawContent = getContentText(msg.message.content)
if (!rawContent) continue
const derived = deriveTitle(rawContent)
if (!derived) continue
title = derived
hasTitle = true
break
}
}
}
// Shared by both v1 and v2 — fires on every title-worthy user message until
// it returns true. At count 1: deriveTitle placeholder immediately, then
// generateSessionTitle (Haiku, sentence-case) fire-and-forget upgrade. At
// count 3: re-generate over the full conversation. Skips entirely if the
// title is explicit (/remote-control <name> or /rename) — re-checks
// sessionStorage at call time so /rename between messages isn't clobbered.
// Skips count 1 if initialMessages already derived (that title is fresh);
// still refreshes at count 3. v2 passes cse_*; updateBridgeSessionTitle
// retags internally.
let userMessageCount = 0
let lastBridgeSessionId: string | undefined
let genSeq = 0
const patch = (
derived: string,
bridgeSessionId: string,
atCount: number,
): void => {
hasTitle = true
title = derived
logForDebugging(
`[bridge:repl] derived title from message ${atCount}: ${derived}`,
)
void updateBridgeSessionTitle(bridgeSessionId, derived, {
baseUrl,
getAccessToken: getBridgeAccessToken,
}).catch(() => {})
}
// Fire-and-forget Haiku generation with post-await guards. Re-checks /rename
// (sessionStorage), v1 env-lost (lastBridgeSessionId), and same-session
// out-of-order resolution (genSeq — count-1's Haiku resolving after count-3
// would clobber the richer title). generateSessionTitle never rejects.
const generateAndPatch = (input: string, bridgeSessionId: string): void => {
const gen = ++genSeq
const atCount = userMessageCount
void generateSessionTitle(input, AbortSignal.timeout(15_000)).then(
generated => {
if (
generated &&
gen === genSeq &&
lastBridgeSessionId === bridgeSessionId &&
!getCurrentSessionTitle(getSessionId())
) {
patch(generated, bridgeSessionId, atCount)
}
},
)
}
const onUserMessage = (text: string, bridgeSessionId: string): boolean => {
if (hasExplicitTitle || getCurrentSessionTitle(getSessionId())) {
return true
}
// v1 env-lost re-creates the session with a new ID. Reset the count so
// the new session gets its own count-3 derivation; hasTitle stays true
// (new session was created via getCurrentTitle(), which reads the count-1
// title from this closure), so count-1 of the fresh cycle correctly skips.
if (
lastBridgeSessionId !== undefined &&
lastBridgeSessionId !== bridgeSessionId
) {
userMessageCount = 0
}
lastBridgeSessionId = bridgeSessionId
userMessageCount++
if (userMessageCount === 1 && !hasTitle) {
const placeholder = deriveTitle(text)
if (placeholder) patch(placeholder, bridgeSessionId, userMessageCount)
generateAndPatch(text, bridgeSessionId)
} else if (userMessageCount === 3) {
const msgs = getMessages?.()
const input = msgs
? extractConversationText(getMessagesAfterCompactBoundary(msgs))
: text
generateAndPatch(input, bridgeSessionId)
}
// Also re-latches if v1 env-lost resets the transport's done flag past 3.
return userMessageCount >= 3
}
const initialHistoryCap = getFeatureValue_CACHED_WITH_REFRESH(
'tengu_bridge_initial_history_cap',
200,
5 * 60 * 1000,
)
// Fetch orgUUID before the v1/v2 branch — both paths need it. v1 for
// environment registration; v2 for archive (which lives at the compat
// /v1/sessions/{id}/archive, not /v1/code/sessions). Without it, v2
// archive 404s and sessions stay alive in CCR after /exit.
const orgUUID = await getOrganizationUUID()
if (!orgUUID) {
logBridgeSkip('no_org_uuid', '[bridge:repl] Skipping: no org UUID')
onStateChange?.('failed', '/login')
return null
}
// ── GrowthBook gate: env-less bridge ──────────────────────────────────
// When enabled, skips the Environments API layer entirely (no register/
// poll/ack/heartbeat) and connects directly via POST /bridge → worker_jwt.
// See server PR #292605 (renamed in #293280). REPL-only — daemon/print stay
// on env-based.
//
// NAMING: "env-less" is distinct from "CCR v2" (the /worker/* transport).
// The env-based path below can ALSO use CCR v2 via CLAUDE_CODE_USE_CCR_V2.
// tengu_bridge_repl_v2 gates env-less (no poll loop), not transport version.
//
// perpetual (assistant-mode session continuity via bridge-pointer.json) is
// env-coupled and not yet implemented here — fall back to env-based when set
// so KAIROS users don't silently lose cross-restart continuity.
if (isEnvLessBridgeEnabled() && !perpetual) {
const versionError = await checkEnvLessBridgeMinVersion()
if (versionError) {
logBridgeSkip(
'version_too_old',
`[bridge:repl] Skipping: ${versionError}`,
true,
)
onStateChange?.('failed', 'run `claude update` to upgrade')
return null
}
logForDebugging(
'[bridge:repl] Using env-less bridge path (tengu_bridge_repl_v2)',
)
const { initEnvLessBridgeCore } = await import('./remoteBridgeCore.js')
return initEnvLessBridgeCore({
baseUrl,
orgUUID,
title,
getAccessToken: getBridgeAccessToken,
onAuth401: handleOAuth401Error,
toSDKMessages,
initialHistoryCap,
initialMessages,
// v2 always creates a fresh server session (new cse_* id), so
// previouslyFlushedUUIDs is not passed — there's no cross-session
// UUID collision risk, and the ref persists across enable→disable→
// re-enable cycles which would cause the new session to receive zero
// history (all UUIDs already in the set from the prior enable).
// v1 handles this by calling previouslyFlushedUUIDs.clear() on fresh
// session creation (replBridge.ts:768); v2 skips the param entirely.
onInboundMessage,
onUserMessage,
onPermissionResponse,
onInterrupt,
onSetModel,
onSetMaxThinkingTokens,
onSetPermissionMode,
onStateChange,
outboundOnly,
tags,
})
}
// ── v1 path: env-based (register/poll/ack/heartbeat) ──────────────────
const versionError = checkBridgeMinVersion()
if (versionError) {
logBridgeSkip('version_too_old', `[bridge:repl] Skipping: ${versionError}`)
onStateChange?.('failed', 'run `claude update` to upgrade')
return null
}
// Gather git context — this is the bootstrap-read boundary.
// Everything from here down is passed explicitly to bridgeCore.
const branch = await getBranch()
const gitRepoUrl = await getRemoteUrl()
const sessionIngressUrl =
process.env.USER_TYPE === 'ant' &&
process.env.CLAUDE_BRIDGE_SESSION_INGRESS_URL
? process.env.CLAUDE_BRIDGE_SESSION_INGRESS_URL
: baseUrl
// Assistant-mode sessions advertise a distinct worker_type so the web UI
// can filter them into a dedicated picker. KAIROS guard keeps the
// assistant module out of external builds entirely.
let workerType: BridgeWorkerType = 'claude_code'
if (feature('KAIROS')) {
/* eslint-disable @typescript-eslint/no-require-imports */
const { isAssistantMode } =
require('../assistant/index.js') as typeof import('../assistant/index.js')
/* eslint-enable @typescript-eslint/no-require-imports */
if (isAssistantMode()) {
workerType = 'claude_code_assistant'
}
}
// 6. Delegate. BridgeCoreHandle is a structural superset of
// ReplBridgeHandle (adds writeSdkMessages which REPL callers don't use),
// so no adapter needed — just the narrower type on the way out.
return initBridgeCore({
dir: getOriginalCwd(),
machineName: hostname(),
branch,
gitRepoUrl,
title,
baseUrl,
sessionIngressUrl,
workerType,
getAccessToken: getBridgeAccessToken,
createSession: opts =>
createBridgeSession({
...opts,
events: [],
baseUrl,
getAccessToken: getBridgeAccessToken,
}),
archiveSession: sessionId =>
archiveBridgeSession(sessionId, {
baseUrl,
getAccessToken: getBridgeAccessToken,
// gracefulShutdown.ts:407 races runCleanupFunctions against 2s.
// Teardown also does stopWork (parallel) + deregister (sequential),
// so archive can't have the full budget. 1.5s matches v2's
// teardown_archive_timeout_ms default.
timeoutMs: 1500,
}).catch((err: unknown) => {
// archiveBridgeSession has no try/catch — 5xx/timeout/network throw
// straight through. Previously swallowed silently, making archive
// failures BQ-invisible and undiagnosable from debug logs.
logForDebugging(
`[bridge:repl] archiveBridgeSession threw: ${errorMessage(err)}`,
{ level: 'error' },
)
}),
// getCurrentTitle is read on reconnect-after-env-lost to re-title the new
// session. /rename writes to session storage; onUserMessage mutates
// `title` directly — both paths are picked up here.
getCurrentTitle: () => getCurrentSessionTitle(getSessionId()) ?? title,
onUserMessage,
toSDKMessages,
onAuth401: handleOAuth401Error,
getPollIntervalConfig,
initialHistoryCap,
initialMessages,
previouslyFlushedUUIDs,
onInboundMessage,
onPermissionResponse,
onInterrupt,
onSetModel,
onSetMaxThinkingTokens,
onSetPermissionMode,
onStateChange,
perpetual,
})
}
const TITLE_MAX_LEN = 50
/**
* Quick placeholder title: strip display tags, take the first sentence,
* collapse whitespace, truncate to 50 chars. Returns undefined if the result
* is empty (e.g. message was only <local-command-stdout>). Replaced by
* generateSessionTitle once Haiku resolves (~1-15s).
*/
function deriveTitle(raw: string): string | undefined {
// Strip <ide_opened_file>, <session-start-hook>, etc. — these appear in
// user messages when IDE/hooks inject context. stripDisplayTagsAllowEmpty
// returns '' (not the original) so pure-tag messages are skipped.
const clean = stripDisplayTagsAllowEmpty(raw)
// First sentence is usually the intent; rest is often context/detail.
// Capture group instead of lookbehind — keeps YARR JIT happy.
const firstSentence = /^(.*?[.!?])\s/.exec(clean)?.[1] ?? clean
// Collapse newlines/tabs — titles are single-line in the claude.ai list.
const flat = firstSentence.replace(/\s+/g, ' ').trim()
if (!flat) return undefined
return flat.length > TITLE_MAX_LEN
? flat.slice(0, TITLE_MAX_LEN - 1) + '\u2026'
: flat
}
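A standalone sketch of the placeholder-title logic above (first-sentence capture, whitespace collapse, 50-char truncation). The display-tag stripping step is omitted so the sketch has no internal dependencies, and `deriveTitleSketch` is a name introduced here for illustration, not part of the source.

```typescript
// Sketch of deriveTitle above, minus the stripDisplayTagsAllowEmpty step:
// take the first sentence, collapse whitespace, truncate to 50 chars with
// a trailing ellipsis. Returns undefined for whitespace-only input.
const SKETCH_TITLE_MAX_LEN = 50

function deriveTitleSketch(raw: string): string | undefined {
  // Capture group instead of lookbehind, as in the original.
  const firstSentence = /^(.*?[.!?])\s/.exec(raw)?.[1] ?? raw
  const flat = firstSentence.replace(/\s+/g, ' ').trim()
  if (!flat) return undefined
  return flat.length > SKETCH_TITLE_MAX_LEN
    ? flat.slice(0, SKETCH_TITLE_MAX_LEN - 1) + '\u2026'
    : flat
}
```

Only the first sentence survives ("Fix the build. Then run tests." becomes "Fix the build."), which matches the comment's rationale that the opening sentence usually carries the intent.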

src/bridge/jwtUtils.ts (new file, +258 lines)
import { logEvent } from '../services/analytics/index.js'
import { logForDebugging } from '../utils/debug.js'
import { logForDiagnosticsNoPII } from '../utils/diagLogs.js'
import { errorMessage } from '../utils/errors.js'
import { jsonParse } from '../utils/slowOperations.js'
/** Format a millisecond duration as a human-readable string (e.g. "5m 30s"). */
function formatDuration(ms: number): string {
if (ms < 60_000) return `${Math.round(ms / 1000)}s`
const m = Math.floor(ms / 60_000)
const s = Math.round((ms % 60_000) / 1000)
return s > 0 ? `${m}m ${s}s` : `${m}m`
}
/**
* Decode a JWT's payload segment without verifying the signature.
* Strips the `sk-ant-si-` session-ingress prefix if present.
* Returns the parsed JSON payload as `unknown`, or `null` if the
* token is malformed or the payload is not valid JSON.
*/
export function decodeJwtPayload(token: string): unknown | null {
const jwt = token.startsWith('sk-ant-si-')
? token.slice('sk-ant-si-'.length)
: token
const parts = jwt.split('.')
if (parts.length !== 3 || !parts[1]) return null
try {
return jsonParse(Buffer.from(parts[1], 'base64url').toString('utf8'))
} catch {
return null
}
}
/**
* Decode the `exp` (expiry) claim from a JWT without verifying the signature.
* @returns The `exp` value in Unix seconds, or `null` if unparseable
*/
export function decodeJwtExpiry(token: string): number | null {
const payload = decodeJwtPayload(token)
if (
payload !== null &&
typeof payload === 'object' &&
'exp' in payload &&
typeof payload.exp === 'number'
) {
return payload.exp
}
return null
}
/** Refresh buffer: request a new token before expiry. */
const TOKEN_REFRESH_BUFFER_MS = 5 * 60 * 1000
/** Fallback refresh interval when the new token's expiry is unknown. */
const FALLBACK_REFRESH_INTERVAL_MS = 30 * 60 * 1000 // 30 minutes
/** Max consecutive failures before giving up on the refresh chain. */
const MAX_REFRESH_FAILURES = 3
/** Retry delay when getAccessToken returns undefined. */
const REFRESH_RETRY_DELAY_MS = 60_000
/**
* Creates a token refresh scheduler that proactively refreshes session tokens
* before they expire. Used by both the standalone bridge and the REPL bridge.
*
* When a token is about to expire, the scheduler calls `onRefresh` with the
* session ID and the bridge's OAuth access token. The caller is responsible
* for delivering the token to the appropriate transport (child process stdin
* for standalone bridge, WebSocket reconnect for REPL bridge).
*/
export function createTokenRefreshScheduler({
getAccessToken,
onRefresh,
label,
refreshBufferMs = TOKEN_REFRESH_BUFFER_MS,
}: {
getAccessToken: () => string | undefined | Promise<string | undefined>
onRefresh: (sessionId: string, oauthToken: string) => void
label: string
/** How long before expiry to fire refresh. Defaults to 5 min. */
refreshBufferMs?: number
}): {
schedule: (sessionId: string, token: string) => void
scheduleFromExpiresIn: (sessionId: string, expiresInSeconds: number) => void
cancel: (sessionId: string) => void
cancelAll: () => void
} {
const timers = new Map<string, ReturnType<typeof setTimeout>>()
const failureCounts = new Map<string, number>()
// Generation counter per session — incremented by schedule() and cancel()
// so that in-flight async doRefresh() calls can detect when they've been
// superseded and should skip setting follow-up timers.
const generations = new Map<string, number>()
function nextGeneration(sessionId: string): number {
const gen = (generations.get(sessionId) ?? 0) + 1
generations.set(sessionId, gen)
return gen
}
function schedule(sessionId: string, token: string): void {
const expiry = decodeJwtExpiry(token)
if (!expiry) {
// Token is not a decodable JWT (e.g. an OAuth token passed from the
// REPL bridge WebSocket open handler). Preserve any existing timer
// (such as the follow-up refresh set by doRefresh) so the refresh
// chain is not broken.
logForDebugging(
`[${label}:token] Could not decode JWT expiry for sessionId=${sessionId}, token prefix=${token.slice(0, 15)}…, keeping existing timer`,
)
return
}
// Clear any existing refresh timer — we have a concrete expiry to replace it.
const existing = timers.get(sessionId)
if (existing) {
clearTimeout(existing)
}
// Bump generation to invalidate any in-flight async doRefresh.
const gen = nextGeneration(sessionId)
const expiryDate = new Date(expiry * 1000).toISOString()
const delayMs = expiry * 1000 - Date.now() - refreshBufferMs
if (delayMs <= 0) {
logForDebugging(
`[${label}:token] Token for sessionId=${sessionId} expires=${expiryDate} (past or within buffer), refreshing immediately`,
)
void doRefresh(sessionId, gen)
return
}
logForDebugging(
`[${label}:token] Scheduled token refresh for sessionId=${sessionId} in ${formatDuration(delayMs)} (expires=${expiryDate}, buffer=${refreshBufferMs / 1000}s)`,
)
const timer = setTimeout(doRefresh, delayMs, sessionId, gen)
timers.set(sessionId, timer)
}
/**
* Schedule refresh using an explicit TTL (seconds until expiry) rather
* than decoding a JWT's exp claim. Used by callers whose JWT is opaque
* (e.g. POST /v1/code/sessions/{id}/bridge returns expires_in directly).
*/
function scheduleFromExpiresIn(
sessionId: string,
expiresInSeconds: number,
): void {
const existing = timers.get(sessionId)
if (existing) clearTimeout(existing)
const gen = nextGeneration(sessionId)
// Clamp to 30s floor — if refreshBufferMs exceeds the server's expires_in
// (e.g. very large buffer for frequent-refresh testing, or server shortens
// expires_in unexpectedly), unclamped delayMs ≤ 0 would tight-loop.
const delayMs = Math.max(expiresInSeconds * 1000 - refreshBufferMs, 30_000)
logForDebugging(
`[${label}:token] Scheduled token refresh for sessionId=${sessionId} in ${formatDuration(delayMs)} (expires_in=${expiresInSeconds}s, buffer=${refreshBufferMs / 1000}s)`,
)
const timer = setTimeout(doRefresh, delayMs, sessionId, gen)
timers.set(sessionId, timer)
}
async function doRefresh(sessionId: string, gen: number): Promise<void> {
let oauthToken: string | undefined
try {
oauthToken = await getAccessToken()
} catch (err) {
logForDebugging(
`[${label}:token] getAccessToken threw for sessionId=${sessionId}: ${errorMessage(err)}`,
{ level: 'error' },
)
}
// If the session was cancelled or rescheduled while we were awaiting,
// the generation will have changed — bail out to avoid orphaned timers.
if (generations.get(sessionId) !== gen) {
logForDebugging(
`[${label}:token] doRefresh for sessionId=${sessionId} stale (gen ${gen} vs ${generations.get(sessionId)}), skipping`,
)
return
}
if (!oauthToken) {
const failures = (failureCounts.get(sessionId) ?? 0) + 1
failureCounts.set(sessionId, failures)
logForDebugging(
`[${label}:token] No OAuth token available for refresh, sessionId=${sessionId} (failure ${failures}/${MAX_REFRESH_FAILURES})`,
{ level: 'error' },
)
logForDiagnosticsNoPII('error', 'bridge_token_refresh_no_oauth')
// Schedule a retry so the refresh chain can recover if the token
// becomes available again (e.g. transient cache clear during refresh).
// Cap retries to avoid spamming on genuine failures.
if (failures < MAX_REFRESH_FAILURES) {
const retryTimer = setTimeout(
doRefresh,
REFRESH_RETRY_DELAY_MS,
sessionId,
gen,
)
timers.set(sessionId, retryTimer)
}
return
}
// Reset failure counter on successful token retrieval
failureCounts.delete(sessionId)
logForDebugging(
`[${label}:token] Refreshing token for sessionId=${sessionId}: new token prefix=${oauthToken.slice(0, 15)}`,
)
logEvent('tengu_bridge_token_refreshed', {})
onRefresh(sessionId, oauthToken)
// Schedule a follow-up refresh so long-running sessions stay authenticated.
// Without this, the initial one-shot timer leaves the session vulnerable
// to token expiry if it runs past the first refresh window.
const timer = setTimeout(
doRefresh,
FALLBACK_REFRESH_INTERVAL_MS,
sessionId,
gen,
)
timers.set(sessionId, timer)
logForDebugging(
`[${label}:token] Scheduled follow-up refresh for sessionId=${sessionId} in ${formatDuration(FALLBACK_REFRESH_INTERVAL_MS)}`,
)
}
function cancel(sessionId: string): void {
// Bump generation to invalidate any in-flight async doRefresh.
nextGeneration(sessionId)
const timer = timers.get(sessionId)
if (timer) {
clearTimeout(timer)
timers.delete(sessionId)
}
failureCounts.delete(sessionId)
}
function cancelAll(): void {
// Bump all generations so in-flight doRefresh calls are invalidated.
for (const sessionId of generations.keys()) {
nextGeneration(sessionId)
}
for (const timer of timers.values()) {
clearTimeout(timer)
}
timers.clear()
failureCounts.clear()
}
return { schedule, scheduleFromExpiresIn, cancel, cancelAll }
}
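The unsigned-decode helpers above can be exercised end to end with a hand-built token. The sketch below reimplements `decodeJwtExpiry`'s logic inline (as `decodeJwtExpirySketch`, a name introduced here) so it runs standalone, and builds a three-segment token whose middle segment is base64url JSON, the only part the decoder looks at.

```typescript
// Inline reimplementation of decodeJwtPayload + decodeJwtExpiry above:
// strip the sk-ant-si- prefix, split into three segments, base64url-decode
// the payload, and read `exp` if it is a number. No signature verification.
function decodeJwtExpirySketch(token: string): number | null {
  const jwt = token.startsWith('sk-ant-si-')
    ? token.slice('sk-ant-si-'.length)
    : token
  const parts = jwt.split('.')
  if (parts.length !== 3 || !parts[1]) return null
  try {
    const payload: unknown = JSON.parse(
      Buffer.from(parts[1], 'base64url').toString('utf8'),
    )
    return payload !== null &&
      typeof payload === 'object' &&
      'exp' in payload &&
      typeof (payload as { exp: unknown }).exp === 'number'
      ? (payload as { exp: number }).exp
      : null
  } catch {
    return null
  }
}

// Build an unsigned token: header and signature segments are opaque strings
// to the decoder, only the payload segment matters.
const exp = 1_900_000_000
const body = Buffer.from(JSON.stringify({ exp }), 'utf8').toString('base64url')
const token = `header.${body}.signature`
// decodeJwtExpirySketch(token) → 1_900_000_000
// decodeJwtExpirySketch('not-a-jwt') → null
```

This is why `schedule()` can tolerate opaque OAuth tokens: a non-JWT input fails the three-segment split and yields `null`, which the scheduler treats as "keep the existing timer".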

src/bridge/pollConfig.ts (new file, +112 lines)
import { z } from 'zod/v4'
import { getFeatureValue_CACHED_WITH_REFRESH } from '../services/analytics/growthbook.js'
import { lazySchema } from '../utils/lazySchema.js'
import {
DEFAULT_POLL_CONFIG,
type PollIntervalConfig,
} from './pollConfigDefaults.js'
// .min(100) on the seek-work intervals restores the old Math.max(..., 100)
// defense-in-depth floor against fat-fingered GrowthBook values. Unlike a
// clamp, Zod rejects the whole object on violation — a config with one bad
// field falls back to DEFAULT_POLL_CONFIG entirely rather than being
// partially trusted.
//
// The at_capacity intervals use a 0-or-≥100 refinement: 0 means "disabled"
// (heartbeat-only mode), ≥100 is the fat-finger floor. Values 1–99 are
// rejected so unit confusion (ops thinks seconds, enters 10) doesn't poll
// every 10ms against the VerifyEnvironmentSecretAuth DB path.
//
// The object-level refines require at least one at-capacity liveness
// mechanism enabled: heartbeat OR the relevant poll interval. Without this,
// the hb=0, atCapMs=0 drift config (ops disables heartbeat without
// restoring at_capacity) falls through every throttle site with no sleep —
// tight-looping /poll at HTTP-round-trip speed.
const zeroOrAtLeast100 = {
message: 'must be 0 (disabled) or ≥100ms',
}
const pollIntervalConfigSchema = lazySchema(() =>
z
.object({
poll_interval_ms_not_at_capacity: z.number().int().min(100),
// 0 = no at-capacity polling. Independent of heartbeat — both can be
// enabled (heartbeat runs, periodically breaks out to poll).
poll_interval_ms_at_capacity: z
.number()
.int()
.refine(v => v === 0 || v >= 100, zeroOrAtLeast100),
// 0 = disabled; positive value = heartbeat at this interval while at
// capacity. Runs alongside at-capacity polling, not instead of it.
// Named non_exclusive to distinguish from the old heartbeat_interval_ms
// (either-or semantics in pre-#22145 clients). .default(0) so existing
// GrowthBook configs without this field parse successfully.
non_exclusive_heartbeat_interval_ms: z.number().int().min(0).default(0),
// Multisession (bridgeMain.ts) intervals. Defaults match the
// single-session values so existing configs without these fields
// preserve current behavior.
multisession_poll_interval_ms_not_at_capacity: z
.number()
.int()
.min(100)
.default(
DEFAULT_POLL_CONFIG.multisession_poll_interval_ms_not_at_capacity,
),
multisession_poll_interval_ms_partial_capacity: z
.number()
.int()
.min(100)
.default(
DEFAULT_POLL_CONFIG.multisession_poll_interval_ms_partial_capacity,
),
multisession_poll_interval_ms_at_capacity: z
.number()
.int()
.refine(v => v === 0 || v >= 100, zeroOrAtLeast100)
.default(DEFAULT_POLL_CONFIG.multisession_poll_interval_ms_at_capacity),
// .min(1) matches the server's ge=1 constraint (work_v1.py:230).
reclaim_older_than_ms: z.number().int().min(1).default(5000),
session_keepalive_interval_v2_ms: z
.number()
.int()
.min(0)
.default(120_000),
})
.refine(
cfg =>
cfg.non_exclusive_heartbeat_interval_ms > 0 ||
cfg.poll_interval_ms_at_capacity > 0,
{
message:
'at-capacity liveness requires non_exclusive_heartbeat_interval_ms > 0 or poll_interval_ms_at_capacity > 0',
},
)
.refine(
cfg =>
cfg.non_exclusive_heartbeat_interval_ms > 0 ||
cfg.multisession_poll_interval_ms_at_capacity > 0,
{
message:
'at-capacity liveness requires non_exclusive_heartbeat_interval_ms > 0 or multisession_poll_interval_ms_at_capacity > 0',
},
),
)
/**
* Fetch the bridge poll interval config from GrowthBook with a 5-minute
* refresh window. Validates the served JSON against the schema; falls back
* to defaults if the flag is absent, malformed, or partially-specified.
*
* Shared by bridgeMain.ts (standalone) and replBridge.ts (REPL) so ops
* can tune both poll rates fleet-wide with a single config push.
*/
export function getPollIntervalConfig(): PollIntervalConfig {
const raw = getFeatureValue_CACHED_WITH_REFRESH<unknown>(
'tengu_bridge_poll_interval_config',
DEFAULT_POLL_CONFIG,
5 * 60 * 1000,
)
const parsed = pollIntervalConfigSchema().safeParse(raw)
return parsed.success ? parsed.data : DEFAULT_POLL_CONFIG
}
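The reject-wholesale pattern `getPollIntervalConfig` relies on (a config failing any check falls back to defaults entirely, never partially trusted) can be sketched without zod. The plain predicates below stand in for the schema's `.min`/`.refine` rules; `parsePollConfig`, `PollSketch`, and the trimmed field set are illustrative, not the real shape.

```typescript
// Dependency-free sketch of the validate-or-fall-back pattern above, using
// a reduced three-field config. One bad field rejects the whole object.
type PollSketch = {
  poll_interval_ms_not_at_capacity: number
  poll_interval_ms_at_capacity: number
  non_exclusive_heartbeat_interval_ms: number
}

const SKETCH_DEFAULTS: PollSketch = {
  poll_interval_ms_not_at_capacity: 2000,
  poll_interval_ms_at_capacity: 600_000,
  non_exclusive_heartbeat_interval_ms: 0,
}

// 0 = disabled, otherwise the ≥100ms fat-finger floor.
const zeroOrAtLeast100 = (v: number) => v === 0 || v >= 100

function parsePollConfig(raw: unknown): PollSketch {
  if (typeof raw !== 'object' || raw === null) return SKETCH_DEFAULTS
  const c = raw as Record<string, unknown>
  const notAtCap = c.poll_interval_ms_not_at_capacity
  const atCap = c.poll_interval_ms_at_capacity
  const hb = c.non_exclusive_heartbeat_interval_ms ?? 0 // .default(0)
  if (
    typeof notAtCap !== 'number' || !Number.isInteger(notAtCap) || notAtCap < 100 ||
    typeof atCap !== 'number' || !Number.isInteger(atCap) || !zeroOrAtLeast100(atCap) ||
    typeof hb !== 'number' || !Number.isInteger(hb) || hb < 0 ||
    // Object-level refine: at least one at-capacity liveness mechanism.
    (hb === 0 && atCap === 0)
  ) {
    return SKETCH_DEFAULTS
  }
  return {
    poll_interval_ms_not_at_capacity: notAtCap,
    poll_interval_ms_at_capacity: atCap,
    non_exclusive_heartbeat_interval_ms: hb,
  }
}
```

A config with `poll_interval_ms_at_capacity: 10` (seconds-vs-milliseconds confusion) falls back to the defaults in full, which is the behavior the schema comments above argue for over clamping.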

src/bridge/pollConfigDefaults.ts (new file, +84 lines)
/**
* Bridge poll interval defaults. Extracted from pollConfig.ts so callers
* that don't need live GrowthBook tuning (daemon via Agent SDK) can avoid
* the growthbook.ts → config.ts → file.ts → sessionStorage.ts → commands.ts
* transitive dependency chain.
*/
/**
* Poll interval when actively seeking work (no transport / below maxSessions).
* Governs user-visible "connecting…" latency on initial work pickup and
* recovery speed after the server re-dispatches a work item.
*/
const POLL_INTERVAL_MS_NOT_AT_CAPACITY = 2000
/**
* Poll interval when the transport is connected. Runs independently of
* heartbeat — when both are enabled, the heartbeat loop breaks out to poll
* at this interval. Set to 0 to disable at-capacity polling entirely.
*
* Server-side constraints that bound this value:
* - BRIDGE_LAST_POLL_TTL = 4h (Redis key expiry → environment auto-archived)
* - max_poll_stale_seconds = 24h (session-creation health gate, currently disabled)
*
* 10 minutes gives 24× headroom on the Redis TTL while still picking up
* server-initiated token-rotation redispatches within one poll cycle.
* The transport auto-reconnects internally for 10 minutes on transient WS
* failures, so poll is not the recovery path — it's strictly a liveness
* signal plus a backstop for permanent close.
*/
const POLL_INTERVAL_MS_AT_CAPACITY = 600_000
/**
* Multisession bridge (bridgeMain.ts) poll intervals. Defaults match the
* single-session values so existing GrowthBook configs without these fields
* preserve current behavior. Ops can tune these independently via the
* tengu_bridge_poll_interval_config GB flag.
*/
const MULTISESSION_POLL_INTERVAL_MS_NOT_AT_CAPACITY =
POLL_INTERVAL_MS_NOT_AT_CAPACITY
const MULTISESSION_POLL_INTERVAL_MS_PARTIAL_CAPACITY =
POLL_INTERVAL_MS_NOT_AT_CAPACITY
const MULTISESSION_POLL_INTERVAL_MS_AT_CAPACITY = POLL_INTERVAL_MS_AT_CAPACITY
export type PollIntervalConfig = {
poll_interval_ms_not_at_capacity: number
poll_interval_ms_at_capacity: number
non_exclusive_heartbeat_interval_ms: number
multisession_poll_interval_ms_not_at_capacity: number
multisession_poll_interval_ms_partial_capacity: number
multisession_poll_interval_ms_at_capacity: number
reclaim_older_than_ms: number
session_keepalive_interval_v2_ms: number
}
export const DEFAULT_POLL_CONFIG: PollIntervalConfig = {
poll_interval_ms_not_at_capacity: POLL_INTERVAL_MS_NOT_AT_CAPACITY,
poll_interval_ms_at_capacity: POLL_INTERVAL_MS_AT_CAPACITY,
// 0 = disabled. When > 0, at-capacity loops send per-work-item heartbeats
// at this interval. Independent of poll_interval_ms_at_capacity — both may
// run (heartbeat periodically yields to poll). 60s gives 5× headroom under
// the server's 300s heartbeat TTL. Named non_exclusive to distinguish from
// the old heartbeat_interval_ms field (either-or semantics in pre-#22145
// clients — heartbeat suppressed poll). Old clients ignore this key; ops
// can set both fields during rollout.
non_exclusive_heartbeat_interval_ms: 0,
multisession_poll_interval_ms_not_at_capacity:
MULTISESSION_POLL_INTERVAL_MS_NOT_AT_CAPACITY,
multisession_poll_interval_ms_partial_capacity:
MULTISESSION_POLL_INTERVAL_MS_PARTIAL_CAPACITY,
multisession_poll_interval_ms_at_capacity:
MULTISESSION_POLL_INTERVAL_MS_AT_CAPACITY,
// Poll query param: reclaim unacknowledged work items older than this.
// Matches the server's DEFAULT_RECLAIM_OLDER_THAN_MS (work_service.py:24).
// Enables picking up stale-pending work after JWT expiry, when the prior
// ack failed because the session_ingress_token was already stale.
reclaim_older_than_ms: 5000,
// 0 = disabled. When > 0, push a silent {type:'keep_alive'} frame to
// session-ingress at this interval so upstream proxies don't GC an idle
// remote-control session. 2 min is the default. _v2: bridge-only gate
// (pre-v2 clients read the old key, new clients ignore it).
session_keepalive_interval_v2_ms: 120_000,
}
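The headroom factors quoted in the comments above (24x against the Redis poll TTL, 5x against the server heartbeat TTL) follow directly from the constants. The 60s heartbeat figure is the example rollout value from the comment, since the shipped default is 0 (disabled):

```typescript
// Headroom arithmetic from the comments above, computed from the constants.
const POLL_INTERVAL_MS_AT_CAPACITY = 600_000 // 10 min
const BRIDGE_LAST_POLL_TTL_MS = 4 * 60 * 60 * 1000 // 4h Redis key expiry
const HEARTBEAT_INTERVAL_MS = 60_000 // example rollout value (default is 0 = disabled)
const SERVER_HEARTBEAT_TTL_MS = 300_000 // server's 300s heartbeat TTL

const pollHeadroom = BRIDGE_LAST_POLL_TTL_MS / POLL_INTERVAL_MS_AT_CAPACITY // 24
const heartbeatHeadroom = SERVER_HEARTBEAT_TTL_MS / HEARTBEAT_INTERVAL_MS // 5
```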

File diff suppressed because it is too large

src/bridge/replBridge.ts (2408 lines, new file)

File diff suppressed because it is too large


@@ -0,0 +1,38 @@
import { updateSessionBridgeId } from '../utils/concurrentSessions.js'
import type { ReplBridgeHandle } from './replBridge.js'
import { toCompatSessionId } from './sessionIdCompat.js'
/**
* Global pointer to the active REPL bridge handle, so callers outside
* useReplBridge's React tree (tools, slash commands) can invoke handle methods
* like subscribePR. Same one-bridge-per-process justification as bridgeDebug.ts
* — the handle's closure captures the sessionId and getAccessToken that created
* the session, and re-deriving those independently (BriefTool/upload.ts pattern)
* risks staging/prod token divergence.
*
* Set from useReplBridge.tsx when init completes; cleared on teardown.
*/
let handle: ReplBridgeHandle | null = null
export function setReplBridgeHandle(h: ReplBridgeHandle | null): void {
handle = h
// Publish (or clear) our bridge session ID in the session record so other
// local peers can dedup us out of their bridge list — local is preferred.
void updateSessionBridgeId(getSelfBridgeCompatId() ?? null).catch(() => {})
}
export function getReplBridgeHandle(): ReplBridgeHandle | null {
return handle
}
/**
* Our own bridge session ID in the session_* compat format the API returns
* in /v1/sessions responses — or undefined if bridge isn't connected.
*/
export function getSelfBridgeCompatId(): string | undefined {
const h = getReplBridgeHandle()
return h ? toCompatSessionId(h.bridgeSessionId) : undefined
}


@@ -0,0 +1,372 @@
import type { StdoutMessage } from 'src/entrypoints/sdk/controlTypes.js'
import { CCRClient } from '../cli/transports/ccrClient.js'
import type { HybridTransport } from '../cli/transports/HybridTransport.js'
import { SSETransport } from '../cli/transports/SSETransport.js'
import { logForDebugging } from '../utils/debug.js'
import { errorMessage } from '../utils/errors.js'
import { updateSessionIngressAuthToken } from '../utils/sessionIngressAuth.js'
import type { SessionState } from '../utils/sessionState.js'
import { registerWorker } from './workSecret.js'
/**
* Transport abstraction for replBridge. Covers exactly the surface that
* replBridge.ts uses against HybridTransport so the v1/v2 choice is
* confined to the construction site.
*
* - v1: HybridTransport (WS reads + POST writes to Session-Ingress)
* - v2: SSETransport (reads) + CCRClient (writes to CCR v2 /worker/*)
*
* The v2 write path goes through CCRClient.writeEvent → SerialBatchEventUploader,
* NOT through SSETransport.write() — SSETransport.write() targets the
* Session-Ingress POST URL shape, which is wrong for CCR v2.
*/
export type ReplBridgeTransport = {
write(message: StdoutMessage): Promise<void>
writeBatch(messages: StdoutMessage[]): Promise<void>
close(): void
isConnectedStatus(): boolean
getStateLabel(): string
setOnData(callback: (data: string) => void): void
setOnClose(callback: (closeCode?: number) => void): void
setOnConnect(callback: () => void): void
connect(): void
/**
* High-water mark of the underlying read stream's event sequence numbers.
* replBridge reads this before swapping transports so the new one can
* resume from where the old one left off (otherwise the server replays
* the entire session history from seq 0).
*
* v1 returns 0 — Session-Ingress WS doesn't use SSE sequence numbers;
* replay-on-reconnect is handled by the server-side message cursor.
*/
getLastSequenceNum(): number
/**
* Monotonic count of batches dropped via maxConsecutiveFailures.
* Snapshot before writeBatch() and compare after to detect silent drops
* (writeBatch() resolves normally even when batches were dropped).
* v2 returns 0 — the v2 write path doesn't set maxConsecutiveFailures.
*/
readonly droppedBatchCount: number
/**
* PUT /worker state (v2 only; v1 is a no-op). `requires_action` tells
* the backend a permission prompt is pending — claude.ai shows the
* "waiting for input" indicator. REPL/daemon callers don't need this
* (user watches the REPL locally); multi-session worker callers do.
*/
reportState(state: SessionState): void
/** PUT /worker external_metadata (v2 only; v1 is a no-op). */
reportMetadata(metadata: Record<string, unknown>): void
/**
* POST /worker/events/{id}/delivery (v2 only; v1 is a no-op). Populates
* CCR's processing_at/processed_at columns. `received` is auto-fired by
* CCRClient on every SSE frame and is not exposed here.
*/
reportDelivery(eventId: string, status: 'processing' | 'processed'): void
/**
* Drain the write queue before close() (v2 only; v1 resolves
* immediately — HybridTransport POSTs are already awaited per-write).
*/
flush(): Promise<void>
}
/**
* v1 adapter: HybridTransport already has the full surface (it extends
* WebSocketTransport which has setOnConnect + getStateLabel). This is a
* no-op wrapper that exists only so replBridge's `transport` variable
* has a single type.
*/
export function createV1ReplTransport(
hybrid: HybridTransport,
): ReplBridgeTransport {
return {
write: msg => hybrid.write(msg),
writeBatch: msgs => hybrid.writeBatch(msgs),
close: () => hybrid.close(),
isConnectedStatus: () => hybrid.isConnectedStatus(),
getStateLabel: () => hybrid.getStateLabel(),
setOnData: cb => hybrid.setOnData(cb),
setOnClose: cb => hybrid.setOnClose(cb),
setOnConnect: cb => hybrid.setOnConnect(cb),
connect: () => void hybrid.connect(),
// v1 Session-Ingress WS doesn't use SSE sequence numbers; replay
// semantics are different. Always return 0 so the seq-num carryover
// logic in replBridge is a no-op for v1.
getLastSequenceNum: () => 0,
get droppedBatchCount() {
return hybrid.droppedBatchCount
},
reportState: () => {},
reportMetadata: () => {},
reportDelivery: () => {},
flush: () => Promise.resolve(),
}
}
/**
* v2 adapter: wrap SSETransport (reads) + CCRClient (writes, heartbeat,
* state, delivery tracking).
*
* Auth: v2 endpoints validate the JWT's session_id claim (register_worker.go:32)
* and worker role (environment_auth.py:856). OAuth tokens have neither.
* This is the inverse of the v1 replBridge path, which deliberately uses OAuth.
* The JWT is refreshed when the poll loop re-dispatches work — the caller
* invokes createV2ReplTransport again with the fresh token.
*
* Registration happens here (not in the caller) so the entire v2 handshake
* is one async step. registerWorker failure propagates — replBridge will
* catch it and stay on the poll loop.
*/
export async function createV2ReplTransport(opts: {
sessionUrl: string
ingressToken: string
sessionId: string
/**
* SSE sequence-number high-water mark from the previous transport.
* Passed to the new SSETransport so its first connect() sends
* from_sequence_num / Last-Event-ID and the server resumes from where
* the old stream left off. Without this, every transport swap asks the
* server to replay the entire session history from seq 0.
*/
initialSequenceNum?: number
/**
* Worker epoch from POST /bridge response. When provided, the server
* already bumped epoch (the /bridge call IS the register — see server
* PR #293280). When omitted (v1 CCR-v2 path via replBridge.ts poll loop),
* call registerWorker as before.
*/
epoch?: number
/** CCRClient heartbeat interval. Defaults to 20s when omitted. */
heartbeatIntervalMs?: number
/** ±fraction per-beat jitter. Defaults to 0 (no jitter) when omitted. */
heartbeatJitterFraction?: number
/**
* When true, skip opening the SSE read stream — only the CCRClient write
* path is activated. Use for mirror-mode attachments that forward events
* but never receive inbound prompts or control requests.
*/
outboundOnly?: boolean
/**
* Per-instance auth header source. When provided, CCRClient + SSETransport
* read auth from this closure instead of the process-wide
* CLAUDE_CODE_SESSION_ACCESS_TOKEN env var. Required for callers managing
* multiple concurrent sessions — the env-var path stomps across sessions.
* When omitted, falls back to the env var (single-session callers).
*/
getAuthToken?: () => string | undefined
}): Promise<ReplBridgeTransport> {
const {
sessionUrl,
ingressToken,
sessionId,
initialSequenceNum,
getAuthToken,
} = opts
// Auth header builder. If getAuthToken is provided, read from it
// (per-instance, multi-session safe). Otherwise write ingressToken to
// the process-wide env var (legacy single-session path — CCRClient's
// default getAuthHeaders reads it via getSessionIngressAuthHeaders).
let getAuthHeaders: (() => Record<string, string>) | undefined
if (getAuthToken) {
getAuthHeaders = (): Record<string, string> => {
const token = getAuthToken()
if (!token) return {}
return { Authorization: `Bearer ${token}` }
}
} else {
// CCRClient.request() and SSETransport.connect() both read auth via
// getSessionIngressAuthHeaders() → this env var. Set it before either
// touches the network.
updateSessionIngressAuthToken(ingressToken)
}
const epoch = opts.epoch ?? (await registerWorker(sessionUrl, ingressToken))
logForDebugging(
`[bridge:repl] CCR v2: worker sessionId=${sessionId} epoch=${epoch}${opts.epoch !== undefined ? ' (from /bridge)' : ' (via registerWorker)'}`,
)
// Derive SSE stream URL. Same logic as transportUtils.ts:26-33 but
// starting from an http(s) base instead of a --sdk-url that might be ws://.
const sseUrl = new URL(sessionUrl)
sseUrl.pathname = sseUrl.pathname.replace(/\/$/, '') + '/worker/events/stream'
const sse = new SSETransport(
sseUrl,
{},
sessionId,
undefined,
initialSequenceNum,
getAuthHeaders,
)
let onCloseCb: ((closeCode?: number) => void) | undefined
const ccr = new CCRClient(sse, new URL(sessionUrl), {
getAuthHeaders,
heartbeatIntervalMs: opts.heartbeatIntervalMs,
heartbeatJitterFraction: opts.heartbeatJitterFraction,
// Default is process.exit(1) — correct for spawn-mode children. In-process,
// that kills the REPL. Close instead: replBridge's onClose wakes the poll
// loop, which picks up the server's re-dispatch (with fresh epoch).
onEpochMismatch: () => {
logForDebugging(
'[bridge:repl] CCR v2: epoch superseded (409) — closing for poll-loop recovery',
)
// Close resources in a try block so the throw always executes.
// If ccr.close() or sse.close() throw, we still need to unwind
// the caller (request()) — otherwise handleEpochMismatch's `never`
// return type is violated at runtime and control falls through.
try {
ccr.close()
sse.close()
onCloseCb?.(4090)
} catch (closeErr: unknown) {
logForDebugging(
`[bridge:repl] CCR v2: error during epoch-mismatch cleanup: ${errorMessage(closeErr)}`,
{ level: 'error' },
)
}
// Don't return — the calling request() code continues after the 409
// branch, so callers see the logged warning and a false return. We
// throw to unwind; the uploaders catch it as a send failure.
throw new Error('epoch superseded')
},
})
// CCRClient's constructor wired sse.setOnEvent → reportDelivery('received').
// remoteIO.ts additionally sends 'processing'/'processed' via
// setCommandLifecycleListener, which the in-process query loop fires. This
// transport's only caller (replBridge/daemonBridge) has no such wiring — the
// daemon's agent child is a separate process (ProcessTransport), and its
// notifyCommandLifecycle calls fire with listener=null in its own module
// scope. So events stay at 'received' forever, and reconnectSession re-queues
// them on every daemon restart (observed: 21→24→25 phantom prompts as
// "user sent a new message while you were working" system-reminders).
//
// Fix: ACK 'processed' immediately alongside 'received'. The window between
// SSE receipt and transcript-write is narrow (queue → SDK → child stdin →
// model); a crash there loses one prompt vs. the observed N-prompt flood on
// every restart. Overwrite the constructor's wiring to do both — setOnEvent
// replaces, not appends (SSETransport.ts:658).
sse.setOnEvent(event => {
ccr.reportDelivery(event.event_id, 'received')
ccr.reportDelivery(event.event_id, 'processed')
})
// Both sse.connect() and ccr.initialize() are deferred to connect() below.
// replBridge's calling order is newTransport → setOnConnect → setOnData →
// setOnClose → connect(), and both calls need those callbacks wired first:
// sse.connect() opens the stream (events flow to onData/onClose immediately),
// and ccr.initialize().then() fires onConnectCb.
//
// onConnect fires once ccr.initialize() resolves. Writes go via
// CCRClient HTTP POST (SerialBatchEventUploader), not SSE, so the
// write path is ready the moment workerEpoch is set. SSE.connect()
// awaits its read loop and never resolves — don't gate on it.
// The SSE stream opens in parallel (~30ms) and starts delivering
// inbound events via setOnData; outbound doesn't need to wait for it.
let onConnectCb: (() => void) | undefined
let ccrInitialized = false
let closed = false
return {
write(msg) {
return ccr.writeEvent(msg)
},
async writeBatch(msgs) {
// SerialBatchEventUploader already batches internally (maxBatchSize=100);
// sequential enqueue preserves order and the uploader coalesces.
// Check closed between writes to avoid sending partial batches after
// transport teardown (epoch mismatch, SSE drop).
for (const m of msgs) {
if (closed) break
await ccr.writeEvent(m)
}
},
close() {
closed = true
ccr.close()
sse.close()
},
isConnectedStatus() {
// Write-readiness, not read-readiness — replBridge checks this
// before calling writeBatch. SSE open state is orthogonal.
return ccrInitialized
},
getStateLabel() {
// SSETransport doesn't expose its state string; synthesize from
// what we can observe. replBridge only uses this for debug logging.
if (sse.isClosedStatus()) return 'closed'
if (sse.isConnectedStatus()) return ccrInitialized ? 'connected' : 'init'
return 'connecting'
},
setOnData(cb) {
sse.setOnData(cb)
},
setOnClose(cb) {
onCloseCb = cb
// SSE reconnect-budget exhaustion fires onClose(undefined) — map to
// 4092 so ws_closed telemetry can distinguish it from HTTP-status
// closes (SSETransport:280 passes response.status). Stop CCRClient's
// heartbeat timer before notifying replBridge. (sse.close() doesn't
// invoke this, so the epoch-mismatch path above isn't double-firing.)
sse.setOnClose(code => {
ccr.close()
cb(code ?? 4092)
})
},
setOnConnect(cb) {
onConnectCb = cb
},
getLastSequenceNum() {
return sse.getLastSequenceNum()
},
// v2 write path (CCRClient) doesn't set maxConsecutiveFailures — no drops.
droppedBatchCount: 0,
reportState(state) {
ccr.reportState(state)
},
reportMetadata(metadata) {
ccr.reportMetadata(metadata)
},
reportDelivery(eventId, status) {
ccr.reportDelivery(eventId, status)
},
flush() {
return ccr.flush()
},
connect() {
// Outbound-only: skip the SSE read stream entirely — no inbound
// events to receive, no delivery ACKs to send. Only the CCRClient
// write path (POST /worker/events) and heartbeat are needed.
if (!opts.outboundOnly) {
// Fire-and-forget — SSETransport.connect() awaits readStream()
// (the read loop) and only resolves on stream close/error. The
// spawn-mode path in remoteIO.ts does the same void discard.
void sse.connect()
}
void ccr.initialize(epoch).then(
() => {
ccrInitialized = true
logForDebugging(
`[bridge:repl] v2 transport ready for writes (epoch=${epoch}, sse=${sse.isConnectedStatus() ? 'open' : 'opening'})`,
)
onConnectCb?.()
},
(err: unknown) => {
logForDebugging(
`[bridge:repl] CCR v2 initialize failed: ${errorMessage(err)}`,
{ level: 'error' },
)
// Close transport resources and notify replBridge via onClose
// so the poll loop can retry on the next work dispatch.
// Without this callback, replBridge never learns the transport
// failed to initialize and sits with transport === null forever.
ccr.close()
sse.close()
onCloseCb?.(4091) // 4091 = init failure, distinguishable from 4090 epoch mismatch
},
)
},
}
}
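The `droppedBatchCount` contract documented in the transport type (writeBatch resolves normally even when batches are silently dropped, so callers snapshot the counter before writing and compare after) can be exercised with a toy transport. `FlakyTransport` below is a hypothetical stand-in, not HybridTransport:

```typescript
// Sketch of the droppedBatchCount snapshot-and-compare pattern. The
// transport drops every Nth batch but still resolves writeBatch(), which
// is exactly the silent-drop behavior the interface comment warns about.
type Msg = { seq: number }

class FlakyTransport {
  droppedBatchCount = 0
  private calls = 0
  constructor(private failEvery: number) {}
  async writeBatch(_msgs: Msg[]): Promise<void> {
    this.calls++
    if (this.calls % this.failEvery === 0) {
      this.droppedBatchCount++ // silent drop: still resolves normally
      return
    }
    // ... deliver msgs ...
  }
}

async function sendWithDropDetection(
  t: FlakyTransport,
  msgs: Msg[],
): Promise<boolean> {
  const before = t.droppedBatchCount
  await t.writeBatch(msgs)
  return t.droppedBatchCount === before // false => at least one silent drop
}
```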


@@ -0,0 +1,59 @@
/**
* Session ID tag translation helpers for the CCR v2 compat layer.
*
* Lives in its own file (rather than workSecret.ts) so that sessionHandle.ts
* and replBridgeTransport.ts (bridge.mjs entry points) can import from
* workSecret.ts without pulling in these retag functions.
*
* The isCseShimEnabled kill switch is injected via setCseShimGate() to avoid
* a static import of bridgeEnabled.ts → growthbook.ts → config.ts — all
* banned from the sdk.mjs bundle (scripts/build-agent-sdk.sh). Callers that
* already import bridgeEnabled.ts register the gate; the SDK path never does,
* so the shim defaults to active (matching isCseShimEnabled()'s own default).
*/
let _isCseShimEnabled: (() => boolean) | undefined
/**
* Register the GrowthBook gate for the cse_ shim. Called from bridge
* init code that already imports bridgeEnabled.ts.
*/
export function setCseShimGate(gate: () => boolean): void {
_isCseShimEnabled = gate
}
/**
* Re-tag a `cse_*` session ID to `session_*` for use with the v1 compat API.
*
* Worker endpoints (/v1/code/sessions/{id}/worker/*) want `cse_*`; that's
* what the work poll delivers. Client-facing compat endpoints
* (/v1/sessions/{id}, /v1/sessions/{id}/archive, /v1/sessions/{id}/events)
* want `session_*` — compat/convert.go:27 validates TagSession. Same UUID,
* different costume. No-op for IDs that aren't `cse_*`.
*
* bridgeMain holds one sessionId variable for both worker registration and
* session-management calls. It arrives as `cse_*` from the work poll under
* the compat gate, so archiveSession/fetchSessionTitle need this re-tag.
*/
export function toCompatSessionId(id: string): string {
if (!id.startsWith('cse_')) return id
if (_isCseShimEnabled && !_isCseShimEnabled()) return id
return 'session_' + id.slice('cse_'.length)
}
/**
* Re-tag a `session_*` session ID to `cse_*` for infrastructure-layer calls.
*
* Inverse of toCompatSessionId. POST /v1/environments/{id}/bridge/reconnect
* lives below the compat layer: once ccr_v2_compat_enabled is on server-side,
* it looks sessions up by their infra tag (`cse_*`). createBridgeSession still
* returns `session_*` (compat/convert.go:41) and that's what bridge-pointer
* stores — so perpetual reconnect passes the wrong costume and gets "Session
* not found" back. Same UUID, wrong tag. No-op for IDs that aren't `session_*`.
*/
export function toInfraSessionId(id: string): string {
if (!id.startsWith('session_')) return id
return 'cse_' + id.slice('session_'.length)
}
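With the GrowthBook gate omitted, the two retag helpers above reduce to a pure prefix swap on the same UUID; a round trip is lossless and non-matching IDs pass through untouched:

```typescript
// Gate-free reduction of toCompatSessionId / toInfraSessionId: same UUID,
// different prefix. IDs without the expected prefix are returned as-is.
function toCompat(id: string): string {
  return id.startsWith('cse_') ? 'session_' + id.slice('cse_'.length) : id
}
function toInfra(id: string): string {
  return id.startsWith('session_') ? 'cse_' + id.slice('session_'.length) : id
}
```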

src/bridge/sessionRunner.ts (552 lines, new file)

@@ -0,0 +1,552 @@
import { type ChildProcess, spawn } from 'child_process'
import { createWriteStream, type WriteStream } from 'fs'
import { tmpdir } from 'os'
import { dirname, join } from 'path'
import { createInterface } from 'readline'
import { jsonParse, jsonStringify } from '../utils/slowOperations.js'
import { debugTruncate } from './debugUtils.js'
import type {
SessionActivity,
SessionDoneStatus,
SessionHandle,
SessionSpawner,
SessionSpawnOpts,
} from './types.js'
const MAX_ACTIVITIES = 10
const MAX_STDERR_LINES = 10
/**
* Sanitize a session ID for use in file names.
* Strips any characters that could cause path traversal (e.g. `../`, `/`)
* or other filesystem issues, replacing them with underscores.
*/
export function safeFilenameId(id: string): string {
return id.replace(/[^a-zA-Z0-9_-]/g, '_')
}
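One worked example of the sanitizer: every character outside `[a-zA-Z0-9_-]` becomes an underscore, so traversal sequences cannot survive into a file name. The copy below restates the one-line body for a self-contained check:

```typescript
// Same replacement as safeFilenameId above: non-[a-zA-Z0-9_-] chars
// (dots, slashes, etc.) map to underscores one-for-one.
function safeFilenameIdExample(id: string): string {
  return id.replace(/[^a-zA-Z0-9_-]/g, '_')
}
```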
/**
* A control_request emitted by the child CLI when it needs permission to
* execute a **specific** tool invocation (not a general capability check).
* The bridge forwards this to the server so the user can approve/deny.
*/
export type PermissionRequest = {
type: 'control_request'
request_id: string
request: {
/** Per-invocation permission check — "may I run this tool with these inputs?" */
subtype: 'can_use_tool'
tool_name: string
input: Record<string, unknown>
tool_use_id: string
}
}
type SessionSpawnerDeps = {
execPath: string
/**
* Arguments that must precede the CLI flags when spawning. Empty for
* compiled binaries (where execPath is the claude binary itself); contains
* the script path (process.argv[1]) for npm installs where execPath is the
* node runtime. Without this, node sees --sdk-url as a node option and
* exits with "bad option: --sdk-url" (see anthropics/claude-code#28334).
*/
scriptArgs: string[]
env: NodeJS.ProcessEnv
verbose: boolean
sandbox: boolean
debugFile?: string
permissionMode?: string
onDebug: (msg: string) => void
onActivity?: (sessionId: string, activity: SessionActivity) => void
onPermissionRequest?: (
sessionId: string,
request: PermissionRequest,
accessToken: string,
) => void
}
/** Map tool names to human-readable verbs for the status display. */
const TOOL_VERBS: Record<string, string> = {
Read: 'Reading',
Write: 'Writing',
Edit: 'Editing',
MultiEdit: 'Editing',
Bash: 'Running',
Glob: 'Searching',
Grep: 'Searching',
WebFetch: 'Fetching',
WebSearch: 'Searching',
Task: 'Running task',
FileReadTool: 'Reading',
FileWriteTool: 'Writing',
FileEditTool: 'Editing',
GlobTool: 'Searching',
GrepTool: 'Searching',
BashTool: 'Running',
NotebookEditTool: 'Editing notebook',
LSP: 'LSP',
}
function toolSummary(name: string, input: Record<string, unknown>): string {
const verb = TOOL_VERBS[name] ?? name
const target =
(input.file_path as string) ??
(input.filePath as string) ??
(input.pattern as string) ??
(input.command as string | undefined)?.slice(0, 60) ??
(input.url as string) ??
(input.query as string) ??
''
if (target) {
return `${verb} ${target}`
}
return verb
}
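The verb-plus-target mapping above can be checked with a trimmed copy of the table (two verbs and two target fields here stand in for the full set):

```typescript
// Trimmed copy of toolSummary's verb+target logic for illustration.
const VERBS: Record<string, string> = { Read: 'Reading', Bash: 'Running' }

function summary(name: string, input: Record<string, unknown>): string {
  const verb = VERBS[name] ?? name
  const target =
    (input.file_path as string) ??
    (input.command as string | undefined)?.slice(0, 60) ??
    ''
  return target ? `${verb} ${target}` : verb
}
```

Unmapped tool names fall through to the raw name, matching the `?? name` default in the original.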
function extractActivities(
line: string,
sessionId: string,
onDebug: (msg: string) => void,
): SessionActivity[] {
let parsed: unknown
try {
parsed = jsonParse(line)
} catch {
return []
}
if (!parsed || typeof parsed !== 'object') {
return []
}
const msg = parsed as Record<string, unknown>
const activities: SessionActivity[] = []
const now = Date.now()
switch (msg.type) {
case 'assistant': {
const message = msg.message as Record<string, unknown> | undefined
if (!message) break
const content = message.content
if (!Array.isArray(content)) break
for (const block of content) {
if (!block || typeof block !== 'object') continue
const b = block as Record<string, unknown>
if (b.type === 'tool_use') {
const name = (b.name as string) ?? 'Tool'
const input = (b.input as Record<string, unknown>) ?? {}
const summary = toolSummary(name, input)
activities.push({
type: 'tool_start',
summary,
timestamp: now,
})
onDebug(
`[bridge:activity] sessionId=${sessionId} tool_use name=${name} ${inputPreview(input)}`,
)
} else if (b.type === 'text') {
const text = (b.text as string) ?? ''
if (text.length > 0) {
activities.push({
type: 'text',
summary: text.slice(0, 80),
timestamp: now,
})
onDebug(
`[bridge:activity] sessionId=${sessionId} text "${text.slice(0, 100)}"`,
)
}
}
}
break
}
case 'result': {
const subtype = msg.subtype as string | undefined
if (subtype === 'success') {
activities.push({
type: 'result',
summary: 'Session completed',
timestamp: now,
})
onDebug(
`[bridge:activity] sessionId=${sessionId} result subtype=success`,
)
} else if (subtype) {
const errors = msg.errors as string[] | undefined
const errorSummary = errors?.[0] ?? `Error: ${subtype}`
activities.push({
type: 'error',
summary: errorSummary,
timestamp: now,
})
onDebug(
`[bridge:activity] sessionId=${sessionId} result subtype=${subtype} error="${errorSummary}"`,
)
} else {
onDebug(
`[bridge:activity] sessionId=${sessionId} result subtype=undefined`,
)
}
break
}
default:
break
}
return activities
}
/**
* Extract plain text from a replayed SDKUserMessage NDJSON line. Returns the
* trimmed text if this looks like a real human-authored message, otherwise
* undefined so the caller keeps waiting for the first real message.
*/
function extractUserMessageText(
msg: Record<string, unknown>,
): string | undefined {
// Skip tool-result user messages (wrapped subagent results) and synthetic
// caveat messages — neither is human-authored.
if (msg.parent_tool_use_id != null || msg.isSynthetic || msg.isReplay)
return undefined
const message = msg.message as Record<string, unknown> | undefined
const content = message?.content
let text: string | undefined
if (typeof content === 'string') {
text = content
} else if (Array.isArray(content)) {
for (const block of content) {
if (
block &&
typeof block === 'object' &&
(block as Record<string, unknown>).type === 'text'
) {
text = (block as Record<string, unknown>).text as string | undefined
break
}
}
}
text = text?.trim()
return text ? text : undefined
}
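The filtering above can be condensed into a self-contained copy for testing: skip tool-result, synthetic, and replayed messages, then take the raw string content or the first `text` block, requiring non-empty text after trimming:

```typescript
// Condensed copy of extractUserMessageText's filtering logic.
function firstHumanText(msg: Record<string, unknown>): string | undefined {
  // Tool-result wrappers, synthetic caveats, and replays are not human-authored.
  if (msg.parent_tool_use_id != null || msg.isSynthetic || msg.isReplay)
    return undefined
  const message = msg.message as Record<string, unknown> | undefined
  const content = message?.content
  let text: string | undefined
  if (typeof content === 'string') {
    text = content
  } else if (Array.isArray(content)) {
    const block = content.find(
      b => b && typeof b === 'object' && (b as Record<string, unknown>).type === 'text',
    ) as Record<string, unknown> | undefined
    text = block?.text as string | undefined
  }
  text = text?.trim()
  return text || undefined
}
```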
/** Build a short preview of tool input for debug logging. */
function inputPreview(input: Record<string, unknown>): string {
const parts: string[] = []
for (const [key, val] of Object.entries(input)) {
if (typeof val === 'string') {
parts.push(`${key}="${val.slice(0, 100)}"`)
}
if (parts.length >= 3) break
}
return parts.join(' ')
}
export function createSessionSpawner(deps: SessionSpawnerDeps): SessionSpawner {
return {
spawn(opts: SessionSpawnOpts, dir: string): SessionHandle {
// Debug file resolution:
// 1. If deps.debugFile is provided, use it with session ID suffix for uniqueness
// 2. If verbose or ant build, auto-generate a temp file path
// 3. Otherwise, no debug file
const safeId = safeFilenameId(opts.sessionId)
let debugFile: string | undefined
if (deps.debugFile) {
const ext = deps.debugFile.lastIndexOf('.')
if (ext > 0) {
debugFile = `${deps.debugFile.slice(0, ext)}-${safeId}${deps.debugFile.slice(ext)}`
} else {
debugFile = `${deps.debugFile}-${safeId}`
}
} else if (deps.verbose || process.env.USER_TYPE === 'ant') {
debugFile = join(tmpdir(), 'claude', `bridge-session-${safeId}.log`)
}
// Transcript file: write raw NDJSON lines for post-hoc analysis.
// Placed alongside the debug file when one is configured.
let transcriptStream: WriteStream | null = null
let transcriptPath: string | undefined
if (deps.debugFile) {
transcriptPath = join(
dirname(deps.debugFile),
`bridge-transcript-${safeId}.jsonl`,
)
transcriptStream = createWriteStream(transcriptPath, { flags: 'a' })
transcriptStream.on('error', err => {
deps.onDebug(
`[bridge:session] Transcript write error: ${err.message}`,
)
transcriptStream = null
})
deps.onDebug(`[bridge:session] Transcript log: ${transcriptPath}`)
}
const args = [
...deps.scriptArgs,
'--print',
'--sdk-url',
opts.sdkUrl,
'--session-id',
opts.sessionId,
'--input-format',
'stream-json',
'--output-format',
'stream-json',
'--replay-user-messages',
...(deps.verbose ? ['--verbose'] : []),
...(debugFile ? ['--debug-file', debugFile] : []),
...(deps.permissionMode
? ['--permission-mode', deps.permissionMode]
: []),
]
const env: NodeJS.ProcessEnv = {
...deps.env,
// Strip the bridge's OAuth token so the child CC process uses
// the session access token for inference instead.
CLAUDE_CODE_OAUTH_TOKEN: undefined,
CLAUDE_CODE_ENVIRONMENT_KIND: 'bridge',
...(deps.sandbox && { CLAUDE_CODE_FORCE_SANDBOX: '1' }),
CLAUDE_CODE_SESSION_ACCESS_TOKEN: opts.accessToken,
// v1: HybridTransport (WS reads + POST writes) to Session-Ingress.
// Harmless in v2 mode — transportUtils checks CLAUDE_CODE_USE_CCR_V2 first.
CLAUDE_CODE_POST_FOR_SESSION_INGRESS_V2: '1',
// v2: SSETransport + CCRClient to CCR's /v1/code/sessions/* endpoints.
// Same env vars environment-manager sets in the container path.
...(opts.useCcrV2 && {
CLAUDE_CODE_USE_CCR_V2: '1',
CLAUDE_CODE_WORKER_EPOCH: String(opts.workerEpoch),
}),
}
deps.onDebug(
`[bridge:session] Spawning sessionId=${opts.sessionId} sdkUrl=${opts.sdkUrl} accessToken=${opts.accessToken ? 'present' : 'MISSING'}`,
)
deps.onDebug(`[bridge:session] Child args: ${args.join(' ')}`)
if (debugFile) {
deps.onDebug(`[bridge:session] Debug log: ${debugFile}`)
}
// Pipe all three streams: stdin for control, stdout for NDJSON parsing,
// stderr for error capture and diagnostics.
const child: ChildProcess = spawn(deps.execPath, args, {
cwd: dir,
stdio: ['pipe', 'pipe', 'pipe'],
env,
windowsHide: true,
})
deps.onDebug(
`[bridge:session] sessionId=${opts.sessionId} pid=${child.pid}`,
)
const activities: SessionActivity[] = []
let currentActivity: SessionActivity | null = null
const lastStderr: string[] = []
let sigkillSent = false
let firstUserMessageSeen = false
// Buffer stderr for error diagnostics
if (child.stderr) {
const stderrRl = createInterface({ input: child.stderr })
stderrRl.on('line', line => {
// Forward stderr to bridge's stderr in verbose mode
if (deps.verbose) {
process.stderr.write(line + '\n')
}
// Ring buffer of last N lines
if (lastStderr.length >= MAX_STDERR_LINES) {
lastStderr.shift()
}
lastStderr.push(line)
})
}
// Parse NDJSON from child stdout
if (child.stdout) {
const rl = createInterface({ input: child.stdout })
rl.on('line', line => {
// Write raw NDJSON to transcript file
if (transcriptStream) {
transcriptStream.write(line + '\n')
}
// Log all messages flowing from the child CLI to the bridge
deps.onDebug(
`[bridge:ws] sessionId=${opts.sessionId} <<< ${debugTruncate(line)}`,
)
// In verbose mode, forward raw output to stderr
if (deps.verbose) {
process.stderr.write(line + '\n')
}
const extracted = extractActivities(
line,
opts.sessionId,
deps.onDebug,
)
for (const activity of extracted) {
// Maintain ring buffer
if (activities.length >= MAX_ACTIVITIES) {
activities.shift()
}
activities.push(activity)
currentActivity = activity
deps.onActivity?.(opts.sessionId, activity)
}
// Detect control_request and replayed user messages.
// extractActivities parses the same line but swallows parse errors
// and skips 'user' type — re-parse here is cheap (NDJSON lines are
// small) and keeps each path self-contained.
{
let parsed: unknown
try {
parsed = jsonParse(line)
} catch {
// Non-JSON line, skip detection
}
if (parsed && typeof parsed === 'object') {
const msg = parsed as Record<string, unknown>
if (msg.type === 'control_request') {
const request = msg.request as
| Record<string, unknown>
| undefined
if (
request?.subtype === 'can_use_tool' &&
deps.onPermissionRequest
) {
deps.onPermissionRequest(
opts.sessionId,
parsed as PermissionRequest,
opts.accessToken,
)
}
// interrupt is turn-level; the child handles it internally (print.ts)
} else if (
msg.type === 'user' &&
!firstUserMessageSeen &&
opts.onFirstUserMessage
) {
const text = extractUserMessageText(msg)
if (text) {
firstUserMessageSeen = true
opts.onFirstUserMessage(text)
}
}
}
}
})
}
const done = new Promise<SessionDoneStatus>(resolve => {
child.on('close', (code, signal) => {
// Close transcript stream on exit
if (transcriptStream) {
transcriptStream.end()
transcriptStream = null
}
if (signal === 'SIGTERM' || signal === 'SIGINT') {
deps.onDebug(
`[bridge:session] sessionId=${opts.sessionId} interrupted signal=${signal} pid=${child.pid}`,
)
resolve('interrupted')
} else if (code === 0) {
deps.onDebug(
`[bridge:session] sessionId=${opts.sessionId} completed exit_code=0 pid=${child.pid}`,
)
resolve('completed')
} else {
deps.onDebug(
`[bridge:session] sessionId=${opts.sessionId} failed exit_code=${code} pid=${child.pid}`,
)
resolve('failed')
}
})
child.on('error', err => {
deps.onDebug(
`[bridge:session] sessionId=${opts.sessionId} spawn error: ${err.message}`,
)
resolve('failed')
})
})
const handle: SessionHandle = {
sessionId: opts.sessionId,
done,
activities,
accessToken: opts.accessToken,
lastStderr,
get currentActivity(): SessionActivity | null {
return currentActivity
},
kill(): void {
if (!child.killed) {
deps.onDebug(
`[bridge:session] Sending SIGTERM to sessionId=${opts.sessionId} pid=${child.pid}`,
)
// On Windows, child.kill('SIGTERM') throws; use default signal.
if (process.platform === 'win32') {
child.kill()
} else {
child.kill('SIGTERM')
}
}
},
forceKill(): void {
// Use separate flag because child.killed is set when kill() is called,
// not when the process exits. We need to send SIGKILL even after SIGTERM.
if (!sigkillSent && child.pid) {
sigkillSent = true
deps.onDebug(
`[bridge:session] Sending SIGKILL to sessionId=${opts.sessionId} pid=${child.pid}`,
)
if (process.platform === 'win32') {
child.kill()
} else {
child.kill('SIGKILL')
}
}
},
writeStdin(data: string): void {
if (child.stdin && !child.stdin.destroyed) {
deps.onDebug(
`[bridge:ws] sessionId=${opts.sessionId} >>> ${debugTruncate(data)}`,
)
child.stdin.write(data)
}
},
updateAccessToken(token: string): void {
handle.accessToken = token
// Send the fresh token to the child process via stdin. The child's
// StructuredIO handles update_environment_variables messages by
// setting process.env directly, so getSessionIngressAuthToken()
// picks up the new token on the next refreshHeaders call.
handle.writeStdin(
jsonStringify({
type: 'update_environment_variables',
variables: { CLAUDE_CODE_SESSION_ACCESS_TOKEN: token },
}) + '\n',
)
deps.onDebug(
`[bridge:session] Sent token refresh via stdin for sessionId=${opts.sessionId}`,
)
},
}
return handle
},
}
}
export { extractActivities as _extractActivitiesForTesting }
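The token-refresh path above writes an `update_environment_variables` control message to the child's stdin as a single NDJSON line. A minimal sketch of building and parsing that message shape (the message type is taken from the code above; plain `JSON.parse`/`JSON.stringify` stand in for the source's `jsonParse`/`jsonStringify` wrappers):

```typescript
// Build the NDJSON line the bridge writes to the child's stdin when the
// session access token rotates. Mirrors updateAccessToken() above.
type UpdateEnvMessage = {
  type: 'update_environment_variables'
  variables: Record<string, string>
}

function buildTokenRefreshLine(token: string): string {
  const msg: UpdateEnvMessage = {
    type: 'update_environment_variables',
    variables: { CLAUDE_CODE_SESSION_ACCESS_TOKEN: token },
  }
  // NDJSON: one JSON document per line, newline-terminated.
  return JSON.stringify(msg) + '\n'
}

// What the child side would do per line read from stdin.
function parseLine(line: string): UpdateEnvMessage | null {
  try {
    const parsed = JSON.parse(line) as UpdateEnvMessage
    return parsed.type === 'update_environment_variables' ? parsed : null
  } catch {
    // Non-JSON line: ignore, matching the tolerant parsing above.
    return null
  }
}

const line = buildTokenRefreshLine('tok_123')
const parsed = parseLine(line)
```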

src/bridge/stub.ts (new file, 66 lines)

@@ -0,0 +1,66 @@
/**
* Bridge stub — no-op implementations for when BRIDGE_MODE is disabled.
*
* The bridge files themselves are safe to import even when bridge is off
* (no side effects at import time), and all call sites guard execution
* with `feature('BRIDGE_MODE')` checks. This stub exists as a safety net
* for any future code path that might reference bridge functionality
* outside the feature gate.
*
* Usage:
* import { isBridgeAvailable, noopBridgeHandle } from './stub.js'
*/
import type { ReplBridgeHandle } from './replBridge.js'
/** Returns false — bridge is not available in this build/configuration. */
export function isBridgeAvailable(): false {
return false
}
/**
* A no-op ReplBridgeHandle that silently discards all messages.
* Use this when code expects a handle but bridge is disabled.
*/
export const noopBridgeHandle: ReplBridgeHandle = {
bridgeSessionId: '',
environmentId: '',
sessionIngressUrl: '',
writeMessages() {},
writeSdkMessages() {},
sendControlRequest() {},
sendControlResponse() {},
sendControlCancelRequest() {},
sendResult() {},
async teardown() {},
}
/**
* No-op bridge logger that silently drops all output.
*/
export const noopBridgeLogger = {
printBanner() {},
logSessionStart() {},
logSessionComplete() {},
logSessionFailed() {},
logStatus() {},
logVerbose() {},
logError() {},
logReconnected() {},
updateIdleStatus() {},
updateReconnectingStatus() {},
updateSessionStatus() {},
clearStatus() {},
setRepoInfo() {},
setDebugLogPath() {},
setAttached() {},
updateFailedStatus() {},
toggleQr() {},
updateSessionCount() {},
setSpawnModeDisplay() {},
addSession() {},
updateSessionActivity() {},
setSessionTitle() {},
removeSession() {},
refreshDisplay() {},
}

src/bridge/trustedDevice.ts (new file, 212 lines)

@@ -0,0 +1,212 @@
import axios from 'axios'
import memoize from 'lodash-es/memoize.js'
import { hostname } from 'os'
import { getOauthConfig } from '../constants/oauth.js'
import {
checkGate_CACHED_OR_BLOCKING,
getFeatureValue_CACHED_MAY_BE_STALE,
} from '../services/analytics/growthbook.js'
import { logForDebugging } from '../utils/debug.js'
import { errorMessage } from '../utils/errors.js'
import { isEssentialTrafficOnly } from '../utils/privacyLevel.js'
import { getSecureStorage } from '../utils/secureStorage/index.js'
import { jsonStringify } from '../utils/slowOperations.js'
/**
* Trusted device token source for bridge (remote-control) sessions.
*
* Bridge sessions have SecurityTier=ELEVATED on the server (CCR v2).
* The server gates ConnectBridgeWorker on its own flag
* (sessions_elevated_auth_enforcement in Anthropic Main); this CLI-side
* flag controls whether the CLI sends X-Trusted-Device-Token at all.
* Two flags so rollout can be staged: flip CLI-side first (headers
* start flowing, server still no-ops), then flip server-side.
*
* Enrollment (POST /auth/trusted_devices) is gated server-side by
* account_session.created_at < 10min, so it must happen during /login.
* Token is persistent (90d rolling expiry) and stored in keychain.
*
* See anthropics/anthropic#274559 (spec), #310375 (B1b tenant RPCs),
* #295987 (B2 Python routes), #307150 (C1' CCR v2 gate).
*/
const TRUSTED_DEVICE_GATE = 'tengu_sessions_elevated_auth_enforcement'
function isGateEnabled(): boolean {
return getFeatureValue_CACHED_MAY_BE_STALE(TRUSTED_DEVICE_GATE, false)
}
// Memoized — secureStorage.read() spawns a macOS `security` subprocess (~40ms).
// bridgeApi.ts calls this from getHeaders() on every poll/heartbeat/ack.
// Cache cleared after enrollment (below) and on logout (clearAuthRelatedCaches).
//
// Only the storage read is memoized — the GrowthBook gate is checked live so
// that a gate flip after GrowthBook refresh takes effect without a restart.
const readStoredToken = memoize((): string | undefined => {
// Env var takes precedence for testing/canary.
const envToken = process.env.CLAUDE_TRUSTED_DEVICE_TOKEN
if (envToken) {
return envToken
}
return getSecureStorage().read()?.trustedDeviceToken
})
export function getTrustedDeviceToken(): string | undefined {
if (!isGateEnabled()) {
return undefined
}
return readStoredToken()
}
export function clearTrustedDeviceTokenCache(): void {
readStoredToken.cache?.clear?.()
}
/**
* Clear the stored trusted device token from secure storage and the memo cache.
* Called before enrollTrustedDevice() during /login so a stale token from the
* previous account isn't sent as X-Trusted-Device-Token while enrollment is
* in-flight (enrollTrustedDevice is async — bridge API calls between login and
* enrollment completion would otherwise still read the old cached token).
*/
export function clearTrustedDeviceToken(): void {
if (!isGateEnabled()) {
return
}
const secureStorage = getSecureStorage()
try {
const data = secureStorage.read()
if (data?.trustedDeviceToken) {
delete data.trustedDeviceToken
secureStorage.update(data)
}
} catch {
// Best-effort — don't block login if storage is inaccessible
}
readStoredToken.cache?.clear?.()
}
/**
* Enroll this device via POST /auth/trusted_devices and persist the token
* to keychain. Best-effort — logs and returns on failure so callers
* (post-login hooks) don't block the login flow.
*
* The server gates enrollment on account_session.created_at < 10min, so
* this must be called immediately after a fresh /login. Calling it later
* (e.g. lazy enrollment on /bridge 403) will fail with 403 stale_session.
*/
export async function enrollTrustedDevice(): Promise<void> {
try {
// checkGate_CACHED_OR_BLOCKING awaits any in-flight GrowthBook re-init
// (triggered by refreshGrowthBookAfterAuthChange in login.tsx) before
// reading the gate, so we get the post-refresh value.
if (!(await checkGate_CACHED_OR_BLOCKING(TRUSTED_DEVICE_GATE))) {
logForDebugging(
`[trusted-device] Gate ${TRUSTED_DEVICE_GATE} is off, skipping enrollment`,
)
return
}
// If CLAUDE_TRUSTED_DEVICE_TOKEN is set (e.g. by an enterprise wrapper),
// skip enrollment — the env var takes precedence in readStoredToken() so
// any enrolled token would be shadowed and never used.
if (process.env.CLAUDE_TRUSTED_DEVICE_TOKEN) {
logForDebugging(
'[trusted-device] CLAUDE_TRUSTED_DEVICE_TOKEN env var is set, skipping enrollment (env var takes precedence)',
)
return
}
// Lazy require — utils/auth.ts transitively pulls ~1300 modules
// (config → file → permissions → sessionStorage → commands). Daemon callers
// of getTrustedDeviceToken() don't need this; only /login does.
/* eslint-disable @typescript-eslint/no-require-imports */
const { getClaudeAIOAuthTokens } =
require('../utils/auth.js') as typeof import('../utils/auth.js')
/* eslint-enable @typescript-eslint/no-require-imports */
const accessToken = getClaudeAIOAuthTokens()?.accessToken
if (!accessToken) {
logForDebugging('[trusted-device] No OAuth token, skipping enrollment')
return
}
// Always re-enroll on /login — the existing token may belong to a
// different account (account-switch without /logout). Skipping enrollment
// would send the old account's token on the new account's bridge calls.
const secureStorage = getSecureStorage()
if (isEssentialTrafficOnly()) {
logForDebugging(
'[trusted-device] Essential traffic only, skipping enrollment',
)
return
}
const baseUrl = getOauthConfig().BASE_API_URL
let response
try {
response = await axios.post<{
device_token?: string
device_id?: string
}>(
`${baseUrl}/api/auth/trusted_devices`,
{ display_name: `Claude Code on ${hostname()} · ${process.platform}` },
{
headers: {
Authorization: `Bearer ${accessToken}`,
'Content-Type': 'application/json',
},
timeout: 10_000,
validateStatus: s => s < 500,
},
)
} catch (err: unknown) {
logForDebugging(
`[trusted-device] Enrollment request failed: ${errorMessage(err)}`,
)
return
}
if (response.status !== 200 && response.status !== 201) {
logForDebugging(
`[trusted-device] Enrollment failed ${response.status}: ${jsonStringify(response.data).slice(0, 200)}`,
)
return
}
const token = response.data?.device_token
if (!token || typeof token !== 'string') {
logForDebugging(
'[trusted-device] Enrollment response missing device_token field',
)
return
}
try {
const storageData = secureStorage.read()
if (!storageData) {
logForDebugging(
'[trusted-device] Cannot read storage, skipping token persist',
)
return
}
storageData.trustedDeviceToken = token
const result = secureStorage.update(storageData)
if (!result.success) {
logForDebugging(
`[trusted-device] Failed to persist token: ${result.warning ?? 'unknown'}`,
)
return
}
readStoredToken.cache?.clear?.()
logForDebugging(
`[trusted-device] Enrolled device_id=${response.data.device_id ?? 'unknown'}`,
)
} catch (err: unknown) {
logForDebugging(
`[trusted-device] Storage write failed: ${errorMessage(err)}`,
)
}
} catch (err: unknown) {
logForDebugging(`[trusted-device] Enrollment error: ${errorMessage(err)}`)
}
}

src/bridge/types.ts (new file, 264 lines)

@@ -0,0 +1,264 @@
/** Default per-session timeout (24 hours). */
export const DEFAULT_SESSION_TIMEOUT_MS = 24 * 60 * 60 * 1000
/** Reusable login guidance appended to bridge auth errors. */
export const BRIDGE_LOGIN_INSTRUCTION =
'Remote Control is only available with claude.ai subscriptions. Please use `/login` to sign in with your claude.ai account.'
/** Full error printed when `claude remote-control` is run without auth. */
export const BRIDGE_LOGIN_ERROR =
'Error: You must be logged in to use Remote Control.\n\n' +
BRIDGE_LOGIN_INSTRUCTION
/** Shown when the user disconnects Remote Control (via /remote-control or ultraplan launch). */
export const REMOTE_CONTROL_DISCONNECTED_MSG = 'Remote Control disconnected.'
// --- Protocol types for the environments API ---
export type WorkData = {
type: 'session' | 'healthcheck'
id: string
}
export type WorkResponse = {
id: string
type: 'work'
environment_id: string
state: string
data: WorkData
secret: string // base64url-encoded JSON
created_at: string
}
export type WorkSecret = {
version: number
session_ingress_token: string
api_base_url: string
sources: Array<{
type: string
git_info?: { type: string; repo: string; ref?: string; token?: string }
}>
auth: Array<{ type: string; token: string }>
claude_code_args?: Record<string, string> | null
mcp_config?: unknown | null
environment_variables?: Record<string, string> | null
/**
* Server-driven CCR v2 selector. Set by prepare_work_secret() when the
* session was created via the v2 compat layer (ccr_v2_compat_enabled).
* Same field the BYOC runner reads at environment-runner/sessionExecutor.ts.
*/
use_code_sessions?: boolean
}
export type SessionDoneStatus = 'completed' | 'failed' | 'interrupted'
export type SessionActivityType = 'tool_start' | 'text' | 'result' | 'error'
export type SessionActivity = {
type: SessionActivityType
summary: string // e.g. "Editing src/foo.ts", "Reading package.json"
timestamp: number
}
/**
* How `claude remote-control` chooses session working directories.
* - `single-session`: one session in cwd, bridge tears down when it ends
* - `worktree`: persistent server, every session gets an isolated git worktree
* - `same-dir`: persistent server, every session shares cwd (can stomp each other)
*/
export type SpawnMode = 'single-session' | 'worktree' | 'same-dir'
/**
* Well-known worker_type values THIS codebase produces. Sent as
* `metadata.worker_type` at environment registration so claude.ai can filter
* the session picker by origin (e.g. assistant tab only shows assistant
* workers). The backend treats this as an opaque string — desktop cowork
* sends `"cowork"`, which isn't in this union. REPL code uses this narrow
* type for its own exhaustiveness; wire-level fields accept any string.
*/
export type BridgeWorkerType = 'claude_code' | 'claude_code_assistant'
export type BridgeConfig = {
dir: string
machineName: string
branch: string
gitRepoUrl: string | null
maxSessions: number
spawnMode: SpawnMode
verbose: boolean
sandbox: boolean
/** Client-generated UUID identifying this bridge instance. */
bridgeId: string
/**
* Sent as metadata.worker_type so web clients can filter by origin.
* Backend treats this as opaque — any string, not just BridgeWorkerType.
*/
workerType: string
/** Client-generated UUID for idempotent environment registration. */
environmentId: string
/**
* Backend-issued environment_id to reuse on re-register. When set, the
* backend treats registration as a reconnect to the existing environment
* instead of creating a new one. Used by `claude remote-control
* --session-id` resume. Must be a backend-format ID — client UUIDs are
* rejected with 400.
*/
reuseEnvironmentId?: string
/** API base URL the bridge is connected to (used for polling). */
apiBaseUrl: string
/** Session ingress base URL for WebSocket connections (may differ from apiBaseUrl locally). */
sessionIngressUrl: string
/** Debug file path passed via --debug-file. */
debugFile?: string
/** Per-session timeout in milliseconds. Sessions exceeding this are killed. */
sessionTimeoutMs?: number
}
// --- Dependency interfaces (for testability) ---
/**
* A control_response event sent back to a session (e.g. a permission decision).
* The `subtype` is `'success'` per the SDK protocol; the inner `response`
* carries the permission decision payload (e.g. `{ behavior: 'allow' }`).
*/
export type PermissionResponseEvent = {
type: 'control_response'
response: {
subtype: 'success'
request_id: string
response: Record<string, unknown>
}
}
export type BridgeApiClient = {
registerBridgeEnvironment(config: BridgeConfig): Promise<{
environment_id: string
environment_secret: string
}>
pollForWork(
environmentId: string,
environmentSecret: string,
signal?: AbortSignal,
reclaimOlderThanMs?: number,
): Promise<WorkResponse | null>
acknowledgeWork(
environmentId: string,
workId: string,
sessionToken: string,
): Promise<void>
/** Stop a work item via the environments API. */
stopWork(environmentId: string, workId: string, force: boolean): Promise<void>
/** Deregister/delete the bridge environment on graceful shutdown. */
deregisterEnvironment(environmentId: string): Promise<void>
/** Send a permission response (control_response) to a session via the session events API. */
sendPermissionResponseEvent(
sessionId: string,
event: PermissionResponseEvent,
sessionToken: string,
): Promise<void>
/** Archive a session so it no longer appears as active on the server. */
archiveSession(sessionId: string): Promise<void>
/**
* Force-stop stale worker instances and re-queue a session on an environment.
* Used by `--session-id` to resume a session after the original bridge died.
*/
reconnectSession(environmentId: string, sessionId: string): Promise<void>
/**
* Send a lightweight heartbeat for an active work item, extending its lease.
* Uses SessionIngressAuth (JWT, no DB hit) instead of EnvironmentSecretAuth.
* Returns the server's response with lease status.
*/
heartbeatWork(
environmentId: string,
workId: string,
sessionToken: string,
): Promise<{ lease_extended: boolean; state: string }>
}
export type SessionHandle = {
sessionId: string
done: Promise<SessionDoneStatus>
kill(): void
forceKill(): void
activities: SessionActivity[] // ring buffer of recent activities (last ~10)
currentActivity: SessionActivity | null // most recent
accessToken: string // session_ingress_token for API calls
lastStderr: string[] // ring buffer of last stderr lines
writeStdin(data: string): void // write directly to child stdin
/** Update the access token for a running session (e.g. after token refresh). */
updateAccessToken(token: string): void
}
export type SessionSpawnOpts = {
sessionId: string
sdkUrl: string
accessToken: string
/** When true, spawn the child with CCR v2 env vars (SSE transport + CCRClient). */
useCcrV2?: boolean
/** Required when useCcrV2 is true. Obtained from POST /worker/register. */
workerEpoch?: number
/**
* Fires once with the text of the first real user message seen on the
* child's stdout (via --replay-user-messages). Lets the caller derive a
* session title when none exists yet. Tool-result and synthetic user
* messages are skipped.
*/
onFirstUserMessage?: (text: string) => void
}
export type SessionSpawner = {
spawn(opts: SessionSpawnOpts, dir: string): SessionHandle
}
export type BridgeLogger = {
printBanner(config: BridgeConfig, environmentId: string): void
logSessionStart(sessionId: string, prompt: string): void
logSessionComplete(sessionId: string, durationMs: number): void
logSessionFailed(sessionId: string, error: string): void
logStatus(message: string): void
logVerbose(message: string): void
logError(message: string): void
/** Log a reconnection success event after recovering from connection errors. */
logReconnected(disconnectedMs: number): void
/** Show idle status with repo/branch info and shimmer animation. */
updateIdleStatus(): void
/** Show reconnecting status in the live display. */
updateReconnectingStatus(delayStr: string, elapsedStr: string): void
updateSessionStatus(
sessionId: string,
elapsed: string,
activity: SessionActivity,
trail: string[],
): void
clearStatus(): void
/** Set repository info for status line display. */
setRepoInfo(repoName: string, branch: string): void
/** Set debug log glob shown above the status line (ant users). */
setDebugLogPath(path: string): void
/** Transition to "Attached" state when a session starts. */
setAttached(sessionId: string): void
/** Show failed status in the live display. */
updateFailedStatus(error: string): void
/** Toggle QR code visibility. */
toggleQr(): void
/** Update the "<n> of <m> sessions" indicator and spawn mode hint. */
updateSessionCount(active: number, max: number, mode: SpawnMode): void
/** Update the spawn mode shown in the session-count line. Pass null to hide (single-session or toggle unavailable). */
setSpawnModeDisplay(mode: 'same-dir' | 'worktree' | null): void
/** Register a new session for multi-session display (called after spawn succeeds). */
addSession(sessionId: string, url: string): void
/** Update the per-session activity summary (tool being run) in the multi-session list. */
updateSessionActivity(sessionId: string, activity: SessionActivity): void
/**
* Set a session's display title. In multi-session mode, updates the bullet list
* entry. In single-session mode, also shows the title in the main status line.
* Triggers a render (guarded against reconnecting/failed states).
*/
setSessionTitle(sessionId: string, title: string): void
/** Remove a session from the multi-session display when it ends. */
removeSession(sessionId: string): void
/** Force a re-render of the status display (for multi-session activity refresh). */
refreshDisplay(): void
}

src/bridge/workSecret.ts (new file, 129 lines)

@@ -0,0 +1,129 @@
import axios from 'axios'
import { jsonParse, jsonStringify } from '../utils/slowOperations.js'
import type { WorkSecret } from './types.js'
/** Decode a base64url-encoded work secret and validate its version. */
export function decodeWorkSecret(secret: string): WorkSecret {
const json = Buffer.from(secret, 'base64url').toString('utf-8')
const parsed: unknown = jsonParse(json)
if (
!parsed ||
typeof parsed !== 'object' ||
!('version' in parsed) ||
parsed.version !== 1
) {
throw new Error(
`Unsupported work secret version: ${parsed && typeof parsed === 'object' && 'version' in parsed ? parsed.version : 'unknown'}`,
)
}
const obj = parsed as Record<string, unknown>
if (
typeof obj.session_ingress_token !== 'string' ||
obj.session_ingress_token.length === 0
) {
throw new Error(
'Invalid work secret: missing or empty session_ingress_token',
)
}
if (typeof obj.api_base_url !== 'string') {
throw new Error('Invalid work secret: missing api_base_url')
}
return parsed as WorkSecret
}
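A round-trip sketch of the wire format `decodeWorkSecret` handles: a base64url-encoded JSON object carrying a version field plus the required `session_ingress_token` and `api_base_url`. Field values here are made-up examples, not real tokens, and only a subset of the `WorkSecret` fields is modeled:

```typescript
import { Buffer } from 'node:buffer'

type MiniWorkSecret = {
  version: number
  session_ingress_token: string
  api_base_url: string
}

// Encode the way the server side would: JSON, then base64url.
function encodeSecret(secret: MiniWorkSecret): string {
  return Buffer.from(JSON.stringify(secret), 'utf-8').toString('base64url')
}

// Decode and validate, mirroring the version and field checks above.
function decodeSecret(encoded: string): MiniWorkSecret {
  const parsed = JSON.parse(
    Buffer.from(encoded, 'base64url').toString('utf-8'),
  ) as MiniWorkSecret
  if (parsed.version !== 1) {
    throw new Error(`Unsupported work secret version: ${parsed.version}`)
  }
  if (!parsed.session_ingress_token) {
    throw new Error('Invalid work secret: missing session_ingress_token')
  }
  return parsed
}

const wire = encodeSecret({
  version: 1,
  session_ingress_token: 'ingress_tok',
  api_base_url: 'https://api.example.com',
})
const decoded = decodeSecret(wire)
```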
/**
* Build a WebSocket SDK URL from the API base URL and session ID.
* Strips the HTTP(S) protocol and constructs a ws(s):// ingress URL.
*
* Uses /v2/ for localhost (direct to session-ingress, no Envoy rewrite)
* and /v1/ for production (Envoy rewrites /v1/ → /v2/).
*/
export function buildSdkUrl(apiBaseUrl: string, sessionId: string): string {
const isLocalhost =
apiBaseUrl.includes('localhost') || apiBaseUrl.includes('127.0.0.1')
const protocol = isLocalhost ? 'ws' : 'wss'
const version = isLocalhost ? 'v2' : 'v1'
const host = apiBaseUrl.replace(/^https?:\/\//, '').replace(/\/+$/, '')
return `${protocol}://${host}/${version}/session_ingress/ws/${sessionId}`
}
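The URL derivation above, reimplemented for illustration: localhost gets `ws://` and `/v2/` (direct to session-ingress), anything else gets `wss://` and `/v1/` (Envoy rewrites `/v1/` to `/v2/`). Hostnames below are examples only:

```typescript
// Derive the session-ingress WebSocket URL from an HTTP(S) API base URL.
function sdkUrl(apiBaseUrl: string, sessionId: string): string {
  const isLocalhost =
    apiBaseUrl.includes('localhost') || apiBaseUrl.includes('127.0.0.1')
  const protocol = isLocalhost ? 'ws' : 'wss'
  const version = isLocalhost ? 'v2' : 'v1'
  // Strip the http(s) scheme and any trailing slashes before rebuilding.
  const host = apiBaseUrl.replace(/^https?:\/\//, '').replace(/\/+$/, '')
  return `${protocol}://${host}/${version}/session_ingress/ws/${sessionId}`
}

const local = sdkUrl('http://localhost:8080/', 'sess_1')
const prod = sdkUrl('https://api.example.com', 'sess_1')
```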
/**
* Compare two session IDs regardless of their tagged-ID prefix.
*
* Tagged IDs have the form {tag}_{body} or {tag}_staging_{body}, where the
* body encodes a UUID. CCR v2's compat layer returns `session_*` to v1 API
* clients (compat/convert.go:41) but the infrastructure layer (sandbox-gateway
* work queue, work poll response) uses `cse_*` (compat/CLAUDE.md:13). Both
* have the same underlying UUID.
*
* Without this, replBridge rejects its own session as "foreign" at the
* work-received check when the ccr_v2_compat_enabled gate is on.
*/
export function sameSessionId(a: string, b: string): boolean {
if (a === b) return true
// The body is everything after the last underscore — this handles both
// `{tag}_{body}` and `{tag}_staging_{body}`.
const aBody = a.slice(a.lastIndexOf('_') + 1)
const bBody = b.slice(b.lastIndexOf('_') + 1)
// Guard against IDs with no underscore (bare UUIDs): lastIndexOf returns -1,
// slice(0) returns the whole string, and we already checked a === b above.
// Require a minimum length to avoid accidental matches on short suffixes
// (e.g. single-char tag remnants from malformed IDs).
return aBody.length >= 4 && aBody === bBody
}
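A sketch of the tagged-ID comparison above: strip everything up to the last underscore and compare the remaining body, so `session_<uuid>` (compat layer) matches `cse_<uuid>` (work queue). Bodies shorter than 4 characters never match unless the full strings are equal. The UUID body below is a made-up example:

```typescript
// Compare two tagged session IDs by their post-underscore body.
function sameBody(a: string, b: string): boolean {
  if (a === b) return true
  // lastIndexOf handles both {tag}_{body} and {tag}_staging_{body}.
  const aBody = a.slice(a.lastIndexOf('_') + 1)
  const bBody = b.slice(b.lastIndexOf('_') + 1)
  // Minimum length guards against accidental short-suffix matches.
  return aBody.length >= 4 && aBody === bBody
}

const uuid = '0f3b2c1d4e5f'
const match = sameBody(`session_${uuid}`, `cse_${uuid}`)
const staging = sameBody(`cse_staging_${uuid}`, `session_${uuid}`)
const foreign = sameBody('session_aaaa', 'cse_bbbb')
```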
/**
* Build a CCR v2 session URL from the API base URL and session ID.
* Unlike buildSdkUrl, this returns an HTTP(S) URL (not ws://) and points at
* /v1/code/sessions/{id} — the child CC will derive the SSE stream path
* and worker endpoints from this base.
*/
export function buildCCRv2SdkUrl(
apiBaseUrl: string,
sessionId: string,
): string {
const base = apiBaseUrl.replace(/\/+$/, '')
return `${base}/v1/code/sessions/${sessionId}`
}
/**
* Register this bridge as the worker for a CCR v2 session.
* Returns the worker_epoch, which must be passed to the child CC process
* so its CCRClient can include it in every heartbeat/state/event request.
*
* Mirrors what environment-manager does in the container path
* (api-go/environment-manager/cmd/cmd_task_run.go RegisterWorker).
*/
export async function registerWorker(
sessionUrl: string,
accessToken: string,
): Promise<number> {
const response = await axios.post(
`${sessionUrl}/worker/register`,
{},
{
headers: {
Authorization: `Bearer ${accessToken}`,
'Content-Type': 'application/json',
'anthropic-version': '2023-06-01',
},
timeout: 10_000,
},
)
// protojson serializes int64 as a string to avoid JS number precision loss;
// the Go side may also return a number depending on encoder settings.
const raw = response.data?.worker_epoch
const epoch = typeof raw === 'string' ? Number(raw) : raw
if (
typeof epoch !== 'number' ||
!Number.isFinite(epoch) ||
!Number.isSafeInteger(epoch)
) {
throw new Error(
`registerWorker: invalid worker_epoch in response: ${jsonStringify(response.data)}`,
)
}
return epoch
}
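The `worker_epoch` handling above guards against protojson's int64-as-string serialization. A self-contained sketch of that normalization (example values only):

```typescript
// Accept worker_epoch as either a string (protojson int64) or a number,
// rejecting anything that isn't a finite, safe integer.
function parseWorkerEpoch(raw: unknown): number {
  const epoch = typeof raw === 'string' ? Number(raw) : raw
  if (
    typeof epoch !== 'number' ||
    !Number.isFinite(epoch) ||
    !Number.isSafeInteger(epoch)
  ) {
    throw new Error(`invalid worker_epoch: ${JSON.stringify(raw)}`)
  }
  return epoch
}

const fromString = parseWorkerEpoch('42')
const fromNumber = parseWorkerEpoch(7)
let rejected = false
try {
  // Above Number.MAX_SAFE_INTEGER: Number() rounds it, so the safe-integer
  // check is what catches the precision loss.
  parseWorkerEpoch('9007199254740993')
} catch {
  rejected = true
}
```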

File diff suppressed because one or more lines are too long

src/buddy/companion.ts (new file, 135 lines)

@@ -0,0 +1,135 @@
import { getGlobalConfig } from '../utils/config.js'
import {
type Companion,
type CompanionBones,
EYES,
HATS,
RARITIES,
RARITY_WEIGHTS,
type Rarity,
SPECIES,
STAT_NAMES,
type StatName,
} from './types.js'
// Mulberry32 — tiny seeded PRNG, good enough for picking ducks
function mulberry32(seed: number): () => number {
let a = seed >>> 0
return function () {
a |= 0
a = (a + 0x6d2b79f5) | 0
let t = Math.imul(a ^ (a >>> 15), 1 | a)
t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t
return ((t ^ (t >>> 14)) >>> 0) / 4294967296
}
}
function hashString(s: string): number {
if (typeof Bun !== 'undefined') {
return Number(BigInt(Bun.hash(s)) & 0xffffffffn)
}
let h = 2166136261
for (let i = 0; i < s.length; i++) {
h ^= s.charCodeAt(i)
h = Math.imul(h, 16777619)
}
return h >>> 0
}
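The seeding pipeline above (FNV-1a fallback plus Mulberry32) turns an arbitrary string into a deterministic [0, 1) stream: the same input string always yields the same roll sequence, with no stored state. A self-contained sketch, omitting the `Bun.hash` fast path:

```typescript
// FNV-1a: fold each char code into a 32-bit hash.
function fnv1a(s: string): number {
  let h = 2166136261
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i)
    h = Math.imul(h, 16777619)
  }
  return h >>> 0
}

// Mulberry32: tiny seeded PRNG producing floats in [0, 1).
function mulberry(seed: number): () => number {
  let a = seed >>> 0
  return () => {
    a = (a + 0x6d2b79f5) | 0
    let t = Math.imul(a ^ (a >>> 15), 1 | a)
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296
  }
}

// Two generators from the same string produce identical streams.
const rngA = mulberry(fnv1a('user-123' + 'salt'))
const rngB = mulberry(fnv1a('user-123' + 'salt'))
const a = [rngA(), rngA(), rngA()]
const b = [rngB(), rngB(), rngB()]
```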
function pick<T>(rng: () => number, arr: readonly T[]): T {
return arr[Math.floor(rng() * arr.length)]!
}
function rollRarity(rng: () => number): Rarity {
const total = Object.values(RARITY_WEIGHTS).reduce((a, b) => a + b, 0)
let roll = rng() * total
for (const rarity of RARITIES) {
roll -= RARITY_WEIGHTS[rarity]
if (roll < 0) return rarity
}
return 'common'
}
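The weighted roll above, sketched with made-up example weights (the real `RARITY_WEIGHTS` values live in `./types.js` and are not shown here): walk the table subtracting each weight from a uniform roll over the total; the bucket where the roll goes negative wins, with the first tier as the fallback.

```typescript
// Example weights only; insertion order of keys defines walk order.
const WEIGHTS = { common: 70, uncommon: 20, rare: 7, epic: 2.5, legendary: 0.5 }
type Tier = keyof typeof WEIGHTS
const TIERS = Object.keys(WEIGHTS) as Tier[]

function rollTier(rng: () => number): Tier {
  const total = TIERS.reduce((sum, t) => sum + WEIGHTS[t], 0)
  let roll = rng() * total
  for (const tier of TIERS) {
    roll -= WEIGHTS[tier]
    if (roll < 0) return tier
  }
  return 'common'
}

// A roll of 0 lands in the first bucket; one near 1 lands in the last.
const low = rollTier(() => 0)
const high = rollTier(() => 0.9999)
```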
const RARITY_FLOOR: Record<Rarity, number> = {
common: 5,
uncommon: 15,
rare: 25,
epic: 35,
legendary: 50,
}
// One peak stat, one dump stat, rest scattered. Rarity bumps the floor.
function rollStats(
rng: () => number,
rarity: Rarity,
): Record<StatName, number> {
const floor = RARITY_FLOOR[rarity]
const peak = pick(rng, STAT_NAMES)
let dump = pick(rng, STAT_NAMES)
while (dump === peak) dump = pick(rng, STAT_NAMES)
const stats = {} as Record<StatName, number>
for (const name of STAT_NAMES) {
if (name === peak) {
stats[name] = Math.min(100, floor + 50 + Math.floor(rng() * 30))
} else if (name === dump) {
stats[name] = Math.max(1, floor - 10 + Math.floor(rng() * 15))
} else {
stats[name] = floor + Math.floor(rng() * 40)
}
}
return stats
}
const SALT = 'friend-2026-401'
export type Roll = {
bones: CompanionBones
inspirationSeed: number
}
function rollFrom(rng: () => number): Roll {
const rarity = rollRarity(rng)
const bones: CompanionBones = {
rarity,
species: pick(rng, SPECIES),
eye: pick(rng, EYES),
hat: rarity === 'common' ? 'none' : pick(rng, HATS),
shiny: rng() < 0.01,
stats: rollStats(rng, rarity),
}
return { bones, inspirationSeed: Math.floor(rng() * 1e9) }
}
// Called from three hot paths (500ms sprite tick, per-keystroke PromptInput,
// per-turn observer) with the same userId → cache the deterministic result.
let rollCache: { key: string; value: Roll } | undefined
export function roll(userId: string): Roll {
const key = userId + SALT
if (rollCache?.key === key) return rollCache.value
const value = rollFrom(mulberry32(hashString(key)))
rollCache = { key, value }
return value
}
export function rollWithSeed(seed: string): Roll {
return rollFrom(mulberry32(hashString(seed)))
}
export function companionUserId(): string {
const config = getGlobalConfig()
return config.oauthAccount?.accountUuid ?? config.userID ?? 'anon'
}
// Regenerate bones from userId, merge with stored soul. Bones never persist
// so species renames and SPECIES-array edits can't break stored companions,
// and editing config.companion can't fake a rarity.
export function getCompanion(): Companion | undefined {
const stored = getGlobalConfig().companion
if (!stored) return undefined
const { bones } = roll(companionUserId())
// bones last so stale bones fields in old-format configs get overridden
return { ...stored, ...bones }
}

src/buddy/prompt.ts (new file, 38 lines)

@@ -0,0 +1,38 @@
import { feature } from 'bun:bundle'
import type { Message } from '../types/message.js'
import type { Attachment } from '../utils/attachments.js'
import { getGlobalConfig } from '../utils/config.js'
import { getCompanion } from './companion.js'
export function companionIntroText(name: string, species: string): string {
return `# Companion
A small ${species} named ${name} sits beside the user's input box and occasionally comments in a speech bubble. You're not ${name} — it's a separate watcher.
When the user addresses ${name} directly (by name), its bubble will answer. Your job in that moment is to stay out of the way: respond in ONE line or less, or just answer any part of the message meant for you. Don't explain that you're not ${name} — they know. Don't narrate what ${name} might say — the bubble handles that.`
}
export function getCompanionIntroAttachment(
messages: Message[] | undefined,
): Attachment[] {
if (!feature('BUDDY')) return []
const companion = getCompanion()
if (!companion || getGlobalConfig().companionMuted) return []
// Skip if already announced for this companion.
for (const msg of messages ?? []) {
if (msg.type !== 'attachment') continue
if (msg.attachment.type !== 'companion_intro') continue
if (msg.attachment.name === companion.name) return []
}
return [
{
type: 'companion_intro',
name: companion.name,
species: companion.species,
},
]
}
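The dedup scan above can be isolated into a small helper; `Msg` here is a hypothetical simplification of the real `Message` union, kept only to show the early-continue shape of the loop:

```typescript
type Msg =
  | { type: 'attachment'; attachment: { type: string; name?: string } }
  | { type: 'user'; text: string }

// True once a companion_intro attachment for this exact name exists in history.
function alreadyIntroduced(messages: Msg[], companionName: string): boolean {
  for (const msg of messages) {
    if (msg.type !== 'attachment') continue
    if (msg.attachment.type !== 'companion_intro') continue
    if (msg.attachment.name === companionName) return true
  }
  return false
}
```

Matching on the name rather than just the attachment type means a renamed companion gets re-announced, mirroring the check in `getCompanionIntroAttachment`.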

src/buddy/sprites.ts (new file, 516 lines)

@@ -0,0 +1,516 @@
import type { CompanionBones, Eye, Hat, Species } from './types.js'
import {
axolotl,
blob,
cactus,
capybara,
cat,
chonk,
dragon,
duck,
ghost,
goose,
mushroom,
octopus,
owl,
penguin,
rabbit,
robot,
snail,
turtle,
} from './types.js'
// Each sprite is 5 lines tall, 12 wide (after {E}→1char substitution).
// Multiple frames per species for idle fidget animation.
// Line 0 is the hat slot — must be blank in frames 0-1; frame 2 may use it.
const BODIES: Record<Species, string[][]> = {
[duck]: [
[
' ',
' __ ',
' <({E} )___ ',
' ( ._> ',
' `--´ ',
],
[
' ',
' __ ',
' <({E} )___ ',
' ( ._> ',
' `--´~ ',
],
[
' ',
' __ ',
' <({E} )___ ',
' ( .__> ',
' `--´ ',
],
],
[goose]: [
[
' ',
' ({E}> ',
' || ',
' _(__)_ ',
' ^^^^ ',
],
[
' ',
' ({E}> ',
' || ',
' _(__)_ ',
' ^^^^ ',
],
[
' ',
' ({E}>> ',
' || ',
' _(__)_ ',
' ^^^^ ',
],
],
[blob]: [
[
' ',
' .----. ',
' ( {E} {E} ) ',
' ( ) ',
' `----´ ',
],
[
' ',
' .------. ',
' ( {E} {E} ) ',
' ( ) ',
' `------´ ',
],
[
' ',
' .--. ',
' ({E} {E}) ',
' ( ) ',
' `--´ ',
],
],
[cat]: [
[
' ',
' /\\_/\\ ',
' ( {E} {E}) ',
' ( ω ) ',
' (")_(") ',
],
[
' ',
' /\\_/\\ ',
' ( {E} {E}) ',
' ( ω ) ',
' (")_(")~ ',
],
[
' ',
' /\\-/\\ ',
' ( {E} {E}) ',
' ( ω ) ',
' (")_(") ',
],
],
[dragon]: [
[
' ',
' /^\\ /^\\ ',
' < {E} {E} > ',
' ( ~~ ) ',
' `-vvvv-´ ',
],
[
' ',
' /^\\ /^\\ ',
' < {E} {E} > ',
' ( ) ',
' `-vvvv-´ ',
],
[
' ~ ~ ',
' /^\\ /^\\ ',
' < {E} {E} > ',
' ( ~~ ) ',
' `-vvvv-´ ',
],
],
[octopus]: [
[
' ',
' .----. ',
' ( {E} {E} ) ',
' (______) ',
' /\\/\\/\\/\\ ',
],
[
' ',
' .----. ',
' ( {E} {E} ) ',
' (______) ',
' \\/\\/\\/\\/ ',
],
[
' o ',
' .----. ',
' ( {E} {E} ) ',
' (______) ',
' /\\/\\/\\/\\ ',
],
],
[owl]: [
[
' ',
' /\\ /\\ ',
' (({E})({E})) ',
' ( >< ) ',
' `----´ ',
],
[
' ',
' /\\ /\\ ',
' (({E})({E})) ',
' ( >< ) ',
' .----. ',
],
[
' ',
' /\\ /\\ ',
' (({E})(-)) ',
' ( >< ) ',
' `----´ ',
],
],
[penguin]: [
[
' ',
' .---. ',
' ({E}>{E}) ',
' /( )\\ ',
' `---´ ',
],
[
' ',
' .---. ',
' ({E}>{E}) ',
' |( )| ',
' `---´ ',
],
[
' .---. ',
' ({E}>{E}) ',
' /( )\\ ',
' `---´ ',
' ~ ~ ',
],
],
[turtle]: [
[
' ',
' _,--._ ',
' ( {E} {E} ) ',
' /[______]\\ ',
' `` `` ',
],
[
' ',
' _,--._ ',
' ( {E} {E} ) ',
' /[______]\\ ',
' `` `` ',
],
[
' ',
' _,--._ ',
' ( {E} {E} ) ',
' /[======]\\ ',
' `` `` ',
],
],
[snail]: [
[
' ',
' {E} .--. ',
' \\ ( @ ) ',
' \\_`--´ ',
' ~~~~~~~ ',
],
[
' ',
' {E} .--. ',
' | ( @ ) ',
' \\_`--´ ',
' ~~~~~~~ ',
],
[
' ',
' {E} .--. ',
' \\ ( @ ) ',
' \\_`--´ ',
' ~~~~~~ ',
],
],
[ghost]: [
[
' ',
' .----. ',
' / {E} {E} \\ ',
' | | ',
' ~`~``~`~ ',
],
[
' ',
' .----. ',
' / {E} {E} \\ ',
' | | ',
' `~`~~`~` ',
],
[
' ~ ~ ',
' .----. ',
' / {E} {E} \\ ',
' | | ',
' ~~`~~`~~ ',
],
],
[axolotl]: [
[
' ',
'}~(______)~{',
'}~({E} .. {E})~{',
' ( .--. ) ',
' (_/ \\_) ',
],
[
' ',
'~}(______){~',
'~}({E} .. {E}){~',
' ( .--. ) ',
' (_/ \\_) ',
],
[
' ',
'}~(______)~{',
'}~({E} .. {E})~{',
' ( -- ) ',
' ~_/ \\_~ ',
],
],
[capybara]: [
[
' ',
' n______n ',
' ( {E} {E} ) ',
' ( oo ) ',
' `------´ ',
],
[
' ',
' n______n ',
' ( {E} {E} ) ',
' ( Oo ) ',
' `------´ ',
],
[
' ~ ~ ',
' u______n ',
' ( {E} {E} ) ',
' ( oo ) ',
' `------´ ',
],
],
[cactus]: [
[
' ',
' n ____ n ',
' | |{E} {E}| | ',
' |_| |_| ',
' | | ',
],
[
' ',
' ____ ',
' n |{E} {E}| n ',
' |_| |_| ',
' | | ',
],
[
' n n ',
' | ____ | ',
' | |{E} {E}| | ',
' |_| |_| ',
' | | ',
],
],
[robot]: [
[
' ',
' .[||]. ',
' [ {E} {E} ] ',
' [ ==== ] ',
' `------´ ',
],
[
' ',
' .[||]. ',
' [ {E} {E} ] ',
' [ -==- ] ',
' `------´ ',
],
[
' * ',
' .[||]. ',
' [ {E} {E} ] ',
' [ ==== ] ',
' `------´ ',
],
],
[rabbit]: [
[
' ',
' (\\__/) ',
' ( {E} {E} ) ',
' =( .. )= ',
' (")__(") ',
],
[
' ',
' (|__/) ',
' ( {E} {E} ) ',
' =( .. )= ',
' (")__(") ',
],
[
' ',
' (\\__/) ',
' ( {E} {E} ) ',
' =( . . )= ',
' (")__(") ',
],
],
[mushroom]: [
[
' ',
' .-o-OO-o-. ',
'(__________)',
' |{E} {E}| ',
' |____| ',
],
[
' ',
' .-O-oo-O-. ',
'(__________)',
' |{E} {E}| ',
' |____| ',
],
[
' . o . ',
' .-o-OO-o-. ',
'(__________)',
' |{E} {E}| ',
' |____| ',
],
],
[chonk]: [
[
' ',
' /\\ /\\ ',
' ( {E} {E} ) ',
' ( .. ) ',
' `------´ ',
],
[
' ',
' /\\ /| ',
' ( {E} {E} ) ',
' ( .. ) ',
' `------´ ',
],
[
' ',
' /\\ /\\ ',
' ( {E} {E} ) ',
' ( .. ) ',
' `------´~ ',
],
],
}
const HAT_LINES: Record<Hat, string> = {
none: '',
crown: ' \\^^^/ ',
tophat: ' [___] ',
propeller: ' -+- ',
halo: ' ( ) ',
wizard: ' /^\\ ',
beanie: ' (___) ',
tinyduck: ' ,> ',
}
export function renderSprite(bones: CompanionBones, frame = 0): string[] {
const frames = BODIES[bones.species]
const body = frames[frame % frames.length]!.map(line =>
line.replaceAll('{E}', bones.eye),
)
const lines = [...body]
// Only replace with hat if line 0 is empty (some fidget frames use it for smoke etc)
if (bones.hat !== 'none' && !lines[0]!.trim()) {
lines[0] = HAT_LINES[bones.hat]
}
// Drop blank hat slot — wastes a row in the Card and ambient sprite when
// there's no hat and the frame isn't using it for smoke/antenna/etc.
// Only safe when ALL frames have blank line 0; otherwise heights oscillate.
if (!lines[0]!.trim() && frames.every(f => !f[0]!.trim())) lines.shift()
return lines
}
export function spriteFrameCount(species: Species): number {
return BODIES[species].length
}
export function renderFace(bones: CompanionBones): string {
const eye: Eye = bones.eye
switch (bones.species) {
case duck:
case goose:
return `(${eye}>`
case blob:
return `(${eye}${eye})`
case cat:
return `=${eye}ω${eye}=`
case dragon:
return `<${eye}~${eye}>`
case octopus:
return `~(${eye}${eye})~`
case owl:
return `(${eye})(${eye})`
case penguin:
return `(${eye}>)`
case turtle:
return `[${eye}_${eye}]`
case snail:
return `${eye}(@)`
case ghost:
return `/${eye}${eye}\\`
case axolotl:
return `}${eye}.${eye}{`
case capybara:
return `(${eye}oo${eye})`
case cactus:
return `|${eye} ${eye}|`
case robot:
return `[${eye}${eye}]`
case rabbit:
return `(${eye}..${eye})`
case mushroom:
return `|${eye} ${eye}|`
case chonk:
return `(${eye}.${eye})`
}
}

src/buddy/types.ts (new file, 150 lines)

@@ -0,0 +1,150 @@
export const RARITIES = [
'common',
'uncommon',
'rare',
'epic',
'legendary',
] as const
export type Rarity = (typeof RARITIES)[number]
// One species name collides with a model-codename canary in excluded-strings.txt.
// The check greps build output (not source), so runtime-constructing the value keeps
// the literal out of the bundle while the check stays armed for the actual codename.
// All species encoded uniformly; `as` casts are type-position only (erased pre-bundle).
const c = String.fromCharCode
// biome-ignore format: keep the species list compact
export const duck = c(0x64,0x75,0x63,0x6b) as 'duck'
export const goose = c(0x67, 0x6f, 0x6f, 0x73, 0x65) as 'goose'
export const blob = c(0x62, 0x6c, 0x6f, 0x62) as 'blob'
export const cat = c(0x63, 0x61, 0x74) as 'cat'
export const dragon = c(0x64, 0x72, 0x61, 0x67, 0x6f, 0x6e) as 'dragon'
export const octopus = c(0x6f, 0x63, 0x74, 0x6f, 0x70, 0x75, 0x73) as 'octopus'
export const owl = c(0x6f, 0x77, 0x6c) as 'owl'
export const penguin = c(0x70, 0x65, 0x6e, 0x67, 0x75, 0x69, 0x6e) as 'penguin'
export const turtle = c(0x74, 0x75, 0x72, 0x74, 0x6c, 0x65) as 'turtle'
export const snail = c(0x73, 0x6e, 0x61, 0x69, 0x6c) as 'snail'
export const ghost = c(0x67, 0x68, 0x6f, 0x73, 0x74) as 'ghost'
export const axolotl = c(0x61, 0x78, 0x6f, 0x6c, 0x6f, 0x74, 0x6c) as 'axolotl'
export const capybara = c(
0x63,
0x61,
0x70,
0x79,
0x62,
0x61,
0x72,
0x61,
) as 'capybara'
export const cactus = c(0x63, 0x61, 0x63, 0x74, 0x75, 0x73) as 'cactus'
export const robot = c(0x72, 0x6f, 0x62, 0x6f, 0x74) as 'robot'
export const rabbit = c(0x72, 0x61, 0x62, 0x62, 0x69, 0x74) as 'rabbit'
export const mushroom = c(
0x6d,
0x75,
0x73,
0x68,
0x72,
0x6f,
0x6f,
0x6d,
) as 'mushroom'
export const chonk = c(0x63, 0x68, 0x6f, 0x6e, 0x6b) as 'chonk'
export const SPECIES = [
duck,
goose,
blob,
cat,
dragon,
octopus,
owl,
penguin,
turtle,
snail,
ghost,
axolotl,
capybara,
cactus,
robot,
rabbit,
mushroom,
chonk,
] as const
export type Species = (typeof SPECIES)[number]
// biome-ignore format: keep compact
export const EYES = ['·', '✦', '×', '◉', '@', '°'] as const
export type Eye = (typeof EYES)[number]
export const HATS = [
'none',
'crown',
'tophat',
'propeller',
'halo',
'wizard',
'beanie',
'tinyduck',
] as const
export type Hat = (typeof HATS)[number]
export const STAT_NAMES = [
'DEBUGGING',
'PATIENCE',
'CHAOS',
'WISDOM',
'SNARK',
] as const
export type StatName = (typeof STAT_NAMES)[number]
// Deterministic parts — derived from hash(userId)
export type CompanionBones = {
rarity: Rarity
species: Species
eye: Eye
hat: Hat
shiny: boolean
stats: Record<StatName, number>
}
// Model-generated soul — stored in config after first hatch
export type CompanionSoul = {
name: string
personality: string
}
export type Companion = CompanionBones &
CompanionSoul & {
hatchedAt: number
}
// What actually persists in config. Bones are regenerated from hash(userId)
// on every read so species renames don't break stored companions and users
// can't edit their way to a legendary.
export type StoredCompanion = CompanionSoul & { hatchedAt: number }
export const RARITY_WEIGHTS = {
common: 60,
uncommon: 25,
rare: 10,
epic: 4,
legendary: 1,
} as const satisfies Record<Rarity, number>
export const RARITY_STARS = {
common: '★',
uncommon: '★★',
rare: '★★★',
epic: '★★★★',
legendary: '★★★★★',
} as const satisfies Record<Rarity, string>
export const RARITY_COLORS = {
common: 'inactive',
uncommon: 'success',
rare: 'permission',
epic: 'autoAccept',
legendary: 'warning',
} as const satisfies Record<Rarity, keyof import('../utils/theme.js').Theme>
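The char-code encoding above is a general trick for keeping a literal out of a grep-able build artifact while preserving its literal type. In isolation (`duck` is just the shortest example from the list):

```typescript
const c = String.fromCharCode

// Runtime construction: the bundle contains 0x64,0x75,0x63,0x6b, never "duck".
// The `as 'duck'` cast is type-position only, so it is erased before bundling
// and TypeScript still narrows the value to the literal type.
const word = c(0x64, 0x75, 0x63, 0x6b) as 'duck'
// word === 'duck' at runtime
```

Because the excluded-strings check greps build output rather than source (per the comment above), this keeps the canary check armed for the actual codename while the species list still works at runtime.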

(one file's diff suppressed because its lines are too long)

src/cli/exit.ts (new file, 33 lines)

@@ -0,0 +1,33 @@
/**
* CLI exit helpers for subcommand handlers.
*
* Consolidates the 4-5 line "print + lint-suppress + exit" block that was
* copy-pasted ~60 times across `claude mcp *` / `claude plugin *` handlers.
* The `: never` return type lets TypeScript narrow control flow at call sites
* without a trailing `return`.
*/
/* eslint-disable custom-rules/no-process-exit -- centralized CLI exit point */
// `return undefined as never` (not a post-exit throw) — tests spy on
// process.exit and let it return. Call sites write `return cliError(...)`
// where subsequent code would dereference narrowed-away values under mock.
// cliError uses console.error (tests spy on console.error); cliOk uses
// process.stdout.write (tests spy on process.stdout.write — Bun's console.log
// doesn't route through a spied process.stdout.write).
/** Write an error message to stderr (if given) and exit with code 1. */
export function cliError(msg?: string): never {
// biome-ignore lint/suspicious/noConsole: centralized CLI error output
if (msg) console.error(msg)
process.exit(1)
return undefined as never
}
/** Write a message to stdout (if given) and exit with code 0. */
export function cliOk(msg?: string): never {
if (msg) process.stdout.write(msg + '\n')
process.exit(0)
return undefined as never
}
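The `: never` return type is what lets call sites skip a trailing `return`: after `return cliError(...)`, TypeScript treats that branch as unreachable and narrows the remaining code path. A sketch of a hypothetical call site (`parseCount` is illustrative, not from the source):

```typescript
function cliError(msg?: string): never {
  if (msg) console.error(msg)
  process.exit(1)
  return undefined as never // reached only when tests stub process.exit
}

// Call site: the error branch returns never, so the success path is
// narrowed to a real number with no trailing return needed.
function parseCount(raw: string): number {
  const n = Number(raw)
  if (Number.isNaN(n)) return cliError(`not a number: ${raw}`)
  return n
}
```

The `return undefined as never` line matters under test mocks: a spied `process.exit` returns normally, and without the explicit return the function would fall through with an implicit `undefined` that contradicts its declared type.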


@@ -0,0 +1,72 @@
/**
* Agents subcommand handler — prints the list of configured agents.
* Dynamically imported only when `claude agents` runs.
*/
import {
AGENT_SOURCE_GROUPS,
compareAgentsByName,
getOverrideSourceLabel,
type ResolvedAgent,
resolveAgentModelDisplay,
resolveAgentOverrides,
} from '../../tools/AgentTool/agentDisplay.js'
import {
getActiveAgentsFromList,
getAgentDefinitionsWithOverrides,
} from '../../tools/AgentTool/loadAgentsDir.js'
import { getCwd } from '../../utils/cwd.js'
function formatAgent(agent: ResolvedAgent): string {
const model = resolveAgentModelDisplay(agent)
const parts = [agent.agentType]
if (model) {
parts.push(model)
}
if (agent.memory) {
parts.push(`${agent.memory} memory`)
}
return parts.join(' · ')
}
export async function agentsHandler(): Promise<void> {
const cwd = getCwd()
const { allAgents } = await getAgentDefinitionsWithOverrides(cwd)
const activeAgents = getActiveAgentsFromList(allAgents)
const resolvedAgents = resolveAgentOverrides(allAgents, activeAgents)
const lines: string[] = []
let totalActive = 0
for (const { label, source } of AGENT_SOURCE_GROUPS) {
const groupAgents = resolvedAgents
.filter(a => a.source === source)
.sort(compareAgentsByName)
if (groupAgents.length === 0) continue
lines.push(`${label}:`)
for (const agent of groupAgents) {
if (agent.overriddenBy) {
const winnerSource = getOverrideSourceLabel(agent.overriddenBy)
lines.push(` (shadowed by ${winnerSource}) ${formatAgent(agent)}`)
} else {
lines.push(` ${formatAgent(agent)}`)
totalActive++
}
}
lines.push('')
}
if (lines.length === 0) {
// biome-ignore lint/suspicious/noConsole: intentional console output
console.log('No agents found.')
} else {
// biome-ignore lint/suspicious/noConsole: intentional console output
console.log(`${totalActive} active agents\n`)
// biome-ignore lint/suspicious/noConsole: intentional console output
console.log(lines.join('\n').trimEnd())
}
}
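The handler's grouping pass (filter by source, sort by name, prefix shadowed entries) can be sketched on a simplified `Agent` shape; the real code goes through `resolveAgentOverrides` and `compareAgentsByName`, which this stand-in ignores:

```typescript
type Agent = { name: string; source: string; overriddenBy?: string }

function groupLines(
  agents: Agent[],
  groups: { label: string; source: string }[],
): string[] {
  const lines: string[] = []
  for (const { label, source } of groups) {
    const hits = agents
      .filter(a => a.source === source)
      .sort((a, b) => a.name.localeCompare(b.name))
    if (hits.length === 0) continue // skip empty groups entirely
    lines.push(`${label}:`)
    for (const a of hits) {
      lines.push(
        a.overriddenBy
          ? `  (shadowed by ${a.overriddenBy}) ${a.name}`
          : `  ${a.name}`,
      )
    }
  }
  return lines
}
```

Iterating the fixed group list (rather than grouping by whatever sources appear) keeps the output order stable regardless of how the agents were discovered, matching `AGENT_SOURCE_GROUPS` above.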

src/cli/handlers/auth.ts (new file, 332 lines)

@@ -0,0 +1,332 @@
/* eslint-disable custom-rules/no-process-exit -- CLI subcommand handler intentionally exits */
import {
clearAuthRelatedCaches,
performLogout,
} from '../../commands/logout/logout.js'
import {
type AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
logEvent,
} from '../../services/analytics/index.js'
import { getSSLErrorHint } from '../../services/api/errorUtils.js'
import { fetchAndStoreClaudeCodeFirstTokenDate } from '../../services/api/firstTokenDate.js'
import {
createAndStoreApiKey,
fetchAndStoreUserRoles,
refreshOAuthToken,
shouldUseClaudeAIAuth,
storeOAuthAccountInfo,
} from '../../services/oauth/client.js'
import { getOauthProfileFromOauthToken } from '../../services/oauth/getOauthProfile.js'
import { OAuthService } from '../../services/oauth/index.js'
import type { OAuthTokens } from '../../services/oauth/types.js'
import {
clearOAuthTokenCache,
getAnthropicApiKeyWithSource,
getAuthTokenSource,
getOauthAccountInfo,
getSubscriptionType,
isUsing3PServices,
saveOAuthTokensIfNeeded,
validateForceLoginOrg,
} from '../../utils/auth.js'
import { saveGlobalConfig } from '../../utils/config.js'
import { logForDebugging } from '../../utils/debug.js'
import { isRunningOnHomespace } from '../../utils/envUtils.js'
import { errorMessage } from '../../utils/errors.js'
import { logError } from '../../utils/log.js'
import { getAPIProvider } from '../../utils/model/providers.js'
import { getInitialSettings } from '../../utils/settings/settings.js'
import { jsonStringify } from '../../utils/slowOperations.js'
import {
buildAccountProperties,
buildAPIProviderProperties,
} from '../../utils/status.js'
/**
* Shared post-token-acquisition logic. Saves tokens, fetches profile/roles,
* and sets up the local auth state.
*/
export async function installOAuthTokens(tokens: OAuthTokens): Promise<void> {
// Clear old state before saving new credentials
await performLogout({ clearOnboarding: false })
// Reuse pre-fetched profile if available, otherwise fetch fresh
const profile =
tokens.profile ?? (await getOauthProfileFromOauthToken(tokens.accessToken))
if (profile) {
storeOAuthAccountInfo({
accountUuid: profile.account.uuid,
emailAddress: profile.account.email,
organizationUuid: profile.organization.uuid,
displayName: profile.account.display_name || undefined,
hasExtraUsageEnabled:
profile.organization.has_extra_usage_enabled ?? undefined,
billingType: profile.organization.billing_type ?? undefined,
subscriptionCreatedAt:
profile.organization.subscription_created_at ?? undefined,
accountCreatedAt: profile.account.created_at,
})
} else if (tokens.tokenAccount) {
// Fallback to token exchange account data when profile endpoint fails
storeOAuthAccountInfo({
accountUuid: tokens.tokenAccount.uuid,
emailAddress: tokens.tokenAccount.emailAddress,
organizationUuid: tokens.tokenAccount.organizationUuid,
})
}
const storageResult = saveOAuthTokensIfNeeded(tokens)
clearOAuthTokenCache()
if (storageResult.warning) {
logEvent('tengu_oauth_storage_warning', {
warning:
storageResult.warning as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
})
}
// Roles and first-token-date may fail for limited-scope tokens (e.g.
// inference-only from setup-token). They're not required for core auth.
await fetchAndStoreUserRoles(tokens.accessToken).catch(err =>
logForDebugging(String(err), { level: 'error' }),
)
if (shouldUseClaudeAIAuth(tokens.scopes)) {
await fetchAndStoreClaudeCodeFirstTokenDate().catch(err =>
logForDebugging(String(err), { level: 'error' }),
)
} else {
// API key creation is critical for Console users — let it throw.
const apiKey = await createAndStoreApiKey(tokens.accessToken)
if (!apiKey) {
throw new Error(
'Unable to create API key. The server accepted the request but did not return a key.',
)
}
}
await clearAuthRelatedCaches()
}
export async function authLogin({
email,
sso,
console: useConsole,
claudeai,
}: {
email?: string
sso?: boolean
console?: boolean
claudeai?: boolean
}): Promise<void> {
if (useConsole && claudeai) {
process.stderr.write(
'Error: --console and --claudeai cannot be used together.\n',
)
process.exit(1)
}
const settings = getInitialSettings()
// forceLoginMethod is a hard constraint (enterprise setting) — matches ConsoleOAuthFlow behavior.
// Without it, --console selects Console; --claudeai (or no flag) selects claude.ai.
const loginWithClaudeAi = settings.forceLoginMethod
? settings.forceLoginMethod === 'claudeai'
: !useConsole
const orgUUID = settings.forceLoginOrgUUID
// Fast path: if a refresh token is provided via env var, skip the browser
// OAuth flow and exchange it directly for tokens.
const envRefreshToken = process.env.CLAUDE_CODE_OAUTH_REFRESH_TOKEN
if (envRefreshToken) {
const envScopes = process.env.CLAUDE_CODE_OAUTH_SCOPES
if (!envScopes) {
process.stderr.write(
'CLAUDE_CODE_OAUTH_SCOPES is required when using CLAUDE_CODE_OAUTH_REFRESH_TOKEN.\n' +
'Set it to the space-separated scopes the refresh token was issued with\n' +
'(e.g. "user:inference" or "user:profile user:inference user:sessions:claude_code user:mcp_servers").\n',
)
process.exit(1)
}
const scopes = envScopes.split(/\s+/).filter(Boolean)
try {
logEvent('tengu_login_from_refresh_token', {})
const tokens = await refreshOAuthToken(envRefreshToken, { scopes })
await installOAuthTokens(tokens)
const orgResult = await validateForceLoginOrg()
if (!orgResult.valid) {
process.stderr.write(orgResult.message + '\n')
process.exit(1)
}
// Mark onboarding complete — interactive paths handle this via
// the Onboarding component, but the env var path skips it.
saveGlobalConfig(current => {
if (current.hasCompletedOnboarding) return current
return { ...current, hasCompletedOnboarding: true }
})
logEvent('tengu_oauth_success', {
loginWithClaudeAi: shouldUseClaudeAIAuth(tokens.scopes),
})
process.stdout.write('Login successful.\n')
process.exit(0)
} catch (err) {
logError(err)
const sslHint = getSSLErrorHint(err)
process.stderr.write(
`Login failed: ${errorMessage(err)}\n${sslHint ? sslHint + '\n' : ''}`,
)
process.exit(1)
}
}
const resolvedLoginMethod = sso ? 'sso' : undefined
const oauthService = new OAuthService()
try {
logEvent('tengu_oauth_flow_start', { loginWithClaudeAi })
const result = await oauthService.startOAuthFlow(
async url => {
process.stdout.write('Opening browser to sign in…\n')
process.stdout.write(`If the browser didn't open, visit: ${url}\n`)
},
{
loginWithClaudeAi,
loginHint: email,
loginMethod: resolvedLoginMethod,
orgUUID,
},
)
await installOAuthTokens(result)
const orgResult = await validateForceLoginOrg()
if (!orgResult.valid) {
process.stderr.write(orgResult.message + '\n')
process.exit(1)
}
logEvent('tengu_oauth_success', { loginWithClaudeAi })
process.stdout.write('Login successful.\n')
process.exit(0)
} catch (err) {
logError(err)
const sslHint = getSSLErrorHint(err)
process.stderr.write(
`Login failed: ${errorMessage(err)}\n${sslHint ? sslHint + '\n' : ''}`,
)
process.exit(1)
} finally {
oauthService.cleanup()
}
}
export async function authStatus(opts: {
json?: boolean
text?: boolean
}): Promise<void> {
const { source: authTokenSource, hasToken } = getAuthTokenSource()
const { source: apiKeySource } = getAnthropicApiKeyWithSource()
const hasApiKeyEnvVar =
!!process.env.ANTHROPIC_API_KEY && !isRunningOnHomespace()
const oauthAccount = getOauthAccountInfo()
const subscriptionType = getSubscriptionType()
const using3P = isUsing3PServices()
const loggedIn =
hasToken || apiKeySource !== 'none' || hasApiKeyEnvVar || using3P
// Determine auth method
let authMethod: string = 'none'
if (using3P) {
authMethod = 'third_party'
} else if (authTokenSource === 'claude.ai') {
authMethod = 'claude.ai'
} else if (authTokenSource === 'apiKeyHelper') {
authMethod = 'api_key_helper'
} else if (authTokenSource !== 'none') {
authMethod = 'oauth_token'
} else if (apiKeySource === 'ANTHROPIC_API_KEY' || hasApiKeyEnvVar) {
authMethod = 'api_key'
} else if (apiKeySource === '/login managed key') {
authMethod = 'claude.ai'
}
if (opts.text) {
const properties = [
...buildAccountProperties(),
...buildAPIProviderProperties(),
]
let hasAuthProperty = false
for (const prop of properties) {
const value =
typeof prop.value === 'string'
? prop.value
: Array.isArray(prop.value)
? prop.value.join(', ')
: null
if (value === null || value === 'none') {
continue
}
hasAuthProperty = true
if (prop.label) {
process.stdout.write(`${prop.label}: ${value}\n`)
} else {
process.stdout.write(`${value}\n`)
}
}
if (!hasAuthProperty && hasApiKeyEnvVar) {
process.stdout.write('API key: ANTHROPIC_API_KEY\n')
}
if (!loggedIn) {
process.stdout.write(
'Not logged in. Run claude auth login to authenticate.\n',
)
}
} else {
const apiProvider = getAPIProvider()
const resolvedApiKeySource =
apiKeySource !== 'none'
? apiKeySource
: hasApiKeyEnvVar
? 'ANTHROPIC_API_KEY'
: null
const output: Record<string, string | boolean | null> = {
loggedIn,
authMethod,
apiProvider,
}
if (resolvedApiKeySource) {
output.apiKeySource = resolvedApiKeySource
}
if (authMethod === 'claude.ai') {
output.email = oauthAccount?.emailAddress ?? null
output.orgId = oauthAccount?.organizationUuid ?? null
output.orgName = oauthAccount?.organizationName ?? null
output.subscriptionType = subscriptionType ?? null
}
process.stdout.write(jsonStringify(output, null, 2) + '\n')
}
process.exit(loggedIn ? 0 : 1)
}
export async function authLogout(): Promise<void> {
try {
await performLogout({ clearOnboarding: false })
} catch {
process.stderr.write('Failed to log out.\n')
process.exit(1)
}
process.stdout.write('Successfully logged out from your Anthropic account.\n')
process.exit(0)
}
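One small detail worth isolating from the env-var fast path in `authLogin`: `CLAUDE_CODE_OAUTH_SCOPES` is split on any whitespace run, so doubled spaces, tabs, or stray leading/trailing whitespace between scopes are harmless. A sketch of just that parsing step:

```typescript
// Mirrors the scope parsing in authLogin's refresh-token fast path.
function parseScopes(raw: string): string[] {
  // filter(Boolean) drops the '' entries that leading/trailing whitespace produces
  return raw.split(/\s+/).filter(Boolean)
}

const scopes = parseScopes(' user:profile  user:inference ')
// → ['user:profile', 'user:inference']
```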

Some files were not shown because too many files have changed in this diff.