
Reapply Batch 1: 22 clean non-AI-SDK cherry-picks (#11473)

* fix: add image content support to MCP tool responses (#10874)

Co-authored-by: Roo Code <[email protected]>

* fix: transform tool blocks to text before condensing (EXT-624) (#10975)

* refactor(read_file): Codex-inspired read_file refactor EXT-617 (#10981)

* feat: allow import settings in initial welcome screen (#10994)

Co-authored-by: Roo Code <[email protected]>
Co-authored-by: Matt Rubens <[email protected]>

* fix(code-index): remove deprecated text-embedding-004 and migrate to gemini-embedding-001 (#11038)

Co-authored-by: Roo Code <[email protected]>
Co-authored-by: Hannes Rudolph <[email protected]>

* chore: treat extension .env as optional (#11116)

* fix: sanitize tool_use_id in tool_result blocks to match API history (#11131)

Tool IDs from providers like Gemini/OpenRouter contain special characters
(e.g., 'functions.read_file:0') that are sanitized when saving tool_use
blocks to API history. However, tool_result blocks were using the original
unsanitized IDs, causing ToolResultIdMismatchError.

This fix ensures tool_result blocks use sanitizeToolUseId() to match the
sanitized tool_use IDs in conversation history.

Fixes EXT-711
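The sanitization described above can be sketched as follows. This is a simplified stand-in, not the actual helper (which is covered by src/utils/__tests__/tool-id.spec.ts); the exact character set is an assumption:

```typescript
// Hypothetical sketch: providers like Gemini/OpenRouter may emit tool IDs
// such as "functions.read_file:0". Special characters are replaced so the
// ID is API-safe; the key point is that BOTH tool_use and tool_result
// blocks must run through the same sanitizer, or the pair no longer matches.
function sanitizeToolUseId(id: string): string {
	return id.replace(/[^a-zA-Z0-9_-]/g, "_")
}

const raw = "functions.read_file:0"
const toolUse = { type: "tool_use", id: sanitizeToolUseId(raw) }
// Before the fix, tool_result used the raw id, so the IDs diverged and
// a ToolResultIdMismatchError was raised. After the fix, both sides match:
const toolResult = { type: "tool_result", tool_use_id: sanitizeToolUseId(raw) }
const match = toolUse.id === toolResult.tool_use_id
```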

* fix: queue messages during command execution instead of losing them (#11140)

* IPC fixes for task cancellation and queued messages (#11162)

* feat: add support for AGENTS.local.md personal override files (#11183)

Co-authored-by: Roo Code <[email protected]>
Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>

* fix(cli): resolve race condition causing provider switch during mode changes (#11205)

When using slash commands with `mode:` frontmatter (e.g., `/cli-release`
with `mode: code`), the CLI would fail with "Could not resolve
authentication method" from the Anthropic SDK, even when using a
non-Anthropic provider like `--provider roo`.

Root cause: In `markWebviewReady()`, the `webviewDidLaunch` message was
sent before `updateSettings`, creating a race condition. The
`webviewDidLaunch` handler's "first-time init" sync would read
`getState()` before CLI-provided settings were applied to the context
proxy. Since `getState()` defaults `apiProvider` to "anthropic" when
unset, this default was saved to the provider profile. When a slash
command triggered `handleModeSwitch()`, it found this corrupted profile
with `apiProvider: "anthropic"` (but no API key) and activated it,
overwriting the CLI's working roo provider configuration.

Fix:
1. Reorder `markWebviewReady()` to send `updateSettings` before
   `webviewDidLaunch`, ensuring the context proxy has CLI-provided
   values when the initialization handler runs.
2. Guard the first-time init sync with `checkExistKey(apiConfiguration)`
   to prevent saving a profile with only the default "anthropic"
   fallback and no actual API keys configured.

Co-authored-by: Claude Opus 4.5 <[email protected]>

* chore: remove dead toolFormat code from getEnvironmentDetails (#11207)

Remove the toolFormat constant and <tool_format> line from environment
details output. Native tool calling is now the only supported protocol,
making this code unnecessary.

Fixes #11206

Co-authored-by: Roo Code <[email protected]>

* feat: extract translation and merge resolver modes into reusable skills (#11215)

* feat: extract translation and merge resolver modes into reusable skills

- Add roo-translation skill with comprehensive i18n guidelines
- Add roo-conflict-resolution skill for intelligent merge conflict resolution
- Add /roo-translate slash command as shortcut for translation skill
- Add /roo-resolve-conflicts slash command as shortcut for conflict resolution skill

The existing translate and merge-resolver modes are preserved. These new skills
and commands provide reusable access to the same functionality.

Closes CLO-722

* feat: add guidances directory with translator guidance file

- Add .roo/guidances/roo-translator.md for brand voice, tone, and word choice guidance
- Update roo-translation skill to reference the guidance file

The guidance file serves as a placeholder for translation style guidelines
that will be interpolated at runtime.

* fix: rename guidances directory to guidance (singular)

* fix: remove language-specific section from translator guidance

The guidance file should focus on brand voice, tone, and word choice only.

* fix: remove language-specific guidelines section from skill file

* Update .roo/skills/roo-translation/SKILL.md

Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>

---------

Co-authored-by: Roo Code <[email protected]>
Co-authored-by: Bruno Bergher <[email protected]>
Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>

* feat: add Claude Opus 4.6 support across all providers (#11224)

* feat: add Claude Opus 4.6 support across all providers

Add Claude Opus 4.6 (claude-opus-4-6) model definitions and 1M context
support across Anthropic, Bedrock, Vertex AI, OpenRouter, and Vercel AI
Gateway providers.

- Anthropic: 128K max output, /5 pricing, 1M context tiers
- Bedrock: anthropic.claude-opus-4-6-v1:0 with 1M context + global inference
- Vertex: claude-opus-4-6 with 1M context tiers
- OpenRouter: prompt caching + reasoning budget sets
- Vercel AI Gateway: Opus 4.5 and 4.6 added to capability sets
- UI: 1M context checkbox for Opus 4.6 on all providers
- i18n: Updated 1M context descriptions across 18 locales

Also adds Opus 4.5 to Vercel AI Gateway (previously missing) and
OpenRouter maxTokens overrides for Opus 4.5/4.6.

Closes #11223

* fix: apply tier pricing when 1M context is enabled on Bedrock

When awsBedrock1MContext is enabled for tiered models like Opus 4.6,
also apply the 1M tier pricing (inputPrice, outputPrice, cache prices)
instead of only updating contextWindow. This ensures cost calculations
and UI display use the correct >200K rates.

* feat: add gpt-5.3-codex model to OpenAI Codex provider (#11225)

feat: add gpt-5.3-codex model and make it default for OpenAI Codex provider

Co-authored-by: Roo Code <[email protected]>

* fix: prevent parent task state loss during orchestrator delegation (#11281)

* fix: make removeClineFromStack() delegation-aware to prevent orphaned parent tasks (#11302)

* fix: make removeClineFromStack() delegation-aware to prevent orphaned parent tasks

When a delegated child task is removed via removeClineFromStack() (e.g., Clear
Task, navigate to history, start new task), the parent task was left orphaned
in "delegated" status with a stale awaitingChildId. This made the parent
unresumable without manual history repair.

This fix captures parentTaskId and childTaskId before abort/dispose, then
repairs the parent metadata (status -> active, clear awaitingChildId) when
the popped task is a delegated child and awaitingChildId matches.

Parent lookup + updateTaskHistory are wrapped in try/catch so failures are
non-fatal (logged but do not block the pop).

Closes #11301
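The repair path described above can be sketched roughly as below. The field names (`status`, `awaitingChildId`) mirror the commit message, but the types are simplified stand-ins, not the actual Task metadata:

```typescript
// Simplified task-history entry; the real metadata has more fields.
interface TaskMeta {
	id: string
	status: "active" | "delegated"
	awaitingChildId?: string
}

// Called while popping a task off the stack: if the popped task is the
// delegated child the parent is waiting on, repair the parent so it is
// resumable again (status -> active, stale awaitingChildId cleared).
function repairParentOnChildPop(parent: TaskMeta | undefined, poppedChildId: string): void {
	try {
		if (parent?.status === "delegated" && parent.awaitingChildId === poppedChildId) {
			parent.status = "active"
			delete parent.awaitingChildId
		}
	} catch (err) {
		// Non-fatal by design: log the failure but never block the pop.
		console.error("delegation repair failed", err)
	}
}
```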

* fix: add skipDelegationRepair opt-out to removeClineFromStack() for nested delegation

---------

Co-authored-by: Roo Code <[email protected]>

* fix(reliability): prevent webview postMessage crashes and make dispose idempotent (#11313)

* fix(reliability): prevent webview postMessage crashes and make dispose idempotent

Closes: #11311

1. postMessageToWebview() now catches rejections from
   webview.postMessage() so that messages sent after the webview is
   disposed do not surface as unhandled promise rejections.

2. dispose() is guarded by a _disposed flag so that repeated calls
   (e.g. during rapid extension deactivation) are no-ops.

3. CloudService mock in ClineProvider.spec.ts updated to include
   off() — a pre-existing gap exposed by the new dispose test.
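The two guards above follow this shape. The class below is a minimal stand-in for ClineProvider, assuming only that the webview exposes a promise-returning `postMessage`:

```typescript
// Sketch of the reliability guards: swallow post-dispose rejections and
// make dispose() idempotent.
class Provider {
	private _disposed = false

	constructor(private webview: { postMessage(m: unknown): Promise<boolean> }) {}

	async postMessageToWebview(message: unknown): Promise<void> {
		// Early exit: once disposed, skip the call entirely.
		if (this._disposed) return
		try {
			await this.webview.postMessage(message)
		} catch {
			// A torn-down webview rejects; swallowing here keeps the
			// rejection from surfacing as an unhandled promise rejection.
		}
	}

	dispose(): void {
		// Guarded by _disposed so repeated calls (e.g. during rapid
		// extension deactivation) are no-ops.
		if (this._disposed) return
		this._disposed = true
		// ...release listeners, subscriptions, etc.
	}
}
```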

Co-Authored-By: Claude Opus 4.6 <[email protected]>

* fix: add early _disposed check in postMessageToWebview

Skip the postMessage call entirely when the provider is already disposed,
avoiding unnecessary try/catch execution. Added test coverage for this path.

* chore: trigger CI

---------

Co-authored-by: Claude Opus 4.6 <[email protected]>
Co-authored-by: daniel-lxs <[email protected]>

* fix: resolve race condition in new_task delegation that loses parent task history (#11331)

* fix: resolve race condition in new_task delegation that loses parent task history

When delegateParentAndOpenChild creates a child task via createTask(), the
Task constructor fires startTask() as a fire-and-forget async call. The child
immediately begins its task loop and eventually calls saveClineMessages() →
updateTaskHistory(), which reads globalState, modifies it, and writes back.

Meanwhile, delegateParentAndOpenChild persists the parent's delegation
metadata (status: 'delegated', delegatedToId, awaitingChildId, childIds) via
a separate updateTaskHistory() call AFTER createTask() returns.

These two concurrent read-modify-write operations on globalState race: the
last writer wins, overwriting the other's changes. When the child's write
lands last, the parent's delegation fields are lost, making the parent task
unresumable when the child finishes.

Fix: create the child task with startTask: false, persist the parent's
delegation metadata first, then manually call child.start(). This ensures
the parent metadata is safely in globalState before the child begins writing.
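The reordering can be illustrated with stubbed helpers (the names `createTask`, `startTask`, and `child.start()` come from the commit message; the signatures here are assumptions made purely so the ordering is observable):

```typescript
// Records the order of globalState writes so the fix is visible.
const order: string[] = []

interface Child {
	id: string
	start(): Promise<void>
}

async function createTask(opts: { startTask: boolean }): Promise<Child> {
	const child: Child = {
		id: "child-1",
		start: async () => {
			order.push("child-writes-history")
		},
	}
	// Old behavior: the constructor fired startTask() fire-and-forget,
	// letting the child race the parent's delegation write.
	if (opts.startTask) void child.start()
	return child
}

async function persistParentDelegation(_childId: string): Promise<void> {
	order.push("parent-delegation-persisted")
}

async function delegateParentAndOpenChild(): Promise<void> {
	const child = await createTask({ startTask: false }) // child must not run yet
	await persistParentDelegation(child.id) // parent metadata lands first
	await child.start() // only now may the child begin writing
}
```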

* docs: clarify Task.start() only handles new tasks, not history resume

* fix: serialize taskHistory writes and fix delegation status overwrite race (#11335)

Add a promise-chain mutex (withTaskHistoryLock) to serialize all
read-modify-write operations on taskHistory, preventing concurrent
interleaving from silently dropping entries.

Reorder reopenParentFromDelegation to close the child instance
before marking it completed, so the abort path's stale 'active'
status write no longer overwrites the 'completed' state.

Covered by new tests: RPD-04/05/06, UTH-02/04, and a full mutex
concurrency suite.
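A promise-chain mutex of the kind described can be sketched as below. The name `withTaskHistoryLock` matches the commit; the implementation is a plausible reconstruction, not the actual code:

```typescript
// Each caller is chained behind the previous read-modify-write, so two
// concurrent taskHistory updates can no longer interleave and silently
// drop each other's entries.
let taskHistoryLock: Promise<unknown> = Promise.resolve()

function withTaskHistoryLock<T>(fn: () => Promise<T>): Promise<T> {
	// Run fn whether or not the predecessor succeeded.
	const run = taskHistoryLock.then(fn, fn)
	// The chain itself must never reject, or one failure would poison
	// every later caller; errors still propagate via the returned promise.
	taskHistoryLock = run.catch(() => undefined)
	return run
}
```

With the lock in place, a slow writer and a fast writer both land: the fast one waits for the slow one's write instead of clobbering it with a stale snapshot.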

* Fix task resumption in the API module (#11369)

* chore: clean up repo-facing mode rules (#11410)

* fix: add maxReadFileLine to ExtensionState type for webview compatibility

---------

Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
Co-authored-by: Roo Code <[email protected]>
Co-authored-by: Daniel <[email protected]>
Co-authored-by: Matt Rubens <[email protected]>
Co-authored-by: Chris Estreich <[email protected]>
Co-authored-by: Claude Opus 4.5 <[email protected]>
Co-authored-by: Bruno Bergher <[email protected]>
Co-authored-by: 0xMink <[email protected]>
Co-authored-by: daniel-lxs <[email protected]>
Hannes Rudolph 4 hours ago
Parent
Commit
b2b77809ff
100 changed files with 6345 additions and 4558 deletions
  1. 1 0
      .gitignore
  2. 8 4
      apps/cli/src/agent/extension-host.ts
  3. 3 4
      apps/vscode-e2e/src/suite/tools/read-file.test.ts
  4. 0 1
      packages/cloud/src/__tests__/CloudSettingsService.parsing.test.ts
  5. 2 2
      packages/evals/src/cli/runTaskInCli.ts
  6. 2 2
      packages/evals/src/cli/runTaskInVscode.ts
  7. 2 2
      packages/types/src/__tests__/ipc.test.ts
  8. 0 2
      packages/types/src/cloud.ts
  9. 8 1
      packages/types/src/events.ts
  10. 0 3
      packages/types/src/global-settings.ts
  11. 0 2
      packages/types/src/ipc.ts
  12. 22 0
      packages/types/src/providers/anthropic.ts
  13. 27 0
      packages/types/src/providers/bedrock.ts
  14. 15 1
      packages/types/src/providers/openai-codex.ts
  15. 4 2
      packages/types/src/providers/openrouter.ts
  16. 4 0
      packages/types/src/providers/vercel-ai-gateway.ts
  17. 26 1
      packages/types/src/providers/vertex.ts
  18. 4 0
      packages/types/src/task.ts
  19. 2 0
      packages/types/src/telemetry.ts
  20. 80 0
      packages/types/src/tool-params.ts
  21. 2 3
      packages/types/src/vscode-extension-host.ts
  22. 0 1
      src/__tests__/command-mentions.spec.ts
  23. 35 0
      src/__tests__/extension.spec.ts
  24. 244 1
      src/__tests__/history-resume-delegation.spec.ts
  25. 55 2
      src/__tests__/provider-delegation.spec.ts
  26. 281 0
      src/__tests__/removeClineFromStack-delegation.spec.ts
  27. 15 21
      src/api/providers/__tests__/bedrock-native-tools.spec.ts
  28. 1 1
      src/api/providers/__tests__/openai-codex.spec.ts
  29. 11 4
      src/api/providers/anthropic.ts
  30. 9 4
      src/api/providers/bedrock.ts
  31. 10 0
      src/api/providers/fetchers/openrouter.ts
  32. 136 20
      src/core/assistant-message/NativeToolCallParser.ts
  33. 239 134
      src/core/assistant-message/__tests__/NativeToolCallParser.spec.ts
  34. 18 8
      src/core/assistant-message/presentAssistantMessage.ts
  35. 307 0
      src/core/condense/__tests__/index.spec.ts
  36. 102 2
      src/core/condense/index.ts
  37. 0 3
      src/core/environment/getEnvironmentDetails.ts
  38. 18 96
      src/core/mentions/__tests__/processUserContentMentions.spec.ts
  39. 147 54
      src/core/mentions/index.ts
  40. 64 22
      src/core/mentions/processUserContentMentions.ts
  41. 0 127
      src/core/prompts/__tests__/__snapshots__/add-custom-instructions/partial-reads-enabled.snap
  42. 0 21
      src/core/prompts/__tests__/add-custom-instructions.spec.ts
  43. 0 3
      src/core/prompts/__tests__/sections.spec.ts
  44. 0 17
      src/core/prompts/__tests__/system-prompt.spec.ts
  45. 132 8
      src/core/prompts/sections/__tests__/custom-instructions.spec.ts
  46. 64 29
      src/core/prompts/sections/custom-instructions.ts
  47. 0 3
      src/core/prompts/system.ts
  48. 8 8
      src/core/prompts/tools/native-tools/__tests__/converters.spec.ts
  49. 24 145
      src/core/prompts/tools/native-tools/__tests__/read_file.spec.ts
  50. 1 7
      src/core/prompts/tools/native-tools/index.ts
  51. 104 69
      src/core/prompts/tools/native-tools/read_file.ts
  52. 0 1
      src/core/prompts/types.ts
  53. 86 0
      src/core/task-persistence/__tests__/apiMessages.spec.ts
  54. 34 1
      src/core/task-persistence/__tests__/taskMessages.spec.ts
  55. 21 9
      src/core/task-persistence/apiMessages.ts
  56. 15 1
      src/core/task-persistence/taskMessages.ts
  57. 73 25
      src/core/task/Task.ts
  58. 471 0
      src/core/task/__tests__/Task.persistence.spec.ts
  59. 44 1
      src/core/task/__tests__/Task.spec.ts
  60. 5 1
      src/core/task/__tests__/flushPendingToolResultsToHistory.spec.ts
  61. 1 1
      src/core/task/__tests__/grace-retry-errors.spec.ts
  62. 1 1
      src/core/task/__tests__/grounding-sources.test.ts
  63. 1 1
      src/core/task/__tests__/reasoning-preservation.test.ts
  64. 0 9
      src/core/task/build-tools.ts
  65. 497 486
      src/core/tools/ReadFileTool.ts
  66. 28 9
      src/core/tools/UseMcpToolTool.ts
  67. 11 7
      src/core/tools/__tests__/ToolRepetitionDetector.spec.ts
  68. 511 1783
      src/core/tools/__tests__/readFileTool.spec.ts
  69. 244 3
      src/core/tools/__tests__/useMcpToolTool.spec.ts
  70. 0 160
      src/core/tools/helpers/__tests__/truncateDefinitions.spec.ts
  71. 0 9
      src/core/tools/helpers/fileTokenBudget.ts
  72. 0 44
      src/core/tools/helpers/truncateDefinitions.ts
  73. 179 75
      src/core/webview/ClineProvider.ts
  74. 87 1
      src/core/webview/__tests__/ClineProvider.spec.ts
  75. 161 0
      src/core/webview/__tests__/ClineProvider.taskHistory.spec.ts
  76. 0 2
      src/core/webview/__tests__/generateSystemPrompt.browser-capability.spec.ts
  77. 0 4
      src/core/webview/generateSystemPrompt.ts
  78. 11 5
      src/core/webview/webviewMessageHandler.ts
  79. 12 7
      src/extension.ts
  80. 1 0
      src/extension/__tests__/api-send-message.spec.ts
  81. 57 25
      src/extension/api.ts
  82. 0 221
      src/integrations/misc/__tests__/extract-text-large-files.spec.ts
  83. 639 0
      src/integrations/misc/__tests__/indentation-reader.spec.ts
  84. 0 147
      src/integrations/misc/__tests__/read-file-tool.spec.ts
  85. 0 321
      src/integrations/misc/__tests__/read-file-with-budget.spec.ts
  86. 58 34
      src/integrations/misc/extract-text.ts
  87. 469 0
      src/integrations/misc/indentation-reader.ts
  88. 0 182
      src/integrations/misc/read-file-with-budget.ts
  89. 20 1
      src/services/code-index/__tests__/service-factory.spec.ts
  90. 22 5
      src/services/code-index/embedders/__tests__/gemini.spec.ts
  91. 25 4
      src/services/code-index/embedders/gemini.ts
  92. 95 0
      src/shared/__tests__/embeddingModels.spec.ts
  93. 3 1
      src/shared/embeddingModels.ts
  94. 36 6
      src/shared/tools.ts
  95. 24 52
      src/utils/__tests__/json-schema.spec.ts
  96. 8 0
      src/utils/__tests__/tool-id.spec.ts
  97. 7 1
      webview-ui/src/components/chat/ChatRow.tsx
  98. 89 4
      webview-ui/src/components/chat/ChatView.tsx
  99. 62 0
      webview-ui/src/components/chat/__tests__/ChatView.spec.tsx
  100. 0 68
      webview-ui/src/components/settings/ContextManagementSettings.tsx

+ 1 - 0
.gitignore

@@ -18,6 +18,7 @@ bin/
 
 # Local prompts and rules
 /local-prompts
+AGENTS.local.md
 
 # Test environment
 .test_env

+ 8 - 4
apps/cli/src/agent/extension-host.ts

@@ -428,12 +428,16 @@ export class ExtensionHost extends EventEmitter implements ExtensionHostInterfac
 	public markWebviewReady(): void {
 		this.isReady = true
 
-		// Send initial webview messages to trigger proper extension initialization.
-		// This is critical for the extension to start sending state updates properly.
-		this.sendToExtension({ type: "webviewDidLaunch" })
-
+		// Apply CLI settings to the runtime config and context proxy BEFORE
+		// sending webviewDidLaunch. This prevents a race condition where the
+		// webviewDidLaunch handler's first-time init sync reads default state
+		// (apiProvider: "anthropic") instead of the CLI-provided settings.
 		setRuntimeConfigValues("roo-cline", this.initialSettings as Record<string, unknown>)
 		this.sendToExtension({ type: "updateSettings", updatedSettings: this.initialSettings })
+
+		// Now trigger extension initialization. The context proxy should already
+		// have CLI-provided values when the webviewDidLaunch handler runs.
+		this.sendToExtension({ type: "webviewDidLaunch" })
 	}
 
 	public isInInitialSetup(): boolean {

+ 3 - 4
apps/vscode-e2e/src/suite/tools/read-file.test.ts

@@ -376,7 +376,7 @@ suite.skip("Roo Code read_file Tool", function () {
 		}
 	})
 
-	test("Should read file with line range", async function () {
+	test("Should read file with slice offset/limit", async function () {
 		const api = globalThis.api
 		const messages: ClineMessage[] = []
 		let taskCompleted = false
@@ -446,7 +446,7 @@ suite.skip("Roo Code read_file Tool", function () {
 					alwaysAllowReadOnly: true,
 					alwaysAllowReadOnlyOutsideWorkspace: true,
 				},
-				text: `Use the read_file tool to read the file "${fileName}" and show me what's on lines 2, 3, and 4. The file contains lines like "Line 1", "Line 2", etc. Assume the file exists and you can read it directly.`,
+				text: `Use the read_file tool to read the file "${fileName}" using slice mode with offset=2 and limit=3 (1-based offset). The file contains lines like "Line 1", "Line 2", etc. After reading, show me the three lines you read.`,
 			})
 
 			// Wait for task completion
@@ -455,9 +455,8 @@ suite.skip("Roo Code read_file Tool", function () {
 			// Verify tool was executed
 			assert.ok(toolExecuted, "The read_file tool should have been executed")
 
-			// Verify the tool returned the correct lines (when line range is used)
+			// Verify the tool returned the correct lines (offset=2, limit=3 -> lines 2-4)
 			if (toolResult && (toolResult as string).includes(" | ")) {
-				// The result includes line numbers
 				assert.ok(
 					(toolResult as string).includes("2 | Line 2"),
 					"Tool result should include line 2 with line number",

+ 0 - 1
packages/cloud/src/__tests__/CloudSettingsService.parsing.test.ts

@@ -81,7 +81,6 @@ describe("CloudSettingsService - Response Parsing", () => {
 				version: 2,
 				defaultSettings: {
 					maxOpenTabsContext: 10,
-					maxReadFileLine: 1000,
 				},
 				allowList: {
 					allowAll: false,

+ 2 - 2
packages/evals/src/cli/runTaskInCli.ts

@@ -263,7 +263,7 @@ export const runTaskWithCli = async ({ run, task, publish, logger, jobToken }: R
 
 		if (rooTaskId && !isClientDisconnected) {
 			logger.info("cancelling task")
-			client.sendCommand({ commandName: TaskCommandName.CancelTask, data: rooTaskId })
+			client.sendCommand({ commandName: TaskCommandName.CancelTask })
 			await new Promise((resolve) => setTimeout(resolve, 5_000))
 		}
 
@@ -288,7 +288,7 @@ export const runTaskWithCli = async ({ run, task, publish, logger, jobToken }: R
 
 	if (rooTaskId && !isClientDisconnected) {
 		logger.info("closing task")
-		client.sendCommand({ commandName: TaskCommandName.CloseTask, data: rooTaskId })
+		client.sendCommand({ commandName: TaskCommandName.CloseTask })
 		await new Promise((resolve) => setTimeout(resolve, 2_000))
 	}
 

+ 2 - 2
packages/evals/src/cli/runTaskInVscode.ts

@@ -270,7 +270,7 @@ export const runTaskInVscode = async ({ run, task, publish, logger, jobToken }:
 
 		if (rooTaskId && !isClientDisconnected) {
 			logger.info("cancelling task")
-			client.sendCommand({ commandName: TaskCommandName.CancelTask, data: rooTaskId })
+			client.sendCommand({ commandName: TaskCommandName.CancelTask })
 			await new Promise((resolve) => setTimeout(resolve, 5_000)) // Allow some time for the task to cancel.
 		}
 
@@ -289,7 +289,7 @@ export const runTaskInVscode = async ({ run, task, publish, logger, jobToken }:
 
 	if (rooTaskId && !isClientDisconnected) {
 		logger.info("closing task")
-		client.sendCommand({ commandName: TaskCommandName.CloseTask, data: rooTaskId })
+		client.sendCommand({ commandName: TaskCommandName.CloseTask })
 		await new Promise((resolve) => setTimeout(resolve, 2_000)) // Allow some time for the window to close.
 	}
 

+ 2 - 2
packages/types/src/__tests__/ipc.test.ts

@@ -27,7 +27,7 @@ describe("IPC Types", () => {
 				const result = taskCommandSchema.safeParse(resumeTaskCommand)
 				expect(result.success).toBe(true)
 
-				if (result.success) {
+				if (result.success && result.data.commandName === TaskCommandName.ResumeTask) {
 					expect(result.data.commandName).toBe("ResumeTask")
 					expect(result.data.data).toBe("non-existent-task-id")
 				}
@@ -45,7 +45,7 @@ describe("IPC Types", () => {
 			const result = taskCommandSchema.safeParse(resumeTaskCommand)
 			expect(result.success).toBe(true)
 
-			if (result.success) {
+			if (result.success && result.data.commandName === TaskCommandName.ResumeTask) {
 				expect(result.data.commandName).toBe("ResumeTask")
 				expect(result.data.data).toBe("task-123")
 			}

+ 0 - 2
packages/types/src/cloud.ts

@@ -95,7 +95,6 @@ export const organizationDefaultSettingsSchema = globalSettingsSchema
 	.pick({
 		enableCheckpoints: true,
 		maxOpenTabsContext: true,
-		maxReadFileLine: true,
 		maxWorkspaceFiles: true,
 		showRooIgnoredFiles: true,
 		terminalCommandDelay: true,
@@ -108,7 +107,6 @@ export const organizationDefaultSettingsSchema = globalSettingsSchema
 	.merge(
 		z.object({
 			maxOpenTabsContext: z.number().int().nonnegative().optional(),
-			maxReadFileLine: z.number().int().gte(-1).optional(),
 			maxWorkspaceFiles: z.number().int().nonnegative().optional(),
 			terminalCommandDelay: z.number().int().nonnegative().optional(),
 			terminalShellIntegrationTimeout: z.number().int().nonnegative().optional(),

+ 8 - 1
packages/types/src/events.ts

@@ -1,6 +1,6 @@
 import { z } from "zod"
 
-import { clineMessageSchema, tokenUsageSchema } from "./message.js"
+import { clineMessageSchema, queuedMessageSchema, tokenUsageSchema } from "./message.js"
 import { toolNamesSchema, toolUsageSchema } from "./tool.js"
 
 /**
@@ -35,6 +35,7 @@ export enum RooCodeEventName {
 	TaskModeSwitched = "taskModeSwitched",
 	TaskAskResponded = "taskAskResponded",
 	TaskUserMessage = "taskUserMessage",
+	QueuedMessagesUpdated = "queuedMessagesUpdated",
 
 	// Task Analytics
 	TaskTokenUsageUpdated = "taskTokenUsageUpdated",
@@ -100,6 +101,7 @@ export const rooCodeEventsSchema = z.object({
 	[RooCodeEventName.TaskModeSwitched]: z.tuple([z.string(), z.string()]),
 	[RooCodeEventName.TaskAskResponded]: z.tuple([z.string()]),
 	[RooCodeEventName.TaskUserMessage]: z.tuple([z.string()]),
+	[RooCodeEventName.QueuedMessagesUpdated]: z.tuple([z.string(), z.array(queuedMessageSchema)]),
 
 	[RooCodeEventName.TaskToolFailed]: z.tuple([z.string(), toolNamesSchema, z.string()]),
 	[RooCodeEventName.TaskTokenUsageUpdated]: z.tuple([z.string(), tokenUsageSchema, toolUsageSchema]),
@@ -217,6 +219,11 @@ export const taskEventSchema = z.discriminatedUnion("eventName", [
 		payload: rooCodeEventsSchema.shape[RooCodeEventName.TaskAskResponded],
 		taskId: z.number().optional(),
 	}),
+	z.object({
+		eventName: z.literal(RooCodeEventName.QueuedMessagesUpdated),
+		payload: rooCodeEventsSchema.shape[RooCodeEventName.QueuedMessagesUpdated],
+		taskId: z.number().optional(),
+	}),
 
 	// Task Analytics
 	z.object({

+ 0 - 3
packages/types/src/global-settings.ts

@@ -119,7 +119,6 @@ export const globalSettingsSchema = z.object({
 	allowedMaxCost: z.number().nullish(),
 	autoCondenseContext: z.boolean().optional(),
 	autoCondenseContextPercent: z.number().optional(),
-	maxConcurrentFileReads: z.number().optional(),
 
 	/**
 	 * Whether to include current time in the environment details
@@ -173,7 +172,6 @@ export const globalSettingsSchema = z.object({
 	maxWorkspaceFiles: z.number().optional(),
 	showRooIgnoredFiles: z.boolean().optional(),
 	enableSubfolderRules: z.boolean().optional(),
-	maxReadFileLine: z.number().optional(),
 	maxImageFileSize: z.number().optional(),
 	maxTotalImageSize: z.number().optional(),
 
@@ -389,7 +387,6 @@ export const EVALS_SETTINGS: RooCodeSettings = {
 	maxWorkspaceFiles: 200,
 	maxGitStatusFiles: 20,
 	showRooIgnoredFiles: true,
-	maxReadFileLine: -1, // -1 to enable full file reading.
 
 	includeDiagnosticMessages: true,
 	maxDiagnosticMessages: 50,

+ 0 - 2
packages/types/src/ipc.ts

@@ -64,11 +64,9 @@ export const taskCommandSchema = z.discriminatedUnion("commandName", [
 	}),
 	z.object({
 		commandName: z.literal(TaskCommandName.CancelTask),
-		data: z.string(),
 	}),
 	z.object({
 		commandName: z.literal(TaskCommandName.CloseTask),
-		data: z.string(),
 	}),
 	z.object({
 		commandName: z.literal(TaskCommandName.ResumeTask),

+ 22 - 0
packages/types/src/providers/anthropic.ts

@@ -1,6 +1,7 @@
 import type { ModelInfo } from "../model.js"
 
 // https://docs.anthropic.com/en/docs/about-claude/models
+// https://platform.claude.com/docs/en/about-claude/pricing
 
 export type AnthropicModelId = keyof typeof anthropicModels
 export const anthropicDefaultModelId: AnthropicModelId = "claude-sonnet-4-5"
@@ -48,6 +49,27 @@ export const anthropicModels = {
 			},
 		],
 	},
+	"claude-opus-4-6": {
+		maxTokens: 128_000, // Overridden to 8k if `enableReasoningEffort` is false.
+		contextWindow: 200_000, // Default 200K, extendable to 1M with beta flag
+		supportsImages: true,
+		supportsPromptCache: true,
+		inputPrice: 5.0, // $5 per million input tokens (≤200K context)
+		outputPrice: 25.0, // $25 per million output tokens (≤200K context)
+		cacheWritesPrice: 6.25, // $6.25 per million tokens
+		cacheReadsPrice: 0.5, // $0.50 per million tokens
+		supportsReasoningBudget: true,
+		// Tiered pricing for extended context (requires beta flag)
+		tiers: [
+			{
+				contextWindow: 1_000_000, // 1M tokens with beta flag
+				inputPrice: 10.0, // $10 per million input tokens (>200K context)
+				outputPrice: 37.5, // $37.50 per million output tokens (>200K context)
+				cacheWritesPrice: 12.5, // $12.50 per million tokens (>200K context)
+				cacheReadsPrice: 1.0, // $1.00 per million tokens (>200K context)
+			},
+		],
+	},
 	"claude-opus-4-5-20251101": {
 		maxTokens: 32_000, // Overridden to 8k if `enableReasoningEffort` is false.
 		contextWindow: 200_000,

+ 27 - 0
packages/types/src/providers/bedrock.ts

@@ -119,6 +119,30 @@ export const bedrockModels = {
 		maxCachePoints: 4,
 		cachableFields: ["system", "messages", "tools"],
 	},
+	"anthropic.claude-opus-4-6-v1:0": {
+		maxTokens: 8192,
+		contextWindow: 200_000, // Default 200K, extendable to 1M with beta flag 'context-1m-2025-08-07'
+		supportsImages: true,
+		supportsPromptCache: true,
+		supportsReasoningBudget: true,
+		inputPrice: 5.0, // $5 per million input tokens (≤200K context)
+		outputPrice: 25.0, // $25 per million output tokens (≤200K context)
+		cacheWritesPrice: 6.25, // $6.25 per million tokens
+		cacheReadsPrice: 0.5, // $0.50 per million tokens
+		minTokensPerCachePoint: 1024,
+		maxCachePoints: 4,
+		cachableFields: ["system", "messages", "tools"],
+		// Tiered pricing for extended context (requires beta flag 'context-1m-2025-08-07')
+		tiers: [
+			{
+				contextWindow: 1_000_000, // 1M tokens with beta flag
+				inputPrice: 10.0, // $10 per million input tokens (>200K context)
+				outputPrice: 37.5, // $37.50 per million output tokens (>200K context)
+				cacheWritesPrice: 12.5, // $12.50 per million tokens (>200K context)
+				cacheReadsPrice: 1.0, // $1.00 per million tokens (>200K context)
+			},
+		],
+	},
 	"anthropic.claude-opus-4-5-20251101-v1:0": {
 		maxTokens: 8192,
 		contextWindow: 200_000,
@@ -475,6 +499,7 @@ export const BEDROCK_REGIONS = [
 export const BEDROCK_1M_CONTEXT_MODEL_IDS = [
 	"anthropic.claude-sonnet-4-20250514-v1:0",
 	"anthropic.claude-sonnet-4-5-20250929-v1:0",
+	"anthropic.claude-opus-4-6-v1:0",
 ] as const
 
 // Amazon Bedrock models that support Global Inference profiles
@@ -483,11 +508,13 @@ export const BEDROCK_1M_CONTEXT_MODEL_IDS = [
 // - Claude Sonnet 4.5
 // - Claude Haiku 4.5
 // - Claude Opus 4.5
+// - Claude Opus 4.6
 export const BEDROCK_GLOBAL_INFERENCE_MODEL_IDS = [
 	"anthropic.claude-sonnet-4-20250514-v1:0",
 	"anthropic.claude-sonnet-4-5-20250929-v1:0",
 	"anthropic.claude-haiku-4-5-20251001-v1:0",
 	"anthropic.claude-opus-4-5-20251101-v1:0",
+	"anthropic.claude-opus-4-6-v1:0",
 ] as const
 
 // Amazon Bedrock Service Tier types

+ 15 - 1
packages/types/src/providers/openai-codex.ts

@@ -16,7 +16,7 @@ import type { ModelInfo } from "../model.js"
 
 export type OpenAiCodexModelId = keyof typeof openAiCodexModels
 
-export const openAiCodexDefaultModelId: OpenAiCodexModelId = "gpt-5.2-codex"
+export const openAiCodexDefaultModelId: OpenAiCodexModelId = "gpt-5.3-codex"
 
 /**
  * Models available through the Codex OAuth flow.
@@ -54,6 +54,20 @@ export const openAiCodexModels = {
 		supportsTemperature: false,
 		description: "GPT-5.1 Codex: GPT-5.1 optimized for agentic coding via ChatGPT subscription",
 	},
+	"gpt-5.3-codex": {
+		maxTokens: 128000,
+		contextWindow: 400000,
+		includedTools: ["apply_patch"],
+		excludedTools: ["apply_diff", "write_to_file"],
+		supportsImages: true,
+		supportsPromptCache: true,
+		supportsReasoningEffort: ["low", "medium", "high", "xhigh"],
+		reasoningEffort: "medium",
+		inputPrice: 0,
+		outputPrice: 0,
+		supportsTemperature: false,
+		description: "GPT-5.3 Codex: OpenAI's flagship coding model via ChatGPT subscription",
+	},
 	"gpt-5.2-codex": {
 		maxTokens: 128000,
 		contextWindow: 400000,

+ 4 - 2
packages/types/src/providers/openrouter.ts

@@ -40,8 +40,9 @@ export const OPEN_ROUTER_PROMPT_CACHING_MODELS = new Set([
 	"anthropic/claude-sonnet-4.5",
 	"anthropic/claude-opus-4",
 	"anthropic/claude-opus-4.1",
-	"anthropic/claude-haiku-4.5",
 	"anthropic/claude-opus-4.5",
+	"anthropic/claude-opus-4.6",
+	"anthropic/claude-haiku-4.5",
 	"google/gemini-2.5-flash-preview",
 	"google/gemini-2.5-flash-preview:thinking",
 	"google/gemini-2.5-flash-preview-05-20",
@@ -70,9 +71,10 @@ export const OPEN_ROUTER_REASONING_BUDGET_MODELS = new Set([
 	"anthropic/claude-3.7-sonnet:beta",
 	"anthropic/claude-opus-4",
 	"anthropic/claude-opus-4.1",
+	"anthropic/claude-opus-4.5",
+	"anthropic/claude-opus-4.6",
 	"anthropic/claude-sonnet-4",
 	"anthropic/claude-sonnet-4.5",
-	"anthropic/claude-opus-4.5",
 	"anthropic/claude-haiku-4.5",
 	"google/gemini-2.5-pro-preview",
 	"google/gemini-2.5-pro",

+ 4 - 0
packages/types/src/providers/vercel-ai-gateway.ts

@@ -11,6 +11,8 @@ export const VERCEL_AI_GATEWAY_PROMPT_CACHING_MODELS = new Set([
 	"anthropic/claude-3.7-sonnet",
 	"anthropic/claude-opus-4",
 	"anthropic/claude-opus-4.1",
+	"anthropic/claude-opus-4.5",
+	"anthropic/claude-opus-4.6",
 	"anthropic/claude-sonnet-4",
 	"openai/gpt-4.1",
 	"openai/gpt-4.1-mini",
@@ -50,6 +52,8 @@ export const VERCEL_AI_GATEWAY_VISION_AND_TOOLS_MODELS = new Set([
 	"anthropic/claude-3.7-sonnet",
 	"anthropic/claude-opus-4",
 	"anthropic/claude-opus-4.1",
+	"anthropic/claude-opus-4.5",
+	"anthropic/claude-opus-4.6",
 	"anthropic/claude-sonnet-4",
 	"google/gemini-1.5-flash",
 	"google/gemini-1.5-pro",

+ 26 - 1
packages/types/src/providers/vertex.ts

@@ -274,6 +274,27 @@ export const vertexModels = {
 		cacheReadsPrice: 0.1,
 		supportsReasoningBudget: true,
 	},
+	"claude-opus-4-6": {
+		maxTokens: 8192,
+		contextWindow: 200_000, // Default 200K, extendable to 1M with beta flag 'context-1m-2025-08-07'
+		supportsImages: true,
+		supportsPromptCache: true,
+		inputPrice: 5.0, // $5 per million input tokens (≤200K context)
+		outputPrice: 25.0, // $25 per million output tokens (≤200K context)
+		cacheWritesPrice: 6.25, // $6.25 per million tokens
+		cacheReadsPrice: 0.5, // $0.50 per million tokens
+		supportsReasoningBudget: true,
+		// Tiered pricing for extended context (requires beta flag 'context-1m-2025-08-07')
+		tiers: [
+			{
+				contextWindow: 1_000_000, // 1M tokens with beta flag
+				inputPrice: 10.0, // $10 per million input tokens (>200K context)
+				outputPrice: 37.5, // $37.50 per million output tokens (>200K context)
+				cacheWritesPrice: 12.5, // $12.50 per million tokens (>200K context)
+				cacheReadsPrice: 1.0, // $1.00 per million tokens (>200K context)
+			},
+		],
+	},
 	"claude-opus-4-5@20251101": {
 		maxTokens: 8192,
 		contextWindow: 200_000,
@@ -467,7 +488,11 @@ export const vertexModels = {
 
 // Vertex AI models that support 1M context window beta
 // Uses the same beta header 'context-1m-2025-08-07' as Anthropic and Bedrock
-export const VERTEX_1M_CONTEXT_MODEL_IDS = ["claude-sonnet-4@20250514", "claude-sonnet-4-5@20250929"] as const
+export const VERTEX_1M_CONTEXT_MODEL_IDS = [
+	"claude-sonnet-4@20250514",
+	"claude-sonnet-4-5@20250929",
+	"claude-opus-4-6",
+] as const
 
 export const VERTEX_REGIONS = [
 	{ value: "global", label: "global" },
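The `claude-opus-4-6` entry above introduces tiered pricing: $5/$25 per million input/output tokens within the base 200K window, rising to $10/$37.50 beyond it (with the 1M-context beta flag). A worked sketch of the tier-selection arithmetic — the selection rule here (pick the smallest tier whose `contextWindow` covers the request) is an assumption; the real lookup in the codebase may differ:

```typescript
// Illustrative tier selection for claude-opus-4-6's pricing above.
interface PriceTier {
	contextWindow: number
	inputPrice: number // $ per million input tokens
	outputPrice: number // $ per million output tokens
}

const base: PriceTier = { contextWindow: 200_000, inputPrice: 5.0, outputPrice: 25.0 }
const tiers: PriceTier[] = [{ contextWindow: 1_000_000, inputPrice: 10.0, outputPrice: 37.5 }]

function inputCostUsd(totalContextTokens: number, inputTokens: number): number {
	// Base pricing applies within the 200K window; otherwise take the first
	// extended tier that can hold the context.
	const tier =
		totalContextTokens <= base.contextWindow
			? base
			: (tiers.find((t) => totalContextTokens <= t.contextWindow) ?? tiers[tiers.length - 1])
	return (inputTokens / 1_000_000) * tier.inputPrice
}

// 100K tokens within the 200K window: 100_000 / 1e6 * $5 = $0.50
const small = inputCostUsd(100_000, 100_000)
// 300K tokens (needs the 1M beta window): 300_000 / 1e6 * $10 = $3.00
const large = inputCostUsd(300_000, 300_000)
```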

+ 4 - 0
packages/types/src/task.ts

@@ -95,6 +95,9 @@ export interface CreateTaskOptions {
 	initialTodos?: TodoItem[]
 	/** Initial status for the task's history item (e.g., "active" for child tasks) */
 	initialStatus?: "active" | "delegated" | "completed"
+	/** Whether to start the task loop immediately (default: true).
+	 *  When false, the caller must invoke `task.start()` manually. */
+	startTask?: boolean
 }
 
 export enum TaskStatus {
@@ -154,6 +157,7 @@ export type TaskEvents = {
 	[RooCodeEventName.TaskModeSwitched]: [taskId: string, mode: string]
 	[RooCodeEventName.TaskAskResponded]: []
 	[RooCodeEventName.TaskUserMessage]: [taskId: string]
+	[RooCodeEventName.QueuedMessagesUpdated]: [taskId: string, messages: QueuedMessage[]]
 
 	// Task Analytics
 	[RooCodeEventName.TaskToolFailed]: [taskId: string, tool: ToolName, error: string]
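The new `startTask?: boolean` option decouples task creation from starting its loop, which the delegation fixes below rely on (create the child, persist metadata, then start). A sketch of the intended calling pattern — `createTask` and `persistMetadata` are stand-ins for the provider's real methods, not verified signatures:

```typescript
// Sketch of the deferred-start pattern enabled by startTask: false.
interface TaskLike {
	taskId: string
	start(): void
}

async function delegateSketch(
	createTask: (msg: string, opts: { startTask?: boolean }) => Promise<TaskLike>,
	persistMetadata: (taskId: string) => Promise<void>,
): Promise<TaskLike> {
	// 1. Create the child without starting its loop.
	const child = await createTask("Do something", { startTask: false })
	// 2. Persist parent/child metadata while nothing is running yet.
	await persistMetadata(child.taskId)
	// 3. Only now start the task loop — no race with the metadata writes.
	child.start()
	return child
}
```

This matches the ordering the `provider-delegation.spec.ts` test below asserts: `createTask` → `updateTaskHistory` → `child.start`.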

+ 2 - 0
packages/types/src/telemetry.ts

@@ -73,6 +73,7 @@ export enum TelemetryEventName {
 	CODE_INDEX_ERROR = "Code Index Error",
 	TELEMETRY_SETTINGS_CHANGED = "Telemetry Settings Changed",
 	MODEL_CACHE_EMPTY_RESPONSE = "Model Cache Empty Response",
+	READ_FILE_LEGACY_FORMAT_USED = "Read File Legacy Format Used",
 }
 
 /**
@@ -203,6 +204,7 @@ export const rooCodeTelemetryEventSchema = z.discriminatedUnion("type", [
 			TelemetryEventName.TAB_SHOWN,
 			TelemetryEventName.MODE_SETTINGS_CHANGED,
 			TelemetryEventName.CUSTOM_MODE_CREATED,
+			TelemetryEventName.READ_FILE_LEGACY_FORMAT_USED,
 		]),
 		properties: telemetryPropertiesSchema,
 	}),

+ 80 - 0
packages/types/src/tool-params.ts

@@ -2,16 +2,96 @@
  * Tool parameter type definitions for native protocol
  */
 
+/**
+ * Read mode for the read_file tool.
+ * - "slice": Simple offset/limit reading (default)
+ * - "indentation": Semantic block extraction based on code structure
+ */
+export type ReadFileMode = "slice" | "indentation"
+
+/**
+ * Indentation-mode configuration for the read_file tool.
+ */
+export interface IndentationParams {
+	/** 1-based line number to anchor indentation extraction (defaults to offset) */
+	anchor_line?: number
+	/** Maximum indentation levels to include above anchor (0 = unlimited) */
+	max_levels?: number
+	/** Include sibling blocks at the same indentation level */
+	include_siblings?: boolean
+	/** Include file header (imports, comments at top) */
+	include_header?: boolean
+	/** Hard cap on lines returned for indentation mode */
+	max_lines?: number
+}
+
+/**
+ * Parameters for the read_file tool (new format).
+ *
+ * NOTE: This is the canonical, single-file-per-call shape.
+ */
+export interface ReadFileParams {
+	/** Path to the file, relative to workspace */
+	path: string
+	/** Reading mode: "slice" (default) or "indentation" */
+	mode?: ReadFileMode
+	/** 1-based line number to start reading from (slice mode, default: 1) */
+	offset?: number
+	/** Maximum number of lines to read (default: 2000) */
+	limit?: number
+	/** Indentation-mode configuration (only used when mode === "indentation") */
+	indentation?: IndentationParams
+}
+
+// ─── Legacy Format Types (Backward Compatibility) ─────────────────────────────
+
+/**
+ * Line range specification for legacy read_file format.
+ * Represents a contiguous range of lines [start, end] (1-based, inclusive).
+ */
 export interface LineRange {
 	start: number
 	end: number
 }
 
+/**
+ * File entry for legacy read_file format.
+ * Supports reading multiple disjoint line ranges from a single file.
+ */
 export interface FileEntry {
+	/** Path to the file, relative to workspace */
 	path: string
+	/** Optional list of line ranges to read (if omitted, reads entire file) */
 	lineRanges?: LineRange[]
 }
 
+/**
+ * Legacy parameters for the read_file tool (pre-refactor format).
+ * Supports reading multiple files in a single call with optional line ranges.
+ *
+ * @deprecated Use ReadFileParams instead. This format is maintained for
+ * backward compatibility with existing chat histories.
+ */
+export interface LegacyReadFileParams {
+	/** Array of file entries to read */
+	files: FileEntry[]
+	/** Discriminant flag for type narrowing */
+	_legacyFormat: true
+}
+
+/**
+ * Union type for read_file tool parameters.
+ * Supports both new single-file format and legacy multi-file format.
+ */
+export type ReadFileToolParams = ReadFileParams | LegacyReadFileParams
+
+/**
+ * Type guard to check if params are in legacy format.
+ */
+export function isLegacyReadFileParams(params: ReadFileToolParams): params is LegacyReadFileParams {
+	return "_legacyFormat" in params && params._legacyFormat === true
+}
+
 export interface Coordinate {
 	x: number
 	y: number
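The `isLegacyReadFileParams` guard above narrows the `ReadFileToolParams` union on the `_legacyFormat` discriminant. A self-contained replica of the shapes from the diff plus a small dispatch sketch (the `pathsToRead` helper is illustrative, not from the codebase):

```typescript
// Replica of the new/legacy read_file param shapes and the type guard,
// plus an illustrative dispatch on the discriminant.
interface LineRange { start: number; end: number }
interface FileEntry { path: string; lineRanges?: LineRange[] }

interface ReadFileParams { path: string; offset?: number; limit?: number }
interface LegacyReadFileParams { files: FileEntry[]; _legacyFormat: true }
type ReadFileToolParams = ReadFileParams | LegacyReadFileParams

function isLegacyReadFileParams(params: ReadFileToolParams): params is LegacyReadFileParams {
	return "_legacyFormat" in params && params._legacyFormat === true
}

// Legacy calls fan out per file entry; new-format calls name a single file.
function pathsToRead(params: ReadFileToolParams): string[] {
	return isLegacyReadFileParams(params) ? params.files.map((f) => f.path) : [params.path]
}

const modern = pathsToRead({ path: "src/index.ts", offset: 1, limit: 2000 })
const legacy = pathsToRead({ files: [{ path: "a.ts" }, { path: "b.ts" }], _legacyFormat: true })
// modern → ["src/index.ts"], legacy → ["a.ts", "b.ts"]
```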

+ 2 - 3
packages/types/src/vscode-extension-host.ts

@@ -64,7 +64,6 @@ export interface ExtensionMessage {
 		| "remoteBrowserEnabled"
 		| "ttsStart"
 		| "ttsStop"
-		| "maxReadFileLine"
 		| "fileSearchResults"
 		| "toggleApiConfigPin"
 		| "acceptInput"
@@ -301,7 +300,6 @@ export type ExtensionState = Pick<
 	| "ttsSpeed"
 	| "soundEnabled"
 	| "soundVolume"
-	| "maxConcurrentFileReads"
 	| "terminalOutputPreviewSize"
 	| "terminalShellIntegrationTimeout"
 	| "terminalShellIntegrationDisabled"
@@ -353,7 +351,7 @@ export type ExtensionState = Pick<
 	maxWorkspaceFiles: number // Maximum number of files to include in current working directory details (0-500)
 	showRooIgnoredFiles: boolean // Whether to show .rooignore'd files in listings
 	enableSubfolderRules: boolean // Whether to load rules from subdirectories
-	maxReadFileLine: number // Maximum number of lines to read from a file before truncating
+	maxReadFileLine?: number // Maximum line limit for read_file tool (-1 for default)
 	maxImageFileSize: number // Maximum size of image files to process in MB
 	maxTotalImageSize: number // Maximum total size for all images in a single read operation in MB
 
@@ -814,6 +812,7 @@ export interface ClineSayTool {
 	isProtected?: boolean
 	additionalFileCount?: number // Number of additional files in the same read_file request
 	lineNumber?: number
+	startLine?: number // Starting line for read_file operations (for navigation on click)
 	query?: string
 	batchFiles?: Array<{
 		path: string

+ 0 - 1
src/__tests__/command-mentions.spec.ts

@@ -36,7 +36,6 @@ describe("Command Mentions", () => {
 			false, // showRooIgnoredFiles
 			true, // includeDiagnosticMessages
 			50, // maxDiagnosticMessages
-			undefined, // maxReadFileLine
 		)
 	}
 

+ 35 - 0
src/__tests__/extension.spec.ts

@@ -46,6 +46,11 @@ vi.mock("@dotenvx/dotenvx", () => ({
 	config: vi.fn(),
 }))
 
+// Mock fs so the extension module can safely check for optional .env.
+vi.mock("fs", () => ({
+	existsSync: vi.fn().mockReturnValue(false),
+}))
+
 const mockBridgeOrchestratorDisconnect = vi.fn().mockResolvedValue(undefined)
 
 const mockCloudServiceInstance = {
@@ -239,6 +244,36 @@ describe("extension.ts", () => {
 		authStateChangedHandler = undefined
 	})
 
+	test("does not call dotenvx.config when optional .env does not exist", async () => {
+		vi.resetModules()
+		vi.clearAllMocks()
+
+		const fs = await import("fs")
+		vi.mocked(fs.existsSync).mockReturnValue(false)
+
+		const dotenvx = await import("@dotenvx/dotenvx")
+
+		const { activate } = await import("../extension")
+		await activate(mockContext)
+
+		expect(dotenvx.config).not.toHaveBeenCalled()
+	})
+
+	test("calls dotenvx.config when optional .env exists", async () => {
+		vi.resetModules()
+		vi.clearAllMocks()
+
+		const fs = await import("fs")
+		vi.mocked(fs.existsSync).mockReturnValue(true)
+
+		const dotenvx = await import("@dotenvx/dotenvx")
+
+		const { activate } = await import("../extension")
+		await activate(mockContext)
+
+		expect(dotenvx.config).toHaveBeenCalledTimes(1)
+	})
+
 	test("authStateChangedHandler calls BridgeOrchestrator.disconnect when logged-out event fires", async () => {
 		const { CloudService, BridgeOrchestrator } = await import("@roo-code/cloud")
 

+ 244 - 1
src/__tests__/history-resume-delegation.spec.ts

@@ -387,6 +387,7 @@ describe("History resume delegation - parent metadata transitions", () => {
 
 	it("reopenParentFromDelegation emits events in correct order: TaskDelegationCompleted → TaskDelegationResumed", async () => {
 		const emitSpy = vi.fn()
+		const updateTaskHistory = vi.fn().mockResolvedValue([])
 
 		const provider = {
 			contextProxy: { globalStorageUri: { fsPath: "/tmp" } },
@@ -411,7 +412,7 @@ describe("History resume delegation - parent metadata transitions", () => {
 				overwriteClineMessages: vi.fn().mockResolvedValue(undefined),
 				overwriteApiConversationHistory: vi.fn().mockResolvedValue(undefined),
 			}),
-			updateTaskHistory: vi.fn().mockResolvedValue([]),
+			updateTaskHistory,
 		} as unknown as ClineProvider
 
 		vi.mocked(readTaskMessages).mockResolvedValue([])
@@ -433,6 +434,92 @@ describe("History resume delegation - parent metadata transitions", () => {
 		const resumedIdx = emitSpy.mock.calls.findIndex((c) => c[0] === RooCodeEventName.TaskDelegationResumed)
 		expect(completedIdx).toBeGreaterThanOrEqual(0)
 		expect(resumedIdx).toBeGreaterThan(completedIdx)
+
+		// RPD-05: verify parent metadata persistence happens before TaskDelegationCompleted emit
+		const parentUpdateCallIdx = updateTaskHistory.mock.calls.findIndex((call) => {
+			const item = call[0] as { id?: string; status?: string } | undefined
+			return item?.id === "p3" && item.status === "active"
+		})
+		expect(parentUpdateCallIdx).toBeGreaterThanOrEqual(0)
+
+		const parentUpdateCallOrder = updateTaskHistory.mock.invocationCallOrder[parentUpdateCallIdx]
+		const completedEmitCallOrder = emitSpy.mock.invocationCallOrder[completedIdx]
+		expect(parentUpdateCallOrder).toBeLessThan(completedEmitCallOrder)
+	})
+
+	it("reopenParentFromDelegation continues when overwrite operations fail and still resumes/emits (RPD-06)", async () => {
+		const emitSpy = vi.fn()
+		const parentInstance = {
+			resumeAfterDelegation: vi.fn().mockResolvedValue(undefined),
+			overwriteClineMessages: vi.fn().mockRejectedValue(new Error("ui overwrite failed")),
+			overwriteApiConversationHistory: vi.fn().mockRejectedValue(new Error("api overwrite failed")),
+		}
+
+		const provider = {
+			contextProxy: { globalStorageUri: { fsPath: "/tmp" } },
+			getTaskWithId: vi.fn().mockImplementation(async (id: string) => {
+				if (id === "parent-rpd06") {
+					return {
+						historyItem: {
+							id: "parent-rpd06",
+							status: "delegated",
+							awaitingChildId: "child-rpd06",
+							childIds: ["child-rpd06"],
+							ts: 800,
+							task: "Parent RPD-06",
+							tokensIn: 0,
+							tokensOut: 0,
+							totalCost: 0,
+						},
+					}
+				}
+
+				return {
+					historyItem: {
+						id: "child-rpd06",
+						status: "active",
+						ts: 801,
+						task: "Child RPD-06",
+						tokensIn: 0,
+						tokensOut: 0,
+						totalCost: 0,
+					},
+				}
+			}),
+			emit: emitSpy,
+			getCurrentTask: vi.fn(() => ({ taskId: "child-rpd06" })),
+			removeClineFromStack: vi.fn().mockResolvedValue(undefined),
+			createTaskWithHistoryItem: vi.fn().mockResolvedValue(parentInstance),
+			updateTaskHistory: vi.fn().mockResolvedValue([]),
+		} as unknown as ClineProvider
+
+		vi.mocked(readTaskMessages).mockResolvedValue([])
+		vi.mocked(readApiMessages).mockResolvedValue([])
+
+		await expect(
+			(ClineProvider.prototype as any).reopenParentFromDelegation.call(provider, {
+				parentTaskId: "parent-rpd06",
+				childTaskId: "child-rpd06",
+				completionResultSummary: "Subtask finished despite overwrite failures",
+			}),
+		).resolves.toBeUndefined()
+
+		expect(parentInstance.overwriteClineMessages).toHaveBeenCalledTimes(1)
+		expect(parentInstance.overwriteApiConversationHistory).toHaveBeenCalledTimes(1)
+		expect(parentInstance.resumeAfterDelegation).toHaveBeenCalledTimes(1)
+
+		expect(emitSpy).toHaveBeenCalledWith(
+			RooCodeEventName.TaskDelegationCompleted,
+			"parent-rpd06",
+			"child-rpd06",
+			"Subtask finished despite overwrite failures",
+		)
+		expect(emitSpy).toHaveBeenCalledWith(RooCodeEventName.TaskDelegationResumed, "parent-rpd06", "child-rpd06")
+
+		const completedIdx = emitSpy.mock.calls.findIndex((c) => c[0] === RooCodeEventName.TaskDelegationCompleted)
+		const resumedIdx = emitSpy.mock.calls.findIndex((c) => c[0] === RooCodeEventName.TaskDelegationResumed)
+		expect(completedIdx).toBeGreaterThanOrEqual(0)
+		expect(resumedIdx).toBeGreaterThan(completedIdx)
 	})
 
 	it("reopenParentFromDelegation does NOT emit TaskPaused or TaskUnpaused (new flow only)", async () => {
@@ -480,6 +567,162 @@ describe("History resume delegation - parent metadata transitions", () => {
 		expect(eventNames).not.toContain(RooCodeEventName.TaskSpawned)
 	})
 
+	it("reopenParentFromDelegation skips child close when current task differs and still reopens parent (RPD-02)", async () => {
+		const parentInstance = {
+			resumeAfterDelegation: vi.fn().mockResolvedValue(undefined),
+			overwriteClineMessages: vi.fn().mockResolvedValue(undefined),
+			overwriteApiConversationHistory: vi.fn().mockResolvedValue(undefined),
+		}
+
+		const updateTaskHistory = vi.fn().mockResolvedValue([])
+		const removeClineFromStack = vi.fn().mockResolvedValue(undefined)
+		const createTaskWithHistoryItem = vi.fn().mockResolvedValue(parentInstance)
+
+		const provider = {
+			contextProxy: { globalStorageUri: { fsPath: "/tmp" } },
+			getTaskWithId: vi.fn().mockImplementation(async (id: string) => {
+				if (id === "parent-rpd02") {
+					return {
+						historyItem: {
+							id: "parent-rpd02",
+							status: "delegated",
+							awaitingChildId: "child-rpd02",
+							childIds: ["child-rpd02"],
+							ts: 600,
+							task: "Parent RPD-02",
+							tokensIn: 0,
+							tokensOut: 0,
+							totalCost: 0,
+						},
+					}
+				}
+				return {
+					historyItem: {
+						id: "child-rpd02",
+						status: "active",
+						ts: 601,
+						task: "Child RPD-02",
+						tokensIn: 0,
+						tokensOut: 0,
+						totalCost: 0,
+					},
+				}
+			}),
+			emit: vi.fn(),
+			getCurrentTask: vi.fn(() => ({ taskId: "different-open-task" })),
+			removeClineFromStack,
+			createTaskWithHistoryItem,
+			updateTaskHistory,
+		} as unknown as ClineProvider
+
+		vi.mocked(readTaskMessages).mockResolvedValue([])
+		vi.mocked(readApiMessages).mockResolvedValue([])
+
+		await (ClineProvider.prototype as any).reopenParentFromDelegation.call(provider, {
+			parentTaskId: "parent-rpd02",
+			childTaskId: "child-rpd02",
+			completionResultSummary: "Child done without being current",
+		})
+
+		expect(removeClineFromStack).not.toHaveBeenCalled()
+		expect(updateTaskHistory).toHaveBeenCalledWith(
+			expect.objectContaining({
+				id: "child-rpd02",
+				status: "completed",
+			}),
+		)
+		expect(createTaskWithHistoryItem).toHaveBeenCalledWith(
+			expect.objectContaining({
+				id: "parent-rpd02",
+				status: "active",
+				completedByChildId: "child-rpd02",
+			}),
+			{ startTask: false },
+		)
+		expect(parentInstance.resumeAfterDelegation).toHaveBeenCalledTimes(1)
+	})
+
+	it("reopenParentFromDelegation logs child status persistence failure and continues reopen flow (RPD-04)", async () => {
+		const logSpy = vi.fn()
+		const emitSpy = vi.fn()
+		const parentInstance = {
+			resumeAfterDelegation: vi.fn().mockResolvedValue(undefined),
+			overwriteClineMessages: vi.fn().mockResolvedValue(undefined),
+			overwriteApiConversationHistory: vi.fn().mockResolvedValue(undefined),
+		}
+
+		const updateTaskHistory = vi.fn().mockImplementation(async (historyItem: { id?: string }) => {
+			if (historyItem.id === "child-rpd04") {
+				throw new Error("child status persist failed")
+			}
+			return []
+		})
+
+		const provider = {
+			contextProxy: { globalStorageUri: { fsPath: "/tmp" } },
+			getTaskWithId: vi.fn().mockImplementation(async (id: string) => {
+				if (id === "parent-rpd04") {
+					return {
+						historyItem: {
+							id: "parent-rpd04",
+							status: "delegated",
+							awaitingChildId: "child-rpd04",
+							childIds: ["child-rpd04"],
+							ts: 700,
+							task: "Parent RPD-04",
+							tokensIn: 0,
+							tokensOut: 0,
+							totalCost: 0,
+						},
+					}
+				}
+				return {
+					historyItem: {
+						id: "child-rpd04",
+						status: "active",
+						ts: 701,
+						task: "Child RPD-04",
+						tokensIn: 0,
+						tokensOut: 0,
+						totalCost: 0,
+					},
+				}
+			}),
+			emit: emitSpy,
+			log: logSpy,
+			getCurrentTask: vi.fn(() => ({ taskId: "child-rpd04" })),
+			removeClineFromStack: vi.fn().mockResolvedValue(undefined),
+			createTaskWithHistoryItem: vi.fn().mockResolvedValue(parentInstance),
+			updateTaskHistory,
+		} as unknown as ClineProvider
+
+		vi.mocked(readTaskMessages).mockResolvedValue([])
+		vi.mocked(readApiMessages).mockResolvedValue([])
+
+		await expect(
+			(ClineProvider.prototype as any).reopenParentFromDelegation.call(provider, {
+				parentTaskId: "parent-rpd04",
+				childTaskId: "child-rpd04",
+				completionResultSummary: "Child completion with persistence failure",
+			}),
+		).resolves.toBeUndefined()
+
+		expect(logSpy).toHaveBeenCalledWith(
+			expect.stringContaining(
+				"[reopenParentFromDelegation] Failed to persist child completed status for child-rpd04:",
+			),
+		)
+		expect(updateTaskHistory).toHaveBeenCalledWith(
+			expect.objectContaining({
+				id: "parent-rpd04",
+				status: "active",
+				completedByChildId: "child-rpd04",
+			}),
+		)
+		expect(parentInstance.resumeAfterDelegation).toHaveBeenCalledTimes(1)
+		expect(emitSpy).toHaveBeenCalledWith(RooCodeEventName.TaskDelegationResumed, "parent-rpd04", "child-rpd04")
+	})
+
 	it("handles empty history gracefully when injecting synthetic messages", async () => {
 		const provider = {
 			contextProxy: { globalStorageUri: { fsPath: "/tmp" } },

+ 55 - 2
src/__tests__/provider-delegation.spec.ts

@@ -9,9 +9,10 @@ describe("ClineProvider.delegateParentAndOpenChild()", () => {
 		const providerEmit = vi.fn()
 		const parentTask = { taskId: "parent-1", emit: vi.fn() } as any
 
+		const childStart = vi.fn()
 		const updateTaskHistory = vi.fn()
 		const removeClineFromStack = vi.fn().mockResolvedValue(undefined)
-		const createTask = vi.fn().mockResolvedValue({ taskId: "child-1" })
+		const createTask = vi.fn().mockResolvedValue({ taskId: "child-1", start: childStart })
 		const handleModeSwitch = vi.fn().mockResolvedValue(undefined)
 		const getTaskWithId = vi.fn().mockImplementation(async (id: string) => {
 			if (id === "parent-1") {
@@ -62,10 +63,11 @@ describe("ClineProvider.delegateParentAndOpenChild()", () => {
 
 		// Invariant: parent closed before child creation
 		expect(removeClineFromStack).toHaveBeenCalledTimes(1)
-		// Child task is created with initialStatus: "active" to avoid race conditions
+		// Child task is created with startTask: false and initialStatus: "active"
 		expect(createTask).toHaveBeenCalledWith("Do something", undefined, parentTask, {
 			initialTodos: [],
 			initialStatus: "active",
+			startTask: false,
 		})
 
 		// Metadata persistence - parent gets "delegated" status (child status is set at creation via initialStatus)
@@ -83,10 +85,61 @@ describe("ClineProvider.delegateParentAndOpenChild()", () => {
 			}),
 		)
 
+		// child.start() must be called AFTER parent metadata is persisted
+		expect(childStart).toHaveBeenCalledTimes(1)
+
 		// Event emission (provider-level)
 		expect(providerEmit).toHaveBeenCalledWith(RooCodeEventName.TaskDelegated, "parent-1", "child-1")
 
 		// Mode switch
 		expect(handleModeSwitch).toHaveBeenCalledWith("code")
 	})
+
+	it("calls child.start() only after parent metadata is persisted (no race condition)", async () => {
+		const callOrder: string[] = []
+
+		const parentTask = { taskId: "parent-1", emit: vi.fn() } as any
+		const childStart = vi.fn(() => callOrder.push("child.start"))
+
+		const updateTaskHistory = vi.fn(async () => {
+			callOrder.push("updateTaskHistory")
+		})
+		const removeClineFromStack = vi.fn().mockResolvedValue(undefined)
+		const createTask = vi.fn(async () => {
+			callOrder.push("createTask")
+			return { taskId: "child-1", start: childStart }
+		})
+		const handleModeSwitch = vi.fn().mockResolvedValue(undefined)
+		const getTaskWithId = vi.fn().mockResolvedValue({
+			historyItem: {
+				id: "parent-1",
+				task: "Parent",
+				tokensIn: 0,
+				tokensOut: 0,
+				totalCost: 0,
+				childIds: [],
+			},
+		})
+
+		const provider = {
+			emit: vi.fn(),
+			getCurrentTask: vi.fn(() => parentTask),
+			removeClineFromStack,
+			createTask,
+			getTaskWithId,
+			updateTaskHistory,
+			handleModeSwitch,
+			log: vi.fn(),
+		} as unknown as ClineProvider
+
+		await (ClineProvider.prototype as any).delegateParentAndOpenChild.call(provider, {
+			parentTaskId: "parent-1",
+			message: "Do something",
+			initialTodos: [],
+			mode: "code",
+		})
+
+		// Verify ordering: createTask → updateTaskHistory → child.start
+		expect(callOrder).toEqual(["createTask", "updateTaskHistory", "child.start"])
+	})
 })

+ 281 - 0
src/__tests__/removeClineFromStack-delegation.spec.ts

@@ -0,0 +1,281 @@
+// npx vitest run __tests__/removeClineFromStack-delegation.spec.ts
+
+import { describe, it, expect, vi } from "vitest"
+import { ClineProvider } from "../core/webview/ClineProvider"
+
+describe("ClineProvider.removeClineFromStack() delegation awareness", () => {
+	/**
+	 * Helper to build a minimal mock provider with a single task on the stack.
+	 * The task's parentTaskId and taskId are configurable.
+	 */
+	function buildMockProvider(opts: {
+		childTaskId: string
+		parentTaskId?: string
+		parentHistoryItem?: Record<string, any>
+		getTaskWithIdError?: Error
+	}) {
+		const childTask = {
+			taskId: opts.childTaskId,
+			instanceId: "inst-1",
+			parentTaskId: opts.parentTaskId,
+			emit: vi.fn(),
+			abortTask: vi.fn().mockResolvedValue(undefined),
+		}
+
+		const updateTaskHistory = vi.fn().mockResolvedValue([])
+		const getTaskWithId = opts.getTaskWithIdError
+			? vi.fn().mockRejectedValue(opts.getTaskWithIdError)
+			: vi.fn().mockImplementation(async (id: string) => {
+					if (id === opts.parentTaskId && opts.parentHistoryItem) {
+						return { historyItem: { ...opts.parentHistoryItem } }
+					}
+					throw new Error("Task not found")
+				})
+
+		const provider = {
+			clineStack: [childTask] as any[],
+			taskEventListeners: new Map(),
+			log: vi.fn(),
+			getTaskWithId,
+			updateTaskHistory,
+		}
+
+		return { provider, childTask, updateTaskHistory, getTaskWithId }
+	}
+
+	it("repairs parent metadata (delegated → active) when a delegated child is removed", async () => {
+		const { provider, updateTaskHistory, getTaskWithId } = buildMockProvider({
+			childTaskId: "child-1",
+			parentTaskId: "parent-1",
+			parentHistoryItem: {
+				id: "parent-1",
+				task: "Parent task",
+				ts: 1000,
+				number: 1,
+				tokensIn: 0,
+				tokensOut: 0,
+				totalCost: 0,
+				status: "delegated",
+				awaitingChildId: "child-1",
+				delegatedToId: "child-1",
+				childIds: ["child-1"],
+			},
+		})
+
+		await (ClineProvider.prototype as any).removeClineFromStack.call(provider)
+
+		// Stack should be empty after pop
+		expect(provider.clineStack).toHaveLength(0)
+
+		// Parent lookup should have been called
+		expect(getTaskWithId).toHaveBeenCalledWith("parent-1")
+
+		// Parent metadata should be repaired
+		expect(updateTaskHistory).toHaveBeenCalledTimes(1)
+		const updatedParent = updateTaskHistory.mock.calls[0][0]
+		expect(updatedParent).toEqual(
+			expect.objectContaining({
+				id: "parent-1",
+				status: "active",
+				awaitingChildId: undefined,
+			}),
+		)
+
+		// Log the repair
+		expect(provider.log).toHaveBeenCalledWith(expect.stringContaining("Repaired parent parent-1 metadata"))
+	})
+
+	it("does NOT modify parent metadata when the task has no parentTaskId (non-delegated)", async () => {
+		const { provider, updateTaskHistory, getTaskWithId } = buildMockProvider({
+			childTaskId: "standalone-1",
+			// No parentTaskId — this is a top-level task
+		})
+
+		await (ClineProvider.prototype as any).removeClineFromStack.call(provider)
+
+		// Stack should be empty
+		expect(provider.clineStack).toHaveLength(0)
+
+		// No parent lookup or update should happen
+		expect(getTaskWithId).not.toHaveBeenCalled()
+		expect(updateTaskHistory).not.toHaveBeenCalled()
+	})
+
+	it("does NOT modify parent metadata when awaitingChildId does not match the popped child", async () => {
+		const { provider, updateTaskHistory, getTaskWithId } = buildMockProvider({
+			childTaskId: "child-1",
+			parentTaskId: "parent-1",
+			parentHistoryItem: {
+				id: "parent-1",
+				task: "Parent task",
+				ts: 1000,
+				number: 1,
+				tokensIn: 0,
+				tokensOut: 0,
+				totalCost: 0,
+				status: "delegated",
+				awaitingChildId: "child-OTHER", // different child
+				delegatedToId: "child-OTHER",
+				childIds: ["child-OTHER"],
+			},
+		})
+
+		await (ClineProvider.prototype as any).removeClineFromStack.call(provider)
+
+		// Parent was looked up but should NOT be updated
+		expect(getTaskWithId).toHaveBeenCalledWith("parent-1")
+		expect(updateTaskHistory).not.toHaveBeenCalled()
+	})
+
+	it("does NOT modify parent metadata when parent status is not 'delegated'", async () => {
+		const { provider, updateTaskHistory, getTaskWithId } = buildMockProvider({
+			childTaskId: "child-1",
+			parentTaskId: "parent-1",
+			parentHistoryItem: {
+				id: "parent-1",
+				task: "Parent task",
+				ts: 1000,
+				number: 1,
+				tokensIn: 0,
+				tokensOut: 0,
+				totalCost: 0,
+				status: "completed", // already completed
+				awaitingChildId: "child-1",
+				childIds: ["child-1"],
+			},
+		})
+
+		await (ClineProvider.prototype as any).removeClineFromStack.call(provider)
+
+		expect(getTaskWithId).toHaveBeenCalledWith("parent-1")
+		expect(updateTaskHistory).not.toHaveBeenCalled()
+	})
+
+	it("catches and logs errors during parent metadata repair without blocking the pop", async () => {
+		const { provider, childTask, updateTaskHistory, getTaskWithId } = buildMockProvider({
+			childTaskId: "child-1",
+			parentTaskId: "parent-1",
+			getTaskWithIdError: new Error("Storage unavailable"),
+		})
+
+		// Should NOT throw
+		await (ClineProvider.prototype as any).removeClineFromStack.call(provider)
+
+		// Stack should still be empty (pop was not blocked)
+		expect(provider.clineStack).toHaveLength(0)
+
+		// The abort should still have been called
+		expect(childTask.abortTask).toHaveBeenCalledWith(true)
+
+		// Error should be logged as non-fatal
+		expect(provider.log).toHaveBeenCalledWith(
+			expect.stringContaining("Failed to repair parent metadata for parent-1 (non-fatal)"),
+		)
+
+		// No update should have been attempted
+		expect(updateTaskHistory).not.toHaveBeenCalled()
+	})
+
+	it("handles empty stack gracefully", async () => {
+		const provider = {
+			clineStack: [] as any[],
+			taskEventListeners: new Map(),
+			log: vi.fn(),
+			getTaskWithId: vi.fn(),
+			updateTaskHistory: vi.fn(),
+		}
+
+		// Should not throw
+		await (ClineProvider.prototype as any).removeClineFromStack.call(provider)
+
+		expect(provider.clineStack).toHaveLength(0)
+		expect(provider.getTaskWithId).not.toHaveBeenCalled()
+		expect(provider.updateTaskHistory).not.toHaveBeenCalled()
+	})
+
+	it("skips delegation repair when skipDelegationRepair option is true", async () => {
+		const { provider, updateTaskHistory, getTaskWithId } = buildMockProvider({
+			childTaskId: "child-1",
+			parentTaskId: "parent-1",
+			parentHistoryItem: {
+				id: "parent-1",
+				task: "Parent task",
+				ts: 1000,
+				number: 1,
+				tokensIn: 0,
+				tokensOut: 0,
+				totalCost: 0,
+				status: "delegated",
+				awaitingChildId: "child-1",
+				delegatedToId: "child-1",
+				childIds: ["child-1"],
+			},
+		})
+
+		// Call with skipDelegationRepair: true (as delegateParentAndOpenChild would)
+		await (ClineProvider.prototype as any).removeClineFromStack.call(provider, { skipDelegationRepair: true })
+
+		// Stack should be empty after pop
+		expect(provider.clineStack).toHaveLength(0)
+
+		// Parent lookup should NOT have been called — repair was skipped entirely
+		expect(getTaskWithId).not.toHaveBeenCalled()
+		expect(updateTaskHistory).not.toHaveBeenCalled()
+	})
+
+	it("does NOT reset grandparent during A→B→C nested delegation transition", async () => {
+		// Scenario: A delegated to B, B is now delegating to C.
+		// delegateParentAndOpenChild() pops B via removeClineFromStack({ skipDelegationRepair: true }).
+		// Grandparent A should remain "delegated" — its metadata must not be repaired.
+		const grandparentHistory = {
+			id: "task-A",
+			task: "Grandparent task",
+			ts: 1000,
+			number: 1,
+			tokensIn: 0,
+			tokensOut: 0,
+			totalCost: 0,
+			status: "delegated",
+			awaitingChildId: "task-B",
+			delegatedToId: "task-B",
+			childIds: ["task-B"],
+		}
+
+		const taskB = {
+			taskId: "task-B",
+			instanceId: "inst-B",
+			parentTaskId: "task-A",
+			emit: vi.fn(),
+			abortTask: vi.fn().mockResolvedValue(undefined),
+		}
+
+		const getTaskWithId = vi.fn().mockImplementation(async (id: string) => {
+			if (id === "task-A") {
+				return { historyItem: { ...grandparentHistory } }
+			}
+			throw new Error("Task not found")
+		})
+		const updateTaskHistory = vi.fn().mockResolvedValue([])
+
+		const provider = {
+			clineStack: [taskB] as any[],
+			taskEventListeners: new Map(),
+			log: vi.fn(),
+			getTaskWithId,
+			updateTaskHistory,
+		}
+
+		// Simulate what delegateParentAndOpenChild does: pop B with skipDelegationRepair
+		await (ClineProvider.prototype as any).removeClineFromStack.call(provider, { skipDelegationRepair: true })
+
+		// B was popped
+		expect(provider.clineStack).toHaveLength(0)
+
+		// Grandparent A should NOT have been looked up or modified
+		expect(getTaskWithId).not.toHaveBeenCalled()
+		expect(updateTaskHistory).not.toHaveBeenCalled()
+
+		// Grandparent A's metadata remains intact (delegated, awaitingChildId: task-B)
+		// The caller (delegateParentAndOpenChild) will update A to point to C separately.
+	})
+})

+ 15 - 21
src/api/providers/__tests__/bedrock-native-tools.spec.ts

@@ -135,23 +135,18 @@ describe("AwsBedrockHandler Native Tool Calling", () => {
 						parameters: {
 							type: "object",
 							properties: {
-								files: {
-									type: "array",
-									items: {
-										type: "object",
-										properties: {
-											path: { type: "string" },
-											line_ranges: {
-												type: ["array", "null"],
-												items: { type: "integer" },
-												description: "Optional line ranges",
-											},
+								path: { type: "string" },
+								indentation: {
+									type: ["object", "null"],
+									properties: {
+										anchor_line: {
+											type: ["integer", "null"],
+											description: "Optional anchor line",
 										},
-										required: ["path", "line_ranges"],
 									},
 								},
 							},
-							required: ["files"],
+							required: ["path"],
 						},
 					},
 				},
@@ -167,15 +162,14 @@ describe("AwsBedrockHandler Native Tool Calling", () => {
 			expect(executeCommandSchema.properties.cwd.type).toBeUndefined()
 			expect(executeCommandSchema.properties.cwd.description).toBe("Working directory (optional)")
 
-			// Second tool: line_ranges should be transformed from type: ["array", "null"] to anyOf
-			// with items moved inside the array variant (required by GPT-5-mini strict schema validation)
+			// Second tool: nested nullable object should be transformed from type: ["object", "null"] to anyOf
 			const readFileSchema = bedrockTools[1].toolSpec.inputSchema.json as any
-			const lineRanges = readFileSchema.properties.files.items.properties.line_ranges
-			expect(lineRanges.anyOf).toEqual([{ type: "array", items: { type: "integer" } }, { type: "null" }])
-			expect(lineRanges.type).toBeUndefined()
-			// items should now be inside the array variant, not at root
-			expect(lineRanges.items).toBeUndefined()
-			expect(lineRanges.description).toBe("Optional line ranges")
+			const indentation = readFileSchema.properties.indentation
+			expect(indentation.anyOf).toBeDefined()
+			expect(indentation.type).toBeUndefined()
+			// Object-level schema properties are preserved at the root, not inside the anyOf object variant
+			expect(indentation.additionalProperties).toBe(false)
+			expect(indentation.properties.anchor_line.anyOf).toEqual([{ type: "integer" }, { type: "null" }])
 		})
 
 		it("should filter non-function tools", () => {
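
The assertions above (both the removed `line_ranges` ones and the new `indentation` ones) describe the same nullable-type transform: `type: [T, "null"]` becomes `anyOf`, with array `items` moved inside the array variant while root-level keys such as `description` are preserved. A sketch of the array case, as assumed from those assertions rather than from the transformer's actual source:

```typescript
// Assumed behavior of the nullable-type transform asserted above:
// type: ["array", "null"] -> anyOf, items moved into the array variant,
// other root keys (description, etc.) left in place.
type JsonSchema = Record<string, any>

function transformNullableArray(schema: JsonSchema): JsonSchema {
	if (Array.isArray(schema.type) && schema.type.includes("array") && schema.type.includes("null")) {
		const { type, items, ...rest } = schema
		return {
			...rest, // description and friends stay at the root
			anyOf: [{ type: "array", ...(items !== undefined ? { items } : {}) }, { type: "null" }],
		}
	}
	return schema
}
```

This mirrors the old `line_ranges` expectation: `anyOf` equals `[{ type: "array", items: { type: "integer" } }, { type: "null" }]`, with `type` and root-level `items` gone.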

+ 1 - 1
src/api/providers/__tests__/openai-codex.spec.ts

@@ -20,7 +20,7 @@ describe("OpenAiCodexHandler.getModel", () => {
 		const handler = new OpenAiCodexHandler({ apiModelId: "not-a-real-model" })
 		const model = handler.getModel()
 
-		expect(model.id).toBe("gpt-5.2-codex")
+		expect(model.id).toBe("gpt-5.3-codex")
 		expect(model.info).toBeDefined()
 	})
 })

+ 11 - 4
src/api/providers/anthropic.ts

@@ -64,9 +64,11 @@ export class AnthropicHandler extends BaseProvider implements SingleCompletionHa
 		// Filter out non-Anthropic blocks (reasoning, thoughtSignature, etc.) before sending to the API
 		const sanitizedMessages = filterNonAnthropicBlocks(messages)
 
-		// Add 1M context beta flag if enabled for Claude Sonnet 4 and 4.5
+		// Add 1M context beta flag if enabled for supported models (Claude Sonnet 4/4.5, Opus 4.6)
 		if (
-			(modelId === "claude-sonnet-4-20250514" || modelId === "claude-sonnet-4-5") &&
+			(modelId === "claude-sonnet-4-20250514" ||
+				modelId === "claude-sonnet-4-5" ||
+				modelId === "claude-opus-4-6") &&
 			this.options.anthropicBeta1MContext
 		) {
 			betas.push("context-1m-2025-08-07")
@@ -80,6 +82,7 @@ export class AnthropicHandler extends BaseProvider implements SingleCompletionHa
 		switch (modelId) {
 			case "claude-sonnet-4-5":
 			case "claude-sonnet-4-20250514":
+			case "claude-opus-4-6":
 			case "claude-opus-4-5-20251101":
 			case "claude-opus-4-1-20250805":
 			case "claude-opus-4-20250514":
@@ -144,6 +147,7 @@ export class AnthropicHandler extends BaseProvider implements SingleCompletionHa
 							switch (modelId) {
 								case "claude-sonnet-4-5":
 								case "claude-sonnet-4-20250514":
+								case "claude-opus-4-6":
 								case "claude-opus-4-5-20251101":
 								case "claude-opus-4-1-20250805":
 								case "claude-opus-4-20250514":
@@ -330,8 +334,11 @@ export class AnthropicHandler extends BaseProvider implements SingleCompletionHa
 		let id = modelId && modelId in anthropicModels ? (modelId as AnthropicModelId) : anthropicDefaultModelId
 		let info: ModelInfo = anthropicModels[id]
 
-		// If 1M context beta is enabled for Claude Sonnet 4 or 4.5, update the model info
-		if ((id === "claude-sonnet-4-20250514" || id === "claude-sonnet-4-5") && this.options.anthropicBeta1MContext) {
+		// If 1M context beta is enabled for supported models, update the model info
+		if (
+			(id === "claude-sonnet-4-20250514" || id === "claude-sonnet-4-5" || id === "claude-opus-4-6") &&
+			this.options.anthropicBeta1MContext
+		) {
 			// Use the tier pricing for 1M context
 			const tier = info.tiers?.[0]
 			if (tier) {

+ 9 - 4
src/api/providers/bedrock.ts

@@ -408,7 +408,7 @@ export class AwsBedrockHandler extends BaseProvider implements SingleCompletionH
 			temperature: modelConfig.temperature ?? (this.options.modelTemperature as number),
 		}
 
-		// Check if 1M context is enabled for Claude Sonnet 4
+		// Check if 1M context is enabled for supported Claude 4 models
 		// Use parseBaseModelId to handle cross-region inference prefixes
 		const baseModelId = this.parseBaseModelId(modelConfig.id)
 		const is1MContextEnabled =
@@ -1097,14 +1097,19 @@ export class AwsBedrockHandler extends BaseProvider implements SingleCompletionH
 			}
 		}
 
-		// Check if 1M context is enabled for Claude Sonnet 4 / 4.5
+		// Check if 1M context is enabled for supported Claude 4 models
 		// Use parseBaseModelId to handle cross-region inference prefixes
 		const baseModelId = this.parseBaseModelId(modelConfig.id)
 		if (BEDROCK_1M_CONTEXT_MODEL_IDS.includes(baseModelId as any) && this.options.awsBedrock1MContext) {
-			// Update context window to 1M tokens when 1M context beta is enabled
+			// Update context window and pricing to 1M tier when 1M context beta is enabled
+			const tier = modelConfig.info.tiers?.[0]
 			modelConfig.info = {
 				...modelConfig.info,
-				contextWindow: 1_000_000,
+				contextWindow: tier?.contextWindow ?? 1_000_000,
+				inputPrice: tier?.inputPrice ?? modelConfig.info.inputPrice,
+				outputPrice: tier?.outputPrice ?? modelConfig.info.outputPrice,
+				cacheWritesPrice: tier?.cacheWritesPrice ?? modelConfig.info.cacheWritesPrice,
+				cacheReadsPrice: tier?.cacheReadsPrice ?? modelConfig.info.cacheReadsPrice,
 			}
 		}
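
The hunk above changes the 1M-context path from overriding only `contextWindow` to pulling the whole first pricing tier, falling back to the base model's values field by field. The pattern, condensed (`ModelInfo` is simplified here, not the real `@roo-code/types` shape):

```typescript
// Condensed version of the tier override above: prefer tier values,
// fall back to the base model info when the tier omits a field.
interface ModelInfo {
	contextWindow: number
	inputPrice?: number
	outputPrice?: number
	tiers?: Array<Partial<Omit<ModelInfo, "tiers">>>
}

function apply1MContextTier(info: ModelInfo): ModelInfo {
	const tier = info.tiers?.[0]
	return {
		...info,
		contextWindow: tier?.contextWindow ?? 1_000_000,
		inputPrice: tier?.inputPrice ?? info.inputPrice,
		outputPrice: tier?.outputPrice ?? info.outputPrice,
	}
}
```

Using `??` per field means a tier can override pricing without restating every price, which is why the fix corrects long-context cost accounting instead of only the window size.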
 

+ 10 - 0
src/api/providers/fetchers/openrouter.ts

@@ -248,6 +248,16 @@ export const parseOpenRouterModel = ({
 		modelInfo.maxTokens = anthropicModels["claude-opus-4-1-20250805"].maxTokens
 	}
 
+	// Set claude-opus-4.5 model to use the correct configuration
+	if (id === "anthropic/claude-opus-4.5") {
+		modelInfo.maxTokens = anthropicModels["claude-opus-4-5-20251101"].maxTokens
+	}
+
+	// Set claude-opus-4.6 model to use the correct configuration
+	if (id === "anthropic/claude-opus-4.6") {
+		modelInfo.maxTokens = anthropicModels["claude-opus-4-6"].maxTokens
+	}
+
 	// Ensure correct reasoning handling for Claude Haiku 4.5 on OpenRouter
 	// Use budget control and disable effort-based reasoning fallback
 	if (id === "anthropic/claude-haiku-4.5") {

+ 136 - 20
src/core/assistant-message/NativeToolCallParser.ts

@@ -311,9 +311,22 @@ export class NativeToolCallParser {
 		return finalToolUse
 	}
 
+	private static coerceOptionalNumber(value: unknown): number | undefined {
+		if (typeof value === "number" && Number.isFinite(value)) {
+			return value
+		}
+		if (typeof value === "string") {
+			const n = Number(value)
+			if (Number.isFinite(n)) {
+				return n
+			}
+		}
+		return undefined
+	}
+
 	/**
 	 * Convert raw file entries from API (with line_ranges) to FileEntry objects
-	 * (with lineRanges). Handles multiple formats for compatibility:
+	 * (with lineRanges). Handles multiple formats for backward compatibility:
 	 *
 	 * New tuple format: { path: string, line_ranges: [[1, 50], [100, 150]] }
 	 * Object format: { path: string, line_ranges: [{ start: 1, end: 50 }] }
@@ -321,19 +334,21 @@ export class NativeToolCallParser {
 	 *
 	 * Returns: { path: string, lineRanges: [{ start: 1, end: 50 }] }
 	 */
-	private static convertFileEntries(files: any[]): FileEntry[] {
-		return files.map((file: any) => {
-			const entry: FileEntry = { path: file.path }
-			if (file.line_ranges && Array.isArray(file.line_ranges)) {
-				entry.lineRanges = file.line_ranges
-					.map((range: any) => {
+	private static convertFileEntries(files: unknown[]): FileEntry[] {
+		return files.map((file: unknown) => {
+			const f = file as Record<string, unknown>
+			const entry: FileEntry = { path: f.path as string }
+			if (f.line_ranges && Array.isArray(f.line_ranges)) {
+				entry.lineRanges = (f.line_ranges as unknown[])
+					.map((range: unknown) => {
 						// Handle tuple format: [start, end]
 						if (Array.isArray(range) && range.length >= 2) {
 							return { start: Number(range[0]), end: Number(range[1]) }
 						}
 						// Handle object format: { start: number, end: number }
 						if (typeof range === "object" && range !== null && "start" in range && "end" in range) {
-							return { start: Number(range.start), end: Number(range.end) }
+							const r = range as { start: unknown; end: unknown }
+							return { start: Number(r.start), end: Number(r.end) }
 						}
 						// Handle legacy string format: "1-50"
 						if (typeof range === "string") {
@@ -344,7 +359,7 @@ export class NativeToolCallParser {
 						}
 						return null
 					})
-					.filter(Boolean)
+					.filter((r): r is { start: number; end: number } => r !== null)
 			}
 			return entry
 		})
@@ -376,10 +391,60 @@ export class NativeToolCallParser {
 		// Build partial nativeArgs based on what we have so far
 		let nativeArgs: any = undefined
 
+		// Track if legacy format was used (for telemetry)
+		let usedLegacyFormat = false
+
 		switch (name) {
 			case "read_file":
-				if (partialArgs.files && Array.isArray(partialArgs.files)) {
-					nativeArgs = { files: this.convertFileEntries(partialArgs.files) }
+				// Check for legacy format first: { files: [...] }
+				// Handle both array and stringified array (some models double-stringify)
+				if (partialArgs.files !== undefined) {
+					let filesArray: unknown[] | null = null
+
+					if (Array.isArray(partialArgs.files)) {
+						filesArray = partialArgs.files
+					} else if (typeof partialArgs.files === "string") {
+						// Handle double-stringified case: files is a string containing JSON array
+						try {
+							const parsed = JSON.parse(partialArgs.files)
+							if (Array.isArray(parsed)) {
+								filesArray = parsed
+							}
+						} catch {
+							// Not valid JSON, ignore
+						}
+					}
+
+					if (filesArray && filesArray.length > 0) {
+						usedLegacyFormat = true
+						nativeArgs = {
+							files: this.convertFileEntries(filesArray),
+							_legacyFormat: true as const,
+						}
+					}
+				}
+				// New format: { path: "...", mode: "..." }
+				if (!nativeArgs && partialArgs.path !== undefined) {
+					nativeArgs = {
+						path: partialArgs.path,
+						mode: partialArgs.mode,
+						offset: this.coerceOptionalNumber(partialArgs.offset),
+						limit: this.coerceOptionalNumber(partialArgs.limit),
+						indentation:
+							partialArgs.indentation && typeof partialArgs.indentation === "object"
+								? {
+										anchor_line: this.coerceOptionalNumber(partialArgs.indentation.anchor_line),
+										max_levels: this.coerceOptionalNumber(partialArgs.indentation.max_levels),
+										max_lines: this.coerceOptionalNumber(partialArgs.indentation.max_lines),
+										include_siblings: this.coerceOptionalBoolean(
+											partialArgs.indentation.include_siblings,
+										),
+										include_header: this.coerceOptionalBoolean(
+											partialArgs.indentation.include_header,
+										),
+									}
+								: undefined,
+					}
 				}
 				break
 
@@ -601,6 +666,11 @@ export class NativeToolCallParser {
 			result.originalName = originalName
 		}
 
+		// Track legacy format usage for telemetry
+		if (usedLegacyFormat) {
+			result.usedLegacyFormat = true
+		}
+
 		return result
 	}
 
@@ -647,13 +717,6 @@ export class NativeToolCallParser {
 			const params: Partial<Record<ToolParamName, string>> = {}
 
 			for (const [key, value] of Object.entries(args)) {
-				// Skip complex parameters that have been migrated to nativeArgs.
-				// For read_file, the 'files' parameter is a FileEntry[] array that can't be
-				// meaningfully stringified. The properly typed data is in nativeArgs instead.
-				if (resolvedName === "read_file" && key === "files") {
-					continue
-				}
-
 				// Validate parameter name
 				if (!toolParamNames.includes(key as ToolParamName) && !customToolRegistry.has(resolvedName)) {
 					console.warn(`Unknown parameter '${key}' for tool '${resolvedName}'`)
@@ -671,10 +734,58 @@ export class NativeToolCallParser {
 			// nativeArgs object. If validation fails, we treat the tool call as invalid and fail fast.
 			let nativeArgs: NativeArgsFor<TName> | undefined = undefined
 
+			// Track if legacy format was used (for telemetry)
+			let usedLegacyFormat = false
+
 			switch (resolvedName) {
 				case "read_file":
-					if (args.files && Array.isArray(args.files)) {
-						nativeArgs = { files: this.convertFileEntries(args.files) } as NativeArgsFor<TName>
+					// Check for legacy format first: { files: [...] }
+					// Handle both array and stringified array (some models double-stringify)
+					if (args.files !== undefined) {
+						let filesArray: unknown[] | null = null
+
+						if (Array.isArray(args.files)) {
+							filesArray = args.files
+						} else if (typeof args.files === "string") {
+							// Handle double-stringified case: files is a string containing JSON array
+							try {
+								const parsed = JSON.parse(args.files)
+								if (Array.isArray(parsed)) {
+									filesArray = parsed
+								}
+							} catch {
+								// Not valid JSON, ignore
+							}
+						}
+
+						if (filesArray && filesArray.length > 0) {
+							usedLegacyFormat = true
+							nativeArgs = {
+								files: this.convertFileEntries(filesArray),
+								_legacyFormat: true as const,
+							} as NativeArgsFor<TName>
+						}
+					}
+					// New format: { path: "...", mode: "..." }
+					if (!nativeArgs && args.path !== undefined) {
+						nativeArgs = {
+							path: args.path,
+							mode: args.mode,
+							offset: this.coerceOptionalNumber(args.offset),
+							limit: this.coerceOptionalNumber(args.limit),
+							indentation:
+								args.indentation && typeof args.indentation === "object"
+									? {
+											anchor_line: this.coerceOptionalNumber(args.indentation.anchor_line),
+											max_levels: this.coerceOptionalNumber(args.indentation.max_levels),
+											max_lines: this.coerceOptionalNumber(args.indentation.max_lines),
+											include_siblings: this.coerceOptionalBoolean(
+												args.indentation.include_siblings,
+											),
+											include_header: this.coerceOptionalBoolean(args.indentation.include_header),
+										}
+									: undefined,
+						} as NativeArgsFor<TName>
 					}
 					break
 
@@ -930,6 +1041,11 @@ export class NativeToolCallParser {
 				result.originalName = toolCall.name
 			}
 
+			// Track legacy format usage for telemetry
+			if (usedLegacyFormat) {
+				result.usedLegacyFormat = true
+			}
+
 			return result
 		} catch (error) {
 			console.error(
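
Note that `coerceOptionalNumber` is defined in this diff, while `coerceOptionalBoolean` is referenced in both `read_file` branches but defined outside this chunk. The number coercer below is copied from the hunk; the boolean one is an assumed analogue, not the actual implementation:

```typescript
// From the diff above: accept finite numbers or numeric strings, else undefined.
function coerceOptionalNumber(value: unknown): number | undefined {
	if (typeof value === "number" && Number.isFinite(value)) {
		return value
	}
	if (typeof value === "string") {
		const n = Number(value)
		if (Number.isFinite(n)) {
			return n
		}
	}
	return undefined
}

// Assumed analogue (definition not shown in this chunk): accept booleans or
// the strings "true"/"false", else undefined.
function coerceOptionalBoolean(value: unknown): boolean | undefined {
	if (typeof value === "boolean") return value
	if (value === "true") return true
	if (value === "false") return false
	return undefined
}
```

Returning `undefined` rather than throwing keeps malformed optional fields (e.g. a model emitting `offset: "abc"`) from invalidating an otherwise usable tool call.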

+ 239 - 134
src/core/assistant-message/__tests__/NativeToolCallParser.spec.ts

@@ -8,20 +8,12 @@ describe("NativeToolCallParser", () => {
 
 	describe("parseToolCall", () => {
 		describe("read_file tool", () => {
-			it("should handle line_ranges as tuples (new format)", () => {
+			it("should parse minimal single-file read_file args", () => {
 				const toolCall = {
 					id: "toolu_123",
 					name: "read_file" as const,
 					arguments: JSON.stringify({
-						files: [
-							{
-								path: "src/core/task/Task.ts",
-								line_ranges: [
-									[1920, 1990],
-									[2060, 2120],
-								],
-							},
-						],
+						path: "src/core/task/Task.ts",
 					}),
 				}
 
@@ -31,29 +23,20 @@ describe("NativeToolCallParser", () => {
 				expect(result?.type).toBe("tool_use")
 				if (result?.type === "tool_use") {
 					expect(result.nativeArgs).toBeDefined()
-					const nativeArgs = result.nativeArgs as {
-						files: Array<{ path: string; lineRanges?: Array<{ start: number; end: number }> }>
-					}
-					expect(nativeArgs.files).toHaveLength(1)
-					expect(nativeArgs.files[0].path).toBe("src/core/task/Task.ts")
-					expect(nativeArgs.files[0].lineRanges).toEqual([
-						{ start: 1920, end: 1990 },
-						{ start: 2060, end: 2120 },
-					])
+					const nativeArgs = result.nativeArgs as { path: string }
+					expect(nativeArgs.path).toBe("src/core/task/Task.ts")
 				}
 			})
 
-			it("should handle line_ranges as strings (legacy format)", () => {
+			it("should parse slice-mode params", () => {
 				const toolCall = {
 					id: "toolu_123",
 					name: "read_file" as const,
 					arguments: JSON.stringify({
-						files: [
-							{
-								path: "src/core/task/Task.ts",
-								line_ranges: ["1920-1990", "2060-2120"],
-							},
-						],
+						path: "src/core/task/Task.ts",
+						mode: "slice",
+						offset: 10,
+						limit: 20,
 					}),
 				}
 
@@ -62,29 +45,32 @@ describe("NativeToolCallParser", () => {
 				expect(result).not.toBeNull()
 				expect(result?.type).toBe("tool_use")
 				if (result?.type === "tool_use") {
-					expect(result.nativeArgs).toBeDefined()
 					const nativeArgs = result.nativeArgs as {
-						files: Array<{ path: string; lineRanges?: Array<{ start: number; end: number }> }>
-					}
-					expect(nativeArgs.files).toHaveLength(1)
-					expect(nativeArgs.files[0].path).toBe("src/core/task/Task.ts")
-					expect(nativeArgs.files[0].lineRanges).toEqual([
-						{ start: 1920, end: 1990 },
-						{ start: 2060, end: 2120 },
-					])
+						path: string
+						mode?: string
+						offset?: number
+						limit?: number
+					}
+					expect(nativeArgs.path).toBe("src/core/task/Task.ts")
+					expect(nativeArgs.mode).toBe("slice")
+					expect(nativeArgs.offset).toBe(10)
+					expect(nativeArgs.limit).toBe(20)
 				}
 			})
 
-			it("should handle files without line_ranges", () => {
+			it("should parse indentation-mode params", () => {
 				const toolCall = {
 					id: "toolu_123",
 					name: "read_file" as const,
 					arguments: JSON.stringify({
-						files: [
-							{
-								path: "src/utils.ts",
-							},
-						],
+						path: "src/utils.ts",
+						mode: "indentation",
+						indentation: {
+							anchor_line: 123,
+							max_levels: 2,
+							include_siblings: true,
+							include_header: false,
+						},
 					}),
 				}
 
@@ -94,120 +80,242 @@ describe("NativeToolCallParser", () => {
 				expect(result?.type).toBe("tool_use")
 				if (result?.type === "tool_use") {
 					const nativeArgs = result.nativeArgs as {
-						files: Array<{ path: string; lineRanges?: Array<{ start: number; end: number }> }>
+						path: string
+						mode?: string
+						indentation?: {
+							anchor_line?: number
+							max_levels?: number
+							include_siblings?: boolean
+							include_header?: boolean
+						}
 					}
-					expect(nativeArgs.files).toHaveLength(1)
-					expect(nativeArgs.files[0].path).toBe("src/utils.ts")
-					expect(nativeArgs.files[0].lineRanges).toBeUndefined()
+					expect(nativeArgs.path).toBe("src/utils.ts")
+					expect(nativeArgs.mode).toBe("indentation")
+					expect(nativeArgs.indentation?.anchor_line).toBe(123)
+					expect(nativeArgs.indentation?.include_siblings).toBe(true)
+					expect(nativeArgs.indentation?.include_header).toBe(false)
 				}
 			})
 
-			it("should handle multiple files with different line_ranges", () => {
-				const toolCall = {
-					id: "toolu_123",
-					name: "read_file" as const,
-					arguments: JSON.stringify({
-						files: [
-							{
-								path: "file1.ts",
-								line_ranges: ["1-50"],
-							},
-							{
-								path: "file2.ts",
-								line_ranges: ["100-150", "200-250"],
-							},
-							{
-								path: "file3.ts",
-							},
-						],
-					}),
-				}
+			// Legacy format backward compatibility tests
+			describe("legacy format backward compatibility", () => {
+				it("should parse legacy files array format with single file", () => {
+					const toolCall = {
+						id: "toolu_legacy_1",
+						name: "read_file" as const,
+						arguments: JSON.stringify({
+							files: [{ path: "src/legacy/file.ts" }],
+						}),
+					}
 
-				const result = NativeToolCallParser.parseToolCall(toolCall)
+					const result = NativeToolCallParser.parseToolCall(toolCall)
 
-				expect(result).not.toBeNull()
-				expect(result?.type).toBe("tool_use")
-				if (result?.type === "tool_use") {
-					const nativeArgs = result.nativeArgs as {
-						files: Array<{ path: string; lineRanges?: Array<{ start: number; end: number }> }>
-					}
-					expect(nativeArgs.files).toHaveLength(3)
-					expect(nativeArgs.files[0].lineRanges).toEqual([{ start: 1, end: 50 }])
-					expect(nativeArgs.files[1].lineRanges).toEqual([
-						{ start: 100, end: 150 },
-						{ start: 200, end: 250 },
-					])
-					expect(nativeArgs.files[2].lineRanges).toBeUndefined()
-				}
-			})
+					expect(result).not.toBeNull()
+					expect(result?.type).toBe("tool_use")
+					if (result?.type === "tool_use") {
+						expect(result.usedLegacyFormat).toBe(true)
+						const nativeArgs = result.nativeArgs as { files: Array<{ path: string }>; _legacyFormat: true }
+						expect(nativeArgs._legacyFormat).toBe(true)
+						expect(nativeArgs.files).toHaveLength(1)
+						expect(nativeArgs.files[0].path).toBe("src/legacy/file.ts")
+					}
+				})
 
-			it("should filter out invalid line_range strings", () => {
-				const toolCall = {
-					id: "toolu_123",
-					name: "read_file" as const,
-					arguments: JSON.stringify({
-						files: [
-							{
-								path: "file.ts",
-								line_ranges: ["1-50", "invalid", "100-200", "abc-def"],
-							},
-						],
-					}),
-				}
+				it("should parse legacy files array format with multiple files", () => {
+					const toolCall = {
+						id: "toolu_legacy_2",
+						name: "read_file" as const,
+						arguments: JSON.stringify({
+							files: [{ path: "src/file1.ts" }, { path: "src/file2.ts" }, { path: "src/file3.ts" }],
+						}),
+					}
 
-				const result = NativeToolCallParser.parseToolCall(toolCall)
+					const result = NativeToolCallParser.parseToolCall(toolCall)
 
-				expect(result).not.toBeNull()
-				expect(result?.type).toBe("tool_use")
-				if (result?.type === "tool_use") {
-					const nativeArgs = result.nativeArgs as {
-						files: Array<{ path: string; lineRanges?: Array<{ start: number; end: number }> }>
+					expect(result).not.toBeNull()
+					expect(result?.type).toBe("tool_use")
+					if (result?.type === "tool_use") {
+						expect(result.usedLegacyFormat).toBe(true)
+						const nativeArgs = result.nativeArgs as { files: Array<{ path: string }>; _legacyFormat: true }
+						expect(nativeArgs.files).toHaveLength(3)
+						expect(nativeArgs.files[0].path).toBe("src/file1.ts")
+						expect(nativeArgs.files[1].path).toBe("src/file2.ts")
+						expect(nativeArgs.files[2].path).toBe("src/file3.ts")
 					}
-					expect(nativeArgs.files[0].lineRanges).toEqual([
-						{ start: 1, end: 50 },
-						{ start: 100, end: 200 },
-					])
-				}
+				})
+
+				it("should parse legacy line_ranges as tuples", () => {
+					const toolCall = {
+						id: "toolu_legacy_3",
+						name: "read_file" as const,
+						arguments: JSON.stringify({
+							files: [
+								{
+									path: "src/task.ts",
+									line_ranges: [
+										[1, 50],
+										[100, 150],
+									],
+								},
+							],
+						}),
+					}
+
+					const result = NativeToolCallParser.parseToolCall(toolCall)
+
+					expect(result).not.toBeNull()
+					expect(result?.type).toBe("tool_use")
+					if (result?.type === "tool_use") {
+						expect(result.usedLegacyFormat).toBe(true)
+						const nativeArgs = result.nativeArgs as {
+							files: Array<{ path: string; lineRanges?: Array<{ start: number; end: number }> }>
+							_legacyFormat: true
+						}
+						expect(nativeArgs.files[0].lineRanges).toHaveLength(2)
+						expect(nativeArgs.files[0].lineRanges?.[0]).toEqual({ start: 1, end: 50 })
+						expect(nativeArgs.files[0].lineRanges?.[1]).toEqual({ start: 100, end: 150 })
+					}
+				})
+
+				it("should parse legacy line_ranges as objects", () => {
+					const toolCall = {
+						id: "toolu_legacy_4",
+						name: "read_file" as const,
+						arguments: JSON.stringify({
+							files: [
+								{
+									path: "src/task.ts",
+									line_ranges: [
+										{ start: 10, end: 20 },
+										{ start: 30, end: 40 },
+									],
+								},
+							],
+						}),
+					}
+
+					const result = NativeToolCallParser.parseToolCall(toolCall)
+
+					expect(result).not.toBeNull()
+					expect(result?.type).toBe("tool_use")
+					if (result?.type === "tool_use") {
+						expect(result.usedLegacyFormat).toBe(true)
+						const nativeArgs = result.nativeArgs as {
+							files: Array<{ path: string; lineRanges?: Array<{ start: number; end: number }> }>
+						}
+						expect(nativeArgs.files[0].lineRanges).toHaveLength(2)
+						expect(nativeArgs.files[0].lineRanges?.[0]).toEqual({ start: 10, end: 20 })
+						expect(nativeArgs.files[0].lineRanges?.[1]).toEqual({ start: 30, end: 40 })
+					}
+				})
+
+				it("should parse legacy line_ranges as strings", () => {
+					const toolCall = {
+						id: "toolu_legacy_5",
+						name: "read_file" as const,
+						arguments: JSON.stringify({
+							files: [
+								{
+									path: "src/task.ts",
+									line_ranges: ["1-50", "100-150"],
+								},
+							],
+						}),
+					}
+
+					const result = NativeToolCallParser.parseToolCall(toolCall)
+
+					expect(result).not.toBeNull()
+					expect(result?.type).toBe("tool_use")
+					if (result?.type === "tool_use") {
+						expect(result.usedLegacyFormat).toBe(true)
+						const nativeArgs = result.nativeArgs as {
+							files: Array<{ path: string; lineRanges?: Array<{ start: number; end: number }> }>
+						}
+						expect(nativeArgs.files[0].lineRanges).toHaveLength(2)
+						expect(nativeArgs.files[0].lineRanges?.[0]).toEqual({ start: 1, end: 50 })
+						expect(nativeArgs.files[0].lineRanges?.[1]).toEqual({ start: 100, end: 150 })
+					}
+				})
+
+				it("should parse double-stringified files array (model quirk)", () => {
+					// This tests the real-world case where some models double-stringify the files array
+					// e.g., { files: "[{\"path\": \"...\"}]" } instead of { files: [{path: "..."}] }
+					const toolCall = {
+						id: "toolu_double_stringify",
+						name: "read_file" as const,
+						arguments: JSON.stringify({
+							files: JSON.stringify([
+								{ path: "src/services/browser/browserDiscovery.ts" },
+								{ path: "src/services/mcp/McpServerManager.ts" },
+							]),
+						}),
+					}
+
+					const result = NativeToolCallParser.parseToolCall(toolCall)
+
+					expect(result).not.toBeNull()
+					expect(result?.type).toBe("tool_use")
+					if (result?.type === "tool_use") {
+						expect(result.usedLegacyFormat).toBe(true)
+						const nativeArgs = result.nativeArgs as {
+							files: Array<{ path: string }>
+							_legacyFormat: true
+						}
+						expect(nativeArgs._legacyFormat).toBe(true)
+						expect(nativeArgs.files).toHaveLength(2)
+						expect(nativeArgs.files[0].path).toBe("src/services/browser/browserDiscovery.ts")
+						expect(nativeArgs.files[1].path).toBe("src/services/mcp/McpServerManager.ts")
+					}
+				})
+
+				it("should NOT set usedLegacyFormat for new format", () => {
+					const toolCall = {
+						id: "toolu_new",
+						name: "read_file" as const,
+						arguments: JSON.stringify({
+							path: "src/new/format.ts",
+							mode: "slice",
+							offset: 1,
+							limit: 100,
+						}),
+					}
+
+					const result = NativeToolCallParser.parseToolCall(toolCall)
+
+					expect(result).not.toBeNull()
+					expect(result?.type).toBe("tool_use")
+					if (result?.type === "tool_use") {
+						expect(result.usedLegacyFormat).toBeUndefined()
+					}
+				})
 			})
 		})
 	})
 
 	describe("processStreamingChunk", () => {
 		describe("read_file tool", () => {
-			it("should convert line_ranges strings to lineRanges objects during streaming", () => {
+			it("should emit a partial ToolUse with nativeArgs.path during streaming", () => {
 				const id = "toolu_streaming_123"
 				NativeToolCallParser.startStreamingToolCall(id, "read_file")
 
 				// Simulate streaming chunks
-				const fullArgs = JSON.stringify({
-					files: [
-						{
-							path: "src/test.ts",
-							line_ranges: ["10-20", "30-40"],
-						},
-					],
-				})
+				const fullArgs = JSON.stringify({ path: "src/test.ts" })
 
 				// Process the complete args as a single chunk for simplicity
 				const result = NativeToolCallParser.processStreamingChunk(id, fullArgs)
 
 				expect(result).not.toBeNull()
 				expect(result?.nativeArgs).toBeDefined()
-				const nativeArgs = result?.nativeArgs as {
-					files: Array<{ path: string; lineRanges?: Array<{ start: number; end: number }> }>
-				}
-				expect(nativeArgs.files).toHaveLength(1)
-				expect(nativeArgs.files[0].lineRanges).toEqual([
-					{ start: 10, end: 20 },
-					{ start: 30, end: 40 },
-				])
+				const nativeArgs = result?.nativeArgs as { path: string }
+				expect(nativeArgs.path).toBe("src/test.ts")
 			})
 		})
 	})
 
 	describe("finalizeStreamingToolCall", () => {
 		describe("read_file tool", () => {
-			it("should convert line_ranges strings to lineRanges objects on finalize", () => {
+			it("should parse read_file args on finalize", () => {
 				const id = "toolu_finalize_123"
 				NativeToolCallParser.startStreamingToolCall(id, "read_file")
 
@@ -215,12 +323,10 @@ describe("NativeToolCallParser", () => {
 				NativeToolCallParser.processStreamingChunk(
 					id,
 					JSON.stringify({
-						files: [
-							{
-								path: "finalized.ts",
-								line_ranges: ["500-600"],
-							},
-						],
+						path: "finalized.ts",
+						mode: "slice",
+						offset: 1,
+						limit: 10,
 					}),
 				)
 
@@ -229,11 +335,10 @@ describe("NativeToolCallParser", () => {
 				expect(result).not.toBeNull()
 				expect(result?.type).toBe("tool_use")
 				if (result?.type === "tool_use") {
-					const nativeArgs = result.nativeArgs as {
-						files: Array<{ path: string; lineRanges?: Array<{ start: number; end: number }> }>
-					}
-					expect(nativeArgs.files[0].path).toBe("finalized.ts")
-					expect(nativeArgs.files[0].lineRanges).toEqual([{ start: 500, end: 600 }])
+					const nativeArgs = result.nativeArgs as { path: string; offset?: number; limit?: number }
+					expect(nativeArgs.path).toBe("finalized.ts")
+					expect(nativeArgs.offset).toBe(1)
+					expect(nativeArgs.limit).toBe(10)
 				}
 			})
 		})
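
The double-stringify test above covers the normalization that both `read_file` branches in `NativeToolCallParser.ts` now perform. Extracted into one helper for illustration (the parser inlines this logic rather than naming it):

```typescript
// Mirrors the legacy-format handling in the diff: accept a real array,
// or a string containing a JSON array (double-stringified model output);
// anything else yields null and the legacy branch is skipped.
function normalizeFilesArg(files: unknown): unknown[] | null {
	if (Array.isArray(files)) {
		return files
	}
	if (typeof files === "string") {
		try {
			const parsed = JSON.parse(files)
			if (Array.isArray(parsed)) {
				return parsed
			}
		} catch {
			// Not valid JSON: treat as absent
		}
	}
	return null
}
```

Combined with the `filesArray.length > 0` check in the parser, an empty or unparseable `files` value falls through to the new `{ path, mode, ... }` format instead of producing an empty legacy call.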

+ 18 - 8
src/core/assistant-message/presentAssistantMessage.ts

@@ -2,7 +2,7 @@ import { serializeError } from "serialize-error"
 import { Anthropic } from "@anthropic-ai/sdk"
 
 import type { ToolName, ClineAsk, ToolProgressStatus } from "@roo-code/types"
-import { ConsecutiveMistakeError } from "@roo-code/types"
+import { ConsecutiveMistakeError, TelemetryEventName } from "@roo-code/types"
 import { TelemetryService } from "@roo-code/telemetry"
 import { customToolRegistry } from "@roo-code/core"
 
@@ -40,6 +40,7 @@ import { isValidToolName, validateToolUse } from "../tools/validateToolUse"
 import { codebaseSearchTool } from "../tools/CodebaseSearchTool"
 
 import { formatResponse } from "../prompts/responses"
+import { sanitizeToolUseId } from "../../utils/tool-id"
 
 /**
  * Processes and presents assistant message content to the user interface.
@@ -118,7 +119,7 @@ export async function presentAssistantMessage(cline: Task) {
 				if (toolCallId) {
 					cline.pushToolResultToUserContent({
 						type: "tool_result",
-						tool_use_id: toolCallId,
+						tool_use_id: sanitizeToolUseId(toolCallId),
 						content: errorMessage,
 						is_error: true,
 					})
@@ -169,7 +170,7 @@ export async function presentAssistantMessage(cline: Task) {
 				if (toolCallId) {
 					cline.pushToolResultToUserContent({
 						type: "tool_result",
-						tool_use_id: toolCallId,
+						tool_use_id: sanitizeToolUseId(toolCallId),
 						content: resultContent,
 					})
 
@@ -399,7 +400,7 @@ export async function presentAssistantMessage(cline: Task) {
 
 				cline.pushToolResultToUserContent({
 					type: "tool_result",
-					tool_use_id: toolCallId,
+					tool_use_id: sanitizeToolUseId(toolCallId),
 					content: errorMessage,
 					is_error: true,
 				})
@@ -436,7 +437,7 @@ export async function presentAssistantMessage(cline: Task) {
 					// continue gracefully.
 					cline.pushToolResultToUserContent({
 						type: "tool_result",
-						tool_use_id: toolCallId,
+						tool_use_id: sanitizeToolUseId(toolCallId),
 						content: formatResponse.toolError(errorMessage),
 						is_error: true,
 					})
@@ -482,7 +483,7 @@ export async function presentAssistantMessage(cline: Task) {
 
 				cline.pushToolResultToUserContent({
 					type: "tool_result",
-					tool_use_id: toolCallId,
+					tool_use_id: sanitizeToolUseId(toolCallId),
 					content: resultContent,
 				})
 
@@ -589,6 +590,15 @@ export async function presentAssistantMessage(cline: Task) {
 				const recordName = isCustomTool ? "custom_tool" : block.name
 				cline.recordToolUsage(recordName)
 				TelemetryService.instance.captureToolUsage(cline.taskId, recordName)
+
+				// Track legacy format usage for read_file tool (for migration monitoring)
+				if (block.name === "read_file" && block.usedLegacyFormat) {
+					const modelInfo = cline.api.getModel()
+					TelemetryService.instance.captureEvent(TelemetryEventName.READ_FILE_LEGACY_FORMAT_USED, {
+						taskId: cline.taskId,
+						model: modelInfo?.id,
+					})
+				}
 			}
 
 			// Validate tool use before execution - ONLY for complete (non-partial) blocks.
@@ -635,7 +645,7 @@ export async function presentAssistantMessage(cline: Task) {
 					// Push tool_result directly without setting didAlreadyUseTool
 					cline.pushToolResultToUserContent({
 						type: "tool_result",
-						tool_use_id: toolCallId,
+						tool_use_id: sanitizeToolUseId(toolCallId),
 						content: typeof errorContent === "string" ? errorContent : "(validation error)",
 						is_error: true,
 					})
@@ -939,7 +949,7 @@ export async function presentAssistantMessage(cline: Task) {
 					// This prevents the stream from being interrupted with "Response interrupted by tool use result"
 					cline.pushToolResultToUserContent({
 						type: "tool_result",
-						tool_use_id: toolCallId,
+						tool_use_id: sanitizeToolUseId(toolCallId),
 						content: formatResponse.toolError(errorMessage),
 						is_error: true,
 					})
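Every `tool_use_id` pushed in this file now goes through `sanitizeToolUseId`, matching the sanitized IDs already stored in API history (see the EXT-711 fix in the commit message: IDs like `functions.read_file:0` from Gemini/OpenRouter contain characters Anthropic rejects). The implementation in `src/utils/tool-id` is not part of this diff; the sketch below is a plausible stand-in assuming IDs must match `^[a-zA-Z0-9_-]+$`:

```typescript
// Hypothetical sketch: the real sanitizeToolUseId lives in src/utils/tool-id
// and may differ. Assumes tool IDs must match ^[a-zA-Z0-9_-]+$, so any other
// character is replaced with an underscore.
function sanitizeToolUseId(id: string): string {
	return id.replace(/[^a-zA-Z0-9_-]/g, "_")
}

// Example from the commit message:
sanitizeToolUseId("functions.read_file:0") // → "functions_read_file_0"
```

The key invariant is not the exact replacement scheme but that the same function is applied on both the `tool_use` (history) and `tool_result` sides, so the pair always matches.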

+ 307 - 0
src/core/condense/__tests__/index.spec.ts

@@ -15,6 +15,10 @@ import {
 	cleanupAfterTruncation,
 	extractCommandBlocks,
 	injectSyntheticToolResults,
+	toolUseToText,
+	toolResultToText,
+	convertToolBlocksToText,
+	transformMessagesForCondensing,
 } from "../index"
 
 vi.mock("../../../api/transform/image-cleaning", () => ({
@@ -1282,3 +1286,306 @@ describe("summarizeConversation with custom settings", () => {
 		)
 	})
 })
+
+describe("toolUseToText", () => {
+	it("should convert tool_use block with object input to text", () => {
+		const block: Anthropic.Messages.ToolUseBlockParam = {
+			type: "tool_use",
+			id: "tool-123",
+			name: "read_file",
+			input: { path: "test.ts", encoding: "utf-8" },
+		}
+
+		const result = toolUseToText(block)
+
+		expect(result).toBe("[Tool Use: read_file]\npath: test.ts\nencoding: utf-8")
+	})
+
+	it("should convert tool_use block with nested object input to text", () => {
+		const block: Anthropic.Messages.ToolUseBlockParam = {
+			type: "tool_use",
+			id: "tool-456",
+			name: "write_file",
+			input: {
+				path: "output.json",
+				content: { key: "value", nested: { a: 1 } },
+			},
+		}
+
+		const result = toolUseToText(block)
+
+		expect(result).toContain("[Tool Use: write_file]")
+		expect(result).toContain("path: output.json")
+		expect(result).toContain("content:")
+		expect(result).toContain('"key"')
+		expect(result).toContain('"value"')
+	})
+
+	it("should convert tool_use block with string input to text", () => {
+		const block: Anthropic.Messages.ToolUseBlockParam = {
+			type: "tool_use",
+			id: "tool-789",
+			name: "execute_command",
+			input: "ls -la" as unknown as Record<string, unknown>,
+		}
+
+		const result = toolUseToText(block)
+
+		expect(result).toBe("[Tool Use: execute_command]\nls -la")
+	})
+
+	it("should handle empty object input", () => {
+		const block: Anthropic.Messages.ToolUseBlockParam = {
+			type: "tool_use",
+			id: "tool-empty",
+			name: "some_tool",
+			input: {},
+		}
+
+		const result = toolUseToText(block)
+
+		expect(result).toBe("[Tool Use: some_tool]\n")
+	})
+})
+
+describe("toolResultToText", () => {
+	it("should convert tool_result with string content to text", () => {
+		const block: Anthropic.Messages.ToolResultBlockParam = {
+			type: "tool_result",
+			tool_use_id: "tool-123",
+			content: "File contents here",
+		}
+
+		const result = toolResultToText(block)
+
+		expect(result).toBe("[Tool Result]\nFile contents here")
+	})
+
+	it("should convert tool_result with error flag to text", () => {
+		const block: Anthropic.Messages.ToolResultBlockParam = {
+			type: "tool_result",
+			tool_use_id: "tool-456",
+			content: "File not found",
+			is_error: true,
+		}
+
+		const result = toolResultToText(block)
+
+		expect(result).toBe("[Tool Result (Error)]\nFile not found")
+	})
+
+	it("should convert tool_result with array content to text", () => {
+		const block: Anthropic.Messages.ToolResultBlockParam = {
+			type: "tool_result",
+			tool_use_id: "tool-789",
+			content: [
+				{ type: "text", text: "First line" },
+				{ type: "text", text: "Second line" },
+			],
+		}
+
+		const result = toolResultToText(block)
+
+		expect(result).toBe("[Tool Result]\nFirst line\nSecond line")
+	})
+
+	it("should handle tool_result with image in array content", () => {
+		const block: Anthropic.Messages.ToolResultBlockParam = {
+			type: "tool_result",
+			tool_use_id: "tool-img",
+			content: [
+				{ type: "text", text: "Screenshot:" },
+				{ type: "image", source: { type: "base64", media_type: "image/png", data: "abc123" } },
+			],
+		}
+
+		const result = toolResultToText(block)
+
+		expect(result).toBe("[Tool Result]\nScreenshot:\n[Image]")
+	})
+
+	it("should handle tool_result with no content", () => {
+		const block: Anthropic.Messages.ToolResultBlockParam = {
+			type: "tool_result",
+			tool_use_id: "tool-empty",
+		}
+
+		const result = toolResultToText(block)
+
+		expect(result).toBe("[Tool Result]")
+	})
+})
+
+describe("convertToolBlocksToText", () => {
+	it("should return string content unchanged", () => {
+		const content = "Simple text content"
+
+		const result = convertToolBlocksToText(content)
+
+		expect(result).toBe("Simple text content")
+	})
+
+	it("should convert tool_use blocks to text blocks", () => {
+		const content: Anthropic.Messages.ContentBlockParam[] = [
+			{
+				type: "tool_use",
+				id: "tool-123",
+				name: "read_file",
+				input: { path: "test.ts" },
+			},
+		]
+
+		const result = convertToolBlocksToText(content)
+
+		expect(Array.isArray(result)).toBe(true)
+		expect((result as Anthropic.Messages.ContentBlockParam[])[0].type).toBe("text")
+		expect((result as Anthropic.Messages.TextBlockParam[])[0].text).toContain("[Tool Use: read_file]")
+	})
+
+	it("should convert tool_result blocks to text blocks", () => {
+		const content: Anthropic.Messages.ContentBlockParam[] = [
+			{
+				type: "tool_result",
+				tool_use_id: "tool-123",
+				content: "File contents",
+			},
+		]
+
+		const result = convertToolBlocksToText(content)
+
+		expect(Array.isArray(result)).toBe(true)
+		expect((result as Anthropic.Messages.ContentBlockParam[])[0].type).toBe("text")
+		expect((result as Anthropic.Messages.TextBlockParam[])[0].text).toContain("[Tool Result]")
+	})
+
+	it("should preserve non-tool blocks unchanged", () => {
+		const content: Anthropic.Messages.ContentBlockParam[] = [
+			{ type: "text", text: "Hello" },
+			{
+				type: "tool_use",
+				id: "tool-123",
+				name: "read_file",
+				input: { path: "test.ts" },
+			},
+			{ type: "text", text: "World" },
+		]
+
+		const result = convertToolBlocksToText(content)
+
+		expect(Array.isArray(result)).toBe(true)
+		const resultArray = result as Anthropic.Messages.ContentBlockParam[]
+		expect(resultArray).toHaveLength(3)
+		expect(resultArray[0]).toEqual({ type: "text", text: "Hello" })
+		expect(resultArray[1].type).toBe("text")
+		expect((resultArray[1] as Anthropic.Messages.TextBlockParam).text).toContain("[Tool Use: read_file]")
+		expect(resultArray[2]).toEqual({ type: "text", text: "World" })
+	})
+
+	it("should handle mixed content with multiple tool blocks", () => {
+		const content: Anthropic.Messages.ContentBlockParam[] = [
+			{
+				type: "tool_use",
+				id: "tool-1",
+				name: "read_file",
+				input: { path: "a.ts" },
+			},
+			{
+				type: "tool_result",
+				tool_use_id: "tool-1",
+				content: "contents of a.ts",
+			},
+		]
+
+		const result = convertToolBlocksToText(content)
+
+		expect(Array.isArray(result)).toBe(true)
+		const resultArray = result as Anthropic.Messages.ContentBlockParam[]
+		expect(resultArray).toHaveLength(2)
+		expect((resultArray[0] as Anthropic.Messages.TextBlockParam).text).toContain("[Tool Use: read_file]")
+		expect((resultArray[1] as Anthropic.Messages.TextBlockParam).text).toContain("[Tool Result]")
+		expect((resultArray[1] as Anthropic.Messages.TextBlockParam).text).toContain("contents of a.ts")
+	})
+})
+
+describe("transformMessagesForCondensing", () => {
+	it("should transform all messages with tool blocks to text", () => {
+		const messages = [
+			{ role: "user" as const, content: "Hello" },
+			{
+				role: "assistant" as const,
+				content: [
+					{
+						type: "tool_use" as const,
+						id: "tool-1",
+						name: "read_file",
+						input: { path: "test.ts" },
+					},
+				],
+			},
+			{
+				role: "user" as const,
+				content: [
+					{
+						type: "tool_result" as const,
+						tool_use_id: "tool-1",
+						content: "file contents",
+					},
+				],
+			},
+		]
+
+		const result = transformMessagesForCondensing(messages)
+
+		expect(result).toHaveLength(3)
+		expect(result[0].content).toBe("Hello")
+		expect(Array.isArray(result[1].content)).toBe(true)
+		expect((result[1].content as any[])[0].type).toBe("text")
+		expect((result[1].content as any[])[0].text).toContain("[Tool Use: read_file]")
+		expect(Array.isArray(result[2].content)).toBe(true)
+		expect((result[2].content as any[])[0].type).toBe("text")
+		expect((result[2].content as any[])[0].text).toContain("[Tool Result]")
+	})
+
+	it("should preserve message role and other properties", () => {
+		const messages = [
+			{
+				role: "assistant" as const,
+				content: [
+					{
+						type: "tool_use" as const,
+						id: "tool-1",
+						name: "execute",
+						input: { cmd: "ls" },
+					},
+				],
+			},
+		]
+
+		const result = transformMessagesForCondensing(messages)
+
+		expect(result[0].role).toBe("assistant")
+	})
+
+	it("should handle empty messages array", () => {
+		const result = transformMessagesForCondensing([])
+
+		expect(result).toEqual([])
+	})
+
+	it("should not mutate original messages", () => {
+		const originalContent = [
+			{
+				type: "tool_use" as const,
+				id: "tool-1",
+				name: "read_file",
+				input: { path: "test.ts" },
+			},
+		]
+		const messages = [{ role: "assistant" as const, content: originalContent }]
+
+		transformMessagesForCondensing(messages)
+
+		// Original should still have tool_use type
+		expect(messages[0].content[0].type).toBe("tool_use")
+	})
+})

+ 102 - 2
src/core/condense/index.ts

@@ -14,6 +14,100 @@ import { generateFoldedFileContext } from "./foldedFileContext"
 
 export type { FoldedFileContextResult, FoldedFileContextOptions } from "./foldedFileContext"
 
+/**
+ * Converts a tool_use block to a text representation.
+ * This allows the conversation to be summarized without requiring the tools parameter.
+ */
+export function toolUseToText(block: Anthropic.Messages.ToolUseBlockParam): string {
+	let input: string
+	if (typeof block.input === "object" && block.input !== null) {
+		input = Object.entries(block.input)
+			.map(([key, value]) => {
+				const formattedValue =
+					typeof value === "object" && value !== null ? JSON.stringify(value, null, 2) : String(value)
+				return `${key}: ${formattedValue}`
+			})
+			.join("\n")
+	} else {
+		input = String(block.input)
+	}
+	return `[Tool Use: ${block.name}]\n${input}`
+}
+
+/**
+ * Converts a tool_result block to a text representation.
+ * This allows the conversation to be summarized without requiring the tools parameter.
+ */
+export function toolResultToText(block: Anthropic.Messages.ToolResultBlockParam): string {
+	const errorSuffix = block.is_error ? " (Error)" : ""
+	if (typeof block.content === "string") {
+		return `[Tool Result${errorSuffix}]\n${block.content}`
+	} else if (Array.isArray(block.content)) {
+		const contentText = block.content
+			.map((contentBlock) => {
+				if (contentBlock.type === "text") {
+					return contentBlock.text
+				}
+				if (contentBlock.type === "image") {
+					return "[Image]"
+				}
+				// Handle any other content block types
+				return `[${(contentBlock as { type: string }).type}]`
+			})
+			.join("\n")
+		return `[Tool Result${errorSuffix}]\n${contentText}`
+	}
+	return `[Tool Result${errorSuffix}]`
+}
+
+/**
+ * Converts all tool_use and tool_result blocks in a message's content to text representations.
+ * This is necessary for providers like Bedrock that require the tools parameter when tool blocks are present.
+ * By converting to text, we can send the conversation for summarization without the tools parameter.
+ *
+ * @param content - The message content (string or array of content blocks)
+ * @returns The transformed content with tool blocks converted to text blocks
+ */
+export function convertToolBlocksToText(
+	content: string | Anthropic.Messages.ContentBlockParam[],
+): string | Anthropic.Messages.ContentBlockParam[] {
+	if (typeof content === "string") {
+		return content
+	}
+
+	return content.map((block) => {
+		if (block.type === "tool_use") {
+			return {
+				type: "text" as const,
+				text: toolUseToText(block),
+			}
+		}
+		if (block.type === "tool_result") {
+			return {
+				type: "text" as const,
+				text: toolResultToText(block),
+			}
+		}
+		return block
+	})
+}
+
+/**
+ * Transforms all messages by converting tool_use and tool_result blocks to text representations.
+ * This ensures the conversation can be sent for summarization without requiring the tools parameter.
+ *
+ * @param messages - The messages to transform
+ * @returns The transformed messages with tool blocks converted to text
+ */
+export function transformMessagesForCondensing<
+	T extends { role: string; content: string | Anthropic.Messages.ContentBlockParam[] },
+>(messages: T[]): T[] {
+	return messages.map((msg) => ({
+		...msg,
+		content: convertToolBlocksToText(msg.content),
+	}))
+}
+
 export const MIN_CONDENSE_THRESHOLD = 5 // Minimum percentage of context window to trigger condensing
 export const MAX_CONDENSE_THRESHOLD = 100 // Maximum percentage of context window to trigger condensing
 
@@ -213,10 +307,16 @@ export async function summarizeConversation(options: SummarizeConversationOption
 	// (e.g., when user triggers condense after receiving attempt_completion but before responding)
 	const messagesWithToolResults = injectSyntheticToolResults(messagesToSummarize)
 
-	const requestMessages = maybeRemoveImageBlocks([...messagesWithToolResults, finalRequestMessage], apiHandler).map(
-		({ role, content }) => ({ role, content }),
+	// Transform tool_use and tool_result blocks to text representations.
+	// This is necessary because some providers (like Bedrock via LiteLLM) require the `tools` parameter
+	// when tool blocks are present. By converting them to text, we can send the conversation for
+	// summarization without needing to pass the tools parameter.
+	const messagesWithTextToolBlocks = transformMessagesForCondensing(
+		maybeRemoveImageBlocks([...messagesWithToolResults, finalRequestMessage], apiHandler),
 	)
 
+	const requestMessages = messagesWithTextToolBlocks.map(({ role, content }) => ({ role, content }))
+
 	// Note: this doesn't need to be a stream, consider using something like apiHandler.completePrompt
 	const promptToUse = SUMMARY_PROMPT
 

+ 0 - 3
src/core/environment/getEnvironmentDetails.ts

@@ -221,13 +221,10 @@ export async function getEnvironmentDetails(cline: Task, includeFileDetails: boo
 		language: language ?? formatLanguage(vscode.env.language),
 	})
 
-	const toolFormat = "native"
-
 	details += `\n\n# Current Mode\n`
 	details += `<slug>${currentMode}</slug>\n`
 	details += `<name>${modeDetails.name}</name>\n`
 	details += `<model>${modelId}</model>\n`
-	details += `<tool_format>${toolFormat}</tool_format>\n`
 
 	// Add browser session status - Only show when active to prevent cluttering context
 	const isBrowserActive = cline.browserSession.isSessionActive()

+ 18 - 96
src/core/mentions/__tests__/processUserContentMentions.spec.ts

@@ -26,100 +26,10 @@ describe("processUserContentMentions", () => {
 		vi.mocked(parseMentions).mockImplementation(async (text) => ({
 			text: `parsed: ${text}`,
 			mode: undefined,
+			contentBlocks: [],
 		}))
 	})
 
-	describe("maxReadFileLine parameter", () => {
-		it("should pass maxReadFileLine to parseMentions when provided", async () => {
-			const userContent = [
-				{
-					type: "text" as const,
-					text: "<user_message>Read file with limit</user_message>",
-				},
-			]
-
-			await processUserContentMentions({
-				userContent,
-				cwd: "/test",
-				urlContentFetcher: mockUrlContentFetcher,
-				fileContextTracker: mockFileContextTracker,
-				rooIgnoreController: mockRooIgnoreController,
-				maxReadFileLine: 100,
-			})
-
-			expect(parseMentions).toHaveBeenCalledWith(
-				"<user_message>Read file with limit</user_message>",
-				"/test",
-				mockUrlContentFetcher,
-				mockFileContextTracker,
-				mockRooIgnoreController,
-				false,
-				true, // includeDiagnosticMessages
-				50, // maxDiagnosticMessages
-				100,
-			)
-		})
-
-		it("should pass undefined maxReadFileLine when not provided", async () => {
-			const userContent = [
-				{
-					type: "text" as const,
-					text: "<user_message>Read file without limit</user_message>",
-				},
-			]
-
-			await processUserContentMentions({
-				userContent,
-				cwd: "/test",
-				urlContentFetcher: mockUrlContentFetcher,
-				fileContextTracker: mockFileContextTracker,
-				rooIgnoreController: mockRooIgnoreController,
-			})
-
-			expect(parseMentions).toHaveBeenCalledWith(
-				"<user_message>Read file without limit</user_message>",
-				"/test",
-				mockUrlContentFetcher,
-				mockFileContextTracker,
-				mockRooIgnoreController,
-				false,
-				true, // includeDiagnosticMessages
-				50, // maxDiagnosticMessages
-				undefined,
-			)
-		})
-
-		it("should handle UNLIMITED_LINES constant correctly", async () => {
-			const userContent = [
-				{
-					type: "text" as const,
-					text: "<user_message>Read unlimited lines</user_message>",
-				},
-			]
-
-			await processUserContentMentions({
-				userContent,
-				cwd: "/test",
-				urlContentFetcher: mockUrlContentFetcher,
-				fileContextTracker: mockFileContextTracker,
-				rooIgnoreController: mockRooIgnoreController,
-				maxReadFileLine: -1,
-			})
-
-			expect(parseMentions).toHaveBeenCalledWith(
-				"<user_message>Read unlimited lines</user_message>",
-				"/test",
-				mockUrlContentFetcher,
-				mockFileContextTracker,
-				mockRooIgnoreController,
-				false,
-				true, // includeDiagnosticMessages
-				50, // maxDiagnosticMessages
-				-1,
-			)
-		})
-	})
-
 	describe("content processing", () => {
 		it("should process text blocks with <user_message> tags", async () => {
 			const userContent = [
@@ -181,10 +91,16 @@ describe("processUserContentMentions", () => {
 			})
 
 			expect(parseMentions).toHaveBeenCalled()
+			// String content is now converted to array format to support content blocks
 			expect(result.content[0]).toEqual({
 				type: "tool_result",
 				tool_use_id: "123",
-				content: "parsed: <user_message>Tool feedback</user_message>",
+				content: [
+					{
+						type: "text",
+						text: "parsed: <user_message>Tool feedback</user_message>",
+					},
+				],
 			})
 			expect(result.mode).toBeUndefined()
 		})
@@ -258,7 +174,6 @@ describe("processUserContentMentions", () => {
 				cwd: "/test",
 				urlContentFetcher: mockUrlContentFetcher,
 				fileContextTracker: mockFileContextTracker,
-				maxReadFileLine: 50,
 			})
 
 			expect(parseMentions).toHaveBeenCalledTimes(2)
@@ -268,10 +183,16 @@ describe("processUserContentMentions", () => {
 				text: "parsed: <user_message>First task</user_message>",
 			})
 			expect(result.content[1]).toEqual(userContent[1]) // Image block unchanged
+			// String content is now converted to array format to support content blocks
 			expect(result.content[2]).toEqual({
 				type: "tool_result",
 				tool_use_id: "456",
-				content: "parsed: <user_message>Feedback</user_message>",
+				content: [
+					{
+						type: "text",
+						text: "parsed: <user_message>Feedback</user_message>",
+					},
+				],
 			})
 			expect(result.mode).toBeUndefined()
 		})
@@ -302,7 +223,6 @@ describe("processUserContentMentions", () => {
 				false, // showRooIgnoredFiles should default to false
 				true, // includeDiagnosticMessages
 				50, // maxDiagnosticMessages
-				undefined,
 			)
 		})
 
@@ -331,7 +251,6 @@ describe("processUserContentMentions", () => {
 				false,
 				true, // includeDiagnosticMessages
 				50, // maxDiagnosticMessages
-				undefined,
 			)
 		})
 	})
@@ -342,6 +261,7 @@ describe("processUserContentMentions", () => {
 				text: "parsed text",
 				slashCommandHelp: "command help",
 				mode: undefined,
+				contentBlocks: [],
 			})
 
 			const userContent = [
@@ -374,6 +294,7 @@ describe("processUserContentMentions", () => {
 				text: "parsed tool output",
 				slashCommandHelp: "command help",
 				mode: undefined,
+				contentBlocks: [],
 			})
 
 			const userContent = [
@@ -413,6 +334,7 @@ describe("processUserContentMentions", () => {
 				text: "parsed array item",
 				slashCommandHelp: "command help",
 				mode: undefined,
+				contentBlocks: [],
 			})
 
 			const userContent = [

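The updated assertions in this test file reflect a shape change in mention processing: `tool_result` content is no longer a bare string but an array of content blocks, so non-text blocks (such as images from MCP tool responses, per the first commit in this batch) can ride alongside the parsed text. A hedged sketch of that wrapping (helper name is illustrative, not from the source):

```typescript
// Illustrative sketch of the string-to-array shape change asserted above.
// wrapToolResultContent is a hypothetical helper, not a name from the repo.
type TextBlock = { type: "text"; text: string }
type ImageBlock = { type: "image"; source: { type: "base64"; media_type: string; data: string } }
type ToolResultContent = Array<TextBlock | ImageBlock>

function wrapToolResultContent(parsedText: string, extraBlocks: ToolResultContent = []): ToolResultContent {
	// Always lead with the parsed text, then append any additional blocks.
	return [{ type: "text", text: parsedText }, ...extraBlocks]
}

const wrapped = wrapToolResultContent("parsed: <user_message>Tool feedback</user_message>")
// wrapped[0] → { type: "text", text: "parsed: <user_message>Tool feedback</user_message>" }
```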
+ 147 - 54
src/core/mentions/index.ts

@@ -9,8 +9,9 @@ import { mentionRegexGlobal, commandRegexGlobal, unescapeSpaces } from "../../sh
 import { getCommitInfo, getWorkingState } from "../../utils/git"
 
 import { openFile } from "../../integrations/misc/open-file"
-import { extractTextFromFile } from "../../integrations/misc/extract-text"
+import { extractTextFromFileWithMetadata, type ExtractTextResult } from "../../integrations/misc/extract-text"
 import { diagnosticsToProblemsString } from "../../integrations/diagnostics"
+import { DEFAULT_LINE_LIMIT } from "../prompts/tools/native-tools/read_file"
 
 import { UrlContentFetcher } from "../../services/browser/UrlContentFetcher"
 
@@ -71,12 +72,59 @@ export async function openMention(cwd: string, mention?: string): Promise<void>
 	}
 }
 
+/**
+ * Represents a content block generated from an @ mention.
+ * These are returned separately from the user's text to enable
+ * proper formatting as distinct message blocks.
+ */
+export interface MentionContentBlock {
+	type: "file" | "folder" | "url" | "diagnostics" | "git_changes" | "git_commit" | "terminal" | "command"
+	/** Path for file/folder mentions */
+	path?: string
+	/** The content to display */
+	content: string
+	/** Metadata about truncation (for files) */
+	metadata?: {
+		totalLines: number
+		returnedLines: number
+		wasTruncated: boolean
+		linesShown?: [number, number]
+	}
+}
+
 export interface ParseMentionsResult {
+	/** User's text with @ mentions replaced by clean path references */
 	text: string
+	/** Separate content blocks for each mention (file content, URLs, etc.) */
+	contentBlocks: MentionContentBlock[]
 	slashCommandHelp?: string
 	mode?: string // Mode from the first slash command that has one
 }
 
+/**
+ * Formats file content to look like a read_file tool result.
+ * Includes Gemini-style truncation warning when content is truncated.
+ */
+function formatFileReadResult(filePath: string, result: ExtractTextResult): string {
+	const header = `[read_file for '${filePath}']`
+
+	if (result.wasTruncated && result.linesShown) {
+		const [start, end] = result.linesShown
+		const nextOffset = end + 1
+		return `${header}
+IMPORTANT: File content truncated.
+Status: Showing lines ${start}-${end} of ${result.totalLines} total lines.
+To read more: Use the read_file tool with offset=${nextOffset} and limit=${DEFAULT_LINE_LIMIT}.
+
+File: ${filePath}
+${result.content}`
+	}
+
+	return `${header}
+File: ${filePath}
+${result.content}`
+}
+
 export async function parseMentions(
 	text: string,
 	cwd: string,
@@ -86,10 +134,10 @@ export async function parseMentions(
 	showRooIgnoredFiles: boolean = false,
 	includeDiagnosticMessages: boolean = true,
 	maxDiagnosticMessages: number = 50,
-	maxReadFileLine?: number,
 ): Promise<ParseMentionsResult> {
 	const mentions: Set<string> = new Set()
 	const validCommands: Map<string, Command> = new Map()
+	const contentBlocks: MentionContentBlock[] = []
 	let commandMode: string | undefined // Track mode from the first slash command that has one
 
 	// First pass: check which command mentions exist and cache the results
@@ -119,7 +167,7 @@ export async function parseMentions(
 		}
 	}
 
-	// Only replace text for commands that actually exist
+	// Only replace text for commands that actually exist (keep "see below" for commands)
 	let parsedText = text
 	for (const [match, commandName] of commandMatches) {
 		if (validCommands.has(commandName)) {
@@ -127,16 +175,17 @@ export async function parseMentions(
 		}
 	}
 
-	// Second pass: handle regular mentions
+	// Second pass: handle regular mentions - replace with clean references
+	// Content will be provided as separate blocks that look like read_file results
 	parsedText = parsedText.replace(mentionRegexGlobal, (match, mention) => {
 		mentions.add(mention)
 		if (mention.startsWith("http")) {
+			// Keep old style for URLs (still XML-based)
 			return `'${mention}' (see below for site content)`
 		} else if (mention.startsWith("/")) {
+			// Clean path reference - no "see below" since we format like tool results
 			const mentionPath = mention.slice(1)
-			return mentionPath.endsWith("/")
-				? `'${mentionPath}' (see below for folder content)`
-				: `'${mentionPath}' (see below for file content)`
+			return mentionPath.endsWith("/") ? `'${mentionPath}'` : `'${mentionPath}'`
 		} else if (mention === "problems") {
 			return `Workspace Problems (see below for diagnostics)`
 		} else if (mention === "git-changes") {
@@ -189,31 +238,26 @@ export async function parseMentions(
 					result = `Error fetching content: ${rawErrorMessage}`
 				}
 			}
+			// URLs still use XML format (appended to text for backwards compat)
 			parsedText += `\n\n<url_content url="${mention}">\n${result}\n</url_content>`
 		} else if (mention.startsWith("/")) {
 			const mentionPath = mention.slice(1)
 			try {
-				const content = await getFileOrFolderContent(
+				const fileResult = await getFileOrFolderContentWithMetadata(
 					mentionPath,
 					cwd,
 					rooIgnoreController,
 					showRooIgnoredFiles,
-					maxReadFileLine,
+					fileContextTracker,
 				)
-				if (mention.endsWith("/")) {
-					parsedText += `\n\n<folder_content path="${mentionPath}">\n${content}\n</folder_content>`
-				} else {
-					parsedText += `\n\n<file_content path="${mentionPath}">\n${content}\n</file_content>`
-					if (fileContextTracker) {
-						await fileContextTracker.trackFileContext(mentionPath, "file_mentioned")
-					}
-				}
+				contentBlocks.push(fileResult)
 			} catch (error) {
-				if (mention.endsWith("/")) {
-					parsedText += `\n\n<folder_content path="${mentionPath}">\nError fetching content: ${error.message}\n</folder_content>`
-				} else {
-					parsedText += `\n\n<file_content path="${mentionPath}">\nError fetching content: ${error.message}\n</file_content>`
-				}
+				const errorMsg = error instanceof Error ? error.message : String(error)
+				contentBlocks.push({
+					type: mention.endsWith("/") ? "folder" : "file",
+					path: mentionPath,
+					content: `[read_file for '${mentionPath}']\nError: ${errorMsg}`,
+				})
 			}
 		} else if (mention === "problems") {
 			try {
@@ -269,18 +313,28 @@ export async function parseMentions(
 		}
 	}
 
-	return { text: parsedText, mode: commandMode, slashCommandHelp: slashCommandHelp.trim() || undefined }
+	return {
+		text: parsedText,
+		contentBlocks,
+		mode: commandMode,
+		slashCommandHelp: slashCommandHelp.trim() || undefined,
+	}
 }
 
-async function getFileOrFolderContent(
+/**
+ * Gets file or folder content and returns it as a MentionContentBlock
+ * formatted to look like a read_file tool result.
+ */
+async function getFileOrFolderContentWithMetadata(
 	mentionPath: string,
 	cwd: string,
 	rooIgnoreController?: any,
 	showRooIgnoredFiles: boolean = false,
-	maxReadFileLine?: number,
-): Promise<string> {
+	fileContextTracker?: FileContextTracker,
+): Promise<MentionContentBlock> {
 	const unescapedPath = unescapeSpaces(mentionPath)
 	const absPath = path.resolve(cwd, unescapedPath)
+	const isFolder = mentionPath.endsWith("/")
 
 	try {
 		const stats = await fs.stat(absPath)
@@ -290,21 +344,50 @@ async function getFileOrFolderContent(
 			// Image mentions are handled separately via image attachment flow.
 			const isBinary = await isBinaryFile(absPath).catch(() => false)
 			if (isBinary) {
-				return `(Binary file ${mentionPath} omitted)`
+				return {
+					type: "file",
+					path: mentionPath,
+					content: `[read_file for '${mentionPath}']\nNote: Binary file omitted from context.`,
+				}
 			}
 			if (rooIgnoreController && !rooIgnoreController.validateAccess(unescapedPath)) {
-				return `(File ${mentionPath} is ignored by .rooignore)`
+				return {
+					type: "file",
+					path: mentionPath,
+					content: `[read_file for '${mentionPath}']\nNote: File is ignored by .rooignore.`,
+				}
 			}
 			try {
-				const content = await extractTextFromFile(absPath, maxReadFileLine)
-				return content
+				const result = await extractTextFromFileWithMetadata(absPath)
+
+				// Track file context
+				if (fileContextTracker) {
+					await fileContextTracker.trackFileContext(mentionPath, "file_mentioned")
+				}
+
+				return {
+					type: "file",
+					path: mentionPath,
+					content: formatFileReadResult(mentionPath, result),
+					metadata: {
+						totalLines: result.totalLines,
+						returnedLines: result.returnedLines,
+						wasTruncated: result.wasTruncated,
+						linesShown: result.linesShown,
+					},
+				}
 			} catch (error) {
-				return `(Failed to read contents of ${mentionPath}): ${error.message}`
+				const errorMsg = error instanceof Error ? error.message : String(error)
+				return {
+					type: "file",
+					path: mentionPath,
+					content: `[read_file for '${mentionPath}']\nError: ${errorMsg}`,
+				}
 			}
 		} else if (stats.isDirectory()) {
 			const entries = await fs.readdir(absPath, { withFileTypes: true })
-			let folderContent = ""
-			const fileContentPromises: Promise<string | undefined>[] = []
+			let folderListing = ""
+			const fileReadResults: string[] = []
 			const LOCK_SYMBOL = "🔒"
 
 			for (let index = 0; index < entries.length; index++) {
@@ -325,38 +408,48 @@ async function getFileOrFolderContent(
 				const displayName = isIgnored ? `${LOCK_SYMBOL} ${entry.name}` : entry.name
 
 				if (entry.isFile()) {
-					folderContent += `${linePrefix}${displayName}\n`
+					folderListing += `${linePrefix}${displayName}\n`
 					if (!isIgnored) {
 						const filePath = path.join(mentionPath, entry.name)
 						const absoluteFilePath = path.resolve(absPath, entry.name)
-						fileContentPromises.push(
-							(async () => {
-								try {
-									const isBinary = await isBinaryFile(absoluteFilePath).catch(() => false)
-									if (isBinary) {
-										return undefined
-									}
-									const content = await extractTextFromFile(absoluteFilePath, maxReadFileLine)
-									return `<file_content path="${filePath.toPosix()}">\n${content}\n</file_content>`
-								} catch (error) {
-									return undefined
-								}
-							})(),
-						)
+						try {
+							const isBinary = await isBinaryFile(absoluteFilePath).catch(() => false)
+							if (!isBinary) {
+								const result = await extractTextFromFileWithMetadata(absoluteFilePath)
+								fileReadResults.push(formatFileReadResult(filePath.toPosix(), result))
+							}
+						} catch (error) {
+							// Skip files that can't be read
+						}
 					}
 				} else if (entry.isDirectory()) {
-					folderContent += `${linePrefix}${displayName}/\n`
+					folderListing += `${linePrefix}${displayName}/\n`
 				} else {
-					folderContent += `${linePrefix}${displayName}\n`
+					folderListing += `${linePrefix}${displayName}\n`
 				}
 			}
-			const fileContents = (await Promise.all(fileContentPromises)).filter((content) => content)
-			return `${folderContent}\n${fileContents.join("\n\n")}`.trim()
+
+			// Format folder content similar to read_file output
+			let content = `[read_file for folder '${mentionPath}']\nFolder listing:\n${folderListing}`
+			if (fileReadResults.length > 0) {
+				content += `\n\n--- File Contents ---\n\n${fileReadResults.join("\n\n")}`
+			}
+
+			return {
+				type: "folder",
+				path: mentionPath,
+				content,
+			}
 		} else {
-			return `(Failed to read contents of ${mentionPath})`
+			return {
+				type: isFolder ? "folder" : "file",
+				path: mentionPath,
+				content: `[read_file for '${mentionPath}']\nError: Unable to read (not a file or directory)`,
+			}
 		}
 	} catch (error) {
-		throw new Error(`Failed to access path "${mentionPath}": ${error.message}`)
+		const errorMsg = error instanceof Error ? error.message : String(error)
+		throw new Error(`Failed to access path "${mentionPath}": ${errorMsg}`)
 	}
 }
 
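The hunk above returns a `MentionContentBlock` whose `content` is formatted to read like a `read_file` tool result. A minimal self-contained sketch of that shape and formatting follows; the exact field types (notably `metadata.linesShown`) are not shown in the diff, so the interface below is an assumption for illustration, and `formatLikeReadFile` is a hypothetical stand-in for the real `formatFileReadResult`:

```typescript
// Assumed shape of MentionContentBlock, inferred from the fields the diff
// constructs (type, path, content, optional metadata). The metadata field
// types are guesses; only the names appear in the diff.
interface MentionContentBlock {
	type: "file" | "folder"
	path: string
	content: string
	metadata?: {
		totalLines: number
		returnedLines: number
		wasTruncated: boolean
		linesShown?: [number, number] // assumed tuple; actual type not shown
	}
}

// Hypothetical stand-in for formatFileReadResult: wrap content so it reads
// like a read_file tool result, matching the diff's binary/ignored/error
// branches which all emit a "[read_file for '<path>']" header line.
function formatLikeReadFile(mentionPath: string, body: string): string {
	return `[read_file for '${mentionPath}']\n${body}`
}

const block: MentionContentBlock = {
	type: "file",
	path: "src/app.ts",
	content: formatLikeReadFile("src/app.ts", "const x = 1"),
}
```

Every branch in the refactored function (binary file, `.rooignore`, read error, folder, success) emits this same header convention, so downstream consumers can treat each block uniformly.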

+ 64 - 22
src/core/mentions/processUserContentMentions.ts

@@ -1,5 +1,5 @@
 import { Anthropic } from "@anthropic-ai/sdk"
-import { parseMentions, ParseMentionsResult } from "./index"
+import { parseMentions, ParseMentionsResult, MentionContentBlock } from "./index"
 import { UrlContentFetcher } from "../../services/browser/UrlContentFetcher"
 import { FileContextTracker } from "../context-tracking/FileContextTracker"
 
@@ -9,7 +9,23 @@ export interface ProcessUserContentMentionsResult {
 }
 
 /**
- * Process mentions in user content, specifically within task and feedback tags
+ * Converts MentionContentBlocks to Anthropic text blocks.
+ * Each file/folder mention becomes a separate text block formatted
+ * to look like a read_file tool result.
+ */
+function contentBlocksToAnthropicBlocks(contentBlocks: MentionContentBlock[]): Anthropic.Messages.TextBlockParam[] {
+	return contentBlocks.map((block) => ({
+		type: "text" as const,
+		text: block.content,
+	}))
+}
+
+/**
+ * Process mentions in user content, specifically within task and feedback tags.
+ *
+ * File/folder @ mentions are now returned as separate text blocks that
+ * look like read_file tool results, making it clear to the model that
+ * the file has already been read.
  */
 export async function processUserContentMentions({
 	userContent,
@@ -20,7 +36,6 @@ export async function processUserContentMentions({
 	showRooIgnoredFiles = false,
 	includeDiagnosticMessages = true,
 	maxDiagnosticMessages = 50,
-	maxReadFileLine,
 }: {
 	userContent: Anthropic.Messages.ContentBlockParam[]
 	cwd: string
@@ -30,7 +45,6 @@ export async function processUserContentMentions({
 	showRooIgnoredFiles?: boolean
 	includeDiagnosticMessages?: boolean
 	maxDiagnosticMessages?: number
-	maxReadFileLine?: number
 }): Promise<ProcessUserContentMentionsResult> {
 	// Track the first mode found from slash commands
 	let commandMode: string | undefined
@@ -58,18 +72,28 @@ export async function processUserContentMentions({
 							showRooIgnoredFiles,
 							includeDiagnosticMessages,
 							maxDiagnosticMessages,
-							maxReadFileLine,
 						)
 						// Capture the first mode found
 						if (!commandMode && result.mode) {
 							commandMode = result.mode
 						}
+
+						// Build the blocks array:
+						// 1. User's text (with @ mentions replaced by clean paths)
+						// 2. File/folder content blocks (formatted like read_file results)
+						// 3. Slash command help (if any)
 						const blocks: Anthropic.Messages.ContentBlockParam[] = [
 							{
 								...block,
 								text: result.text,
 							},
 						]
+
+						// Add file/folder content as separate blocks
+						if (result.contentBlocks.length > 0) {
+							blocks.push(...contentBlocksToAnthropicBlocks(result.contentBlocks))
+						}
+
 						if (result.slashCommandHelp) {
 							blocks.push({
 								type: "text" as const,
@@ -92,30 +116,38 @@ export async function processUserContentMentions({
 								showRooIgnoredFiles,
 								includeDiagnosticMessages,
 								maxDiagnosticMessages,
-								maxReadFileLine,
 							)
 							// Capture the first mode found
 							if (!commandMode && result.mode) {
 								commandMode = result.mode
 							}
+
+							// Build content array with file blocks included
+							const contentParts: Array<{ type: "text"; text: string }> = [
+								{
+									type: "text" as const,
+									text: result.text,
+								},
+							]
+
+							// Add file/folder content blocks
+							for (const contentBlock of result.contentBlocks) {
+								contentParts.push({
+									type: "text" as const,
+									text: contentBlock.content,
+								})
+							}
+
 							if (result.slashCommandHelp) {
-								return {
-									...block,
-									content: [
-										{
-											type: "text" as const,
-											text: result.text,
-										},
-										{
-											type: "text" as const,
-											text: result.slashCommandHelp,
-										},
-									],
-								}
+								contentParts.push({
+									type: "text" as const,
+									text: result.slashCommandHelp,
+								})
 							}
+
 							return {
 								...block,
-								content: result.text,
+								content: contentParts,
 							}
 						}
 
@@ -134,18 +166,28 @@ export async function processUserContentMentions({
 											showRooIgnoredFiles,
 											includeDiagnosticMessages,
 											maxDiagnosticMessages,
-											maxReadFileLine,
 										)
 										// Capture the first mode found
 										if (!commandMode && result.mode) {
 											commandMode = result.mode
 										}
-										const blocks = [
+
+										// Build blocks array with file content
+										const blocks: Array<{ type: "text"; text: string }> = [
 											{
 												...contentBlock,
 												text: result.text,
 											},
 										]
+
+										// Add file/folder content blocks
+										for (const cb of result.contentBlocks) {
+											blocks.push({
+												type: "text" as const,
+												text: cb.content,
+											})
+										}
+
 										if (result.slashCommandHelp) {
 											blocks.push({
 												type: "text" as const,

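The `contentBlocksToAnthropicBlocks` helper introduced in this file is a straight map from mention blocks to Anthropic text blocks. A runnable sketch of that mapping, with locally declared stand-in types since the real `MentionContentBlock` and `Anthropic.Messages.TextBlockParam` imports are not reproduced here:

```typescript
// Local stand-ins for the imported types; field sets match what the
// diff actually uses, everything else is omitted.
type TextBlockParam = { type: "text"; text: string }
interface MentionContentBlock {
	type: "file" | "folder"
	path: string
	content: string
}

// Mirrors contentBlocksToAnthropicBlocks from the diff: each file/folder
// mention becomes its own text block, so the model sees one block per
// already-read file rather than content inlined into the user's text.
function contentBlocksToTextBlocks(blocks: MentionContentBlock[]): TextBlockParam[] {
	return blocks.map((block) => ({
		type: "text" as const,
		text: block.content,
	}))
}

const out = contentBlocksToTextBlocks([
	{ type: "file", path: "a.ts", content: "[read_file for 'a.ts']\nhello" },
])
```

Keeping the user's text, the file blocks, and the slash-command help as separate blocks (rather than one concatenated string) is what lets the three call sites below build their content arrays incrementally.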
+ 0 - 127
src/core/prompts/__tests__/__snapshots__/add-custom-instructions/partial-reads-enabled.snap

@@ -1,127 +0,0 @@
-You are Roo, an experienced technical leader who is inquisitive and an excellent planner. Your goal is to gather information and get context to create a detailed plan for accomplishing the user's task, which the user will review and approve before they switch into another mode to implement the solution.
-
-====
-
-MARKDOWN RULES
-
-ALL responses MUST show ANY `language construct` OR filename reference as clickable, exactly as [`filename OR language.declaration()`](relative/file/path.ext:line); line is required for `syntax` and optional for filename links. This applies to ALL markdown responses and ALSO those in attempt_completion
-
-====
-
-TOOL USE
-
-You have access to a set of tools that are executed upon the user's approval. Use the provider-native tool-calling mechanism. Do not include XML markup or examples. You must call at least one tool per assistant response. Prefer calling as many tools as are reasonably needed in a single response to reduce back-and-forth and complete tasks faster.
-
-	# Tool Use Guidelines
-
-1. Assess what information you already have and what information you need to proceed with the task.
-2. Choose the most appropriate tool based on the task and the tool descriptions provided. Assess if you need additional information to proceed, and which of the available tools would be most effective for gathering this information. For example using the list_files tool is more effective than running a command like `ls` in the terminal. It's critical that you think about each available tool and use the one that best fits the current step in the task.
-3. If multiple actions are needed, you may use multiple tools in a single message when appropriate, or use tools iteratively across messages. Each tool use should be informed by the results of previous tool uses. Do not assume the outcome of any tool use. Each step must be informed by the previous step's result.
-
-By carefully considering the user's response after tool executions, you can react accordingly and make informed decisions about how to proceed with the task. This iterative process helps ensure the overall success and accuracy of your work.
-
-====
-
-CAPABILITIES
-
-- You have access to tools that let you execute CLI commands on the user's computer, list files, view source code definitions, regex search, read and write files, and ask follow-up questions. These tools help you effectively accomplish a wide range of tasks, such as writing code, making edits or improvements to existing files, understanding the current state of a project, performing system operations, and much more.
-- When the user initially gives you a task, a recursive list of all filepaths in the current workspace directory ('/test/path') will be included in environment_details. This provides an overview of the project's file structure, offering key insights into the project from directory/file names (how developers conceptualize and organize their code) and file extensions (the language used). This can also guide decision-making on which files to explore further. If you need to further explore directories such as outside the current workspace directory, you can use the list_files tool. If you pass 'true' for the recursive parameter, it will list files recursively. Otherwise, it will list files at the top level, which is better suited for generic directories where you don't necessarily need the nested structure, like the Desktop.
-- You can use the execute_command tool to run commands on the user's computer whenever you feel it can help accomplish the user's task. When you need to execute a CLI command, you must provide a clear explanation of what the command does. Prefer to execute complex CLI commands over creating executable scripts, since they are more flexible and easier to run. Interactive and long-running commands are allowed, since the commands are run in the user's VSCode terminal. The user may keep commands running in the background and you will be kept updated on their status along the way. Each command you execute is run in a new terminal instance.
-
-====
-
-MODES
-
-- Test modes section
-
-====
-
-RULES
-
-- The project base directory is: /test/path
-- All file paths must be relative to this directory. However, commands may change directories in terminals, so respect working directory specified by the response to execute_command.
-- You cannot `cd` into a different directory to complete a task. You are stuck operating from '/test/path', so be sure to pass in the correct 'path' parameter when using tools that require a path.
-- Do not use the ~ character or $HOME to refer to the home directory.
-- Before using the execute_command tool, you must first think about the SYSTEM INFORMATION context provided to understand the user's environment and tailor your commands to ensure they are compatible with their system. You must also consider if the command you need to run should be executed in a specific directory outside of the current working directory '/test/path', and if so prepend with `cd`'ing into that directory && then executing the command (as one command since you are stuck operating from '/test/path'). For example, if you needed to run `npm install` in a project outside of '/test/path', you would need to prepend with a `cd` i.e. pseudocode for this would be `cd (path to project) && (command, in this case npm install)`.
-- Some modes have restrictions on which files they can edit. If you attempt to edit a restricted file, the operation will be rejected with a FileRestrictionError that will specify which file patterns are allowed for the current mode.
-- Be sure to consider the type of project (e.g. Python, JavaScript, web application) when determining the appropriate structure and files to include. Also consider what files may be most relevant to accomplishing the task, for example looking at a project's manifest file would help you understand the project's dependencies, which you could incorporate into any code you write.
-  * For example, in architect mode trying to edit app.js would be rejected because architect mode can only edit files matching "\.md$"
-- When making changes to code, always consider the context in which the code is being used. Ensure that your changes are compatible with the existing codebase and that they follow the project's coding standards and best practices.
-- Do not ask for more information than necessary. Use the tools provided to accomplish the user's request efficiently and effectively. When you've completed your task, you must use the attempt_completion tool to present the result to the user. The user may provide feedback, which you can use to make improvements and try again.
-- You are only allowed to ask the user questions using the ask_followup_question tool. Use this tool only when you need additional details to complete a task, and be sure to use a clear and concise question that will help you move forward with the task. When you ask a question, provide the user with 2-4 suggested answers based on your question so they don't need to do so much typing. The suggestions should be specific, actionable, and directly related to the completed task. They should be ordered by priority or logical sequence. However if you can use the available tools to avoid having to ask the user questions, you should do so. For example, if the user mentions a file that may be in an outside directory like the Desktop, you should use the list_files tool to list the files in the Desktop and check if the file they are talking about is there, rather than asking the user to provide the file path themselves.
-- When executing commands, if you don't see the expected output, assume the terminal executed the command successfully and proceed with the task. The user's terminal may be unable to stream the output back properly. If you absolutely need to see the actual terminal output, use the ask_followup_question tool to request the user to copy and paste it back to you.
-- The user may provide a file's contents directly in their message, in which case you shouldn't use the read_file tool to get the file contents again since you already have it.
-- Your goal is to try to accomplish the user's task, NOT engage in a back and forth conversation.
-- NEVER end attempt_completion result with a question or request to engage in further conversation! Formulate the end of your result in a way that is final and does not require further input from the user.
-- You are STRICTLY FORBIDDEN from starting your messages with "Great", "Certainly", "Okay", "Sure". You should NOT be conversational in your responses, but rather direct and to the point. For example you should NOT say "Great, I've updated the CSS" but instead something like "I've updated the CSS". It is important you be clear and technical in your messages.
-- When presented with images, utilize your vision capabilities to thoroughly examine them and extract meaningful information. Incorporate these insights into your thought process as you accomplish the user's task.
-- At the end of each user message, you will automatically receive environment_details. This information is not written by the user themselves, but is auto-generated to provide potentially relevant context about the project structure and environment. While this information can be valuable for understanding the project context, do not treat it as a direct part of the user's request or response. Use it to inform your actions and decisions, but don't assume the user is explicitly asking about or referring to this information unless they clearly do so in their message. When using environment_details, explain your actions clearly to ensure the user understands, as they may not be aware of these details.
-- Before executing commands, check the "Actively Running Terminals" section in environment_details. If present, consider how these active processes might impact your task. For example, if a local development server is already running, you wouldn't need to start it again. If no active terminals are listed, proceed with command execution as normal.
-- MCP operations should be used one at a time, similar to other tool usage. Wait for confirmation of success before proceeding with additional operations.
-- It is critical you wait for the user's response after each tool use, in order to confirm the success of the tool use. For example, if asked to make a todo app, you would create a file, wait for the user's response it was created successfully, then create another file if needed, wait for the user's response it was created successfully, etc.
-
-====
-
-SYSTEM INFORMATION
-
-Operating System: Linux
-Default Shell: /bin/zsh
-Home Directory: /home/user
-Current Workspace Directory: /test/path
-
-The Current Workspace Directory is the active VS Code project directory, and is therefore the default directory for all tool operations. New terminals will be created in the current workspace directory, however if you change directories in a terminal it will then have a different working directory; changing directories in a terminal does not modify the workspace directory, because you do not have access to change the workspace directory. When the user initially gives you a task, a recursive list of all filepaths in the current workspace directory ('/test/path') will be included in environment_details. This provides an overview of the project's file structure, offering key insights into the project from directory/file names (how developers conceptualize and organize their code) and file extensions (the language used). This can also guide decision-making on which files to explore further. If you need to further explore directories such as outside the current workspace directory, you can use the list_files tool. If you pass 'true' for the recursive parameter, it will list files recursively. Otherwise, it will list files at the top level, which is better suited for generic directories where you don't necessarily need the nested structure, like the Desktop.
-
-====
-
-OBJECTIVE
-
-You accomplish a given task iteratively, breaking it down into clear steps and working through them methodically.
-
-1. Analyze the user's task and set clear, achievable goals to accomplish it. Prioritize these goals in a logical order.
-2. Work through these goals sequentially, utilizing available tools one at a time as necessary. Each goal should correspond to a distinct step in your problem-solving process. You will be informed on the work completed and what's remaining as you go.
-3. Remember, you have extensive capabilities with access to a wide range of tools that can be used in powerful and clever ways as necessary to accomplish each goal. Before calling a tool, do some analysis. First, analyze the file structure provided in environment_details to gain context and insights for proceeding effectively. Next, think about which of the provided tools is the most relevant tool to accomplish the user's task. Go through each of the required parameters of the relevant tool and determine if the user has directly provided or given enough information to infer a value. When deciding if the parameter can be inferred, carefully consider all the context to see if it supports a specific value. If all of the required parameters are present or can be reasonably inferred, proceed with the tool use. BUT, if one of the values for a required parameter is missing, DO NOT invoke the tool (not even with fillers for the missing params) and instead, ask the user to provide the missing parameters using the ask_followup_question tool. DO NOT ask for more information on optional parameters if it is not provided.
-4. Once you've completed the user's task, you must use the attempt_completion tool to present the result of the task to the user.
-5. The user may provide feedback, which you can use to make improvements and try again. But DO NOT continue in pointless back and forth conversations, i.e. don't end your responses with questions or offers for further assistance.
-
-
-====
-
-USER'S CUSTOM INSTRUCTIONS
-
-The following additional instructions are provided by the user, and should be followed to the best of your ability without interfering with the TOOL USE guidelines.
-
-Language Preference:
-You should always speak and think in the "en" language.
-
-Mode-specific Instructions:
-1. Do some information gathering (using provided tools) to get more context about the task.
-
-2. You should also ask the user clarifying questions to get a better understanding of the task.
-
-3. Once you've gained more context about the user's request, break down the task into clear, actionable steps and create a todo list using the `update_todo_list` tool. Each todo item should be:
-   - Specific and actionable
-   - Listed in logical execution order
-   - Focused on a single, well-defined outcome
-   - Clear enough that another mode could execute it independently
-
-   **Note:** If the `update_todo_list` tool is not available, write the plan to a markdown file (e.g., `plan.md` or `todo.md`) instead.
-
-4. As you gather more information or discover new requirements, update the todo list to reflect the current understanding of what needs to be accomplished.
-
-5. Ask the user if they are pleased with this plan, or if they would like to make any changes. Think of this as a brainstorming session where you can discuss the task and refine the todo list.
-
-6. Include Mermaid diagrams if they help clarify complex workflows or system architecture. Please avoid using double quotes ("") and parentheses () inside square brackets ([]) in Mermaid diagrams, as this can cause parsing errors.
-
-7. Use the switch_mode tool to request that the user switch to another mode to implement the solution.
-
-**IMPORTANT: Focus on creating clear, actionable todo lists rather than lengthy markdown documents. Use the todo list as your primary planning tool to track and organize the work that needs to be done.**
-
-**CRITICAL: Never provide level of effort time estimates (e.g., hours, days, weeks) for tasks. Focus solely on breaking down the work into clear, actionable steps without estimating how long they will take.**
-
-Unless told otherwise, if you want to save a plan file, put it in the /plans directory
-
-Rules:
-# Rules from .clinerules-architect:
-Mock mode-specific rules
-# Rules from .clinerules:
-Mock generic rules

+ 0 - 21
src/core/prompts/__tests__/add-custom-instructions.spec.ts

@@ -264,27 +264,6 @@ describe("addCustomInstructions", () => {
 		expect(prompt).toMatchFileSnapshot("./__snapshots__/add-custom-instructions/mcp-server-creation-disabled.snap")
 	})
 
-	it("should include partial read instructions when partialReadsEnabled is true", async () => {
-		const prompt = await SYSTEM_PROMPT(
-			mockContext,
-			"/test/path",
-			false, // supportsImages
-			undefined, // mcpHub
-			undefined, // diffStrategy
-			undefined, // browserViewportSize
-			defaultModeSlug, // mode
-			undefined, // customModePrompts
-			undefined, // customModes,
-			undefined, // globalCustomInstructions
-			undefined, // experiments
-			undefined, // language
-			undefined, // rooIgnoreInstructions
-			true, // partialReadsEnabled
-		)
-
-		expect(prompt).toMatchFileSnapshot("./__snapshots__/add-custom-instructions/partial-reads-enabled.snap")
-	})
-
 	it("should prioritize mode-specific rules for code mode", async () => {
 		const instructions = await addCustomInstructions("", "", "/test/path", defaultModeSlug)
 		expect(instructions).toMatchFileSnapshot("./__snapshots__/add-custom-instructions/code-mode-rules.snap")

+ 0 - 3
src/core/prompts/__tests__/sections.spec.ts

@@ -70,7 +70,6 @@ describe("getRulesSection", () => {
 
 	it("includes vendor confidentiality section when isStealthModel is true", () => {
 		const settings = {
-			maxConcurrentFileReads: 5,
 			todoListEnabled: true,
 			useAgentRules: true,
 			newTaskRequireTodos: false,
@@ -88,7 +87,6 @@ describe("getRulesSection", () => {
 
 	it("excludes vendor confidentiality section when isStealthModel is false", () => {
 		const settings = {
-			maxConcurrentFileReads: 5,
 			todoListEnabled: true,
 			useAgentRules: true,
 			newTaskRequireTodos: false,
@@ -103,7 +101,6 @@ describe("getRulesSection", () => {
 
 	it("excludes vendor confidentiality section when isStealthModel is undefined", () => {
 		const settings = {
-			maxConcurrentFileReads: 5,
 			todoListEnabled: true,
 			useAgentRules: true,
 			newTaskRequireTodos: false,

+ 0 - 17
src/core/prompts/__tests__/system-prompt.spec.ts

@@ -228,7 +228,6 @@ describe("SYSTEM_PROMPT", () => {
 			experiments,
 			undefined, // language
 			undefined, // rooIgnoreInstructions
-			undefined, // partialReadsEnabled
 		)
 
 		expect(prompt).toMatchFileSnapshot("./__snapshots__/system-prompt/consistent-system-prompt.snap")
@@ -249,7 +248,6 @@ describe("SYSTEM_PROMPT", () => {
 			experiments,
 			undefined, // language
 			undefined, // rooIgnoreInstructions
-			undefined, // partialReadsEnabled
 		)
 
 		expect(prompt).toMatchFileSnapshot("./__snapshots__/system-prompt/with-computer-use-support.snap")
@@ -272,7 +270,6 @@ describe("SYSTEM_PROMPT", () => {
 			experiments,
 			undefined, // language
 			undefined, // rooIgnoreInstructions
-			undefined, // partialReadsEnabled
 		)
 
 		expect(prompt).toMatchFileSnapshot("./__snapshots__/system-prompt/with-mcp-hub-provided.snap")
@@ -293,7 +290,6 @@ describe("SYSTEM_PROMPT", () => {
 			experiments,
 			undefined, // language
 			undefined, // rooIgnoreInstructions
-			undefined, // partialReadsEnabled
 		)
 
 		expect(prompt).toMatchFileSnapshot("./__snapshots__/system-prompt/with-undefined-mcp-hub.snap")
@@ -314,7 +310,6 @@ describe("SYSTEM_PROMPT", () => {
 			experiments,
 			undefined, // language
 			undefined, // rooIgnoreInstructions
-			undefined, // partialReadsEnabled
 		)
 
 		expect(prompt).toMatchFileSnapshot("./__snapshots__/system-prompt/with-different-viewport-size.snap")
@@ -362,7 +357,6 @@ describe("SYSTEM_PROMPT", () => {
 			undefined, // experiments
 			undefined, // language
 			undefined, // rooIgnoreInstructions
-			undefined, // partialReadsEnabled
 		)
 
 		expect(prompt).toContain("Language Preference:")
@@ -421,7 +415,6 @@ describe("SYSTEM_PROMPT", () => {
 			experiments,
 			undefined, // language
 			undefined, // rooIgnoreInstructions
-			undefined, // partialReadsEnabled
 		)
 
 		// Role definition should be at the top
@@ -457,7 +450,6 @@ describe("SYSTEM_PROMPT", () => {
 			undefined, // experiments
 			undefined, // language
 			undefined, // rooIgnoreInstructions
-			undefined, // partialReadsEnabled
 		)
 
 		// Role definition from promptComponent should be at the top
@@ -488,7 +480,6 @@ describe("SYSTEM_PROMPT", () => {
 			undefined, // experiments
 			undefined, // language
 			undefined, // rooIgnoreInstructions
-			undefined, // partialReadsEnabled
 		)
 
 		// Should use the default mode's role definition
@@ -497,7 +488,6 @@ describe("SYSTEM_PROMPT", () => {
 
 	it("should exclude update_todo_list tool when todoListEnabled is false", async () => {
 		const settings = {
-			maxConcurrentFileReads: 5,
 			todoListEnabled: false,
 			useAgentRules: true,
 			newTaskRequireTodos: false,
@@ -517,7 +507,6 @@ describe("SYSTEM_PROMPT", () => {
 			experiments,
 			undefined, // language
 			undefined, // rooIgnoreInstructions
-			undefined, // partialReadsEnabled
 			settings, // settings
 		)
 
@@ -528,7 +517,6 @@ describe("SYSTEM_PROMPT", () => {
 
 	it("should include update_todo_list tool when todoListEnabled is true", async () => {
 		const settings = {
-			maxConcurrentFileReads: 5,
 			todoListEnabled: true,
 			useAgentRules: true,
 			newTaskRequireTodos: false,
@@ -548,7 +536,6 @@ describe("SYSTEM_PROMPT", () => {
 			experiments,
 			undefined, // language
 			undefined, // rooIgnoreInstructions
-			undefined, // partialReadsEnabled
 			settings, // settings
 		)
 
@@ -559,7 +546,6 @@ describe("SYSTEM_PROMPT", () => {
 
 	it("should include update_todo_list tool when todoListEnabled is undefined", async () => {
 		const settings = {
-			maxConcurrentFileReads: 5,
 			todoListEnabled: true,
 			useAgentRules: true,
 			newTaskRequireTodos: false,
@@ -579,7 +565,6 @@ describe("SYSTEM_PROMPT", () => {
 			experiments,
 			undefined, // language
 			undefined, // rooIgnoreInstructions
-			undefined, // partialReadsEnabled
 			settings, // settings
 		)
 
@@ -590,7 +575,6 @@ describe("SYSTEM_PROMPT", () => {
 
 	it("should include native tool instructions", async () => {
 		const settings = {
-			maxConcurrentFileReads: 5,
 			todoListEnabled: true,
 			useAgentRules: true,
 			newTaskRequireTodos: false,
@@ -610,7 +594,6 @@ describe("SYSTEM_PROMPT", () => {
 			experiments,
 			undefined, // language
 			undefined, // rooIgnoreInstructions
-			undefined, // partialReadsEnabled
 			settings, // settings
 		)
 

+ 132 - 8
src/core/prompts/sections/__tests__/custom-instructions.spec.ts

@@ -543,7 +543,6 @@ describe("addCustomInstructions", () => {
 			"test-mode",
 			{
 				settings: {
-					maxConcurrentFileReads: 5,
 					todoListEnabled: true,
 					useAgentRules: true,
 					newTaskRequireTodos: false,
@@ -575,7 +574,6 @@ describe("addCustomInstructions", () => {
 			"test-mode",
 			{
 				settings: {
-					maxConcurrentFileReads: 5,
 					todoListEnabled: true,
 					useAgentRules: false,
 					newTaskRequireTodos: false,
@@ -636,7 +634,6 @@ describe("addCustomInstructions", () => {
 			"test-mode",
 			{
 				settings: {
-					maxConcurrentFileReads: 5,
 					todoListEnabled: true,
 					useAgentRules: true,
 					newTaskRequireTodos: false,
@@ -682,7 +679,6 @@ describe("addCustomInstructions", () => {
 			"test-mode",
 			{
 				settings: {
-					maxConcurrentFileReads: 5,
 					todoListEnabled: true,
 					useAgentRules: true,
 					newTaskRequireTodos: false,
@@ -750,7 +746,6 @@ describe("addCustomInstructions", () => {
 			"test-mode",
 			{
 				settings: {
-					maxConcurrentFileReads: 5,
 					todoListEnabled: true,
 					useAgentRules: true,
 					newTaskRequireTodos: false,
@@ -802,7 +797,6 @@ describe("addCustomInstructions", () => {
 			"test-mode",
 			{
 				settings: {
-					maxConcurrentFileReads: 5,
 					todoListEnabled: true,
 					useAgentRules: true,
 					newTaskRequireTodos: false,
@@ -856,7 +850,6 @@ describe("addCustomInstructions", () => {
 			"test-mode",
 			{
 				settings: {
-					maxConcurrentFileReads: 5,
 					todoListEnabled: true,
 					useAgentRules: true,
 					newTaskRequireTodos: false,
@@ -902,7 +895,6 @@ describe("addCustomInstructions", () => {
 			"test-mode",
 			{
 				settings: {
-					maxConcurrentFileReads: 5,
 					todoListEnabled: true,
 					useAgentRules: true,
 					newTaskRequireTodos: false,
@@ -1595,4 +1587,136 @@ describe("Rules directory reading", () => {
 		const result = await loadRuleFiles("/fake/path")
 		expect(result).toBe("\n# Rules from .roorules:\nfallback content\n")
 	})
+
+	it("should load AGENTS.local.md alongside AGENTS.md for personal overrides", async () => {
+		// Simulate no .roo/rules-test-mode directory
+		statMock.mockRejectedValueOnce({ code: "ENOENT" })
+
+		// Mock lstat to indicate both AGENTS.md and AGENTS.local.md exist (not symlinks)
+		lstatMock.mockImplementation((filePath: PathLike) => {
+			const pathStr = filePath.toString()
+			if (pathStr.endsWith("AGENTS.md") || pathStr.endsWith("AGENTS.local.md")) {
+				return Promise.resolve({
+					isSymbolicLink: vi.fn().mockReturnValue(false),
+				})
+			}
+			return Promise.reject({ code: "ENOENT" })
+		})
+
+		readFileMock.mockImplementation((filePath: PathLike) => {
+			const pathStr = filePath.toString()
+			if (pathStr.endsWith("AGENTS.local.md")) {
+				return Promise.resolve("Local overrides from AGENTS.local.md")
+			}
+			if (pathStr.endsWith("AGENTS.md")) {
+				return Promise.resolve("Base rules from AGENTS.md")
+			}
+			return Promise.reject({ code: "ENOENT" })
+		})
+
+		const result = await addCustomInstructions(
+			"mode instructions",
+			"global instructions",
+			"/fake/path",
+			"test-mode",
+			{
+				settings: {
+					todoListEnabled: true,
+					useAgentRules: true,
+					newTaskRequireTodos: false,
+				},
+			},
+		)
+
+		// Should contain both AGENTS.md and AGENTS.local.md content
+		expect(result).toContain("# Agent Rules Standard (AGENTS.md):")
+		expect(result).toContain("Base rules from AGENTS.md")
+		expect(result).toContain("# Agent Rules Local (AGENTS.local.md):")
+		expect(result).toContain("Local overrides from AGENTS.local.md")
+	})
+
+	it("should load AGENTS.local.md even when base AGENTS.md does not exist", async () => {
+		// Simulate no .roo/rules-test-mode directory
+		statMock.mockRejectedValueOnce({ code: "ENOENT" })
+
+		// Mock lstat to indicate only AGENTS.local.md exists (no base file)
+		lstatMock.mockImplementation((filePath: PathLike) => {
+			const pathStr = filePath.toString()
+			if (pathStr.endsWith("AGENTS.local.md")) {
+				return Promise.resolve({
+					isSymbolicLink: vi.fn().mockReturnValue(false),
+				})
+			}
+			return Promise.reject({ code: "ENOENT" })
+		})
+
+		readFileMock.mockImplementation((filePath: PathLike) => {
+			const pathStr = filePath.toString()
+			if (pathStr.endsWith("AGENTS.local.md")) {
+				return Promise.resolve("Local overrides without base file")
+			}
+			return Promise.reject({ code: "ENOENT" })
+		})
+
+		const result = await addCustomInstructions(
+			"mode instructions",
+			"global instructions",
+			"/fake/path",
+			"test-mode",
+			{
+				settings: {
+					todoListEnabled: true,
+					useAgentRules: true,
+					newTaskRequireTodos: false,
+				},
+			},
+		)
+
+		// Should contain AGENTS.local.md content even without base AGENTS.md
+		expect(result).toContain("# Agent Rules Local (AGENTS.local.md):")
+		expect(result).toContain("Local overrides without base file")
+	})
+
+	it("should load AGENTS.md without .local.md when local file does not exist", async () => {
+		// Simulate no .roo/rules-test-mode directory
+		statMock.mockRejectedValueOnce({ code: "ENOENT" })
+
+		// Mock lstat to indicate only AGENTS.md exists (no local override)
+		lstatMock.mockImplementation((filePath: PathLike) => {
+			const pathStr = filePath.toString()
+			if (pathStr.endsWith("AGENTS.md")) {
+				return Promise.resolve({
+					isSymbolicLink: vi.fn().mockReturnValue(false),
+				})
+			}
+			return Promise.reject({ code: "ENOENT" })
+		})
+
+		readFileMock.mockImplementation((filePath: PathLike) => {
+			const pathStr = filePath.toString()
+			if (pathStr.endsWith("AGENTS.md")) {
+				return Promise.resolve("Base rules from AGENTS.md only")
+			}
+			return Promise.reject({ code: "ENOENT" })
+		})
+
+		const result = await addCustomInstructions(
+			"mode instructions",
+			"global instructions",
+			"/fake/path",
+			"test-mode",
+			{
+				settings: {
+					todoListEnabled: true,
+					useAgentRules: true,
+					newTaskRequireTodos: false,
+				},
+			},
+		)
+
+		// Should contain only AGENTS.md content
+		expect(result).toContain("# Agent Rules Standard (AGENTS.md):")
+		expect(result).toContain("Base rules from AGENTS.md only")
+		expect(result).not.toContain("AGENTS.local.md")
+	})
 })

+ 64 - 29
src/core/prompts/sections/custom-instructions.ts

@@ -238,9 +238,48 @@ export async function loadRuleFiles(cwd: string, enableSubfolderRules: boolean =
 	return ""
 }
 
+/**
+ * Read content from an agent rules file (AGENTS.md, AGENT.md, etc.)
+ * Handles symlink resolution.
+ *
+ * @param filePath - Full path to the agent rules file
+ * @returns File content or empty string if file doesn't exist
+ */
+async function readAgentRulesFile(filePath: string): Promise<string> {
+	let resolvedPath = filePath
+
+	// Check if file exists and handle symlinks
+	try {
+		const stats = await fs.lstat(filePath)
+		if (stats.isSymbolicLink()) {
+			// Create a temporary fileInfo array to use with resolveSymLink
+			const fileInfo: Array<{
+				originalPath: string
+				resolvedPath: string
+			}> = []
+
+			// Use the existing resolveSymLink function to handle symlink resolution
+			await resolveSymLink(filePath, fileInfo, 0)
+
+			// Extract the resolved path from fileInfo
+			if (fileInfo.length > 0) {
+				resolvedPath = fileInfo[0].resolvedPath
+			}
+		}
+	} catch (err) {
+		// If lstat fails (file doesn't exist), return empty
+		return ""
+	}
+
+	// Read the content from the resolved path
+	return safeReadFile(resolvedPath)
+}
+
 /**
  * Load AGENTS.md or AGENT.md file from a specific directory
  * Checks for both AGENTS.md (standard) and AGENT.md (alternative) for compatibility
+ * Also loads AGENTS.local.md for personal overrides (not checked in to version control)
+ * AGENTS.local.md can be loaded even if AGENTS.md doesn't exist
  *
  * @param directory - Directory to check for AGENTS.md
  * @param showPath - Whether to include the directory path in the header
@@ -253,50 +292,46 @@ async function loadAgentRulesFileFromDirectory(
 ): Promise<string> {
 	// Try both filenames - AGENTS.md (standard) first, then AGENT.md (alternative)
 	const filenames = ["AGENTS.md", "AGENT.md"]
+	const results: string[] = []
+	const displayPath = cwd ? path.relative(cwd, directory) : directory
 
 	for (const filename of filenames) {
 		try {
 			const agentPath = path.join(directory, filename)
-			let resolvedPath = agentPath
-
-			// Check if file exists and handle symlinks
-			try {
-				const stats = await fs.lstat(agentPath)
-				if (stats.isSymbolicLink()) {
-					// Create a temporary fileInfo array to use with resolveSymLink
-					const fileInfo: Array<{
-						originalPath: string
-						resolvedPath: string
-					}> = []
-
-					// Use the existing resolveSymLink function to handle symlink resolution
-					await resolveSymLink(agentPath, fileInfo, 0)
-
-					// Extract the resolved path from fileInfo
-					if (fileInfo.length > 0) {
-						resolvedPath = fileInfo[0].resolvedPath
-					}
-				}
-			} catch (err) {
-				// If lstat fails (file doesn't exist), try next filename
-				continue
-			}
+			const content = await readAgentRulesFile(agentPath)
 
-			// Read the content from the resolved path
-			const content = await safeReadFile(resolvedPath)
 			if (content) {
 				// Compute relative path for display if cwd is provided
-				const displayPath = cwd ? path.relative(cwd, directory) : directory
 				const header = showPath
 					? `# Agent Rules Standard (${filename}) from ${displayPath}:`
 					: `# Agent Rules Standard (${filename}):`
-				return `${header}\n${content}`
+				results.push(`${header}\n${content}`)
+
+				// Found a standard file, don't check alternative
+				break
 			}
 		} catch (err) {
 			// Silently ignore errors - agent rules files are optional
 		}
 	}
-	return ""
+
+	// Always try to load AGENTS.local.md for personal overrides (even if AGENTS.md doesn't exist)
+	try {
+		const localFilename = "AGENTS.local.md"
+		const localPath = path.join(directory, localFilename)
+		const localContent = await readAgentRulesFile(localPath)
+
+		if (localContent) {
+			const localHeader = showPath
+				? `# Agent Rules Local (${localFilename}) from ${displayPath}:`
+				: `# Agent Rules Local (${localFilename}):`
+			results.push(`${localHeader}\n${localContent}`)
+		}
+	} catch (err) {
+		// Silently ignore errors - local agent rules file is optional
+	}
+
+	return results.join("\n\n")
 }
 
 /**
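
The rewritten loader collects sections into a list and joins them, which is what lets AGENTS.local.md appear with or without a base file. A minimal stand-alone sketch of that ordering (the `FileMap` stand-in replaces `readAgentRulesFile` and the real directory lookup; header strings match the diff above):

```typescript
// Sketch of the section-building order in loadAgentRulesFileFromDirectory.
// `files` stands in for the directory contents readAgentRulesFile would see.
type FileMap = Record<string, string>

function loadAgentRules(files: FileMap): string {
	const results: string[] = []
	// Standard file first: AGENTS.md wins over AGENT.md; only one is used.
	for (const name of ["AGENTS.md", "AGENT.md"]) {
		if (files[name]) {
			results.push(`# Agent Rules Standard (${name}):\n${files[name]}`)
			break
		}
	}
	// The local override is always attempted, even when no base file exists.
	if (files["AGENTS.local.md"]) {
		results.push(`# Agent Rules Local (AGENTS.local.md):\n${files["AGENTS.local.md"]}`)
	}
	return results.join("\n\n")
}
```

This mirrors the three spec cases: base plus local, local only, and base only.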

+ 0 - 3
src/core/prompts/system.ts

@@ -52,7 +52,6 @@ async function generatePrompt(
 	experiments?: Record<string, boolean>,
 	language?: string,
 	rooIgnoreInstructions?: string,
-	partialReadsEnabled?: boolean,
 	settings?: SystemPromptSettings,
 	todoList?: TodoItem[],
 	modelId?: string,
@@ -125,7 +124,6 @@ export const SYSTEM_PROMPT = async (
 	experiments?: Record<string, boolean>,
 	language?: string,
 	rooIgnoreInstructions?: string,
-	partialReadsEnabled?: boolean,
 	settings?: SystemPromptSettings,
 	todoList?: TodoItem[],
 	modelId?: string,
@@ -155,7 +153,6 @@ export const SYSTEM_PROMPT = async (
 		experiments,
 		language,
 		rooIgnoreInstructions,
-		partialReadsEnabled,
 		settings,
 		todoList,
 		modelId,

+ 8 - 8
src/core/prompts/tools/native-tools/__tests__/converters.spec.ts

@@ -80,27 +80,27 @@ describe("converters", () => {
 			const openAITool: OpenAI.Chat.ChatCompletionTool = {
 				type: "function",
 				function: {
-					name: "read_file",
-					description: "Read files",
+					name: "process_data",
+					description: "Process data with filters",
 					parameters: {
 						type: "object",
 						properties: {
-							files: {
+							items: {
 								type: "array",
 								items: {
 									type: "object",
 									properties: {
-										path: { type: "string" },
-										line_ranges: {
+										name: { type: "string" },
+										tags: {
 											type: ["array", "null"],
-											items: { type: "string", pattern: "^[0-9]+-[0-9]+$" },
+											items: { type: "string" },
 										},
 									},
-									required: ["path", "line_ranges"],
+									required: ["name"],
 								},
 							},
 						},
-						required: ["files"],
+						required: ["items"],
 						additionalProperties: false,
 					},
 				},

+ 24 - 145
src/core/prompts/tools/native-tools/__tests__/read_file.spec.ts

@@ -1,5 +1,5 @@
 import type OpenAI from "openai"
-import { createReadFileTool, type ReadFileToolOptions } from "../read_file"
+import { createReadFileTool } from "../read_file"
 
 // Helper type to access function tools
 type FunctionTool = OpenAI.Chat.ChatCompletionTool & { type: "function" }
@@ -8,91 +8,46 @@ type FunctionTool = OpenAI.Chat.ChatCompletionTool & { type: "function" }
 const getFunctionDef = (tool: OpenAI.Chat.ChatCompletionTool) => (tool as FunctionTool).function
 
 describe("createReadFileTool", () => {
-	describe("maxConcurrentFileReads documentation", () => {
-		it("should include default maxConcurrentFileReads limit (5) in description", () => {
+	describe("single-file-per-call documentation", () => {
+		it("should indicate single-file-per-call and suggest parallel tool calls", () => {
 			const tool = createReadFileTool()
 			const description = getFunctionDef(tool).description
 
-			expect(description).toContain("maximum of 5 files")
-			expect(description).toContain("If you need to read more files, use multiple sequential read_file requests")
-		})
-
-		it("should include custom maxConcurrentFileReads limit in description", () => {
-			const tool = createReadFileTool({ maxConcurrentFileReads: 3 })
-			const description = getFunctionDef(tool).description
-
-			expect(description).toContain("maximum of 3 files")
-			expect(description).toContain("within 3-file limit")
-		})
-
-		it("should indicate single file reads only when maxConcurrentFileReads is 1", () => {
-			const tool = createReadFileTool({ maxConcurrentFileReads: 1 })
-			const description = getFunctionDef(tool).description
-
-			expect(description).toContain("Multiple file reads are currently disabled")
-			expect(description).toContain("only read one file at a time")
-			expect(description).not.toContain("Example multiple files")
-		})
-
-		it("should use singular 'Read a file' in base description when maxConcurrentFileReads is 1", () => {
-			const tool = createReadFileTool({ maxConcurrentFileReads: 1 })
-			const description = getFunctionDef(tool).description
-
-			expect(description).toMatch(/^Read a file/)
-			expect(description).not.toContain("Read one or more files")
-		})
-
-		it("should use plural 'Read one or more files' in base description when maxConcurrentFileReads is > 1", () => {
-			const tool = createReadFileTool({ maxConcurrentFileReads: 5 })
-			const description = getFunctionDef(tool).description
-
-			expect(description).toMatch(/^Read one or more files/)
-		})
-
-		it("should not show multiple files example when maxConcurrentFileReads is 1", () => {
-			const tool = createReadFileTool({ maxConcurrentFileReads: 1, partialReadsEnabled: true })
-			const description = getFunctionDef(tool).description
-
-			expect(description).not.toContain("Example multiple files")
-		})
-
-		it("should show multiple files example when maxConcurrentFileReads is > 1", () => {
-			const tool = createReadFileTool({ maxConcurrentFileReads: 5, partialReadsEnabled: true })
-			const description = getFunctionDef(tool).description
-
-			expect(description).toContain("Example multiple files")
+			expect(description).toContain("exactly one file per call")
+			expect(description).toContain("multiple parallel read_file calls")
 		})
 	})
 
-	describe("partialReadsEnabled option", () => {
-		it("should include line_ranges in description when partialReadsEnabled is true", () => {
-			const tool = createReadFileTool({ partialReadsEnabled: true })
+	describe("indentation mode", () => {
+		it("should always include indentation mode in description", () => {
+			const tool = createReadFileTool()
 			const description = getFunctionDef(tool).description
 
-			expect(description).toContain("line_ranges")
-			expect(description).toContain("Example with line ranges")
+			expect(description).toContain("indentation")
 		})
 
-		it("should not include line_ranges in description when partialReadsEnabled is false", () => {
-			const tool = createReadFileTool({ partialReadsEnabled: false })
-			const description = getFunctionDef(tool).description
+		it("should always include indentation parameter in schema", () => {
+			const tool = createReadFileTool()
+			const schema = getFunctionDef(tool).parameters as any
 
-			expect(description).not.toContain("line_ranges")
-			expect(description).not.toContain("Example with line ranges")
+			expect(schema.properties).toHaveProperty("indentation")
 		})
 
-		it("should include line_ranges parameter in schema when partialReadsEnabled is true", () => {
-			const tool = createReadFileTool({ partialReadsEnabled: true })
+		it("should include mode parameter in schema", () => {
+			const tool = createReadFileTool()
 			const schema = getFunctionDef(tool).parameters as any
 
-			expect(schema.properties.files.items.properties).toHaveProperty("line_ranges")
+			expect(schema.properties).toHaveProperty("mode")
+			expect(schema.properties.mode.enum).toContain("slice")
+			expect(schema.properties.mode.enum).toContain("indentation")
 		})
 
-		it("should not include line_ranges parameter in schema when partialReadsEnabled is false", () => {
-			const tool = createReadFileTool({ partialReadsEnabled: false })
+		it("should include offset and limit parameters in schema", () => {
+			const tool = createReadFileTool()
 			const schema = getFunctionDef(tool).parameters as any
 
-			expect(schema.properties.files.items.properties).not.toHaveProperty("line_ranges")
+			expect(schema.properties).toHaveProperty("offset")
+			expect(schema.properties).toHaveProperty("limit")
 		})
 	})
 
@@ -138,75 +93,6 @@ describe("createReadFileTool", () => {
 		})
 	})
 
-	describe("combined options", () => {
-		it("should correctly combine low maxConcurrentFileReads with partialReadsEnabled", () => {
-			const tool = createReadFileTool({
-				maxConcurrentFileReads: 2,
-				partialReadsEnabled: true,
-			})
-			const description = getFunctionDef(tool).description
-
-			expect(description).toContain("maximum of 2 files")
-			expect(description).toContain("line_ranges")
-			expect(description).toContain("within 2-file limit")
-		})
-
-		it("should correctly handle maxConcurrentFileReads of 1 with partialReadsEnabled false", () => {
-			const tool = createReadFileTool({
-				maxConcurrentFileReads: 1,
-				partialReadsEnabled: false,
-			})
-			const description = getFunctionDef(tool).description
-
-			expect(description).toContain("only read one file at a time")
-			expect(description).not.toContain("line_ranges")
-			expect(description).not.toContain("Example multiple files")
-		})
-
-		it("should correctly combine partialReadsEnabled and supportsImages", () => {
-			const tool = createReadFileTool({
-				partialReadsEnabled: true,
-				supportsImages: true,
-			})
-			const description = getFunctionDef(tool).description
-
-			// Should have both line_ranges and image support
-			expect(description).toContain("line_ranges")
-			expect(description).toContain(
-				"Automatically processes and returns image files (PNG, JPG, JPEG, GIF, BMP, SVG, WEBP, ICO, AVIF) for visual analysis",
-			)
-		})
-
-		it("should work with partialReadsEnabled=false and supportsImages=true", () => {
-			const tool = createReadFileTool({
-				partialReadsEnabled: false,
-				supportsImages: true,
-			})
-			const description = getFunctionDef(tool).description
-
-			// Should have image support but no line_ranges
-			expect(description).not.toContain("line_ranges")
-			expect(description).toContain(
-				"Automatically processes and returns image files (PNG, JPG, JPEG, GIF, BMP, SVG, WEBP, ICO, AVIF) for visual analysis",
-			)
-		})
-
-		it("should correctly combine all three options", () => {
-			const tool = createReadFileTool({
-				maxConcurrentFileReads: 3,
-				partialReadsEnabled: true,
-				supportsImages: true,
-			})
-			const description = getFunctionDef(tool).description
-
-			expect(description).toContain("maximum of 3 files")
-			expect(description).toContain("line_ranges")
-			expect(description).toContain(
-				"Automatically processes and returns image files (PNG, JPG, JPEG, GIF, BMP, SVG, WEBP, ICO, AVIF) for visual analysis",
-			)
-		})
-	})
-
 	describe("tool structure", () => {
 		it("should have correct tool name", () => {
 			const tool = createReadFileTool()
@@ -226,18 +112,11 @@ describe("createReadFileTool", () => {
 			expect(getFunctionDef(tool).strict).toBe(true)
 		})
 
-		it("should require files parameter", () => {
+		it("should require path parameter", () => {
 			const tool = createReadFileTool()
 			const schema = getFunctionDef(tool).parameters as any
 
-			expect(schema.required).toContain("files")
-		})
-
-		it("should require path in file objects", () => {
-			const tool = createReadFileTool({ partialReadsEnabled: false })
-			const schema = getFunctionDef(tool).parameters as any
-
-			expect(schema.properties.files.items.required).toContain("path")
+			expect(schema.required).toContain("path")
 		})
 	})
 })

+ 1 - 7
src/core/prompts/tools/native-tools/index.ts

@@ -30,10 +30,6 @@ export type { ReadFileToolOptions } from "./read_file"
  * Options for customizing the native tools array.
  */
 export interface NativeToolsOptions {
-	/** Whether to include line_ranges support in read_file tool (default: true) */
-	partialReadsEnabled?: boolean
-	/** Maximum number of files that can be read in a single read_file request (default: 5) */
-	maxConcurrentFileReads?: number
 	/** Whether the model supports image processing (default: false) */
 	supportsImages?: boolean
 }
@@ -45,11 +41,9 @@ export interface NativeToolsOptions {
  * @returns Array of native tool definitions
  */
 export function getNativeTools(options: NativeToolsOptions = {}): OpenAI.Chat.ChatCompletionTool[] {
-	const { partialReadsEnabled = true, maxConcurrentFileReads = 5, supportsImages = false } = options
+	const { supportsImages = false } = options
 
 	const readFileOptions: ReadFileToolOptions = {
-		partialReadsEnabled,
-		maxConcurrentFileReads,
 		supportsImages,
 	}
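
With `partialReadsEnabled` and `maxConcurrentFileReads` gone, callers of `getNativeTools` only vary on image support. A hypothetical caller showing the narrowed surface (the stub below is a stand-in for the real module, which returns OpenAI tool definitions rather than names):

```typescript
// Hypothetical caller of the narrowed options surface; option names come from
// the diff, the stub body is illustrative only.
interface NativeToolsOptions {
	supportsImages?: boolean
}

function getNativeTools(options: NativeToolsOptions = {}): string[] {
	const { supportsImages = false } = options
	// The real function builds ChatCompletionTool objects; names suffice here.
	return supportsImages ? ["read_file (with image support)"] : ["read_file"]
}

// Before: getNativeTools({ partialReadsEnabled: true, maxConcurrentFileReads: 5, supportsImages: true })
// After:
const tools = getNativeTools({ supportsImages: true })
```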
 

+ 104 - 69
src/core/prompts/tools/native-tools/read_file.ts

@@ -1,5 +1,18 @@
 import type OpenAI from "openai"
 
+// ─── Constants ────────────────────────────────────────────────────────────────
+
+/** Default maximum lines to return per file (Codex-inspired predictable limit) */
+export const DEFAULT_LINE_LIMIT = 2000
+
+/** Maximum characters per line before truncation */
+export const MAX_LINE_LENGTH = 2000
+
+/** Default indentation levels to include above anchor (0 = unlimited) */
+export const DEFAULT_MAX_LEVELS = 0
+
+// ─── Helper Functions ─────────────────────────────────────────────────────────
+
 /**
  * Generates the file support note, optionally including image format support.
  *
@@ -13,86 +26,117 @@ function getReadFileSupportsNote(supportsImages: boolean): string {
 	return `Supports text extraction from PDF and DOCX files, but may not handle other binary files properly.`
 }
 
+// ─── Types ────────────────────────────────────────────────────────────────────
+
 /**
  * Options for creating the read_file tool definition.
  */
 export interface ReadFileToolOptions {
-	/** Whether to include line_ranges parameter (default: true) */
-	partialReadsEnabled?: boolean
-	/** Maximum number of files that can be read in a single request (default: 5) */
-	maxConcurrentFileReads?: number
 	/** Whether the model supports image processing (default: false) */
 	supportsImages?: boolean
 }
 
+// ─── Schema Builder ───────────────────────────────────────────────────────────
+
 /**
- * Creates the read_file tool definition, optionally including line_ranges support
- * based on whether partial reads are enabled.
+ * Creates the read_file tool definition with Codex-inspired modes.
+ *
+ * Two reading modes are supported:
+ *
+ * 1. **Slice Mode** (default): Simple offset/limit reading
+ *    - Reads contiguous lines starting from `offset` (1-based, default: 1)
+ *    - Limited to `limit` lines (default: 2000)
+ *    - Predictable and efficient for agent planning
+ *
+ * 2. **Indentation Mode**: Semantic code block extraction
+ *    - Anchored on a specific line number (1-based)
+ *    - Extracts the block containing that line plus context
+ *    - Respects code structure based on indentation hierarchy
+ *    - Useful for extracting functions, classes, or logical blocks
  *
  * @param options - Configuration options for the tool
  * @returns Native tool definition for read_file
  */
 export function createReadFileTool(options: ReadFileToolOptions = {}): OpenAI.Chat.ChatCompletionTool {
-	const { partialReadsEnabled = true, maxConcurrentFileReads = 5, supportsImages = false } = options
-	const isMultipleReadsEnabled = maxConcurrentFileReads > 1
+	const { supportsImages = false } = options
 
-	// Build description intro with concurrent reads limit message
-	const descriptionIntro = isMultipleReadsEnabled
-		? `Read one or more files and return their contents with line numbers for diffing or discussion. IMPORTANT: You can read a maximum of ${maxConcurrentFileReads} files in a single request. If you need to read more files, use multiple sequential read_file requests. `
-		: "Read a file and return its contents with line numbers for diffing or discussion. IMPORTANT: Multiple file reads are currently disabled. You can only read one file at a time. "
+	// Build description based on capabilities
+	const descriptionIntro =
+		"Read a file and return its contents with line numbers for diffing or discussion. IMPORTANT: This tool reads exactly one file per call. If you need multiple files, issue multiple parallel read_file calls."
 
-	const baseDescription =
-		descriptionIntro +
-		"Structure: { files: [{ path: 'relative/path.ts'" +
-		(partialReadsEnabled ? ", line_ranges: [[1, 50], [100, 150]]" : "") +
-		" }] }. " +
-		"The 'path' is required and relative to workspace. "
-
-	const optionalRangesDescription = partialReadsEnabled
-		? "The 'line_ranges' is optional for reading specific sections. Each range is a [start, end] tuple (1-based inclusive). "
-		: ""
-
-	const examples = partialReadsEnabled
-		? "Example single file: { files: [{ path: 'src/app.ts' }] }. " +
-			"Example with line ranges: { files: [{ path: 'src/app.ts', line_ranges: [[1, 50], [100, 150]] }] }. " +
-			(isMultipleReadsEnabled
-				? `Example multiple files (within ${maxConcurrentFileReads}-file limit): { files: [{ path: 'file1.ts', line_ranges: [[1, 50]] }, { path: 'file2.ts' }] }`
-				: "")
-		: "Example single file: { files: [{ path: 'src/app.ts' }] }. " +
-			(isMultipleReadsEnabled
-				? `Example multiple files (within ${maxConcurrentFileReads}-file limit): { files: [{ path: 'file1.ts' }, { path: 'file2.ts' }] }`
-				: "")
+	const modeDescription =
+		` Supports two modes: 'slice' (default) reads lines sequentially with offset/limit; 'indentation' extracts complete semantic code blocks around an anchor line based on indentation hierarchy.` +
+		` Slice mode is ideal for initial file exploration, understanding overall structure, reading configuration/data files, or when you need a specific line range. Use it when you don't have a target line number.` +
+		` PREFER indentation mode when you have a specific line number from search results, error messages, or definition lookups - it guarantees complete, syntactically valid code blocks without mid-function truncation.` +
+		` IMPORTANT: Indentation mode requires anchor_line to be useful. Without it, only header content (imports) is returned.`
+
+	const limitNote = ` By default, returns up to ${DEFAULT_LINE_LIMIT} lines per file. Lines longer than ${MAX_LINE_LENGTH} characters are truncated.`
 
 	const description =
-		baseDescription + optionalRangesDescription + getReadFileSupportsNote(supportsImages) + " " + examples
+		descriptionIntro +
+		modeDescription +
+		limitNote +
+		" " +
+		getReadFileSupportsNote(supportsImages) +
+		` Example: { path: 'src/app.ts' }.` +
+		` Example (indentation mode): { path: 'src/app.ts', mode: 'indentation', indentation: { anchor_line: 42 } }`
+
+	const indentationProperties: Record<string, unknown> = {
+		anchor_line: {
+			type: "integer",
+			description:
+				"1-based line number to anchor the extraction. REQUIRED for meaningful indentation mode results. The extractor finds the semantic block (function, method, class) containing this line and returns it completely. Without anchor_line, indentation mode defaults to line 1 and returns only imports/header content. Obtain anchor_line from: search results, error stack traces, definition lookups, codebase_search results, or condensed file summaries (e.g., '14--28 | export class UserService' means anchor_line=14).",
+		},
+		max_levels: {
+			type: "integer",
+			description: `Maximum indentation levels to include above the anchor (indentation mode, default: 0 = unlimited). Higher values include more parent context.`,
+		},
+		include_siblings: {
+			type: "boolean",
+			description:
+				"Include sibling blocks at the same indentation level as the anchor block (indentation mode, default: false). Useful for seeing related methods in a class.",
+		},
+		include_header: {
+			type: "boolean",
+			description:
+				"Include file header content (imports, module-level comments) at the top of output (indentation mode, default: true).",
+		},
+		max_lines: {
+			type: "integer",
+			description:
+				"Hard cap on lines returned for indentation mode. Acts as a separate limit from the top-level 'limit' parameter.",
+		},
+	}
 
-	// Build the properties object conditionally
-	const fileProperties: Record<string, any> = {
+	const properties: Record<string, unknown> = {
 		path: {
 			type: "string",
 			description: "Path to the file to read, relative to the workspace",
 		},
-	}
-
-	// Only include line_ranges if partial reads are enabled
-	if (partialReadsEnabled) {
-		fileProperties.line_ranges = {
-			type: ["array", "null"],
+		mode: {
+			type: "string",
+			enum: ["slice", "indentation"],
 			description:
-				"Optional line ranges to read. Each range is a [start, end] tuple with 1-based inclusive line numbers. Use multiple ranges for non-contiguous sections.",
-			items: {
-				type: "array",
-				items: { type: "integer" },
-				minItems: 2,
-				maxItems: 2,
-			},
-		}
+				"Reading mode. 'slice' (default): read lines sequentially with offset/limit - use for general file exploration or when you don't have a target line number (may truncate code mid-function). 'indentation': extract complete semantic code blocks containing anchor_line - PREFERRED when you have a line number because it guarantees complete, valid code blocks. WARNING: Do not use indentation mode without specifying indentation.anchor_line, or you will only get header content.",
+		},
+		offset: {
+			type: "integer",
+			description: "1-based line offset to start reading from (slice mode, default: 1)",
+		},
+		limit: {
+			type: "integer",
+			description: `Maximum number of lines to return (slice mode, default: ${DEFAULT_LINE_LIMIT})`,
+		},
+		indentation: {
+			type: "object",
+			description:
+				"Indentation mode options. Only used when mode='indentation'. You MUST specify anchor_line for useful results - it determines which code block to extract.",
+			properties: indentationProperties,
+			required: [],
+			additionalProperties: false,
+		},
 	}
 
-	// When using strict mode, ALL properties must be in the required array
-	// Optional properties are handled by having type: ["...", "null"]
-	const fileRequiredProperties = partialReadsEnabled ? ["path", "line_ranges"] : ["path"]
-
 	return {
 		type: "function",
 		function: {
@@ -101,24 +145,15 @@ export function createReadFileTool(options: ReadFileToolOptions = {}): OpenAI.Ch
 			strict: true,
 			parameters: {
 				type: "object",
-				properties: {
-					files: {
-						type: "array",
-						description: "List of files to read; request related files together when allowed",
-						items: {
-							type: "object",
-							properties: fileProperties,
-							required: fileRequiredProperties,
-							additionalProperties: false,
-						},
-						minItems: 1,
-					},
-				},
-				required: ["files"],
+				properties,
+				required: ["path"],
 				additionalProperties: false,
 			},
 		},
 	} satisfies OpenAI.Chat.ChatCompletionTool
 }
 
-export const read_file = createReadFileTool({ partialReadsEnabled: false })
+/**
+ * Default read_file tool with all parameters
+ */
+export const read_file = createReadFileTool()
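The new single-file schema above replaces the old `files` array. As a quick illustration, these are hypothetical argument payloads a caller might send for each mode (the file path and line numbers are made up; field names follow the properties in the diff):

```typescript
// Slice mode (the default): sequential read with offset/limit.
const sliceArgs = {
	path: "src/app.ts",
	mode: "slice" as const,
	offset: 1, // 1-based line to start reading from
	limit: 200, // cap on the number of returned lines
}

// Indentation mode: extract the complete semantic block containing anchor_line.
// Per the description above, omitting anchor_line yields only header content.
const indentationArgs = {
	path: "src/app.ts",
	mode: "indentation" as const,
	indentation: { anchor_line: 42 },
}
```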

+ 0 - 1
src/core/prompts/types.ts

@@ -2,7 +2,6 @@
  * Settings passed to system prompt generation functions
  */
 export interface SystemPromptSettings {
-	maxConcurrentFileReads: number
 	todoListEnabled: boolean
 	browserToolEnabled?: boolean
 	useAgentRules: boolean

+ 86 - 0
src/core/task-persistence/__tests__/apiMessages.spec.ts

@@ -0,0 +1,86 @@
+// cd src && npx vitest run core/task-persistence/__tests__/apiMessages.spec.ts
+
+import * as os from "os"
+import * as path from "path"
+import * as fs from "fs/promises"
+
+import { readApiMessages } from "../apiMessages"
+
+let tmpBaseDir: string
+
+beforeEach(async () => {
+	tmpBaseDir = await fs.mkdtemp(path.join(os.tmpdir(), "roo-test-api-"))
+})
+
+describe("apiMessages.readApiMessages", () => {
+	it("returns empty array when api_conversation_history.json contains invalid JSON", async () => {
+		const taskId = "task-corrupt-api"
+		const taskDir = path.join(tmpBaseDir, "tasks", taskId)
+		await fs.mkdir(taskDir, { recursive: true })
+		const filePath = path.join(taskDir, "api_conversation_history.json")
+		await fs.writeFile(filePath, "<<<corrupt data>>>", "utf8")
+
+		const result = await readApiMessages({
+			taskId,
+			globalStoragePath: tmpBaseDir,
+		})
+
+		expect(result).toEqual([])
+	})
+
+	it("returns empty array when claude_messages.json fallback contains invalid JSON", async () => {
+		const taskId = "task-corrupt-fallback"
+		const taskDir = path.join(tmpBaseDir, "tasks", taskId)
+		await fs.mkdir(taskDir, { recursive: true })
+
+		// Only write the old fallback file (claude_messages.json), NOT the new one
+		const oldPath = path.join(taskDir, "claude_messages.json")
+		await fs.writeFile(oldPath, "not json at all {[!", "utf8")
+
+		const result = await readApiMessages({
+			taskId,
+			globalStoragePath: tmpBaseDir,
+		})
+
+		expect(result).toEqual([])
+
+		// The corrupted fallback file should NOT be deleted
+		const stillExists = await fs
+			.access(oldPath)
+			.then(() => true)
+			.catch(() => false)
+		expect(stillExists).toBe(true)
+	})
+
+	it("returns [] when file contains valid JSON that is not an array", async () => {
+		const taskId = "task-non-array-api"
+		const taskDir = path.join(tmpBaseDir, "tasks", taskId)
+		await fs.mkdir(taskDir, { recursive: true })
+		const filePath = path.join(taskDir, "api_conversation_history.json")
+		await fs.writeFile(filePath, JSON.stringify("hello"), "utf8")
+
+		const result = await readApiMessages({
+			taskId,
+			globalStoragePath: tmpBaseDir,
+		})
+
+		expect(result).toEqual([])
+	})
+
+	it("returns [] when fallback file contains valid JSON that is not an array", async () => {
+		const taskId = "task-non-array-fallback"
+		const taskDir = path.join(tmpBaseDir, "tasks", taskId)
+		await fs.mkdir(taskDir, { recursive: true })
+
+		// Only write the old fallback file, NOT the new one
+		const oldPath = path.join(taskDir, "claude_messages.json")
+		await fs.writeFile(oldPath, JSON.stringify({ key: "value" }), "utf8")
+
+		const result = await readApiMessages({
+			taskId,
+			globalStoragePath: tmpBaseDir,
+		})
+
+		expect(result).toEqual([])
+	})
+})

+ 34 - 1
src/core/task-persistence/__tests__/taskMessages.spec.ts

@@ -12,7 +12,7 @@ vi.mock("../../../utils/safeWriteJson", () => ({
 }))
 
 // Import after mocks
-import { saveTaskMessages } from "../taskMessages"
+import { saveTaskMessages, readTaskMessages } from "../taskMessages"
 
 let tmpBaseDir: string
 
@@ -66,3 +66,36 @@ describe("taskMessages.saveTaskMessages", () => {
 		expect(persisted).toEqual(messages)
 	})
 })
+
+describe("taskMessages.readTaskMessages", () => {
+	it("returns empty array when file contains invalid JSON", async () => {
+		const taskId = "task-corrupt-json"
+		// Manually create the task directory and write corrupted JSON
+		const taskDir = path.join(tmpBaseDir, "tasks", taskId)
+		await fs.mkdir(taskDir, { recursive: true })
+		const filePath = path.join(taskDir, "ui_messages.json")
+		await fs.writeFile(filePath, "{not valid json!!!", "utf8")
+
+		const result = await readTaskMessages({
+			taskId,
+			globalStoragePath: tmpBaseDir,
+		})
+
+		expect(result).toEqual([])
+	})
+
+	it("returns [] when file contains valid JSON that is not an array", async () => {
+		const taskId = "task-non-array-json"
+		const taskDir = path.join(tmpBaseDir, "tasks", taskId)
+		await fs.mkdir(taskDir, { recursive: true })
+		const filePath = path.join(taskDir, "ui_messages.json")
+		await fs.writeFile(filePath, JSON.stringify("hello"), "utf8")
+
+		const result = await readTaskMessages({
+			taskId,
+			globalStoragePath: tmpBaseDir,
+		})
+
+		expect(result).toEqual([])
+	})
+})

+ 21 - 9
src/core/task-persistence/apiMessages.ts

@@ -51,17 +51,23 @@ export async function readApiMessages({
 		const fileContent = await fs.readFile(filePath, "utf8")
 		try {
 			const parsedData = JSON.parse(fileContent)
-			if (Array.isArray(parsedData) && parsedData.length === 0) {
+			if (!Array.isArray(parsedData)) {
+				console.warn(
+					`[readApiMessages] Parsed data is not an array (got ${typeof parsedData}), returning empty. TaskId: ${taskId}, Path: ${filePath}`,
+				)
+				return []
+			}
+			if (parsedData.length === 0) {
 				console.error(
 					`[Roo-Debug] readApiMessages: Found API conversation history file, but it's empty (parsed as []). TaskId: ${taskId}, Path: ${filePath}`,
 				)
 			}
 			return parsedData
 		} catch (error) {
-			console.error(
-				`[Roo-Debug] readApiMessages: Error parsing API conversation history file. TaskId: ${taskId}, Path: ${filePath}, Error: ${error}`,
+			console.warn(
+				`[readApiMessages] Error parsing API conversation history file, returning empty. TaskId: ${taskId}, Path: ${filePath}, Error: ${error}`,
 			)
-			throw error
+			return []
 		}
 	} else {
 		const oldPath = path.join(taskDir, "claude_messages.json")
@@ -70,7 +76,13 @@ export async function readApiMessages({
 			const fileContent = await fs.readFile(oldPath, "utf8")
 			try {
 				const parsedData = JSON.parse(fileContent)
-				if (Array.isArray(parsedData) && parsedData.length === 0) {
+				if (!Array.isArray(parsedData)) {
+					console.warn(
+						`[readApiMessages] Parsed OLD data is not an array (got ${typeof parsedData}), returning empty. TaskId: ${taskId}, Path: ${oldPath}`,
+					)
+					return []
+				}
+				if (parsedData.length === 0) {
 					console.error(
 						`[Roo-Debug] readApiMessages: Found OLD API conversation history file (claude_messages.json), but it's empty (parsed as []). TaskId: ${taskId}, Path: ${oldPath}`,
 					)
@@ -78,11 +90,11 @@ export async function readApiMessages({
 				await fs.unlink(oldPath)
 				return parsedData
 			} catch (error) {
-				console.error(
-					`[Roo-Debug] readApiMessages: Error parsing OLD API conversation history file (claude_messages.json). TaskId: ${taskId}, Path: ${oldPath}, Error: ${error}`,
+				console.warn(
+					`[readApiMessages] Error parsing OLD API conversation history file (claude_messages.json), returning empty. TaskId: ${taskId}, Path: ${oldPath}, Error: ${error}`,
 				)
-				// DO NOT unlink oldPath if parsing failed, throw error instead.
-				throw error
+				// DO NOT unlink oldPath if parsing failed.
+				return []
 			}
 		}
 	}
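The defensive parsing added above (corrupt JSON and non-array payloads both degrade to an empty history instead of throwing) can be sketched as a standalone helper. This is a minimal illustration of the pattern, not the actual `readApiMessages` implementation:

```typescript
// Invalid JSON or a valid-but-non-array payload both return [] with a warning,
// mirroring the behavior of the patched readApiMessages/readTaskMessages.
function parseMessageArray(fileContent: string): unknown[] {
	try {
		const parsed = JSON.parse(fileContent)
		if (!Array.isArray(parsed)) {
			console.warn(`Parsed data is not an array (got ${typeof parsed}), returning empty`)
			return []
		}
		return parsed
	} catch (error) {
		console.warn(`Error parsing history file, returning empty: ${error}`)
		return []
	}
}
```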

+ 15 - 1
src/core/task-persistence/taskMessages.ts

@@ -23,7 +23,21 @@ export async function readTaskMessages({
 	const fileExists = await fileExistsAtPath(filePath)
 
 	if (fileExists) {
-		return JSON.parse(await fs.readFile(filePath, "utf8"))
+		try {
+			const parsedData = JSON.parse(await fs.readFile(filePath, "utf8"))
+			if (!Array.isArray(parsedData)) {
+				console.warn(
+					`[readTaskMessages] Parsed data is not an array (got ${typeof parsedData}), returning empty. TaskId: ${taskId}, Path: ${filePath}`,
+				)
+				return []
+			}
+			return parsedData
+		} catch (error) {
+			console.warn(
+				`[readTaskMessages] Failed to parse ${filePath} for task ${taskId}, returning empty: ${error instanceof Error ? error.message : String(error)}`,
+			)
+			return []
+		}
 	}
 
 	return []

+ 73 - 25
src/core/task/Task.ts

@@ -394,6 +394,7 @@ export class Task extends EventEmitter<TaskEvents> implements TaskLike {
 	didAlreadyUseTool = false
 	didToolFailInCurrentTurn = false
 	didCompleteReadingStream = false
+	private _started = false
 	// No streaming parser is required.
 	assistantMessageParser?: undefined
 	private providerProfileChangeListener?: (config: { name: string; provider?: string }) => void
@@ -554,6 +555,7 @@ export class Task extends EventEmitter<TaskEvents> implements TaskLike {
 
 		this.messageQueueStateChangedHandler = () => {
 			this.emit(RooCodeEventName.TaskUserMessage, this.taskId)
+			this.emit(RooCodeEventName.QueuedMessagesUpdated, this.taskId, this.messageQueueService.messages)
 			this.providerRef.deref()?.postStateToWebviewWithoutTaskHistory()
 		}
 
@@ -597,6 +599,7 @@ export class Task extends EventEmitter<TaskEvents> implements TaskLike {
 		onCreated?.(this)
 
 		if (startTask) {
+			this._started = true
 			if (task || images) {
 				this.startTask(task, images)
 			} else if (historyItem) {
@@ -1071,10 +1074,10 @@ export class Task extends EventEmitter<TaskEvents> implements TaskLike {
 	 * tools execute (added in recursivelyMakeClineRequests after streaming completes).
 	 * So we usually only need to flush the pending user message with tool_results.
 	 */
-	public async flushPendingToolResultsToHistory(): Promise<void> {
+	public async flushPendingToolResultsToHistory(): Promise<boolean> {
 		// Only flush if there's actually pending content to save
 		if (this.userMessageContent.length === 0) {
-			return
+			return true
 		}
 
 		// CRITICAL: Wait for the assistant message to be saved to API history first.
@@ -1104,7 +1107,7 @@ export class Task extends EventEmitter<TaskEvents> implements TaskLike {
 
 		// If task was aborted while waiting, don't flush
 		if (this.abort) {
-			return
+			return false
 		}
 
 		// Save the user message with tool_result blocks
@@ -1121,25 +1124,58 @@ export class Task extends EventEmitter<TaskEvents> implements TaskLike {
 		const userMessageWithTs = { ...validatedMessage, ts: Date.now() }
 		this.apiConversationHistory.push(userMessageWithTs as ApiMessage)
 
-		await this.saveApiConversationHistory()
+		const saved = await this.saveApiConversationHistory()
+
+		if (saved) {
+			// Clear the pending content since it's now saved
+			this.userMessageContent = []
+		} else {
+			console.warn(
+				`[Task#${this.taskId}] flushPendingToolResultsToHistory: save failed, retaining pending tool results in memory`,
+			)
+		}
 
-		// Clear the pending content since it's now saved
-		this.userMessageContent = []
+		return saved
 	}
 
-	private async saveApiConversationHistory() {
+	private async saveApiConversationHistory(): Promise<boolean> {
 		try {
 			await saveApiMessages({
-				messages: this.apiConversationHistory,
+				messages: structuredClone(this.apiConversationHistory),
 				taskId: this.taskId,
 				globalStoragePath: this.globalStoragePath,
 			})
+			return true
 		} catch (error) {
-			// In the off chance this fails, we don't want to stop the task.
 			console.error("Failed to save API conversation history:", error)
+			return false
 		}
 	}
 
+	/**
+	 * Public wrapper to retry saving the API conversation history.
+	 * Uses exponential backoff: up to 3 attempts with delays of 100 ms, 500 ms, 1500 ms.
+	 * Used by delegation flow when flushPendingToolResultsToHistory reports failure.
+	 */
+	public async retrySaveApiConversationHistory(): Promise<boolean> {
+		const delays = [100, 500, 1500]
+
+		for (let attempt = 0; attempt < delays.length; attempt++) {
+			await new Promise<void>((resolve) => setTimeout(resolve, delays[attempt]))
+			console.warn(
+				`[Task#${this.taskId}] retrySaveApiConversationHistory: retry attempt ${attempt + 1}/${delays.length}`,
+			)
+
+			const success = await this.saveApiConversationHistory()
+
+			if (success) {
+				return true
+			}
+		}
+
+		return false
+	}
+
 	// Cline Messages
 
 	private async getSavedClineMessages(): Promise<ClineMessage[]> {
@@ -1201,10 +1237,10 @@ export class Task extends EventEmitter<TaskEvents> implements TaskLike {
 		}
 	}
 
-	private async saveClineMessages() {
+	private async saveClineMessages(): Promise<boolean> {
 		try {
 			await saveTaskMessages({
-				messages: this.clineMessages,
+				messages: structuredClone(this.clineMessages),
 				taskId: this.taskId,
 				globalStoragePath: this.globalStoragePath,
 			})
@@ -1234,8 +1270,10 @@ export class Task extends EventEmitter<TaskEvents> implements TaskLike {
 			this.debouncedEmitTokenUsage(tokenUsage, this.toolUsage)
 
 			await this.providerRef.deref()?.updateTaskHistory(historyItem)
+			return true
 		} catch (error) {
 			console.error("Failed to save Roo messages:", error)
+			return false
 		}
 	}
 
@@ -1654,8 +1692,6 @@ export class Task extends EventEmitter<TaskEvents> implements TaskLike {
 				customModes: state?.customModes,
 				experiments: state?.experiments,
 				apiConfiguration,
-				maxReadFileLine: state?.maxReadFileLine ?? -1,
-				maxConcurrentFileReads: state?.maxConcurrentFileReads ?? 5,
 				browserToolEnabled: state?.browserToolEnabled ?? true,
 				disabledTools: state?.disabledTools,
 				modelInfo,
@@ -1901,6 +1937,30 @@ export class Task extends EventEmitter<TaskEvents> implements TaskLike {
 		}
 	}
 
+	/**
+	 * Manually start a **new** task when it was created with `startTask: false`.
+	 *
+	 * This fires `startTask` as a background async operation for the
+	 * `task/images` code-path only.  It does **not** handle the
+	 * `historyItem` resume path (use the constructor with `startTask: true`
+	 * for that).  The primary use-case is in the delegation flow where the
+	 * parent's metadata must be persisted to globalState **before** the
+	 * child task begins writing its own history (avoiding a read-modify-write
+	 * race on globalState).
+	 */
+	public start(): void {
+		if (this._started) {
+			return
+		}
+		this._started = true
+
+		const { task, images } = this.metadata
+
+		if (task || images) {
+			this.startTask(task ?? undefined, images ?? undefined)
+		}
+	}
+
 	private async startTask(task?: string, images?: string[]): Promise<void> {
 		try {
 			if (this.enableBridge) {
@@ -2589,7 +2649,6 @@ export class Task extends EventEmitter<TaskEvents> implements TaskLike {
 				showRooIgnoredFiles = false,
 				includeDiagnosticMessages = true,
 				maxDiagnosticMessages = 50,
-				maxReadFileLine = -1,
 			} = (await this.providerRef.deref()?.getState()) ?? {}
 
 			const { content: parsedUserContent, mode: slashCommandMode } = await processUserContentMentions({
@@ -2601,7 +2660,6 @@ export class Task extends EventEmitter<TaskEvents> implements TaskLike {
 				showRooIgnoredFiles,
 				includeDiagnosticMessages,
 				maxDiagnosticMessages,
-				maxReadFileLine,
 			})
 
 			// Switch mode if specified in a slash command's frontmatter
@@ -3761,8 +3819,6 @@ export class Task extends EventEmitter<TaskEvents> implements TaskLike {
 			experiments,
 			browserToolEnabled,
 			language,
-			maxConcurrentFileReads,
-			maxReadFileLine,
 			apiConfiguration,
 			enableSubfolderRules,
 		} = state ?? {}
@@ -3799,9 +3855,7 @@ export class Task extends EventEmitter<TaskEvents> implements TaskLike {
 				experiments,
 				language,
 				rooIgnoreInstructions,
-				maxReadFileLine !== -1,
 				{
-					maxConcurrentFileReads: maxConcurrentFileReads ?? 5,
 					todoListEnabled: apiConfiguration?.todoListEnabled ?? true,
 					browserToolEnabled: browserToolEnabled ?? true,
 					useAgentRules:
@@ -3864,8 +3918,6 @@ export class Task extends EventEmitter<TaskEvents> implements TaskLike {
 				customModes: state?.customModes,
 				experiments: state?.experiments,
 				apiConfiguration,
-				maxReadFileLine: state?.maxReadFileLine ?? -1,
-				maxConcurrentFileReads: state?.maxConcurrentFileReads ?? 5,
 				browserToolEnabled: state?.browserToolEnabled ?? true,
 				disabledTools: state?.disabledTools,
 				modelInfo,
@@ -4081,8 +4133,6 @@ export class Task extends EventEmitter<TaskEvents> implements TaskLike {
 						customModes: state?.customModes,
 						experiments: state?.experiments,
 						apiConfiguration,
-						maxReadFileLine: state?.maxReadFileLine ?? -1,
-						maxConcurrentFileReads: state?.maxConcurrentFileReads ?? 5,
 						browserToolEnabled: state?.browserToolEnabled ?? true,
 						disabledTools: state?.disabledTools,
 						modelInfo,
@@ -4248,8 +4298,6 @@ export class Task extends EventEmitter<TaskEvents> implements TaskLike {
 				customModes: state?.customModes,
 				experiments: state?.experiments,
 				apiConfiguration,
-				maxReadFileLine: state?.maxReadFileLine ?? -1,
-				maxConcurrentFileReads: state?.maxConcurrentFileReads ?? 5,
 				browserToolEnabled: state?.browserToolEnabled ?? true,
 				disabledTools: state?.disabledTools,
 				modelInfo,
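The `retrySaveApiConversationHistory` helper added in this file follows a simple fixed-schedule backoff: up to 3 attempts with delays of 100 ms, 500 ms, and 1500 ms. A self-contained sketch of that pattern (with the save operation abstracted as a callback, purely for illustration):

```typescript
// Retry a boolean-returning save up to delays.length times, waiting the
// scheduled delay before each attempt, as retrySaveApiConversationHistory does.
async function retryWithBackoff(save: () => Promise<boolean>): Promise<boolean> {
	const delays = [100, 500, 1500]

	for (const ms of delays) {
		await new Promise<void>((resolve) => setTimeout(resolve, ms))

		if (await save()) {
			return true
		}
	}

	return false
}
```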

+ 471 - 0
src/core/task/__tests__/Task.persistence.spec.ts

@@ -0,0 +1,471 @@
+// cd src && npx vitest run core/task/__tests__/Task.persistence.spec.ts
+
+import * as os from "os"
+import * as path from "path"
+import * as vscode from "vscode"
+
+import type { GlobalState, ProviderSettings } from "@roo-code/types"
+import { TelemetryService } from "@roo-code/telemetry"
+
+import { Task } from "../Task"
+import { ClineProvider } from "../../webview/ClineProvider"
+import { ContextProxy } from "../../config/ContextProxy"
+
+// ─── Hoisted mocks ───────────────────────────────────────────────────────────
+
+const {
+	mockSaveApiMessages,
+	mockSaveTaskMessages,
+	mockReadApiMessages,
+	mockReadTaskMessages,
+	mockTaskMetadata,
+	mockPWaitFor,
+} = vi.hoisted(() => ({
+	mockSaveApiMessages: vi.fn().mockResolvedValue(undefined),
+	mockSaveTaskMessages: vi.fn().mockResolvedValue(undefined),
+	mockReadApiMessages: vi.fn().mockResolvedValue([]),
+	mockReadTaskMessages: vi.fn().mockResolvedValue([]),
+	mockTaskMetadata: vi.fn().mockResolvedValue({
+		historyItem: { id: "test-id", ts: Date.now(), task: "test" },
+		tokenUsage: {
+			totalTokensIn: 0,
+			totalTokensOut: 0,
+			totalCacheWrites: 0,
+			totalCacheReads: 0,
+			totalCost: 0,
+			contextTokens: 0,
+		},
+	}),
+	mockPWaitFor: vi.fn().mockResolvedValue(undefined),
+}))
+
+// ─── Module mocks ────────────────────────────────────────────────────────────
+
+vi.mock("delay", () => ({
+	__esModule: true,
+	default: vi.fn().mockResolvedValue(undefined),
+}))
+
+vi.mock("execa", () => ({
+	execa: vi.fn(),
+}))
+
+vi.mock("fs/promises", async (importOriginal) => {
+	const actual = (await importOriginal()) as Record<string, any>
+	return {
+		...actual,
+		mkdir: vi.fn().mockResolvedValue(undefined),
+		writeFile: vi.fn().mockResolvedValue(undefined),
+		readFile: vi.fn().mockResolvedValue("[]"),
+		unlink: vi.fn().mockResolvedValue(undefined),
+		rmdir: vi.fn().mockResolvedValue(undefined),
+		default: {
+			mkdir: vi.fn().mockResolvedValue(undefined),
+			writeFile: vi.fn().mockResolvedValue(undefined),
+			readFile: vi.fn().mockResolvedValue("[]"),
+			unlink: vi.fn().mockResolvedValue(undefined),
+			rmdir: vi.fn().mockResolvedValue(undefined),
+		},
+	}
+})
+
+vi.mock("p-wait-for", () => ({
+	default: mockPWaitFor,
+}))
+
+vi.mock("../../task-persistence", () => ({
+	saveApiMessages: mockSaveApiMessages,
+	saveTaskMessages: mockSaveTaskMessages,
+	readApiMessages: mockReadApiMessages,
+	readTaskMessages: mockReadTaskMessages,
+	taskMetadata: mockTaskMetadata,
+}))
+
+vi.mock("vscode", () => {
+	const mockDisposable = { dispose: vi.fn() }
+	const mockEventEmitter = { event: vi.fn(), fire: vi.fn() }
+	const mockTextDocument = { uri: { fsPath: "/mock/workspace/path/file.ts" } }
+	const mockTextEditor = { document: mockTextDocument }
+	const mockTab = { input: { uri: { fsPath: "/mock/workspace/path/file.ts" } } }
+	const mockTabGroup = { tabs: [mockTab] }
+
+	return {
+		TabInputTextDiff: vi.fn(),
+		CodeActionKind: {
+			QuickFix: { value: "quickfix" },
+			RefactorRewrite: { value: "refactor.rewrite" },
+		},
+		window: {
+			createTextEditorDecorationType: vi.fn().mockReturnValue({ dispose: vi.fn() }),
+			visibleTextEditors: [mockTextEditor],
+			tabGroups: {
+				all: [mockTabGroup],
+				close: vi.fn(),
+				onDidChangeTabs: vi.fn(() => ({ dispose: vi.fn() })),
+			},
+			showErrorMessage: vi.fn(),
+		},
+		workspace: {
+			workspaceFolders: [
+				{
+					uri: { fsPath: "/mock/workspace/path" },
+					name: "mock-workspace",
+					index: 0,
+				},
+			],
+			createFileSystemWatcher: vi.fn(() => ({
+				onDidCreate: vi.fn(() => mockDisposable),
+				onDidDelete: vi.fn(() => mockDisposable),
+				onDidChange: vi.fn(() => mockDisposable),
+				dispose: vi.fn(),
+			})),
+			fs: {
+				stat: vi.fn().mockResolvedValue({ type: 1 }),
+			},
+			onDidSaveTextDocument: vi.fn(() => mockDisposable),
+			getConfiguration: vi.fn(() => ({ get: (_key: string, defaultValue: unknown) => defaultValue })),
+		},
+		env: {
+			uriScheme: "vscode",
+			language: "en",
+		},
+		EventEmitter: vi.fn().mockImplementation(() => mockEventEmitter),
+		Disposable: {
+			from: vi.fn(),
+		},
+		TabInputText: vi.fn(),
+	}
+})
+
+vi.mock("../../mentions", () => ({
+	parseMentions: vi.fn().mockImplementation((text) => {
+		return Promise.resolve({ text: `processed: ${text}`, mode: undefined, contentBlocks: [] })
+	}),
+	openMention: vi.fn(),
+	getLatestTerminalOutput: vi.fn(),
+}))
+
+vi.mock("../../../integrations/misc/extract-text", () => ({
+	extractTextFromFile: vi.fn().mockResolvedValue("Mock file content"),
+}))
+
+vi.mock("../../environment/getEnvironmentDetails", () => ({
+	getEnvironmentDetails: vi.fn().mockResolvedValue(""),
+}))
+
+vi.mock("../../ignore/RooIgnoreController")
+
+vi.mock("../../condense", async (importOriginal) => {
+	const actual = (await importOriginal()) as Record<string, unknown>
+	return {
+		...actual,
+		summarizeConversation: vi.fn().mockResolvedValue({
+			messages: [{ role: "user", content: [{ type: "text", text: "continued" }], ts: Date.now() }],
+			summary: "summary",
+			cost: 0,
+			newContextTokens: 1,
+		}),
+	}
+})
+
+vi.mock("../../../utils/storage", () => ({
+	getTaskDirectoryPath: vi
+		.fn()
+		.mockImplementation((globalStoragePath, taskId) => Promise.resolve(`${globalStoragePath}/tasks/${taskId}`)),
+	getSettingsDirectoryPath: vi
+		.fn()
+		.mockImplementation((globalStoragePath) => Promise.resolve(`${globalStoragePath}/settings`)),
+}))
+
+vi.mock("../../../utils/fs", () => ({
+	fileExistsAtPath: vi.fn().mockReturnValue(false),
+}))
+
+// ─── Test suite ──────────────────────────────────────────────────────────────
+
+describe("Task persistence", () => {
+	let mockProvider: ClineProvider & Record<string, any>
+	let mockApiConfig: ProviderSettings
+	let mockOutputChannel: vscode.OutputChannel
+	let mockExtensionContext: vscode.ExtensionContext
+
+	beforeEach(() => {
+		vi.clearAllMocks()
+
+		if (!TelemetryService.hasInstance()) {
+			TelemetryService.createInstance([])
+		}
+
+		const storageUri = { fsPath: path.join(os.tmpdir(), "test-storage") }
+
+		mockExtensionContext = {
+			globalState: {
+				get: vi.fn().mockImplementation((_key: keyof GlobalState) => undefined),
+				update: vi.fn().mockImplementation((_key, _value) => Promise.resolve()),
+				keys: vi.fn().mockReturnValue([]),
+			},
+			globalStorageUri: storageUri,
+			workspaceState: {
+				get: vi.fn().mockImplementation((_key) => undefined),
+				update: vi.fn().mockImplementation((_key, _value) => Promise.resolve()),
+				keys: vi.fn().mockReturnValue([]),
+			},
+			secrets: {
+				get: vi.fn().mockImplementation((_key) => Promise.resolve(undefined)),
+				store: vi.fn().mockImplementation((_key, _value) => Promise.resolve()),
+				delete: vi.fn().mockImplementation((_key) => Promise.resolve()),
+			},
+			extensionUri: { fsPath: "/mock/extension/path" },
+			extension: { packageJSON: { version: "1.0.0" } },
+		} as unknown as vscode.ExtensionContext
+
+		mockOutputChannel = {
+			appendLine: vi.fn(),
+			append: vi.fn(),
+			clear: vi.fn(),
+			show: vi.fn(),
+			hide: vi.fn(),
+			dispose: vi.fn(),
+		} as unknown as vscode.OutputChannel
+
+		mockProvider = new ClineProvider(
+			mockExtensionContext,
+			mockOutputChannel,
+			"sidebar",
+			new ContextProxy(mockExtensionContext),
+		) as ClineProvider & Record<string, any>
+
+		mockApiConfig = {
+			apiProvider: "anthropic",
+			apiModelId: "claude-3-5-sonnet-20241022",
+			apiKey: "test-api-key",
+		}
+
+		mockProvider.postMessageToWebview = vi.fn().mockResolvedValue(undefined)
+		mockProvider.postStateToWebview = vi.fn().mockResolvedValue(undefined)
+		mockProvider.postStateToWebviewWithoutTaskHistory = vi.fn().mockResolvedValue(undefined)
+		mockProvider.updateTaskHistory = vi.fn().mockResolvedValue(undefined)
+	})
+
+	// ── saveApiConversationHistory (via retrySaveApiConversationHistory) ──
+
+	describe("saveApiConversationHistory", () => {
+		it("returns true on success", async () => {
+			mockSaveApiMessages.mockResolvedValueOnce(undefined)
+
+			const task = new Task({
+				provider: mockProvider,
+				apiConfiguration: mockApiConfig,
+				task: "test task",
+				startTask: false,
+			})
+
+			task.apiConversationHistory.push({
+				role: "user",
+				content: [{ type: "text", text: "hello" }],
+			})
+
+			const result = await task.retrySaveApiConversationHistory()
+			expect(result).toBe(true)
+		})
+
+		it("returns false on failure", async () => {
+			vi.useFakeTimers()
+
+			// All 3 retry attempts must fail for retrySaveApiConversationHistory to return false
+			mockSaveApiMessages
+				.mockRejectedValueOnce(new Error("fail 1"))
+				.mockRejectedValueOnce(new Error("fail 2"))
+				.mockRejectedValueOnce(new Error("fail 3"))
+
+			const task = new Task({
+				provider: mockProvider,
+				apiConfiguration: mockApiConfig,
+				task: "test task",
+				startTask: false,
+			})
+
+			const promise = task.retrySaveApiConversationHistory()
+			await vi.runAllTimersAsync()
+			const result = await promise
+
+			expect(result).toBe(false)
+			expect(mockSaveApiMessages).toHaveBeenCalledTimes(3)
+
+			vi.useRealTimers()
+		})
+
+		it("succeeds on 2nd retry attempt", async () => {
+			vi.useFakeTimers()
+
+			mockSaveApiMessages.mockRejectedValueOnce(new Error("fail 1")).mockResolvedValueOnce(undefined) // succeeds on 2nd try
+
+			const task = new Task({
+				provider: mockProvider,
+				apiConfiguration: mockApiConfig,
+				task: "test task",
+				startTask: false,
+			})
+
+			const promise = task.retrySaveApiConversationHistory()
+			await vi.runAllTimersAsync()
+			const result = await promise
+
+			expect(result).toBe(true)
+			expect(mockSaveApiMessages).toHaveBeenCalledTimes(2)
+
+			vi.useRealTimers()
+		})
+
+		it("snapshots the array before passing to saveApiMessages", async () => {
+			mockSaveApiMessages.mockResolvedValueOnce(undefined)
+
+			const task = new Task({
+				provider: mockProvider,
+				apiConfiguration: mockApiConfig,
+				task: "test task",
+				startTask: false,
+			})
+
+			const originalMsg = {
+				role: "user" as const,
+				content: [{ type: "text" as const, text: "snapshot test" }],
+			}
+			task.apiConversationHistory.push(originalMsg)
+
+			await task.retrySaveApiConversationHistory()
+
+			expect(mockSaveApiMessages).toHaveBeenCalledTimes(1)
+
+			const callArgs = mockSaveApiMessages.mock.calls[0][0]
+			// The messages passed should be a COPY, not the live reference
+			expect(callArgs.messages).not.toBe(task.apiConversationHistory)
+			// But the content should be the same
+			expect(callArgs.messages).toEqual(task.apiConversationHistory)
+		})
+	})
+
+	// ── saveClineMessages ────────────────────────────────────────────────
+
+	describe("saveClineMessages", () => {
+		it("returns true on success", async () => {
+			mockSaveTaskMessages.mockResolvedValueOnce(undefined)
+
+			const task = new Task({
+				provider: mockProvider,
+				apiConfiguration: mockApiConfig,
+				task: "test task",
+				startTask: false,
+			})
+
+			const result = await (task as Record<string, any>).saveClineMessages()
+			expect(result).toBe(true)
+		})
+
+		it("returns false on failure", async () => {
+			mockSaveTaskMessages.mockRejectedValueOnce(new Error("write error"))
+
+			const task = new Task({
+				provider: mockProvider,
+				apiConfiguration: mockApiConfig,
+				task: "test task",
+				startTask: false,
+			})
+
+			const result = await (task as Record<string, any>).saveClineMessages()
+			expect(result).toBe(false)
+		})
+
+		it("snapshots the array before passing to saveTaskMessages", async () => {
+			mockSaveTaskMessages.mockResolvedValueOnce(undefined)
+
+			const task = new Task({
+				provider: mockProvider,
+				apiConfiguration: mockApiConfig,
+				task: "test task",
+				startTask: false,
+			})
+
+			task.clineMessages.push({
+				type: "say",
+				say: "text",
+				text: "snapshot test",
+				ts: Date.now(),
+			})
+
+			await (task as Record<string, any>).saveClineMessages()
+
+			expect(mockSaveTaskMessages).toHaveBeenCalledTimes(1)
+
+			const callArgs = mockSaveTaskMessages.mock.calls[0][0]
+			// The messages passed should be a COPY, not the live reference
+			expect(callArgs.messages).not.toBe(task.clineMessages)
+			// But the content should be the same
+			expect(callArgs.messages).toEqual(task.clineMessages)
+		})
+	})
+
+	// ── flushPendingToolResultsToHistory — save failure/success ───────────
+
+	describe("flushPendingToolResultsToHistory persistence", () => {
+		it("retains userMessageContent on save failure", async () => {
+			mockSaveApiMessages.mockRejectedValueOnce(new Error("disk full"))
+
+			const task = new Task({
+				provider: mockProvider,
+				apiConfiguration: mockApiConfig,
+				task: "test task",
+				startTask: false,
+			})
+
+			// Skip waiting for assistant message
+			task.assistantMessageSavedToHistory = true
+
+			task.userMessageContent = [
+				{
+					type: "tool_result",
+					tool_use_id: "tool-fail",
+					content: "Result that should be retained",
+				},
+			]
+
+			const saved = await task.flushPendingToolResultsToHistory()
+
+			expect(saved).toBe(false)
+			// userMessageContent should NOT be cleared on failure
+			expect(task.userMessageContent.length).toBeGreaterThan(0)
+			expect(task.userMessageContent[0]).toMatchObject({
+				type: "tool_result",
+				tool_use_id: "tool-fail",
+			})
+		})
+
+		it("clears userMessageContent on save success", async () => {
+			mockSaveApiMessages.mockResolvedValueOnce(undefined)
+
+			const task = new Task({
+				provider: mockProvider,
+				apiConfiguration: mockApiConfig,
+				task: "test task",
+				startTask: false,
+			})
+
+			// Skip waiting for assistant message
+			task.assistantMessageSavedToHistory = true
+
+			task.userMessageContent = [
+				{
+					type: "tool_result",
+					tool_use_id: "tool-ok",
+					content: "Result that should be cleared",
+				},
+			]
+
+			const saved = await task.flushPendingToolResultsToHistory()
+
+			expect(saved).toBe(true)
+			// userMessageContent should be cleared on success
+			expect(task.userMessageContent).toEqual([])
+		})
+	})
+})
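The "snapshots the array" tests above verify the `structuredClone` calls introduced in `Task.ts`. The motivation, sketched under the assumption that the save is asynchronous: cloning freezes the payload at call time, so mutations to the live history array cannot race with an in-flight write.

```typescript
// A deep clone taken before the async save is unaffected by later pushes
// to the live array; passing the live reference would not have this guarantee.
const live: Array<{ text: string }> = [{ text: "first" }]
const snapshot = structuredClone(live)

// Simulate the task appending while the write is still in flight.
live.push({ text: "mutated during save" })
```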

+ 44 - 1
src/core/task/__tests__/Task.spec.ts

@@ -140,7 +140,7 @@ vi.mock("vscode", () => {
 
 vi.mock("../../mentions", () => ({
 	parseMentions: vi.fn().mockImplementation((text) => {
-		return Promise.resolve({ text: `processed: ${text}`, mode: undefined })
+		return Promise.resolve({ text: `processed: ${text}`, mode: undefined, contentBlocks: [] })
 	}),
 	openMention: vi.fn(),
 	getLatestTerminalOutput: vi.fn(),
@@ -1820,6 +1820,49 @@ describe("Cline", () => {
 			})
 		})
 	})
+
+	describe("start()", () => {
+		it("should only call startTask once when start() is called multiple times", () => {
+			const task = new Task({
+				provider: mockProvider,
+				apiConfiguration: mockApiConfig,
+				task: "test task",
+				startTask: false,
+			})
+
+			// Manually trigger start
+			const startTaskSpy = vi.spyOn(task as any, "startTask").mockImplementation(async () => {})
+			task.start()
+
+			expect(startTaskSpy).toHaveBeenCalledTimes(1)
+
+			// Calling start() again should be a no-op
+			task.start()
+			expect(startTaskSpy).toHaveBeenCalledTimes(1)
+		})
+
+		it("should not call startTask if already started via constructor", () => {
+			// Create a task that starts immediately (startTask defaults to true)
+			// but mock startTask to prevent actual execution
+			const startTaskSpy = vi.spyOn(Task.prototype as any, "startTask").mockImplementation(async () => {})
+
+			const task = new Task({
+				provider: mockProvider,
+				apiConfiguration: mockApiConfig,
+				task: "test task",
+				startTask: true,
+			})
+
+			// startTask was called by the constructor
+			expect(startTaskSpy).toHaveBeenCalledTimes(1)
+
+			// Calling start() should be a no-op since _started is already true
+			task.start()
+			expect(startTaskSpy).toHaveBeenCalledTimes(1)
+
+			startTaskSpy.mockRestore()
+		})
+	})
 })
 
 describe("Queued message processing after condense", () => {

+ 5 - 1
src/core/task/__tests__/flushPendingToolResultsToHistory.spec.ts

@@ -21,6 +21,10 @@ vi.mock("execa", () => ({
 	execa: vi.fn(),
 }))
 
+vi.mock("../../../utils/safeWriteJson", () => ({
+	safeWriteJson: vi.fn().mockResolvedValue(undefined),
+}))
+
 vi.mock("fs/promises", async (importOriginal) => {
 	const actual = (await importOriginal()) as Record<string, any>
 	const mockFunctions = {
@@ -106,7 +110,7 @@ vi.mock("vscode", () => {
 
 vi.mock("../../mentions", () => ({
 	parseMentions: vi.fn().mockImplementation((text) => {
-		return Promise.resolve(`processed: ${text}`)
+		return Promise.resolve({ text: `processed: ${text}`, mode: undefined, contentBlocks: [] })
 	}),
 	openMention: vi.fn(),
 	getLatestTerminalOutput: vi.fn(),

+ 1 - 1
src/core/task/__tests__/grace-retry-errors.spec.ts

@@ -111,7 +111,7 @@ vi.mock("vscode", () => {
 
 vi.mock("../../mentions", () => ({
 	parseMentions: vi.fn().mockImplementation((text) => {
-		return Promise.resolve(`processed: ${text}`)
+		return Promise.resolve({ text: `processed: ${text}`, mode: undefined, contentBlocks: [] })
 	}),
 	openMention: vi.fn(),
 	getLatestTerminalOutput: vi.fn(),

+ 1 - 1
src/core/task/__tests__/grounding-sources.test.ts

@@ -112,7 +112,7 @@ vi.mock("fs/promises", () => ({
 
 // Mock mentions
 vi.mock("../../mentions", () => ({
-	parseMentions: vi.fn().mockImplementation((text) => Promise.resolve(text)),
+	parseMentions: vi.fn().mockImplementation((text) => Promise.resolve({ text, mode: undefined, contentBlocks: [] })),
 	openMention: vi.fn(),
 	getLatestTerminalOutput: vi.fn(),
 }))

+ 1 - 1
src/core/task/__tests__/reasoning-preservation.test.ts

@@ -112,7 +112,7 @@ vi.mock("fs/promises", () => ({
 
 // Mock mentions
 vi.mock("../../mentions", () => ({
-	parseMentions: vi.fn().mockImplementation((text) => Promise.resolve(text)),
+	parseMentions: vi.fn().mockImplementation((text) => Promise.resolve({ text, mode: undefined, contentBlocks: [] })),
 	openMention: vi.fn(),
 	getLatestTerminalOutput: vi.fn(),
 }))

+ 0 - 9
src/core/task/build-tools.ts

@@ -22,8 +22,6 @@ interface BuildToolsOptions {
 	customModes: ModeConfig[] | undefined
 	experiments: Record<string, boolean> | undefined
 	apiConfiguration: ProviderSettings | undefined
-	maxReadFileLine: number
-	maxConcurrentFileReads: number
 	browserToolEnabled: boolean
 	disabledTools?: string[]
 	modelInfo?: ModelInfo
@@ -90,8 +88,6 @@ export async function buildNativeToolsArrayWithRestrictions(options: BuildToolsO
 		customModes,
 		experiments,
 		apiConfiguration,
-		maxReadFileLine,
-		maxConcurrentFileReads,
 		browserToolEnabled,
 		disabledTools,
 		modelInfo,
@@ -112,16 +108,11 @@ export async function buildNativeToolsArrayWithRestrictions(options: BuildToolsO
 		modelInfo,
 	}
 
-	// Determine if partial reads are enabled based on maxReadFileLine setting.
-	const partialReadsEnabled = maxReadFileLine !== -1
-
 	// Check if the model supports images for read_file tool description.
 	const supportsImages = modelInfo?.supportsImages ?? false
 
 	// Build native tools with dynamic read_file tool based on settings.
 	const nativeTools = getNativeTools({
-		partialReadsEnabled,
-		maxConcurrentFileReads,
 		supportsImages,
 	})
 

File diff suppressed because it is too large
+ 497 - 486
src/core/tools/ReadFileTool.ts


+ 28 - 9
src/core/tools/UseMcpToolTool.ts

@@ -255,12 +255,14 @@ export class UseMcpToolTool extends BaseTool<"use_mcp_tool"> {
 		})
 	}
 
-	private processToolContent(toolResult: any): string {
+	private processToolContent(toolResult: any): { text: string; images: string[] } {
 		if (!toolResult?.content || toolResult.content.length === 0) {
-			return ""
+			return { text: "", images: [] }
 		}
 
-		return toolResult.content
+		const images: string[] = []
+
+		const textContent = toolResult.content
 			.map((item: any) => {
 				if (item.type === "text") {
 					return item.text
@@ -269,10 +271,23 @@ export class UseMcpToolTool extends BaseTool<"use_mcp_tool"> {
 					const { blob: _, ...rest } = item.resource
 					return JSON.stringify(rest, null, 2)
 				}
+				if (item.type === "image") {
+					// Handle image content (MCP image content has mimeType and data properties)
+					if (item.mimeType && item.data) {
+						if (item.data.startsWith("data:")) {
+							images.push(item.data)
+						} else {
+							images.push(`data:${item.mimeType};base64,${item.data}`)
+						}
+					}
+					return ""
+				}
 				return ""
 			})
 			.filter(Boolean)
 			.join("\n\n")
+
+		return { text: textContent, images }
 	}
 
 	private async executeToolAndProcessResult(
@@ -296,18 +311,22 @@ export class UseMcpToolTool extends BaseTool<"use_mcp_tool"> {
 		const toolResult = await task.providerRef.deref()?.getMcpHub()?.callTool(serverName, toolName, parsedArguments)
 
 		let toolResultPretty = "(No response)"
+		let images: string[] = []
 
 		if (toolResult) {
-			const outputText = this.processToolContent(toolResult)
+			const { text: outputText, images: extractedImages } = this.processToolContent(toolResult)
+			images = extractedImages
 
-			if (outputText) {
+			if (outputText || images.length > 0) {
 				await this.sendExecutionStatus(task, {
 					executionId,
 					status: "output",
-					response: outputText,
+					response: outputText || (images.length > 0 ? `[${images.length} image(s)]` : ""),
 				})
 
-				toolResultPretty = (toolResult.isError ? "Error:\n" : "") + outputText
+				toolResultPretty =
+					(toolResult.isError ? "Error:\n" : "") +
+					(outputText || (images.length > 0 ? `[${images.length} image(s) received]` : ""))
 			}
 
 			// Send completion status
@@ -326,8 +345,8 @@ export class UseMcpToolTool extends BaseTool<"use_mcp_tool"> {
 			})
 		}
 
-		await task.say("mcp_server_response", toolResultPretty)
-		pushToolResult(formatResponse.toolResult(toolResultPretty))
+		await task.say("mcp_server_response", toolResultPretty, images)
+		pushToolResult(formatResponse.toolResult(toolResultPretty, images))
 	}
 }
 

+ 11 - 7
src/core/tools/__tests__/ToolRepetitionDetector.spec.ts

@@ -575,7 +575,7 @@ describe("ToolRepetitionDetector", () => {
 				params: {}, // Empty for native protocol
 				partial: false,
 				nativeArgs: {
-					files: [{ path: "file1.ts" }],
+					path: "file1.ts",
 				},
 			}
 
@@ -585,7 +585,7 @@ describe("ToolRepetitionDetector", () => {
 				params: {}, // Empty for native protocol
 				partial: false,
 				nativeArgs: {
-					files: [{ path: "file2.ts" }],
+					path: "file2.ts",
 				},
 			}
 
@@ -609,7 +609,7 @@ describe("ToolRepetitionDetector", () => {
 				params: {}, // Empty for native protocol
 				partial: false,
 				nativeArgs: {
-					files: [{ path: "same-file.ts" }],
+					path: "same-file.ts",
 				},
 			}
 
@@ -625,7 +625,7 @@ describe("ToolRepetitionDetector", () => {
 			expect(result.askUser).toBeDefined()
 		})
 
-		it("should differentiate read_file calls with multiple files in different orders", () => {
+		it("should treat different slice offsets as distinct read_file calls", () => {
 			const detector = new ToolRepetitionDetector(2)
 
 			const readFile1: ToolUse = {
@@ -634,7 +634,9 @@ describe("ToolRepetitionDetector", () => {
 				params: {},
 				partial: false,
 				nativeArgs: {
-					files: [{ path: "a.ts" }, { path: "b.ts" }],
+					path: "a.ts",
+					offset: 1,
+					limit: 2000,
 				},
 			}
 
@@ -644,11 +646,13 @@ describe("ToolRepetitionDetector", () => {
 				params: {},
 				partial: false,
 				nativeArgs: {
-					files: [{ path: "b.ts" }, { path: "a.ts" }],
+					path: "a.ts",
+					offset: 2001,
+					limit: 2000,
 				},
 			}
 
-			// Different order should be treated as different calls
+			// Different offsets should be treated as different calls
 			expect(detector.check(readFile1).allowExecution).toBe(true)
 			expect(detector.check(readFile2).allowExecution).toBe(true)
 		})

+ 511 - 1783
src/core/tools/__tests__/readFileTool.spec.ts

@@ -1,14 +1,33 @@
-// npx vitest src/core/tools/__tests__/readFileTool.spec.ts
+/**
+ * Tests for ReadFileTool - Codex-inspired file reading with indentation mode support.
+ *
+ * These tests cover:
+ * - Input validation (missing path parameter)
+ * - RooIgnore blocking
+ * - Directory read error handling
+ * - Binary file handling (images, PDF, DOCX, unsupported)
+ * - Image memory limits
+ * - Approval flow (approve, deny, feedback)
+ * - Text file processing (slice and indentation modes)
+ * - Output structure formatting
+ */
+
+import path from "path"
 
-import * as path from "path"
-
-import { countFileLines } from "../../../integrations/misc/line-counter"
-import { readLines } from "../../../integrations/misc/read-lines"
-import { extractTextFromFile } from "../../../integrations/misc/extract-text"
-import { parseSourceCodeDefinitionsForFile } from "../../../services/tree-sitter"
 import { isBinaryFile } from "isbinaryfile"
-import { ReadFileToolUse, ToolResponse } from "../../../shared/tools"
-import { readFileTool } from "../ReadFileTool"
+
+import { readFileTool, ReadFileTool } from "../ReadFileTool"
+import { formatResponse } from "../../prompts/responses"
+import {
+	validateImageForProcessing,
+	processImageFile,
+	isSupportedImageFormat,
+	ImageMemoryTracker,
+} from "../helpers/imageHelpers"
+import { extractTextFromFile, addLineNumbers, getSupportedBinaryFormats } from "../../../integrations/misc/extract-text"
+import { readWithIndentation, readWithSlice } from "../../../integrations/misc/indentation-reader"
+
+// ─── Mocks ────────────────────────────────────────────────────────────────────
 
 vi.mock("path", async () => {
 	const originalPath = await vi.importActual("path")
@@ -19,76 +38,39 @@ vi.mock("path", async () => {
 	}
 })
 
-// Already mocked above with hoisted fsPromises
-
-vi.mock("isbinaryfile")
-
-vi.mock("../../../integrations/misc/line-counter")
-vi.mock("../../../integrations/misc/read-lines")
-
-// Mock fs/promises readFile for image tests
-const fsPromises = vi.hoisted(() => ({
+vi.mock("fs/promises", () => ({
 	readFile: vi.fn(),
-	stat: vi.fn().mockResolvedValue({ size: 1024 }),
+	stat: vi.fn(),
 }))
-vi.mock("fs/promises", () => fsPromises)
-
-// Mock input content for tests
-let mockInputContent = ""
 
-// Create hoisted mocks that can be used in vi.mock factories
-const { addLineNumbersMock, mockReadFileWithTokenBudget } = vi.hoisted(() => {
-	const addLineNumbersMock = vi.fn().mockImplementation((text: string, startLine = 1) => {
-		if (!text) return ""
-		const lines = typeof text === "string" ? text.split("\n") : [text]
-		return lines.map((line: string, i: number) => `${startLine + i} | ${line}`).join("\n")
-	})
-	const mockReadFileWithTokenBudget = vi.fn()
-	return { addLineNumbersMock, mockReadFileWithTokenBudget }
-})
+vi.mock("isbinaryfile")
 
-// First create all the mocks
 vi.mock("../../../integrations/misc/extract-text", () => ({
 	extractTextFromFile: vi.fn(),
-	addLineNumbers: addLineNumbersMock,
+	addLineNumbers: vi.fn().mockImplementation((text: string, startLine = 1) => {
+		if (!text) return ""
+		const lines = text.split("\n")
+		return lines.map((line, i) => `${startLine + i} | ${line}`).join("\n")
+	}),
 	getSupportedBinaryFormats: vi.fn(() => [".pdf", ".docx", ".ipynb"]),
 }))
-vi.mock("../../../services/tree-sitter")
 
-// Mock readFileWithTokenBudget - must be mocked to prevent actual file system access
-vi.mock("../../../integrations/misc/read-file-with-budget", () => ({
-	readFileWithTokenBudget: (...args: any[]) => mockReadFileWithTokenBudget(...args),
+vi.mock("../../../integrations/misc/indentation-reader", () => ({
+	readWithIndentation: vi.fn(),
+	readWithSlice: vi.fn(),
 }))
 
-const extractTextFromFileMock = vi.fn()
-const getSupportedBinaryFormatsMock = vi.fn(() => [".pdf", ".docx", ".ipynb"])
-
-// Mock formatResponse - use vi.hoisted to ensure mocks are available before vi.mock
-const { toolResultMock, imageBlocksMock } = vi.hoisted(() => {
-	const toolResultMock = vi.fn((text: string, images?: string[]) => {
-		if (images && images.length > 0) {
-			return [
-				{ type: "text", text },
-				...images.map((img) => {
-					const [header, data] = img.split(",")
-					const media_type = header.match(/:(.*?);/)?.[1] || "image/png"
-					return { type: "image", source: { type: "base64", media_type, data } }
-				}),
-			]
-		}
-		return text
-	})
-	const imageBlocksMock = vi.fn((images?: string[]) => {
-		return images
-			? images.map((img) => {
-					const [header, data] = img.split(",")
-					const media_type = header.match(/:(.*?);/)?.[1] || "image/png"
-					return { type: "image", source: { type: "base64", media_type, data } }
-				})
-			: []
-	})
-	return { toolResultMock, imageBlocksMock }
-})
+vi.mock("../helpers/imageHelpers", () => ({
+	DEFAULT_MAX_IMAGE_FILE_SIZE_MB: 5,
+	DEFAULT_MAX_TOTAL_IMAGE_SIZE_MB: 20,
+	isSupportedImageFormat: vi.fn(),
+	validateImageForProcessing: vi.fn(),
+	processImageFile: vi.fn(),
+	ImageMemoryTracker: vi.fn().mockImplementation(() => ({
+		getTotalMemoryUsed: vi.fn().mockReturnValue(0),
+		addMemoryUsage: vi.fn(),
+	})),
+}))
 
 vi.mock("../../prompts/responses", () => ({
 	formatResponse: {
@@ -102,1904 +84,650 @@ vi.mock("../../prompts/responses", () => ({
 				`The user approved this operation and responded with the message:\n<user_message>\n${feedback}\n</user_message>`,
 		),
 		rooIgnoreError: vi.fn(
-			(path: string) =>
-				`Access to ${path} is blocked by the .rooignore file settings. You must try to continue in the task without using this file, or ask the user to update the .rooignore file.`,
+			(filePath: string) =>
+				`Access to ${filePath} is blocked by the .rooignore file settings. You must try to continue in the task without using this file, or ask the user to update the .rooignore file.`,
 		),
-		toolResult: toolResultMock,
-		imageBlocks: imageBlocksMock,
-	},
-}))
-
-vi.mock("../../ignore/RooIgnoreController", () => ({
-	RooIgnoreController: class {
-		initialize() {
-			return Promise.resolve()
-		}
-		validateAccess() {
-			return true
-		}
+		toolResult: vi.fn((text: string, images?: string[]) => {
+			if (images && images.length > 0) {
+				return [
+					{ type: "text", text },
+					...images.map((img) => {
+						const [header, data] = img.split(",")
+						const media_type = header.match(/:(.*?);/)?.[1] || "image/png"
+						return { type: "image", source: { type: "base64", media_type, data } }
+					}),
+				]
+			}
+			return text
+		}),
+		imageBlocks: vi.fn((images?: string[]) => {
+			return images
+				? images.map((img) => {
+						const [header, data] = img.split(",")
+						const media_type = header.match(/:(.*?);/)?.[1] || "image/png"
+						return { type: "image", source: { type: "base64", media_type, data } }
+					})
+				: []
+		}),
 	},
 }))
 
-vi.mock("../../../utils/fs", () => ({
-	fileExistsAtPath: vi.fn().mockReturnValue(true),
-}))
-
-// Global beforeEach to ensure clean mock state between all test suites
-beforeEach(() => {
-	// NOTE: Removed vi.clearAllMocks() to prevent interference with setImageSupport calls
-	// Instead, individual suites clear their specific mocks to maintain isolation
-
-	// Explicitly reset the hoisted mock implementations to prevent cross-suite pollution
-	toolResultMock.mockImplementation((text: string, images?: string[]) => {
-		if (images && images.length > 0) {
-			return [
-				{ type: "text", text },
-				...images.map((img) => {
-					const [header, data] = img.split(",")
-					const media_type = header.match(/:(.*?);/)?.[1] || "image/png"
-					return { type: "image", source: { type: "base64", media_type, data } }
-				}),
-			]
-		}
-		return text
-	})
-
-	imageBlocksMock.mockImplementation((images?: string[]) => {
-		return images
-			? images.map((img) => {
-					const [header, data] = img.split(",")
-					const media_type = header.match(/:(.*?);/)?.[1] || "image/png"
-					return { type: "image", source: { type: "base64", media_type, data } }
-				})
-			: []
-	})
-
-	// Reset addLineNumbers mock to its default implementation (prevents cross-test pollution)
-	addLineNumbersMock.mockReset()
-	addLineNumbersMock.mockImplementation((text: string, startLine = 1) => {
-		if (!text) return ""
-		const lines = typeof text === "string" ? text.split("\n") : [text]
-		return lines.map((line: string, i: number) => `${startLine + i} | ${line}`).join("\n")
-	})
-
-	// Reset readFileWithTokenBudget mock with default implementation
-	mockReadFileWithTokenBudget.mockClear()
-	mockReadFileWithTokenBudget.mockImplementation(async (_filePath: string, _options: any) => {
-		// Default: return the mockInputContent with 5 lines
-		const lines = mockInputContent ? mockInputContent.split("\n") : []
-		return {
-			content: mockInputContent,
-			tokenCount: mockInputContent.length / 4, // rough estimate
-			lineCount: lines.length,
-			complete: true,
-		}
-	})
-})
-
-// Mock i18n translation function
-vi.mock("../../../i18n", () => ({
-	t: vi.fn((key: string, params?: Record<string, any>) => {
-		// Map translation keys to English text
-		const translations: Record<string, string> = {
-			"tools:readFile.imageWithSize": "Image file ({{size}} KB)",
-			"tools:readFile.imageTooLarge":
-				"Image file is too large ({{size}}). The maximum allowed size is {{max}} MB.",
-			"tools:readFile.linesRange": " (lines {{start}}-{{end}})",
-			"tools:readFile.definitionsOnly": " (definitions only)",
-			"tools:readFile.maxLines": " (max {{max}} lines)",
-		}
-
-		let result = translations[key] || key
-
-		// Simple template replacement
-		if (params) {
-			Object.entries(params).forEach(([param, value]) => {
-				result = result.replace(new RegExp(`{{${param}}}`, "g"), String(value))
-			})
-		}
-
-		return result
-	}),
-}))
+// Grab typed references to the mocked fs/promises functions
+const fsPromises = await import("fs/promises")
+const mockedFsReadFile = vi.mocked(fsPromises.readFile)
+const mockedFsStat = vi.mocked(fsPromises.stat)
+
+const mockedIsBinaryFile = vi.mocked(isBinaryFile)
+const mockedExtractTextFromFile = vi.mocked(extractTextFromFile)
+const mockedReadWithSlice = vi.mocked(readWithSlice)
+const mockedReadWithIndentation = vi.mocked(readWithIndentation)
+const mockedIsSupportedImageFormat = vi.mocked(isSupportedImageFormat)
+const mockedValidateImageForProcessing = vi.mocked(validateImageForProcessing)
+const mockedProcessImageFile = vi.mocked(processImageFile)
+
+// ─── Test Helpers ─────────────────────────────────────────────────────────────
+
+interface MockTaskOptions {
+	supportsImages?: boolean
+	rooIgnoreAllowed?: boolean
+	maxImageFileSize?: number
+	maxTotalImageSize?: number
+}
 
-// Shared mock setup function to ensure consistent state across all test suites
-function createMockCline(): any {
-	const mockProvider = {
-		getState: vi.fn(),
-		deref: vi.fn().mockReturnThis(),
-	}
+function createMockTask(options: MockTaskOptions = {}) {
+	const { supportsImages = false, rooIgnoreAllowed = true, maxImageFileSize = 5, maxTotalImageSize = 20 } = options
 
-	const mockCline: any = {
-		cwd: "/",
-		task: "Test",
-		providerRef: mockProvider,
-		rooIgnoreController: {
-			validateAccess: vi.fn().mockReturnValue(true),
+	return {
+		cwd: "/test/workspace",
+		api: {
+			getModel: vi.fn().mockReturnValue({
+				info: { supportsImages },
+			}),
 		},
+		consecutiveMistakeCount: 0,
+		didToolFailInCurrentTurn: false,
+		didRejectTool: false,
+		ask: vi.fn().mockResolvedValue({ response: "yesButtonClicked", text: undefined, images: undefined }),
 		say: vi.fn().mockResolvedValue(undefined),
-		ask: vi.fn().mockResolvedValue({ response: "yesButtonClicked" }),
-		presentAssistantMessage: vi.fn(),
-		handleError: vi.fn().mockResolvedValue(undefined),
-		pushToolResult: vi.fn(),
+		sayAndCreateMissingParamError: vi.fn().mockResolvedValue("Missing required parameter: path"),
+		recordToolError: vi.fn(),
+		rooIgnoreController: {
+			validateAccess: vi.fn().mockReturnValue(rooIgnoreAllowed),
+		},
 		fileContextTracker: {
 			trackFileContext: vi.fn().mockResolvedValue(undefined),
 		},
-		recordToolUsage: vi.fn().mockReturnValue(undefined),
-		recordToolError: vi.fn().mockReturnValue(undefined),
-		didRejectTool: false,
-		getTokenUsage: vi.fn().mockReturnValue({
-			contextTokens: 10000,
-		}),
-		apiConfiguration: {
-			apiProvider: "anthropic",
-		},
-		// CRITICAL: Always ensure image support is enabled
-		api: {
-			getModel: vi.fn().mockReturnValue({
-				id: "test-model",
-				info: {
-					supportsImages: true,
-					contextWindow: 200000,
-					maxTokens: 4096,
-					supportsPromptCache: false,
-					// (native tool support is determined at request-time; no model flag)
-				},
+		providerRef: {
+			deref: vi.fn().mockReturnValue({
+				getState: vi.fn().mockResolvedValue({
+					maxImageFileSize,
+					maxTotalImageSize,
+				}),
 			}),
 		},
 	}
-
-	return { mockCline, mockProvider }
 }
 
-// Helper function to set image support without affecting shared state
-function setImageSupport(mockCline: any, supportsImages: boolean | undefined): void {
-	mockCline.api = {
-		getModel: vi.fn().mockReturnValue({
-			id: "test-model",
-			info: { supportsImages },
-		}),
+function createMockCallbacks() {
+	return {
+		pushToolResult: vi.fn(),
+		askApproval: vi.fn(),
+		handleError: vi.fn(),
 	}
 }
 
-describe("read_file tool with maxReadFileLine setting", () => {
-	// Test data
-	const testFilePath = "test/file.txt"
-	const absoluteFilePath = "/test/file.txt"
-	const fileContent = "Line 1\nLine 2\nLine 3\nLine 4\nLine 5"
-	const numberedFileContent = "1 | Line 1\n2 | Line 2\n3 | Line 3\n4 | Line 4\n5 | Line 5\n"
-	const sourceCodeDef = "\n\n# file.txt\n1--5 | Content"
-
-	// Mocked functions with correct types
-	const mockedCountFileLines = vi.mocked(countFileLines)
-	const mockedReadLines = vi.mocked(readLines)
-	const mockedExtractTextFromFile = vi.mocked(extractTextFromFile)
-	const mockedParseSourceCodeDefinitionsForFile = vi.mocked(parseSourceCodeDefinitionsForFile)
-
-	const mockedIsBinaryFile = vi.mocked(isBinaryFile)
-	const mockedPathResolve = vi.mocked(path.resolve)
-
-	let mockCline: any
-	let mockProvider: any
-	let toolResult: ToolResponse | undefined
+// ─── Tests ────────────────────────────────────────────────────────────────────
 
+describe("ReadFileTool", () => {
 	beforeEach(() => {
-		// Clear specific mocks (not all mocks to preserve shared state)
-		mockedCountFileLines.mockClear()
-		mockedExtractTextFromFile.mockClear()
-		mockedIsBinaryFile.mockClear()
-		mockedPathResolve.mockClear()
-		addLineNumbersMock.mockClear()
-		extractTextFromFileMock.mockClear()
-		toolResultMock.mockClear()
-
-		// Use shared mock setup function
-		const mocks = createMockCline()
-		mockCline = mocks.mockCline
-		mockProvider = mocks.mockProvider
-
-		// Explicitly disable image support for text file tests to prevent cross-suite pollution
-		setImageSupport(mockCline, false)
-
-		mockedPathResolve.mockReturnValue(absoluteFilePath)
-		mockedIsBinaryFile.mockResolvedValue(false)
+		vi.clearAllMocks()
 
-		// Mock fsPromises.stat to return a file (not directory) by default
-		fsPromises.stat.mockResolvedValue({
-			isDirectory: () => false,
-			isFile: () => true,
-			isSymbolicLink: () => false,
-		} as any)
+		// Default mock implementations
+		mockedFsStat.mockResolvedValue({ isDirectory: () => false } as any)
+		mockedIsBinaryFile.mockResolvedValue(false)
+		mockedFsReadFile.mockResolvedValue(Buffer.from("test content"))
+		mockedReadWithSlice.mockReturnValue({
+			content: "1 | test content",
+			returnedLines: 1,
+			totalLines: 1,
+			wasTruncated: false,
+			includedRanges: [[1, 1]],
+		})
+	})
 
-		mockInputContent = fileContent
+	describe("input validation", () => {
+		it("should return error when path is missing", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-		// Setup the extractTextFromFile mock implementation with the current mockInputContent
-		// Reset the spy before each test
-		addLineNumbersMock.mockClear()
+			await readFileTool.execute({ path: "" } as any, mockTask as any, callbacks)
 
-		// Setup the extractTextFromFile mock to call our spy
-		mockedExtractTextFromFile.mockImplementation((_filePath) => {
-			// Call the spy and return its result
-			return Promise.resolve(addLineNumbersMock(mockInputContent))
+			expect(mockTask.consecutiveMistakeCount).toBe(1)
+			expect(mockTask.recordToolError).toHaveBeenCalledWith("read_file")
+			expect(mockTask.sayAndCreateMissingParamError).toHaveBeenCalledWith("read_file", "path")
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(expect.stringContaining("Error:"))
 		})
 
-		toolResult = undefined
-	})
+		it("should return error when path is undefined", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-	/**
-	 * Helper function to execute the read file tool with different maxReadFileLine settings
-	 */
-	async function executeReadFileTool(
-		params: Partial<ReadFileToolUse["params"]> = {},
-		options: {
-			maxReadFileLine?: number
-			totalLines?: number
-			skipAddLineNumbersCheck?: boolean // Flag to skip addLineNumbers check
-			path?: string
-			start_line?: string
-			end_line?: string
-		} = {},
-	): Promise<ToolResponse | undefined> {
-		// Configure mocks based on test scenario
-		const maxReadFileLine = options.maxReadFileLine ?? 500
-		const totalLines = options.totalLines ?? 5
-
-		mockProvider.getState.mockResolvedValue({ maxReadFileLine, maxImageFileSize: 20, maxTotalImageSize: 20 })
-		mockedCountFileLines.mockResolvedValue(totalLines)
-
-		// Reset the spy before each test
-		addLineNumbersMock.mockClear()
-
-		const lineRanges =
-			options.start_line && options.end_line
-				? [
-						{
-							start: Number(options.start_line),
-							end: Number(options.end_line),
-						},
-					]
-				: []
+			await readFileTool.execute({} as any, mockTask as any, callbacks)
 
-		// Create a tool use object
-		const toolUse: ReadFileToolUse = {
-			type: "tool_use",
-			name: "read_file",
-			params: { ...params },
-			partial: false,
-			nativeArgs: {
-				files: [
-					{
-						path: options.path || testFilePath,
-						lineRanges,
-					},
-				],
-			},
-		}
-
-		await readFileTool.handle(mockCline, toolUse, {
-			askApproval: mockCline.ask,
-			handleError: vi.fn(),
-			pushToolResult: (result: ToolResponse) => {
-				toolResult = result
-			},
+			expect(mockTask.consecutiveMistakeCount).toBe(1)
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(expect.stringContaining("Error:"))
 		})
 
-		return toolResult
-	}
-
-	describe("when maxReadFileLine is negative", () => {
-		it("should read the entire file using extractTextFromFile", async () => {
-			// Setup - use default mockInputContent
-			mockInputContent = fileContent
+		it("should return error when offset is 0 or negative", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-			// Execute
-			const result = await executeReadFileTool({}, { maxReadFileLine: -1 })
+			await readFileTool.execute({ path: "test.txt", offset: 0 }, mockTask as any, callbacks)
 
-			// Verify - check that the result contains the expected native format elements
-			expect(result).toContain(`File: ${testFilePath}`)
-			expect(result).toContain(`Lines 1-5:`)
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(
+				expect.stringContaining("offset must be a 1-indexed line number"),
+			)
 		})
 
-		it("should not show line snippet in approval message when maxReadFileLine is -1", async () => {
-			// This test verifies the line snippet behavior for the approval message
-			// Setup - use default mockInputContent
-			mockInputContent = fileContent
-
-			// Execute - we'll reuse executeReadFileTool to run the tool
-			await executeReadFileTool({}, { maxReadFileLine: -1 })
+		it("should return error when offset is negative", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-			// Verify the empty line snippet for full read was passed to the approval message
-			// Look at the parameters passed to the 'ask' method in the approval message
-			const askCall = mockCline.ask.mock.calls[0]
-			const completeMessage = JSON.parse(askCall[1])
+			await readFileTool.execute({ path: "test.txt", offset: -5 }, mockTask as any, callbacks)
 
-			// Verify the reason (lineSnippet) is empty or undefined for full read
-			expect(completeMessage.reason).toBeFalsy()
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(
+				expect.stringContaining("offset must be a 1-indexed line number"),
+			)
 		})
-	})
 
-	describe("when maxReadFileLine is 0", () => {
-		it("should return an empty content with source code definitions", async () => {
-			// Setup - for maxReadFileLine = 0, the implementation won't call readLines
-			mockedParseSourceCodeDefinitionsForFile.mockResolvedValue(sourceCodeDef)
+		it("should return error when anchor_line is 0 or negative", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-			// Execute - skip addLineNumbers check as it's not called for maxReadFileLine=0
-			const result = await executeReadFileTool(
-				{},
+			await readFileTool.execute(
 				{
-					maxReadFileLine: 0,
-					totalLines: 5,
-					skipAddLineNumbersCheck: true,
+					path: "test.txt",
+					mode: "indentation",
+					indentation: { anchor_line: 0 },
 				},
+				mockTask as any,
+				callbacks,
 			)
 
-			// Verify - native format
-			expect(result).toContain(`File: ${testFilePath}`)
-			expect(result).toContain(`Code Definitions:`)
-
-			// Verify native structure
-			expect(result).toContain("Note: Showing only 0 of 5 total lines")
-			expect(result).toContain(sourceCodeDef.trim())
-			expect(result).not.toContain("Lines 1-") // No content when maxReadFileLine is 0
-		})
-	})
-
-	describe("when maxReadFileLine is less than file length", () => {
-		it("should read only maxReadFileLine lines and add source code definitions", async () => {
-			// Setup
-			const content = "Line 1\nLine 2\nLine 3"
-			const numberedContent = "1 | Line 1\n2 | Line 2\n3 | Line 3"
-			mockedReadLines.mockResolvedValue(content)
-			mockedParseSourceCodeDefinitionsForFile.mockResolvedValue(sourceCodeDef)
-
-			// Setup addLineNumbers to always return numbered content
-			addLineNumbersMock.mockReturnValue(numberedContent)
-
-			// Execute
-			const result = await executeReadFileTool({}, { maxReadFileLine: 3 })
-
-			// Verify - native format
-			expect(result).toContain(`File: ${testFilePath}`)
-			expect(result).toContain(`Lines 1-3:`)
-			expect(result).toContain(`Code Definitions:`)
-			expect(result).toContain("Note: Showing only 3 of 5 total lines")
-		})
-
-		it("should truncate code definitions when file exceeds maxReadFileLine", async () => {
-			// Setup - file with 100 lines but we'll only read first 30
-			const content = "Line 1\nLine 2\nLine 3"
-			const numberedContent = "1 | Line 1\n2 | Line 2\n3 | Line 3"
-			const fullDefinitions = `# file.txt
-10--20 | function foo() {
-50--60 | function bar() {
-80--90 | function baz() {`
-			const truncatedDefinitions = `# file.txt
-10--20 | function foo() {`
-
-			mockedReadLines.mockResolvedValue(content)
-			mockedParseSourceCodeDefinitionsForFile.mockResolvedValue(fullDefinitions)
-			addLineNumbersMock.mockReturnValue(numberedContent)
-
-			// Execute with maxReadFileLine = 30
-			const result = await executeReadFileTool({}, { maxReadFileLine: 30, totalLines: 100 })
-
-			// Verify - native format
-			expect(result).toContain(`File: ${testFilePath}`)
-			expect(result).toContain(`Lines 1-30:`)
-			expect(result).toContain(`Code Definitions:`)
-
-			// Should include foo (starts at line 10) but not bar (starts at line 50) or baz (starts at line 80)
-			expect(result).toContain("10--20 | function foo()")
-			expect(result).not.toContain("50--60 | function bar()")
-			expect(result).not.toContain("80--90 | function baz()")
-
-			expect(result).toContain("Note: Showing only 30 of 100 total lines")
-		})
-
-		it("should handle truncation when all definitions are beyond the line limit", async () => {
-			// Setup - all definitions start after maxReadFileLine
-			const content = "Line 1\nLine 2\nLine 3"
-			const numberedContent = "1 | Line 1\n2 | Line 2\n3 | Line 3"
-			const fullDefinitions = `# file.txt
-50--60 | function foo() {
-80--90 | function bar() {`
-
-			mockedReadLines.mockResolvedValue(content)
-			mockedParseSourceCodeDefinitionsForFile.mockResolvedValue(fullDefinitions)
-			addLineNumbersMock.mockReturnValue(numberedContent)
-
-			// Execute with maxReadFileLine = 30
-			const result = await executeReadFileTool({}, { maxReadFileLine: 30, totalLines: 100 })
-
-			// Verify - native format
-			expect(result).toContain(`File: ${testFilePath}`)
-			expect(result).toContain(`Lines 1-30:`)
-			expect(result).toContain(`Code Definitions:`)
-			expect(result).toContain("# file.txt")
-			expect(result).not.toContain("50--60 | function foo()")
-			expect(result).not.toContain("80--90 | function bar()")
-		})
-	})
-
-	describe("when maxReadFileLine equals or exceeds file length", () => {
-		it("should use extractTextFromFile when maxReadFileLine > totalLines", async () => {
-			// Setup
-			mockedCountFileLines.mockResolvedValue(5) // File shorter than maxReadFileLine
-			mockInputContent = fileContent
-
-			// Execute
-			const result = await executeReadFileTool({}, { maxReadFileLine: 10, totalLines: 5 })
-
-			// Verify - native format
-			expect(result).toContain(`File: ${testFilePath}`)
-			expect(result).toContain(`Lines 1-5:`)
-		})
-
-		it("should read with extractTextFromFile when file has few lines", async () => {
-			// Setup
-			mockedCountFileLines.mockResolvedValue(3) // File shorter than maxReadFileLine
-			const threeLineContent = "Line 1\nLine 2\nLine 3"
-			mockInputContent = threeLineContent
-
-			// Configure the mock to return the correct content for this test
-			mockReadFileWithTokenBudget.mockResolvedValueOnce({
-				content: threeLineContent,
-				tokenCount: threeLineContent.length / 4,
-				lineCount: 3,
-				complete: true,
-			})
-
-			// Execute
-			const result = await executeReadFileTool({}, { maxReadFileLine: 5, totalLines: 3 })
-
-			// Verify - native format
-			expect(result).toContain(`File: ${testFilePath}`)
-			expect(result).toContain(`Lines 1-3:`)
-		})
-	})
-
-	describe("when file is binary", () => {
-		it("should always use extractTextFromFile regardless of maxReadFileLine", async () => {
-			// Setup
-			mockedIsBinaryFile.mockResolvedValue(true)
-			mockedCountFileLines.mockResolvedValue(3)
-			mockedExtractTextFromFile.mockResolvedValue("")
-
-			// Execute
-			const result = await executeReadFileTool({}, { maxReadFileLine: 3, totalLines: 3 })
-
-			// Verify - native format for binary files
-			expect(result).toContain(`File: ${testFilePath}`)
-			expect(typeof result).toBe("string")
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(
+				expect.stringContaining("anchor_line must be a 1-indexed line number"),
+			)
 		})
-	})
 
-	describe("with range parameters", () => {
-		it("should honor start_line and end_line when provided", async () => {
-			// Setup
-			mockedReadLines.mockResolvedValue("Line 2\nLine 3\nLine 4")
+		it("should return error when anchor_line is negative", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-			// Execute using executeReadFileTool with range parameters
-			const rangeResult = await executeReadFileTool(
-				{},
+			await readFileTool.execute(
 				{
-					start_line: "2",
-					end_line: "4",
+					path: "test.txt",
+					mode: "indentation",
+					indentation: { anchor_line: -10 },
 				},
+				mockTask as any,
+				callbacks,
 			)
 
-			// Verify - native format
-			expect(rangeResult).toContain(`File: ${testFilePath}`)
-			expect(rangeResult).toContain(`Lines 2-4:`)
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(
+				expect.stringContaining("anchor_line must be a 1-indexed line number"),
+			)
 		})
 	})
-})
 
-describe("read_file tool output structure", () => {
-	// Test basic native structure
-	const testFilePath = "test/file.txt"
-	const absoluteFilePath = "/test/file.txt"
-	const fileContent = "Line 1\nLine 2\nLine 3\nLine 4\nLine 5"
-
-	const mockedCountFileLines = vi.mocked(countFileLines)
-	const mockedExtractTextFromFile = vi.mocked(extractTextFromFile)
-	const mockedIsBinaryFile = vi.mocked(isBinaryFile)
-	const mockedPathResolve = vi.mocked(path.resolve)
-	const mockedFsReadFile = vi.mocked(fsPromises.readFile)
-	const imageBuffer = Buffer.from(
-		"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg==",
-		"base64",
-	)
-
-	let mockCline: any
-	let mockProvider: any
-	let toolResult: ToolResponse | undefined
+	describe("RooIgnore handling", () => {
+		it("should block access to rooignore-protected files", async () => {
+			const mockTask = createMockTask({ rooIgnoreAllowed: false })
+			const callbacks = createMockCallbacks()
 
-	beforeEach(() => {
-		// Clear specific mocks (not all mocks to preserve shared state)
-		mockedCountFileLines.mockClear()
-		mockedExtractTextFromFile.mockClear()
-		mockedIsBinaryFile.mockClear()
-		mockedPathResolve.mockClear()
-		addLineNumbersMock.mockClear()
-		extractTextFromFileMock.mockClear()
-		toolResultMock.mockClear()
-
-		// CRITICAL: Reset fsPromises mocks to prevent cross-test contamination
-		fsPromises.stat.mockClear()
-		fsPromises.stat.mockResolvedValue({
-			size: 1024,
-			isDirectory: () => false,
-			isFile: () => true,
-			isSymbolicLink: () => false,
-		} as any)
-		fsPromises.readFile.mockClear()
-
-		// Use shared mock setup function
-		const mocks = createMockCline()
-		mockCline = mocks.mockCline
-		mockProvider = mocks.mockProvider
-
-		// Explicitly enable image support for this test suite (contains image memory tests)
-		setImageSupport(mockCline, true)
-
-		mockedPathResolve.mockReturnValue(absoluteFilePath)
-		mockedIsBinaryFile.mockResolvedValue(false)
+			await readFileTool.execute({ path: "secret.env" }, mockTask as any, callbacks)
 
-		// Set default implementation for extractTextFromFile
-		mockedExtractTextFromFile.mockImplementation((filePath) => {
-			return Promise.resolve(addLineNumbersMock(mockInputContent))
+			expect(mockTask.say).toHaveBeenCalledWith("rooignore_error", "secret.env")
+			expect(formatResponse.rooIgnoreError).toHaveBeenCalledWith("secret.env")
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(expect.stringContaining("blocked by the .rooignore"))
 		})
-
-		mockInputContent = fileContent
-
-		// Setup mock provider with default maxReadFileLine
-		mockProvider.getState.mockResolvedValue({ maxReadFileLine: -1, maxImageFileSize: 20, maxTotalImageSize: 20 }) // Default to full file read
-
-		// Add additional properties needed for missing param validation tests
-		mockCline.sayAndCreateMissingParamError = vi.fn().mockResolvedValue("Missing required parameter")
-
-		toolResult = undefined
 	})
 
-	async function executeReadFileTool(
-		options: {
-			totalLines?: number
-			maxReadFileLine?: number
-			isBinary?: boolean
-			validateAccess?: boolean
-			filePath?: string
-		} = {},
-	): Promise<ToolResponse | undefined> {
-		// Configure mocks based on test scenario
-		const totalLines = options.totalLines ?? 5
-		const maxReadFileLine = options.maxReadFileLine ?? 500
-		const isBinary = options.isBinary ?? false
-		const validateAccess = options.validateAccess ?? true
-
-		mockProvider.getState.mockResolvedValue({ maxReadFileLine, maxImageFileSize: 20, maxTotalImageSize: 20 })
-		mockedCountFileLines.mockResolvedValue(totalLines)
-		mockedIsBinaryFile.mockResolvedValue(isBinary)
-		mockCline.rooIgnoreController.validateAccess = vi.fn().mockReturnValue(validateAccess)
-		const filePath = options.filePath ?? testFilePath
-
-		// Create a tool use object
-		const toolUse: ReadFileToolUse = {
-			type: "tool_use",
-			name: "read_file",
-			params: {},
-			partial: false,
-			nativeArgs: {
-				files: [{ path: filePath, lineRanges: [] }],
-			},
-		}
-
-		// Execute the tool
-		await readFileTool.handle(mockCline, toolUse, {
-			askApproval: mockCline.ask,
-			handleError: vi.fn(),
-			pushToolResult: (result: ToolResponse) => {
-				toolResult = result
-			},
-		})
-
-		return toolResult
-	}
-
-	describe("Basic Structure Tests", () => {
-		it("should produce native output with proper format", async () => {
-			// Setup
-			const numberedContent = "1 | Line 1\n2 | Line 2\n3 | Line 3\n4 | Line 4\n5 | Line 5"
-
-			// Configure mockReadFileWithTokenBudget to return the 5-line content
-			mockReadFileWithTokenBudget.mockResolvedValueOnce({
-				content: fileContent, // "Line 1\nLine 2\nLine 3\nLine 4\nLine 5"
-				tokenCount: fileContent.length / 4,
-				lineCount: 5,
-				complete: true,
-			})
+	describe("directory handling", () => {
+		it("should return error when trying to read a directory", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-			mockProvider.getState.mockResolvedValue({
-				maxReadFileLine: -1,
-				maxImageFileSize: 20,
-				maxTotalImageSize: 20,
-			}) // Allow up to 20MB per image and total size
+			mockedFsStat.mockResolvedValue({ isDirectory: () => true } as any)
 
-			// Execute
-			const result = await executeReadFileTool()
+			await readFileTool.execute({ path: "src/utils" }, mockTask as any, callbacks)
 
-			// Verify native format
-			expect(result).toBe(`File: ${testFilePath}\nLines 1-5:\n${numberedContent}`)
+			expect(mockTask.say).toHaveBeenCalledWith(
+				"error",
+				expect.stringContaining("Cannot read 'src/utils' because it is a directory"),
+			)
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(expect.stringContaining("it is a directory"))
+			expect(mockTask.didToolFailInCurrentTurn).toBe(true)
 		})
+	})
 
-		it("should follow the correct native structure format", async () => {
-			// Setup
-			mockInputContent = fileContent
-			// Execute
-			const result = await executeReadFileTool({ maxReadFileLine: -1 })
-
-			// Verify using regex to check native structure
-			const nativeStructureRegex = new RegExp(`^File: ${testFilePath}\\nLines 1-5:\\n.*$`, "s")
-			expect(result).toMatch(nativeStructureRegex)
+	describe("image handling", () => {
+		beforeEach(() => {
+			mockedIsBinaryFile.mockResolvedValue(true)
+			mockedIsSupportedImageFormat.mockReturnValue(true)
 		})
 
-		it("should handle empty files correctly", async () => {
-			// Setup
-			mockedCountFileLines.mockResolvedValue(0)
+		it("should process image file when model supports images", async () => {
+			const mockTask = createMockTask({ supportsImages: true })
+			const callbacks = createMockCallbacks()
 
-			// Configure mockReadFileWithTokenBudget to return empty content
-			mockReadFileWithTokenBudget.mockResolvedValueOnce({
-				content: "",
-				tokenCount: 0,
-				lineCount: 0,
-				complete: true,
+			mockedValidateImageForProcessing.mockResolvedValue({
+				isValid: true,
+				sizeInMB: 0.5,
+			})
+			mockedProcessImageFile.mockResolvedValue({
+				dataUrl: "data:image/png;base64,abc123",
+				buffer: Buffer.from("test"),
+				sizeInKB: 512,
+				sizeInMB: 0.5,
+				notice: "Image processed successfully",
 			})
 
-			mockProvider.getState.mockResolvedValue({
-				maxReadFileLine: -1,
-				maxImageFileSize: 20,
-				maxTotalImageSize: 20,
-			}) // Allow up to 20MB per image and total size
-
-			// Execute
-			const result = await executeReadFileTool({ totalLines: 0 })
+			await readFileTool.execute({ path: "image.png" }, mockTask as any, callbacks)
 
-			// Verify native format for empty file
-			expect(result).toBe(`File: ${testFilePath}\nNote: File is empty`)
+			expect(mockedValidateImageForProcessing).toHaveBeenCalled()
+			expect(mockedProcessImageFile).toHaveBeenCalled()
+			expect(callbacks.pushToolResult).toHaveBeenCalled()
 		})
 
-		describe("Total Image Memory Limit", () => {
-			const testImages = [
-				{ path: "test/image1.png", sizeKB: 5120 }, // 5MB
-				{ path: "test/image2.jpg", sizeKB: 10240 }, // 10MB
-				{ path: "test/image3.gif", sizeKB: 8192 }, // 8MB
-			]
-
-			// Define imageBuffer for this test suite
-			const imageBuffer = Buffer.from(
-				"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg==",
-				"base64",
-			)
+		it("should skip image when model does not support images", async () => {
+			const mockTask = createMockTask({ supportsImages: false })
+			const callbacks = createMockCallbacks()
 
-			beforeEach(() => {
-				// CRITICAL: Reset fsPromises mocks to prevent cross-test contamination within this suite
-				fsPromises.stat.mockClear()
-				fsPromises.readFile.mockClear()
+			mockedValidateImageForProcessing.mockResolvedValue({
+				isValid: false,
+				reason: "unsupported_model",
+				notice: "Model does not support image processing",
 			})
 
-			async function executeReadMultipleImagesTool(imagePaths: string[]): Promise<ToolResponse | undefined> {
-				// Ensure image support is enabled before calling the tool
-				setImageSupport(mockCline, true)
-
-				const toolUse: ReadFileToolUse = {
-					type: "tool_use",
-					name: "read_file",
-					params: {},
-					partial: false,
-					nativeArgs: {
-						files: imagePaths.map((p) => ({ path: p, lineRanges: [] })),
-					},
-				}
-
-				let localResult: ToolResponse | undefined
-				await readFileTool.handle(mockCline, toolUse, {
-					askApproval: mockCline.ask,
-					handleError: vi.fn(),
-					pushToolResult: (result: ToolResponse) => {
-						localResult = result
-					},
-				})
-				// In multi-image scenarios, the result is pushed to pushToolResult, not returned directly.
-				// We need to check the mock's calls to get the result.
-				if (mockCline.pushToolResult.mock.calls.length > 0) {
-					return mockCline.pushToolResult.mock.calls[0][0]
-				}
-
-				return localResult
-			}
-
-			it("should allow multiple images under the total memory limit", async () => {
-				// Setup required mocks (don't clear all mocks - preserve API setup)
-				mockedIsBinaryFile.mockResolvedValue(true)
-				mockedCountFileLines.mockResolvedValue(0)
-				fsPromises.readFile.mockResolvedValue(
-					Buffer.from(
-						"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg==",
-						"base64",
-					),
-				)
-
-				// Setup mockProvider
-				mockProvider.getState.mockResolvedValue({
-					maxReadFileLine: -1,
-					maxImageFileSize: 20,
-					maxTotalImageSize: 20,
-				}) // Allow up to 20MB per image and total size
-
-				// Setup mockCline properties (preserve existing API)
-				mockCline.cwd = "/"
-				mockCline.task = "Test"
-				mockCline.providerRef = mockProvider
-				mockCline.rooIgnoreController = {
-					validateAccess: vi.fn().mockReturnValue(true),
-				}
-				mockCline.say = vi.fn().mockResolvedValue(undefined)
-				mockCline.ask = vi.fn().mockResolvedValue({ response: "yesButtonClicked" })
-				mockCline.presentAssistantMessage = vi.fn()
-				mockCline.handleError = vi.fn().mockResolvedValue(undefined)
-				mockCline.pushToolResult = vi.fn()
-				mockCline.fileContextTracker = {
-					trackFileContext: vi.fn().mockResolvedValue(undefined),
-				}
-				mockCline.recordToolUsage = vi.fn().mockReturnValue(undefined)
-				mockCline.recordToolError = vi.fn().mockReturnValue(undefined)
-				setImageSupport(mockCline, true)
-
-				// Setup - images that fit within 20MB limit
-				const smallImages = [
-					{ path: "test/small1.png", sizeKB: 2048 }, // 2MB
-					{ path: "test/small2.jpg", sizeKB: 3072 }, // 3MB
-					{ path: "test/small3.gif", sizeKB: 4096 }, // 4MB
-				] // Total: 9MB (under 20MB limit)
-
-				// Mock file stats for each image
-				fsPromises.stat = vi.fn().mockImplementation((filePath) => {
-					const normalizedFilePath = path.normalize(filePath.toString())
-					const image = smallImages.find((img) => normalizedFilePath.includes(path.normalize(img.path)))
-					return Promise.resolve({ size: (image?.sizeKB || 1024) * 1024, isDirectory: () => false })
-				})
-
-				// Mock path.resolve for each image
-				mockedPathResolve.mockImplementation((cwd, relPath) => `/${relPath}`)
-
-				// Execute
-				const result = await executeReadMultipleImagesTool(smallImages.map((img) => img.path))
-
-				// Verify all images were processed (should be a multi-part response)
-				expect(Array.isArray(result)).toBe(true)
-				const parts = result as any[]
-
-				// Should have text part and 3 image parts
-				const textPart = parts.find((p) => p.type === "text")?.text
-				const imageParts = parts.filter((p) => p.type === "image")
-
-				expect(textPart).toBeDefined()
-				expect(imageParts).toHaveLength(3)
-
-				// Verify no memory limit notices
-				expect(textPart).not.toContain("Total image memory would exceed")
-			})
+			await readFileTool.execute({ path: "image.png" }, mockTask as any, callbacks)
 
-			it("should skip images that would exceed the total memory limit", async () => {
-				// Setup required mocks (don't clear all mocks)
-				mockedIsBinaryFile.mockResolvedValue(true)
-				mockedCountFileLines.mockResolvedValue(0)
-				fsPromises.readFile.mockResolvedValue(
-					Buffer.from(
-						"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg==",
-						"base64",
-					),
-				)
-
-				// Setup mockProvider
-				mockProvider.getState.mockResolvedValue({
-					maxReadFileLine: -1,
-					maxImageFileSize: 15,
-					maxTotalImageSize: 20,
-				}) // Allow up to 15MB per image and 20MB total size
-
-				// Setup mockCline properties
-				mockCline.cwd = "/"
-				mockCline.task = "Test"
-				mockCline.providerRef = mockProvider
-				mockCline.rooIgnoreController = {
-					validateAccess: vi.fn().mockReturnValue(true),
-				}
-				mockCline.say = vi.fn().mockResolvedValue(undefined)
-				mockCline.ask = vi.fn().mockResolvedValue({ response: "yesButtonClicked" })
-				mockCline.presentAssistantMessage = vi.fn()
-				mockCline.handleError = vi.fn().mockResolvedValue(undefined)
-				mockCline.pushToolResult = vi.fn()
-				mockCline.fileContextTracker = {
-					trackFileContext: vi.fn().mockResolvedValue(undefined),
-				}
-				mockCline.recordToolUsage = vi.fn().mockReturnValue(undefined)
-				mockCline.recordToolError = vi.fn().mockReturnValue(undefined)
-				setImageSupport(mockCline, true)
-
-				// Setup - images where later ones would exceed 20MB total limit
-				// Each must be under 5MB per-file limit (5120KB)
-				const largeImages = [
-					{ path: "test/large1.png", sizeKB: 5017 }, // ~4.9MB
-					{ path: "test/large2.jpg", sizeKB: 5017 }, // ~4.9MB
-					{ path: "test/large3.gif", sizeKB: 5017 }, // ~4.9MB
-					{ path: "test/large4.png", sizeKB: 5017 }, // ~4.9MB
-					{ path: "test/large5.jpg", sizeKB: 5017 }, // ~4.9MB - This should be skipped (total would be ~24.5MB > 20MB)
-				]
-
-				// Mock file stats for each image
-				fsPromises.stat = vi.fn().mockImplementation((filePath) => {
-					const normalizedFilePath = path.normalize(filePath.toString())
-					const image = largeImages.find((img) => normalizedFilePath.includes(path.normalize(img.path)))
-					return Promise.resolve({ size: (image?.sizeKB || 1024) * 1024, isDirectory: () => false })
-				})
-
-				// Mock path.resolve for each image
-				mockedPathResolve.mockImplementation((cwd, relPath) => `/${relPath}`)
-
-				// Execute
-				const result = await executeReadMultipleImagesTool(largeImages.map((img) => img.path))
-
-				// Verify result structure - should be a mix of successful images and skipped notices
-				expect(Array.isArray(result)).toBe(true)
-				const parts = result as any[]
-
-				const textPart = Array.isArray(result) ? result.find((p) => p.type === "text")?.text : result
-				const imageParts = Array.isArray(result) ? result.filter((p) => p.type === "image") : []
-
-				expect(textPart).toBeDefined()
+			expect(mockedValidateImageForProcessing).toHaveBeenCalled()
+			expect(mockedProcessImageFile).not.toHaveBeenCalled()
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(
+				expect.stringContaining("Model does not support image processing"),
+			)
+		})
 
-				// Debug: Show what we actually got vs expected
-				if (imageParts.length !== 4) {
-					throw new Error(
-						`Expected 4 images, got ${imageParts.length}. Full result: ${JSON.stringify(result, null, 2)}. Text part: ${textPart}`,
-					)
-				}
-				expect(imageParts).toHaveLength(4) // First 4 images should be included (~19.6MB total)
+		it("should skip image when file exceeds size limit", async () => {
+			const mockTask = createMockTask({ supportsImages: true, maxImageFileSize: 1 })
+			const callbacks = createMockCallbacks()
 
-				// Verify memory limit notice for the fifth image
-				expect(textPart).toContain("Image skipped to avoid size limit (20MB)")
-				expect(textPart).toMatch(/Current: \d+(\.\d+)? MB/)
-				expect(textPart).toMatch(/this file: \d+(\.\d+)? MB/)
+			mockedValidateImageForProcessing.mockResolvedValue({
+				isValid: false,
+				reason: "size_limit",
+				notice: "Image file size (10 MB) exceeds the maximum allowed size (1 MB)",
 			})
 
-			it("should track memory usage correctly across multiple images", async () => {
-				// Setup mocks (don't clear all mocks)
-
-				// Setup required mocks
-				mockedIsBinaryFile.mockResolvedValue(true)
-				mockedCountFileLines.mockResolvedValue(0)
-				fsPromises.readFile.mockResolvedValue(
-					Buffer.from(
-						"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg==",
-						"base64",
-					),
-				)
-
-				// Setup mockProvider
-				mockProvider.getState.mockResolvedValue({
-					maxReadFileLine: -1,
-					maxImageFileSize: 15,
-					maxTotalImageSize: 20,
-				}) // Allow up to 15MB per image and 20MB total size
-
-				// Setup mockCline properties
-				mockCline.cwd = "/"
-				mockCline.task = "Test"
-				mockCline.providerRef = mockProvider
-				mockCline.rooIgnoreController = {
-					validateAccess: vi.fn().mockReturnValue(true),
-				}
-				mockCline.say = vi.fn().mockResolvedValue(undefined)
-				mockCline.ask = vi.fn().mockResolvedValue({ response: "yesButtonClicked" })
-				mockCline.presentAssistantMessage = vi.fn()
-				mockCline.handleError = vi.fn().mockResolvedValue(undefined)
-				mockCline.pushToolResult = vi.fn()
-				mockCline.fileContextTracker = {
-					trackFileContext: vi.fn().mockResolvedValue(undefined),
-				}
-				mockCline.recordToolUsage = vi.fn().mockReturnValue(undefined)
-				mockCline.recordToolError = vi.fn().mockReturnValue(undefined)
-				setImageSupport(mockCline, true)
-
-				// Setup - images that exactly reach the limit
-				const exactLimitImages = [
-					{ path: "test/exact1.png", sizeKB: 10240 }, // 10MB
-					{ path: "test/exact2.jpg", sizeKB: 10240 }, // 10MB - Total exactly 20MB
-					{ path: "test/exact3.gif", sizeKB: 1024 }, // 1MB - This should be skipped
-				]
+			await readFileTool.execute({ path: "large-image.png" }, mockTask as any, callbacks)
 
-				// Mock file stats with simpler logic
-				fsPromises.stat = vi.fn().mockImplementation((filePath) => {
-					const normalizedFilePath = path.normalize(filePath.toString())
-					const image = exactLimitImages.find((img) => normalizedFilePath.includes(path.normalize(img.path)))
-					if (image) {
-						return Promise.resolve({ size: image.sizeKB * 1024, isDirectory: () => false })
-					}
-					return Promise.resolve({ size: 1024 * 1024, isDirectory: () => false }) // Default 1MB
-				})
-
-				// Mock path.resolve
-				mockedPathResolve.mockImplementation((cwd, relPath) => `/${relPath}`)
-
-				// Execute
-				const result = await executeReadMultipleImagesTool(exactLimitImages.map((img) => img.path))
-
-				// Verify
-				const textPart = Array.isArray(result) ? result.find((p) => p.type === "text")?.text : result
-				const imageParts = Array.isArray(result) ? result.filter((p) => p.type === "image") : []
-
-				expect(imageParts).toHaveLength(2) // First 2 images should fit
-				expect(textPart).toContain("Image skipped to avoid size limit (20MB)")
-				expect(textPart).toMatch(/Current: \d+(\.\d+)? MB/)
-				expect(textPart).toMatch(/this file: \d+(\.\d+)? MB/)
-			})
+			expect(mockedProcessImageFile).not.toHaveBeenCalled()
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(
+				expect.stringContaining("exceeds the maximum allowed"),
+			)
+		})
 
-			it("should handle individual image size limit and total memory limit together", async () => {
-				// Setup mocks (don't clear all mocks)
-
-				// Setup required mocks
-				mockedIsBinaryFile.mockResolvedValue(true)
-				mockedCountFileLines.mockResolvedValue(0)
-				fsPromises.readFile.mockResolvedValue(
-					Buffer.from(
-						"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg==",
-						"base64",
-					),
-				)
-
-				// Setup mockProvider
-				mockProvider.getState.mockResolvedValue({
-					maxReadFileLine: -1,
-					maxImageFileSize: 20,
-					maxTotalImageSize: 20,
-				}) // Allow up to 20MB per image and total size
-
-				// Setup mockCline properties (complete setup)
-				mockCline.cwd = "/"
-				mockCline.task = "Test"
-				mockCline.providerRef = mockProvider
-				mockCline.rooIgnoreController = {
-					validateAccess: vi.fn().mockReturnValue(true),
-				}
-				mockCline.say = vi.fn().mockResolvedValue(undefined)
-				mockCline.ask = vi.fn().mockResolvedValue({ response: "yesButtonClicked" })
-				mockCline.presentAssistantMessage = vi.fn()
-				mockCline.handleError = vi.fn().mockResolvedValue(undefined)
-				mockCline.pushToolResult = vi.fn()
-				mockCline.fileContextTracker = {
-					trackFileContext: vi.fn().mockResolvedValue(undefined),
-				}
-				mockCline.recordToolUsage = vi.fn().mockReturnValue(undefined)
-				mockCline.recordToolError = vi.fn().mockReturnValue(undefined)
-				setImageSupport(mockCline, true)
-
-				// Setup - mix of images with individual size violations and total memory issues
-				const mixedImages = [
-					{ path: "test/ok.png", sizeKB: 3072 }, // 3MB - OK
-					{ path: "test/too-big.jpg", sizeKB: 6144 }, // 6MB - Exceeds individual 5MB limit
-					{ path: "test/ok2.gif", sizeKB: 4096 }, // 4MB - OK individually but might exceed total
-				]
+		it("should skip image when total memory limit exceeded", async () => {
+			const mockTask = createMockTask({ supportsImages: true, maxTotalImageSize: 5 })
+			const callbacks = createMockCallbacks()
 
-				// Mock file stats
-				fsPromises.stat = vi.fn().mockImplementation((filePath) => {
-					const fileName = path.basename(filePath)
-					const baseName = path.parse(fileName).name
-					const image = mixedImages.find((img) => img.path.includes(baseName))
-					return Promise.resolve({ size: (image?.sizeKB || 1024) * 1024, isDirectory: () => false })
-				})
-
-				// Mock provider state with 5MB individual limit
-				mockProvider.getState.mockResolvedValue({
-					maxReadFileLine: -1,
-					maxImageFileSize: 5,
-					maxTotalImageSize: 20,
-				})
-
-				// Mock path.resolve
-				mockedPathResolve.mockImplementation((cwd, relPath) => `/${relPath}`)
-
-				// Execute
-				const result = await executeReadMultipleImagesTool(mixedImages.map((img) => img.path))
-
-				// Verify
-				expect(Array.isArray(result)).toBe(true)
-				const parts = result as any[]
-
-				const textPart = parts.find((p) => p.type === "text")?.text
-				const imageParts = parts.filter((p) => p.type === "image")
-
-				// Should have 2 images (ok.png and ok2.gif)
-				expect(imageParts).toHaveLength(2)
-
-				// Should show individual size limit violation
-				expect(textPart).toMatch(
-					/Image file is too large \(\d+(\.\d+)? MB\)\. The maximum allowed size is 5 MB\./,
-				)
+			mockedValidateImageForProcessing.mockResolvedValue({
+				isValid: false,
+				reason: "memory_limit",
+				notice: "Skipping image: would exceed total memory limit",
 			})
 
-			it("should correctly calculate total memory and skip the last image", async () => {
-				// Setup
-				const testImages = [
-					{ path: "test/image1.png", sizeMB: 8 },
-					{ path: "test/image2.png", sizeMB: 8 },
-					{ path: "test/image3.png", sizeMB: 8 }, // This one should be skipped
-				]
-
-				mockProvider.getState.mockResolvedValue({
-					maxReadFileLine: -1,
-					maxImageFileSize: 10, // 10MB per image
-					maxTotalImageSize: 20, // 20MB total
-				})
-
-				mockedIsBinaryFile.mockResolvedValue(true)
-				mockedCountFileLines.mockResolvedValue(0)
-				mockedFsReadFile.mockResolvedValue(imageBuffer)
-
-				fsPromises.stat.mockImplementation(async (filePath) => {
-					const normalizedFilePath = path.normalize(filePath.toString())
-					const file = testImages.find((f) => normalizedFilePath.includes(path.normalize(f.path)))
-					if (file) {
-						return { size: file.sizeMB * 1024 * 1024, isDirectory: () => false }
-					}
-					return { size: 1024 * 1024, isDirectory: () => false } // Default 1MB
-				})
-
-				const imagePaths = testImages.map((img) => img.path)
-				const result = await executeReadMultipleImagesTool(imagePaths)
-
-				expect(Array.isArray(result)).toBe(true)
-				const parts = result as any[]
-				const textPart = parts.find((p) => p.type === "text")?.text
-				const imageParts = parts.filter((p) => p.type === "image")
-
-				expect(imageParts).toHaveLength(2) // First two images should be processed
-				expect(textPart).toContain("Image skipped to avoid size limit (20MB)")
-				expect(textPart).toMatch(/Current: \d+(\.\d+)? MB/)
-				expect(textPart).toMatch(/this file: \d+(\.\d+)? MB/)
-			})
+			await readFileTool.execute({ path: "another-image.png" }, mockTask as any, callbacks)
 
-			it("should reset total memory tracking for each tool invocation", async () => {
-				// Setup mocks (don't clear all mocks)
-
-				// Setup required mocks for first batch
-				mockedIsBinaryFile.mockResolvedValue(true)
-				mockedCountFileLines.mockResolvedValue(0)
-				fsPromises.readFile.mockResolvedValue(
-					Buffer.from(
-						"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg==",
-						"base64",
-					),
-				)
-
-				// Setup mockProvider
-				mockProvider.getState.mockResolvedValue({
-					maxReadFileLine: -1,
-					maxImageFileSize: 20,
-					maxTotalImageSize: 20,
-				})
-
-				// Setup mockCline properties (complete setup)
-				mockCline.cwd = "/"
-				mockCline.task = "Test"
-				mockCline.providerRef = mockProvider
-				mockCline.rooIgnoreController = {
-					validateAccess: vi.fn().mockReturnValue(true),
-				}
-				mockCline.say = vi.fn().mockResolvedValue(undefined)
-				mockCline.ask = vi.fn().mockResolvedValue({ response: "yesButtonClicked" })
-				mockCline.presentAssistantMessage = vi.fn()
-				mockCline.handleError = vi.fn().mockResolvedValue(undefined)
-				mockCline.pushToolResult = vi.fn()
-				mockCline.fileContextTracker = {
-					trackFileContext: vi.fn().mockResolvedValue(undefined),
-				}
-				mockCline.recordToolUsage = vi.fn().mockReturnValue(undefined)
-				mockCline.recordToolError = vi.fn().mockReturnValue(undefined)
-				setImageSupport(mockCline, true)
-
-				// Setup - first call with images that use memory
-				const firstBatch = [{ path: "test/first.png", sizeKB: 10240 }] // 10MB
-
-				fsPromises.stat = vi.fn().mockResolvedValue({ size: 10240 * 1024, isDirectory: () => false })
-				mockedPathResolve.mockImplementation((cwd, relPath) => `/${relPath}`)
-
-				// Execute first batch
-				await executeReadMultipleImagesTool(firstBatch.map((img) => img.path))
-
-				// Setup second batch (don't clear all mocks)
-				mockedIsBinaryFile.mockResolvedValue(true)
-				mockedCountFileLines.mockResolvedValue(0)
-				fsPromises.readFile.mockResolvedValue(
-					Buffer.from(
-						"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg==",
-						"base64",
-					),
-				)
-				mockProvider.getState.mockResolvedValue({
-					maxReadFileLine: -1,
-					maxImageFileSize: 20,
-					maxTotalImageSize: 20,
-				})
-
-				// Reset path resolving for second batch
-				mockedPathResolve.mockClear()
-
-				// Re-setup mockCline properties for second batch (complete setup)
-				mockCline.cwd = "/"
-				mockCline.task = "Test"
-				mockCline.providerRef = mockProvider
-				mockCline.rooIgnoreController = {
-					validateAccess: vi.fn().mockReturnValue(true),
-				}
-				mockCline.say = vi.fn().mockResolvedValue(undefined)
-				mockCline.ask = vi.fn().mockResolvedValue({ response: "yesButtonClicked" })
-				mockCline.presentAssistantMessage = vi.fn()
-				mockCline.handleError = vi.fn().mockResolvedValue(undefined)
-				mockCline.pushToolResult = vi.fn()
-				mockCline.fileContextTracker = {
-					trackFileContext: vi.fn().mockResolvedValue(undefined),
-				}
-				mockCline.recordToolUsage = vi.fn().mockReturnValue(undefined)
-				mockCline.recordToolError = vi.fn().mockReturnValue(undefined)
-				setImageSupport(mockCline, true)
-
-				const secondBatch = [{ path: "test/second.png", sizeKB: 15360 }] // 15MB
-
-				// Clear and reset file system mocks for second batch
-				fsPromises.stat.mockClear()
-				fsPromises.readFile.mockClear()
-				mockedIsBinaryFile.mockClear()
-				mockedCountFileLines.mockClear()
-
-				// Reset mocks for second batch
-				fsPromises.stat = vi.fn().mockResolvedValue({ size: 15360 * 1024, isDirectory: () => false })
-				fsPromises.readFile.mockResolvedValue(
-					Buffer.from(
-						"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg==",
-						"base64",
-					),
-				)
-				mockedIsBinaryFile.mockResolvedValue(true)
-				mockedCountFileLines.mockResolvedValue(0)
-				mockedPathResolve.mockImplementation((cwd, relPath) => `/${relPath}`)
-
-				// Execute second batch
-				const result = await executeReadMultipleImagesTool(secondBatch.map((img) => img.path))
-
-				// Verify second batch is processed successfully (memory tracking was reset)
-				expect(Array.isArray(result)).toBe(true)
-				const parts = result as any[]
-				const imageParts = parts.filter((p) => p.type === "image")
-
-				expect(imageParts).toHaveLength(1) // Second image should be processed
-			})
+			expect(mockedProcessImageFile).not.toHaveBeenCalled()
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(expect.stringContaining("would exceed total memory"))
+		})
 
-			it("should handle a folder with many images just under the individual size limit", async () => {
-				// Setup - Create many images that are each just under the 5MB individual limit
-				// but together approach the 20MB total limit
-				const manyImages = [
-					{ path: "test/img1.png", sizeKB: 4900 }, // 4.78MB
-					{ path: "test/img2.png", sizeKB: 4900 }, // 4.78MB
-					{ path: "test/img3.png", sizeKB: 4900 }, // 4.78MB
-					{ path: "test/img4.png", sizeKB: 4900 }, // 4.78MB
-					{ path: "test/img5.png", sizeKB: 4900 }, // 4.78MB - This should be skipped (total would be ~23.9MB)
-				]
+		it("should handle image read errors gracefully", async () => {
+			const mockTask = createMockTask({ supportsImages: true })
+			const callbacks = createMockCallbacks()
 
-				// Setup mocks
-				mockedIsBinaryFile.mockResolvedValue(true)
-				mockedCountFileLines.mockResolvedValue(0)
-				fsPromises.readFile.mockResolvedValue(imageBuffer)
-
-				// Setup provider with 5MB individual limit and 20MB total limit
-				mockProvider.getState.mockResolvedValue({
-					maxReadFileLine: -1,
-					maxImageFileSize: 5,
-					maxTotalImageSize: 20,
-				})
-
-				// Mock file stats for each image
-				fsPromises.stat = vi.fn().mockImplementation((filePath) => {
-					const normalizedFilePath = path.normalize(filePath.toString())
-					const image = manyImages.find((img) => normalizedFilePath.includes(path.normalize(img.path)))
-					return Promise.resolve({ size: (image?.sizeKB || 1024) * 1024, isDirectory: () => false })
-				})
-
-				// Mock path.resolve
-				mockedPathResolve.mockImplementation((cwd, relPath) => `/${relPath}`)
-
-				// Execute
-				const result = await executeReadMultipleImagesTool(manyImages.map((img) => img.path))
-
-				// Verify
-				expect(Array.isArray(result)).toBe(true)
-				const parts = result as any[]
-				const textPart = parts.find((p) => p.type === "text")?.text
-				const imageParts = parts.filter((p) => p.type === "image")
-
-				// Should process first 4 images (total ~19.12MB, under 20MB limit)
-				expect(imageParts).toHaveLength(4)
-
-				// Should show memory limit notice for the 5th image
-				expect(textPart).toContain("Image skipped to avoid size limit (20MB)")
-				expect(textPart).toContain("test/img5.png")
-
-				// Verify memory tracking worked correctly
-				// The notice should show current memory usage around 20MB (4 * 4900KB ≈ 19.14MB, displayed as 20.1MB)
-				expect(textPart).toMatch(/Current: \d+(\.\d+)? MB/)
+			mockedValidateImageForProcessing.mockResolvedValue({
+				isValid: true,
+				sizeInMB: 0.5,
 			})
+			mockedProcessImageFile.mockRejectedValue(new Error("Failed to read image"))
 
-			it("should reset memory tracking between separate tool invocations more explicitly", async () => {
-				// This test verifies that totalImageMemoryUsed is reset between calls
-				// by making two separate tool invocations and ensuring the second one
-				// starts with fresh memory tracking
-
-				// Setup mocks
-				mockedIsBinaryFile.mockResolvedValue(true)
-				mockedCountFileLines.mockResolvedValue(0)
-				fsPromises.readFile.mockResolvedValue(imageBuffer)
-
-				// Setup provider
-				mockProvider.getState.mockResolvedValue({
-					maxReadFileLine: -1,
-					maxImageFileSize: 20,
-					maxTotalImageSize: 20,
-				})
+			await readFileTool.execute({ path: "corrupt.png" }, mockTask as any, callbacks)
 
-				// First invocation - use 15MB of memory
-				const firstBatch = [{ path: "test/large1.png", sizeKB: 15360 }] // 15MB
-
-				fsPromises.stat = vi.fn().mockResolvedValue({ size: 15360 * 1024, isDirectory: () => false })
-				mockedPathResolve.mockImplementation((cwd, relPath) => `/${relPath}`)
-
-				// Execute first batch
-				const result1 = await executeReadMultipleImagesTool(firstBatch.map((img) => img.path))
+			expect(mockTask.say).toHaveBeenCalledWith("error", expect.stringContaining("Error reading image file"))
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(expect.stringContaining("Error"))
+		})
+	})
 
-				// Verify first batch processed successfully
-				expect(Array.isArray(result1)).toBe(true)
-				const parts1 = result1 as any[]
-				const imageParts1 = parts1.filter((p) => p.type === "image")
-				expect(imageParts1).toHaveLength(1)
+	describe("binary file handling", () => {
+		beforeEach(() => {
+			mockedIsBinaryFile.mockResolvedValue(true)
+			mockedIsSupportedImageFormat.mockReturnValue(false)
+		})
 
-				// Second invocation - should start with 0 memory used, not 15MB
-				// If memory tracking wasn't reset, this 18MB image would be rejected
-				const secondBatch = [{ path: "test/large2.png", sizeKB: 18432 }] // 18MB
+		it("should extract text from PDF files", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-				// Reset mocks for second invocation
-				fsPromises.stat.mockClear()
-				fsPromises.readFile.mockClear()
-				mockedPathResolve.mockClear()
+			mockedExtractTextFromFile.mockResolvedValue("PDF content here")
 
-				fsPromises.stat = vi.fn().mockResolvedValue({ size: 18432 * 1024, isDirectory: () => false })
-				fsPromises.readFile.mockResolvedValue(imageBuffer)
-				mockedPathResolve.mockImplementation((cwd, relPath) => `/${relPath}`)
+			await readFileTool.execute({ path: "document.pdf" }, mockTask as any, callbacks)
 
-				// Execute second batch
-				const result2 = await executeReadMultipleImagesTool(secondBatch.map((img) => img.path))
+			expect(mockedExtractTextFromFile).toHaveBeenCalled()
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(expect.stringContaining("PDF content here"))
+		})
 
-				// Verify second batch processed successfully
-				expect(Array.isArray(result2)).toBe(true)
-				const parts2 = result2 as any[]
-				const imageParts2 = parts2.filter((p) => p.type === "image")
-				const textPart2 = parts2.find((p) => p.type === "text")?.text
+		it("should extract text from DOCX files", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-				// The 18MB image should be processed successfully because memory was reset
-				expect(imageParts2).toHaveLength(1)
+			mockedExtractTextFromFile.mockResolvedValue("DOCX content here")
 
-				// Should NOT contain any memory limit notices
-				expect(textPart2).not.toContain("Image skipped to avoid memory limit")
+			await readFileTool.execute({ path: "document.docx" }, mockTask as any, callbacks)
 
-				// This proves memory tracking was reset between invocations
-			})
+			expect(mockedExtractTextFromFile).toHaveBeenCalled()
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(expect.stringContaining("DOCX content here"))
 		})
-	})
 
-	describe("Error Handling Tests", () => {
-		it("should include error in output for invalid path", async () => {
-			// Setup - missing path parameter
-			const toolUse: ReadFileToolUse = {
-				type: "tool_use",
-				name: "read_file",
-				params: {},
-				partial: false,
-				nativeArgs: {
-					files: [],
-				},
-			}
-
-			// Execute the tool
-			await readFileTool.handle(mockCline, toolUse, {
-				askApproval: mockCline.ask,
-				handleError: vi.fn(),
-				pushToolResult: (result: ToolResponse) => {
-					toolResult = result
-				},
-			})
+		it("should handle unsupported binary formats", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-			// Verify - native format for error
-			expect(toolResult).toBe(`Error: Missing required parameter`)
-		})
+			// Supported formats are limited to .pdf/.docx, so .exe falls through as unsupported
+			vi.mocked(getSupportedBinaryFormats).mockReturnValue([".pdf", ".docx"])
 
-		it("should include error for RooIgnore error", async () => {
-			// Execute - skip addLineNumbers check as it returns early with an error
-			const result = await executeReadFileTool({ validateAccess: false })
+			await readFileTool.execute({ path: "program.exe" }, mockTask as any, callbacks)
 
-			// Verify - native format for error
-			expect(result).toBe(
-				`File: ${testFilePath}\nError: Access to ${testFilePath} is blocked by the .rooignore file settings. You must try to continue in the task without using this file, or ask the user to update the .rooignore file.`,
-			)
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(expect.stringContaining("Binary file"))
 		})
 
-		it("should provide helpful error when trying to read a directory", async () => {
-			// Setup - mock fsPromises.stat to indicate the path is a directory
-			const dirPath = "test/my-directory"
-			const absoluteDirPath = "/test/my-directory"
+		it("should handle extraction errors gracefully", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-			mockedPathResolve.mockReturnValue(absoluteDirPath)
+			mockedExtractTextFromFile.mockRejectedValue(new Error("Extraction failed"))
 
-			// Mock fs/promises stat to return directory
-			fsPromises.stat.mockResolvedValue({
-				isDirectory: () => true,
-				isFile: () => false,
-				isSymbolicLink: () => false,
-			} as any)
-
-			// Mock isBinaryFile won't be called since we check directory first
-			mockedIsBinaryFile.mockResolvedValue(false)
+			await readFileTool.execute({ path: "corrupt.pdf" }, mockTask as any, callbacks)
 
-			// Execute
-			const result = await executeReadFileTool({ filePath: dirPath })
-
-			// Verify - native format for error
-			expect(result).toContain(`File: ${dirPath}`)
-			expect(result).toContain(`Error: Error reading file: Cannot read '${dirPath}' because it is a directory`)
-			expect(result).toContain("use the list_files tool instead")
-
-			// Verify that task.say was called with the error
-			expect(mockCline.say).toHaveBeenCalledWith("error", expect.stringContaining("Cannot read"))
-			expect(mockCline.say).toHaveBeenCalledWith("error", expect.stringContaining("is a directory"))
-			expect(mockCline.say).toHaveBeenCalledWith("error", expect.stringContaining("list_files tool"))
+			expect(mockTask.say).toHaveBeenCalledWith("error", expect.stringContaining("Error extracting text"))
+			expect(mockTask.didToolFailInCurrentTurn).toBe(true)
 		})
 	})
-})
 
-describe("read_file tool with image support", () => {
-	const testImagePath = "test/image.png"
-	const absoluteImagePath = "/test/image.png"
-	const base64ImageData =
-		"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg=="
-	const imageBuffer = Buffer.from(base64ImageData, "base64")
+	describe("text file processing", () => {
+		beforeEach(() => {
+			mockedIsBinaryFile.mockResolvedValue(false)
+		})
 
-	const mockedCountFileLines = vi.mocked(countFileLines)
-	const mockedIsBinaryFile = vi.mocked(isBinaryFile)
-	const mockedPathResolve = vi.mocked(path.resolve)
-	const mockedFsReadFile = vi.mocked(fsPromises.readFile)
-	const mockedExtractTextFromFile = vi.mocked(extractTextFromFile)
+		it("should read text file with slice mode (default)", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-	let localMockCline: any
-	let localMockProvider: any
-	let toolResult: ToolResponse | undefined
+			const content = "line 1\nline 2\nline 3"
+			mockedFsReadFile.mockResolvedValue(Buffer.from(content))
+			mockedReadWithSlice.mockReturnValue({
+				content: "1 | line 1\n2 | line 2\n3 | line 3",
+				returnedLines: 3,
+				totalLines: 3,
+				wasTruncated: false,
+				includedRanges: [[1, 3]],
+			})
 
-	beforeEach(() => {
-		// Clear specific mocks (not all mocks to preserve shared state)
-		mockedPathResolve.mockClear()
-		mockedIsBinaryFile.mockClear()
-		mockedCountFileLines.mockClear()
-		mockedFsReadFile.mockClear()
-		mockedExtractTextFromFile.mockClear()
-		toolResultMock.mockClear()
-
-		// CRITICAL: Reset fsPromises.stat to prevent cross-test contamination
-		fsPromises.stat.mockClear()
-		fsPromises.stat.mockResolvedValue({
-			size: 1024,
-			isDirectory: () => false,
-			isFile: () => true,
-			isSymbolicLink: () => false,
-		} as any)
-
-		// Use shared mock setup function with local variables
-		const mocks = createMockCline()
-		localMockCline = mocks.mockCline
-		localMockProvider = mocks.mockProvider
-
-		// CRITICAL: Explicitly ensure image support is enabled for all tests in this suite
-		setImageSupport(localMockCline, true)
-
-		mockedPathResolve.mockReturnValue(absoluteImagePath)
-		mockedIsBinaryFile.mockResolvedValue(true)
-		mockedCountFileLines.mockResolvedValue(0)
-		mockedFsReadFile.mockResolvedValue(imageBuffer)
-
-		// Setup mock provider with default maxReadFileLine
-		localMockProvider.getState.mockResolvedValue({ maxReadFileLine: -1 })
-
-		toolResult = undefined
-	})
+			await readFileTool.execute({ path: "test.ts" }, mockTask as any, callbacks)
 
-	async function executeReadImageTool(imagePath: string = testImagePath): Promise<ToolResponse | undefined> {
-		const toolUse: ReadFileToolUse = {
-			type: "tool_use",
-			name: "read_file",
-			params: {},
-			partial: false,
-			nativeArgs: {
-				files: [{ path: imagePath, lineRanges: [] }],
-			},
-		}
-
-		// Debug: Check if mock is working
-		console.log("Mock API:", localMockCline.api)
-		console.log("Supports images:", localMockCline.api?.getModel?.()?.info?.supportsImages)
-
-		await readFileTool.handle(localMockCline, toolUse, {
-			askApproval: localMockCline.ask,
-			handleError: vi.fn(),
-			pushToolResult: (result: ToolResponse) => {
-				toolResult = result
-			},
+			expect(mockedReadWithSlice).toHaveBeenCalled()
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(expect.stringContaining("line 1"))
 		})
 
-		console.log("Result type:", Array.isArray(toolResult) ? "array" : typeof toolResult)
-		console.log("Result:", toolResult)
-
-		return toolResult
-	}
+		it("should read text file with offset and limit", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-	describe("Image Format Detection", () => {
-		it.each([
-			[".png", "image.png", "image/png"],
-			[".jpg", "photo.jpg", "image/jpeg"],
-			[".jpeg", "picture.jpeg", "image/jpeg"],
-			[".gif", "animation.gif", "image/gif"],
-			[".bmp", "bitmap.bmp", "image/bmp"],
-			[".svg", "vector.svg", "image/svg+xml"],
-			[".webp", "modern.webp", "image/webp"],
-			[".ico", "favicon.ico", "image/x-icon"],
-			[".avif", "new-format.avif", "image/avif"],
-		])("should detect %s as an image format", async (ext, filename, expectedMimeType) => {
-			// Setup
-			const imagePath = `test/${filename}`
-			const absolutePath = `/test/${filename}`
-			mockedPathResolve.mockReturnValue(absolutePath)
-
-			// Ensure API mock supports images
-			setImageSupport(localMockCline, true)
-
-			// Execute
-			const result = await executeReadImageTool(imagePath)
-
-			// Verify result is a multi-part response
-			expect(Array.isArray(result)).toBe(true)
-			const textPart = (result as any[]).find((p) => p.type === "text")?.text
-			const imagePart = (result as any[]).find((p) => p.type === "image")
-
-			// Verify text part - native format
-			expect(textPart).toContain(`File: ${imagePath}`)
-			expect(textPart).not.toContain("<image_data>")
-			expect(textPart).toContain(`Note: Image file`)
-
-			// Verify image part
-			expect(imagePart).toBeDefined()
-			expect(imagePart.source.media_type).toBe(expectedMimeType)
-			expect(imagePart.source.data).toBe(base64ImageData)
-		})
-	})
+			mockedFsReadFile.mockResolvedValue(Buffer.from("line 1\nline 2\nline 3\nline 4\nline 5"))
+			mockedReadWithSlice.mockReturnValue({
+				content: "2 | line 2\n3 | line 3",
+				returnedLines: 2,
+				totalLines: 5,
+				wasTruncated: true,
+				includedRanges: [[2, 3]],
+			})
 
-	describe("Image Reading Functionality", () => {
-		it("should read image file and return a multi-part response", async () => {
-			// Execute
-			const result = await executeReadImageTool()
-
-			// Verify result is a multi-part response
-			expect(Array.isArray(result)).toBe(true)
-			const textPart = (result as any[]).find((p) => p.type === "text")?.text
-			const imagePart = (result as any[]).find((p) => p.type === "image")
-
-			// Verify text part - native format
-			expect(textPart).toContain(`File: ${testImagePath}`)
-			expect(textPart).not.toContain(`<image_data>`)
-			expect(textPart).toContain(`Note: Image file`)
-
-			// Verify image part
-			expect(imagePart).toBeDefined()
-			expect(imagePart.source.media_type).toBe("image/png")
-			expect(imagePart.source.data).toBe(base64ImageData)
-		})
+			await readFileTool.execute(
+				{ path: "test.ts", mode: "slice", offset: 2, limit: 2 },
+				mockTask as any,
+				callbacks,
+			)
 
-		it("should call formatResponse.toolResult with text and image data", async () => {
-			// Execute
-			await executeReadImageTool()
-
-			// Verify toolResultMock was called correctly
-			expect(toolResultMock).toHaveBeenCalledTimes(1)
-			const callArgs = toolResultMock.mock.calls[0]
-			const textArg = callArgs[0]
-			const imagesArg = callArgs[1]
-
-			// Native format
-			expect(textArg).toContain(`File: ${testImagePath}`)
-			expect(imagesArg).toBeDefined()
-			expect(imagesArg).toBeInstanceOf(Array)
-			expect(imagesArg!.length).toBe(1)
-			expect(imagesArg![0]).toBe(`data:image/png;base64,${base64ImageData}`)
+			expect(mockedReadWithSlice).toHaveBeenCalledWith(expect.any(String), 1, 2) // offset converted to 0-based
 		})
 
-		it("should handle large image files", async () => {
-			// Setup - simulate a large image
-			const largeBase64 = "A".repeat(1000000) // 1MB of base64 data
-			const largeBuffer = Buffer.from(largeBase64, "base64")
-			mockedFsReadFile.mockResolvedValue(largeBuffer)
-
-			// Execute
-			const result = await executeReadImageTool()
-
-			// Verify it still works with large data
-			expect(Array.isArray(result)).toBe(true)
-			const imagePart = (result as any[]).find((p) => p.type === "image")
-			expect(imagePart).toBeDefined()
-			expect(imagePart.source.media_type).toBe("image/png")
-			expect(imagePart.source.data).toBe(largeBase64)
-		})
+		it("should read text file with indentation mode", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-		it("should exclude images when model does not support images", async () => {
-			// Setup - mock API handler that doesn't support images
-			setImageSupport(localMockCline, false)
+			const content = "class Foo {\n  method() {\n    return 42\n  }\n}"
+			mockedFsReadFile.mockResolvedValue(Buffer.from(content))
+			mockedReadWithIndentation.mockReturnValue({
+				content: "1 | class Foo {\n2 |   method() {\n3 |     return 42\n4 |   }\n5 | }",
+				returnedLines: 5,
+				totalLines: 5,
+				wasTruncated: false,
+				includedRanges: [[1, 5]],
+			})
 
-			// Execute
-			const result = await executeReadImageTool()
+			await readFileTool.execute(
+				{
+					path: "test.ts",
+					mode: "indentation",
+					indentation: { anchor_line: 3 },
+				},
+				mockTask as any,
+				callbacks,
+			)
 
-			// When images are not supported, the tool should return just text (not call formatResponse.toolResult)
-			expect(toolResultMock).not.toHaveBeenCalled()
-			expect(typeof result).toBe("string")
-			// Native format
-			expect(result).toContain(`File: ${testImagePath}`)
-			expect(result).toContain(`Note: Image file`)
+			expect(mockedReadWithIndentation).toHaveBeenCalledWith(
+				content,
+				expect.objectContaining({
+					anchorLine: 3,
+				}),
+			)
 		})
 
-		it("should include images when model supports images", async () => {
-			// Setup - mock API handler that supports images
-			setImageSupport(localMockCline, true)
-
-			// Execute
-			const result = await executeReadImageTool()
-
-			// Verify toolResultMock was called with images
-			expect(toolResultMock).toHaveBeenCalledTimes(1)
-			const callArgs = toolResultMock.mock.calls[0]
-			const textArg = callArgs[0]
-			const imagesArg = callArgs[1]
-
-			// Native format
-			expect(textArg).toContain(`File: ${testImagePath}`)
-			expect(imagesArg).toBeDefined() // Images should be included
-			expect(imagesArg).toBeInstanceOf(Array)
-			expect(imagesArg!.length).toBe(1)
-			expect(imagesArg![0]).toBe(`data:image/png;base64,${base64ImageData}`)
-		})
+		it("should show truncation notice when content is truncated", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-		it("should handle undefined supportsImages gracefully", async () => {
-			// Setup - mock API handler with undefined supportsImages
-			setImageSupport(localMockCline, undefined)
+			mockedFsReadFile.mockResolvedValue(Buffer.from("lots of content..."))
+			mockedReadWithSlice.mockReturnValue({
+				content: "1 | truncated content",
+				returnedLines: 100,
+				totalLines: 5000,
+				wasTruncated: true,
+				includedRanges: [[1, 100]],
+			})
 
-			// Execute
-			const result = await executeReadImageTool()
+			await readFileTool.execute({ path: "large.ts" }, mockTask as any, callbacks)
 
-			// When supportsImages is undefined, should default to false and return just text
-			expect(toolResultMock).not.toHaveBeenCalled()
-			expect(typeof result).toBe("string")
-			// Native format
-			expect(result).toContain(`File: ${testImagePath}`)
-			expect(result).toContain(`Note: Image file`)
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(expect.stringContaining("truncated"))
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(expect.stringContaining("To read more"))
 		})
 
-		it("should handle errors when reading image files", async () => {
-			// Setup - simulate read error
-			mockedFsReadFile.mockRejectedValue(new Error("Failed to read image"))
-
-			// Execute
-			const toolUse: ReadFileToolUse = {
-				type: "tool_use",
-				name: "read_file",
-				params: {},
-				partial: false,
-				nativeArgs: {
-					files: [{ path: testImagePath, lineRanges: [] }],
-				},
-			}
+		it("should handle empty files", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-			await readFileTool.handle(localMockCline, toolUse, {
-				askApproval: localMockCline.ask,
-				handleError: vi.fn(),
-				pushToolResult: (result: ToolResponse) => {
-					toolResult = result
-				},
+			mockedFsReadFile.mockResolvedValue(Buffer.from(""))
+			mockedReadWithSlice.mockReturnValue({
+				content: "",
+				returnedLines: 0,
+				totalLines: 0,
+				wasTruncated: false,
+				includedRanges: [],
 			})
 
-			// Verify error handling - native format
-			expect(toolResult).toContain("Error: Error reading image file: Failed to read image")
-			// Verify that say was called to show error to user
-			expect(localMockCline.say).toHaveBeenCalledWith("error", expect.stringContaining("Failed to read image"))
+			await readFileTool.execute({ path: "empty.ts" }, mockTask as any, callbacks)
+
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(expect.stringContaining("empty"))
 		})
 	})
 
-	describe("Binary File Handling", () => {
-		it("should not treat non-image binary files as images", async () => {
-			// Setup
-			const binaryPath = "test/document.pdf"
-			const absolutePath = "/test/document.pdf"
-			mockedPathResolve.mockReturnValue(absolutePath)
-			mockedExtractTextFromFile.mockResolvedValue("PDF content extracted")
-
-			// Execute
-			const result = await executeReadImageTool(binaryPath)
-
-			// Verify it uses extractTextFromFile instead
-			expect(result).not.toContain("<image_data>")
-			// Make the test platform-agnostic by checking the call was made (path normalization can vary)
-			expect(mockedExtractTextFromFile).toHaveBeenCalledTimes(1)
-			const callArgs = mockedExtractTextFromFile.mock.calls[0]
-			expect(callArgs[0]).toMatch(/[\\\/]test[\\\/]document\.pdf$/)
-		})
+	describe("approval flow", () => {
+		it("should approve file read when user clicks yes", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-		it("should handle unknown binary formats", async () => {
-			// Setup
-			const binaryPath = "test/unknown.bin"
-			const absolutePath = "/test/unknown.bin"
-			mockedPathResolve.mockReturnValue(absolutePath)
-			mockedExtractTextFromFile.mockResolvedValue("")
+			mockTask.ask.mockResolvedValue({ response: "yesButtonClicked", text: undefined, images: undefined })
 
-			// Execute
-			const result = await executeReadImageTool(binaryPath)
+			await readFileTool.execute({ path: "test.ts" }, mockTask as any, callbacks)
 
-			// Verify - native format for binary files
-			expect(result).not.toContain("<image_data>")
-			expect(result).toContain("Binary file (bin)")
+			expect(mockTask.ask).toHaveBeenCalledWith("tool", expect.any(String), false)
+			expect(mockTask.didRejectTool).toBe(false)
 		})
-	})
 
-	describe("Edge Cases", () => {
-		it("should handle case-insensitive image extensions", async () => {
-			// Test uppercase extensions
-			const uppercasePath = "test/IMAGE.PNG"
-			const absolutePath = "/test/IMAGE.PNG"
-			mockedPathResolve.mockReturnValue(absolutePath)
-
-			// Execute
-			const result = await executeReadImageTool(uppercasePath)
-
-			// Verify
-			expect(Array.isArray(result)).toBe(true)
-			const imagePart = (result as any[]).find((p) => p.type === "image")
-			expect(imagePart).toBeDefined()
-			expect(imagePart.source.media_type).toBe("image/png")
-		})
+		it("should deny file read when user clicks no", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-		it("should handle files with multiple dots in name", async () => {
-			// Setup
-			const complexPath = "test/my.photo.backup.png"
-			const absolutePath = "/test/my.photo.backup.png"
-			mockedPathResolve.mockReturnValue(absolutePath)
+			mockTask.ask.mockResolvedValue({ response: "noButtonClicked", text: undefined, images: undefined })
 
-			// Execute
-			const result = await executeReadImageTool(complexPath)
+			await readFileTool.execute({ path: "test.ts" }, mockTask as any, callbacks)
 
-			// Verify
-			expect(Array.isArray(result)).toBe(true)
-			const imagePart = (result as any[]).find((p) => p.type === "image")
-			expect(imagePart).toBeDefined()
-			expect(imagePart.source.media_type).toBe("image/png")
+			expect(mockTask.didRejectTool).toBe(true)
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(expect.stringContaining("Denied by user"))
 		})
 
-		it("should handle empty image files", async () => {
-			// Setup - empty buffer
-			mockedFsReadFile.mockResolvedValue(Buffer.from(""))
+		it("should include user feedback when provided with approval", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
+
+			mockTask.ask.mockResolvedValue({
+				response: "yesButtonClicked",
+				text: "Please be careful with this file",
+				images: undefined,
+			})
+			mockedFsReadFile.mockResolvedValue(Buffer.from("content"))
+			mockedReadWithSlice.mockReturnValue({
+				content: "1 | content",
+				returnedLines: 1,
+				totalLines: 1,
+				wasTruncated: false,
+				includedRanges: [[1, 1]],
+			})
 
-			// Execute
-			const result = await executeReadImageTool()
+			await readFileTool.execute({ path: "test.ts" }, mockTask as any, callbacks)
 
-			// Verify - should still create valid data URL
-			expect(Array.isArray(result)).toBe(true)
-			const imagePart = (result as any[]).find((p) => p.type === "image")
-			expect(imagePart).toBeDefined()
-			expect(imagePart.source.media_type).toBe("image/png")
-			expect(imagePart.source.data).toBe("")
+			expect(mockTask.say).toHaveBeenCalledWith("user_feedback", "Please be careful with this file", undefined)
+			expect(formatResponse.toolApprovedWithFeedback).toHaveBeenCalledWith("Please be careful with this file")
 		})
-	})
-})
-
-describe("read_file tool concurrent file reads limit", () => {
-	const mockedCountFileLines = vi.mocked(countFileLines)
-	const mockedIsBinaryFile = vi.mocked(isBinaryFile)
-	const mockedPathResolve = vi.mocked(path.resolve)
 
-	let mockCline: any
-	let mockProvider: any
-	let toolResult: ToolResponse | undefined
+		it("should include user feedback when provided with denial", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-	beforeEach(() => {
-		// Clear specific mocks
-		mockedCountFileLines.mockClear()
-		mockedIsBinaryFile.mockClear()
-		mockedPathResolve.mockClear()
-		addLineNumbersMock.mockClear()
-		toolResultMock.mockClear()
-
-		// Use shared mock setup function
-		const mocks = createMockCline()
-		mockCline = mocks.mockCline
-		mockProvider = mocks.mockProvider
-
-		// Disable image support for these tests
-		setImageSupport(mockCline, false)
-
-		mockedPathResolve.mockImplementation((cwd, relPath) => `/${relPath}`)
-		mockedIsBinaryFile.mockResolvedValue(false)
-		mockedCountFileLines.mockResolvedValue(10)
+			mockTask.ask.mockResolvedValue({
+				response: "noButtonClicked",
+				text: "This file contains secrets",
+				images: undefined,
+			})
 
-		// Mock fsPromises.stat to return a file (not directory) by default
-		fsPromises.stat.mockResolvedValue({
-			isDirectory: () => false,
-			isFile: () => true,
-			isSymbolicLink: () => false,
-		} as any)
+			await readFileTool.execute({ path: "secrets.env" }, mockTask as any, callbacks)
 
-		toolResult = undefined
+			expect(mockTask.say).toHaveBeenCalledWith("user_feedback", "This file contains secrets", undefined)
+			expect(formatResponse.toolDeniedWithFeedback).toHaveBeenCalledWith("This file contains secrets")
+		})
 	})
 
-	async function executeReadFileToolWithLimit(
-		fileCount: number,
-		maxConcurrentFileReads: number,
-	): Promise<ToolResponse | undefined> {
-		// Setup provider state with the specified limit
-		mockProvider.getState.mockResolvedValue({
-			maxReadFileLine: -1,
-			maxConcurrentFileReads,
-			maxImageFileSize: 20,
-			maxTotalImageSize: 20,
-		})
+	describe("output structure", () => {
+		it("should include file path in output", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
+
+			mockedFsReadFile.mockResolvedValue(Buffer.from("content"))
+			mockedReadWithSlice.mockReturnValue({
+				content: "1 | content",
+				returnedLines: 1,
+				totalLines: 1,
+				wasTruncated: false,
+				includedRanges: [[1, 1]],
+			})
 
-		const toolUse: ReadFileToolUse = {
-			type: "tool_use",
-			name: "read_file",
-			params: {},
-			partial: false,
-			nativeArgs: {
-				files: Array.from({ length: fileCount }, (_, i) => ({ path: `file${i + 1}.txt`, lineRanges: [] })),
-			},
-		}
-
-		// Configure mocks for successful file reads
-		mockReadFileWithTokenBudget.mockResolvedValue({
-			content: "test content",
-			tokenCount: 10,
-			lineCount: 1,
-			complete: true,
-		})
+			await readFileTool.execute({ path: "src/app.ts" }, mockTask as any, callbacks)
 
-		await readFileTool.handle(mockCline, toolUse, {
-			askApproval: mockCline.ask,
-			handleError: vi.fn(),
-			pushToolResult: (result: ToolResponse) => {
-				toolResult = result
-			},
+			expect(callbacks.pushToolResult).toHaveBeenCalledWith(expect.stringContaining("File: src/app.ts"))
 		})
 
-		return toolResult
-	}
+		it("should track file context after successful read", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-	it("should reject when file count exceeds maxConcurrentFileReads", async () => {
-		// Try to read 6 files when limit is 5
-		const result = await executeReadFileToolWithLimit(6, 5)
+			mockedFsReadFile.mockResolvedValue(Buffer.from("content"))
+			mockedReadWithSlice.mockReturnValue({
+				content: "1 | content",
+				returnedLines: 1,
+				totalLines: 1,
+				wasTruncated: false,
+				includedRanges: [[1, 1]],
+			})
 
-		// Verify error result
-		expect(result).toContain("Error: Too many files requested")
-		expect(result).toContain("You attempted to read 6 files")
-		expect(result).toContain("but the concurrent file reads limit is 5")
-		expect(result).toContain("Please read files in batches of 5 or fewer")
+			await readFileTool.execute({ path: "test.ts" }, mockTask as any, callbacks)
 
-		// Verify error tracking
-		expect(mockCline.say).toHaveBeenCalledWith("error", expect.stringContaining("Too many files requested"))
+			expect(mockTask.fileContextTracker.trackFileContext).toHaveBeenCalledWith("test.ts", "read_tool")
+		})
 	})
 
-	it("should allow reading files when count equals maxConcurrentFileReads", async () => {
-		// Try to read exactly 5 files when limit is 5
-		const result = await executeReadFileToolWithLimit(5, 5)
+	describe("error handling", () => {
+		it("should handle file read errors gracefully", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-		// Should not contain error
-		expect(result).not.toContain("Error: Too many files requested")
+			mockedFsReadFile.mockRejectedValue(new Error("ENOENT: no such file or directory"))
 
-		// Should contain file results
-		expect(typeof result === "string" ? result : JSON.stringify(result)).toContain("file1.txt")
-	})
+			await readFileTool.execute({ path: "nonexistent.ts" }, mockTask as any, callbacks)
 
-	it("should allow reading files when count is below maxConcurrentFileReads", async () => {
-		// Try to read 3 files when limit is 5
-		const result = await executeReadFileToolWithLimit(3, 5)
+			expect(mockTask.say).toHaveBeenCalledWith("error", expect.stringContaining("Error reading file"))
+			expect(mockTask.didToolFailInCurrentTurn).toBe(true)
+		})
 
-		// Should not contain error
-		expect(result).not.toContain("Error: Too many files requested")
+		it("should handle stat errors gracefully", async () => {
+			const mockTask = createMockTask()
+			const callbacks = createMockCallbacks()
 
-		// Should contain file results
-		expect(typeof result === "string" ? result : JSON.stringify(result)).toContain("file1.txt")
-	})
+			mockedFsStat.mockRejectedValue(new Error("Permission denied"))
 
-	it("should respect custom maxConcurrentFileReads value of 1", async () => {
-		// Try to read 2 files when limit is 1
-		const result = await executeReadFileToolWithLimit(2, 1)
+			await readFileTool.execute({ path: "protected.ts" }, mockTask as any, callbacks)
 
-		// Verify error result with limit of 1
-		expect(result).toContain("Error: Too many files requested")
-		expect(result).toContain("You attempted to read 2 files")
-		expect(result).toContain("but the concurrent file reads limit is 1")
+			expect(mockTask.say).toHaveBeenCalledWith("error", expect.stringContaining("Error reading file"))
+			expect(mockTask.didToolFailInCurrentTurn).toBe(true)
+		})
 	})
 
-	it("should allow single file read when maxConcurrentFileReads is 1", async () => {
-		// Try to read 1 file when limit is 1
-		const result = await executeReadFileToolWithLimit(1, 1)
+	describe("getReadFileToolDescription", () => {
+		it("should return description with path when nativeArgs provided", () => {
+			const description = readFileTool.getReadFileToolDescription("read_file", { path: "src/app.ts" })
 
-		// Should not contain error
-		expect(result).not.toContain("Error: Too many files requested")
-
-		// Should contain file result
-		expect(typeof result === "string" ? result : JSON.stringify(result)).toContain("file1.txt")
-	})
+			expect(description).toBe("[read_file for 'src/app.ts']")
+		})
 
-	it("should respect higher maxConcurrentFileReads value", async () => {
-		// Try to read 15 files when limit is 10
-		const result = await executeReadFileToolWithLimit(15, 10)
+		it("should return description with path when params provided", () => {
+			const description = readFileTool.getReadFileToolDescription("read_file", { path: "src/app.ts" })
 
-		// Verify error result
-		expect(result).toContain("Error: Too many files requested")
-		expect(result).toContain("You attempted to read 15 files")
-		expect(result).toContain("but the concurrent file reads limit is 10")
-	})
-
-	it("should use default value of 5 when maxConcurrentFileReads is not set", async () => {
-		// Setup provider state without maxConcurrentFileReads
-		mockProvider.getState.mockResolvedValue({
-			maxReadFileLine: -1,
-			maxImageFileSize: 20,
-			maxTotalImageSize: 20,
+			expect(description).toBe("[read_file for 'src/app.ts']")
 		})
 
-		const toolUse: ReadFileToolUse = {
-			type: "tool_use",
-			name: "read_file",
-			params: {},
-			partial: false,
-			nativeArgs: {
-				files: Array.from({ length: 6 }, (_, i) => ({ path: `file${i + 1}.txt`, lineRanges: [] })),
-			},
-		}
-
-		mockReadFileWithTokenBudget.mockResolvedValue({
-			content: "test content",
-			tokenCount: 10,
-			lineCount: 1,
-			complete: true,
-		})
+		it("should return description indicating missing path", () => {
+			const description = readFileTool.getReadFileToolDescription("read_file", {})
 
-		await readFileTool.handle(mockCline, toolUse, {
-			askApproval: mockCline.ask,
-			handleError: vi.fn(),
-			pushToolResult: (result: ToolResponse) => {
-				toolResult = result
-			},
+			expect(description).toBe("[read_file with missing path]")
 		})
-
-		// Should use default limit of 5 and reject 6 files
-		expect(toolResult).toContain("Error: Too many files requested")
-		expect(toolResult).toContain("but the concurrent file reads limit is 5")
 	})
 })
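The approval and denial tests in this file all reduce to one small decision: map the ask response to an approved/denied outcome and carry any user feedback text along to the tool result. A minimal sketch of that mapping, assuming the response strings seen in the tests above (`resolveApproval` and its return shape are illustrative names, not the extension's actual API):

```typescript
// Response strings as exercised by the tests above.
type AskResult = { response: "yesButtonClicked" | "noButtonClicked"; text?: string }

interface ApprovalOutcome {
	approved: boolean
	feedback?: string
}

// Hypothetical helper: only "yesButtonClicked" approves; feedback text may
// accompany either outcome ("be careful" on approve, "contains secrets" on deny).
function resolveApproval(ask: AskResult): ApprovalOutcome {
	const approved = ask.response === "yesButtonClicked"
	return ask.text ? { approved, feedback: ask.text } : { approved }
}
```

The tests then only need to stub `ask` to return one of these shapes and assert which `formatResponse` helper was called with the feedback.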

+ 244 - 3
src/core/tools/__tests__/useMcpToolTool.spec.ts

@@ -7,7 +7,12 @@ import { ToolUse } from "../../../shared/tools"
 // Mock dependencies
 vi.mock("../../prompts/responses", () => ({
 	formatResponse: {
-		toolResult: vi.fn((result: string) => `Tool result: ${result}`),
+		toolResult: vi.fn((result: string, images?: string[]) => {
+			if (images && images.length > 0) {
+				return `Tool result: ${result} [with ${images.length} image(s)]`
+			}
+			return `Tool result: ${result}`
+		}),
 		toolError: vi.fn((error: string) => `Tool error: ${error}`),
 		invalidMcpToolArgumentError: vi.fn((server: string, tool: string) => `Invalid args for ${server}:${tool}`),
 		unknownMcpToolError: vi.fn((server: string, tool: string, availableTools: string[]) => {
@@ -245,7 +250,7 @@ describe("useMcpToolTool", () => {
 			expect(mockTask.consecutiveMistakeCount).toBe(0)
 			expect(mockAskApproval).toHaveBeenCalled()
 			expect(mockTask.say).toHaveBeenCalledWith("mcp_server_request_started")
-			expect(mockTask.say).toHaveBeenCalledWith("mcp_server_response", "Tool executed successfully")
+			expect(mockTask.say).toHaveBeenCalledWith("mcp_server_response", "Tool executed successfully", [])
 			expect(mockPushToolResult).toHaveBeenCalledWith("Tool result: Tool executed successfully")
 		})
 
@@ -483,7 +488,7 @@ describe("useMcpToolTool", () => {
 			expect(mockTask.consecutiveMistakeCount).toBe(0)
 			expect(mockTask.recordToolError).not.toHaveBeenCalled()
 			expect(mockTask.say).toHaveBeenCalledWith("mcp_server_request_started")
-			expect(mockTask.say).toHaveBeenCalledWith("mcp_server_response", "Tool executed successfully")
+			expect(mockTask.say).toHaveBeenCalledWith("mcp_server_response", "Tool executed successfully", [])
 		})
 
 		it("should reject unknown server names with available servers listed", async () => {
@@ -636,4 +641,240 @@ describe("useMcpToolTool", () => {
 			expect(callToolMock).toHaveBeenCalledWith("test-server", "get-user-profile", {})
 		})
 	})
+
+	describe("image handling", () => {
+		it("should handle tool response with image content", async () => {
+			const block: ToolUse = {
+				type: "tool_use",
+				name: "use_mcp_tool",
+				params: {
+					server_name: "figma-server",
+					tool_name: "get_screenshot",
+					arguments: '{"nodeId": "123"}',
+				},
+				nativeArgs: {
+					server_name: "figma-server",
+					tool_name: "get_screenshot",
+					arguments: { nodeId: "123" },
+				},
+				partial: false,
+			}
+
+			mockAskApproval.mockResolvedValue(true)
+
+			const mockToolResult = {
+				content: [
+					{
+						type: "image",
+						mimeType: "image/png",
+						data: "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJ",
+					},
+				],
+				isError: false,
+			}
+
+			mockProviderRef.deref.mockReturnValue({
+				getMcpHub: () => ({
+					callTool: vi.fn().mockResolvedValue(mockToolResult),
+					getAllServers: vi
+						.fn()
+						.mockReturnValue([
+							{
+								name: "figma-server",
+								tools: [{ name: "get_screenshot", description: "Get screenshot" }],
+							},
+						]),
+				}),
+				postMessageToWebview: vi.fn(),
+			})
+
+			await useMcpToolTool.handle(mockTask as Task, block as any, {
+				askApproval: mockAskApproval,
+				handleError: mockHandleError,
+				pushToolResult: mockPushToolResult,
+			})
+
+			expect(mockTask.say).toHaveBeenCalledWith("mcp_server_request_started")
+			expect(mockTask.say).toHaveBeenCalledWith("mcp_server_response", "[1 image(s) received]", [
+				"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJ",
+			])
+			expect(mockPushToolResult).toHaveBeenCalledWith(expect.stringContaining("with 1 image(s)"))
+		})
+
+		it("should handle tool response with both text and image content", async () => {
+			const block: ToolUse = {
+				type: "tool_use",
+				name: "use_mcp_tool",
+				params: {
+					server_name: "figma-server",
+					tool_name: "get_node_info",
+					arguments: '{"nodeId": "123"}',
+				},
+				nativeArgs: {
+					server_name: "figma-server",
+					tool_name: "get_node_info",
+					arguments: { nodeId: "123" },
+				},
+				partial: false,
+			}
+
+			mockAskApproval.mockResolvedValue(true)
+
+			const mockToolResult = {
+				content: [
+					{ type: "text", text: "Node name: Button" },
+					{
+						type: "image",
+						mimeType: "image/png",
+						data: "base64imagedata",
+					},
+				],
+				isError: false,
+			}
+
+			mockProviderRef.deref.mockReturnValue({
+				getMcpHub: () => ({
+					callTool: vi.fn().mockResolvedValue(mockToolResult),
+					getAllServers: vi
+						.fn()
+						.mockReturnValue([
+							{ name: "figma-server", tools: [{ name: "get_node_info", description: "Get node info" }] },
+						]),
+				}),
+				postMessageToWebview: vi.fn(),
+			})
+
+			await useMcpToolTool.handle(mockTask as Task, block as any, {
+				askApproval: mockAskApproval,
+				handleError: mockHandleError,
+				pushToolResult: mockPushToolResult,
+			})
+
+			expect(mockTask.say).toHaveBeenCalledWith("mcp_server_request_started")
+			expect(mockTask.say).toHaveBeenCalledWith("mcp_server_response", "Node name: Button", [
+				"data:image/png;base64,base64imagedata",
+			])
+			expect(mockPushToolResult).toHaveBeenCalledWith(expect.stringContaining("with 1 image(s)"))
+		})
+
+		it("should handle image with data URL already formatted", async () => {
+			const block: ToolUse = {
+				type: "tool_use",
+				name: "use_mcp_tool",
+				params: {
+					server_name: "figma-server",
+					tool_name: "get_screenshot",
+					arguments: '{"nodeId": "123"}',
+				},
+				nativeArgs: {
+					server_name: "figma-server",
+					tool_name: "get_screenshot",
+					arguments: { nodeId: "123" },
+				},
+				partial: false,
+			}
+
+			mockAskApproval.mockResolvedValue(true)
+
+			const mockToolResult = {
+				content: [
+					{
+						type: "image",
+						mimeType: "image/jpeg",
+						data: "data:image/jpeg;base64,/9j/4AAQSkZJRg==",
+					},
+				],
+				isError: false,
+			}
+
+			mockProviderRef.deref.mockReturnValue({
+				getMcpHub: () => ({
+					callTool: vi.fn().mockResolvedValue(mockToolResult),
+					getAllServers: vi
+						.fn()
+						.mockReturnValue([
+							{
+								name: "figma-server",
+								tools: [{ name: "get_screenshot", description: "Get screenshot" }],
+							},
+						]),
+				}),
+				postMessageToWebview: vi.fn(),
+			})
+
+			await useMcpToolTool.handle(mockTask as Task, block as any, {
+				askApproval: mockAskApproval,
+				handleError: mockHandleError,
+				pushToolResult: mockPushToolResult,
+			})
+
+			// Should not double-prefix the data URL
+			expect(mockTask.say).toHaveBeenCalledWith("mcp_server_response", "[1 image(s) received]", [
+				"data:image/jpeg;base64,/9j/4AAQSkZJRg==",
+			])
+		})
+
+		it("should handle multiple images in response", async () => {
+			const block: ToolUse = {
+				type: "tool_use",
+				name: "use_mcp_tool",
+				params: {
+					server_name: "figma-server",
+					tool_name: "get_screenshots",
+					arguments: '{"nodeIds": ["1", "2"]}',
+				},
+				nativeArgs: {
+					server_name: "figma-server",
+					tool_name: "get_screenshots",
+					arguments: { nodeIds: ["1", "2"] },
+				},
+				partial: false,
+			}
+
+			mockAskApproval.mockResolvedValue(true)
+
+			const mockToolResult = {
+				content: [
+					{
+						type: "image",
+						mimeType: "image/png",
+						data: "image1data",
+					},
+					{
+						type: "image",
+						mimeType: "image/png",
+						data: "image2data",
+					},
+				],
+				isError: false,
+			}
+
+			mockProviderRef.deref.mockReturnValue({
+				getMcpHub: () => ({
+					callTool: vi.fn().mockResolvedValue(mockToolResult),
+					getAllServers: vi
+						.fn()
+						.mockReturnValue([
+							{
+								name: "figma-server",
+								tools: [{ name: "get_screenshots", description: "Get screenshots" }],
+							},
+						]),
+				}),
+				postMessageToWebview: vi.fn(),
+			})
+
+			await useMcpToolTool.handle(mockTask as Task, block as any, {
+				askApproval: mockAskApproval,
+				handleError: mockHandleError,
+				pushToolResult: mockPushToolResult,
+			})
+
+			expect(mockTask.say).toHaveBeenCalledWith("mcp_server_response", "[2 image(s) received]", [
+				"data:image/png;base64,image1data",
+				"data:image/png;base64,image2data",
+			])
+			expect(mockPushToolResult).toHaveBeenCalledWith(expect.stringContaining("with 2 image(s)"))
+		})
+	})
 })

+ 0 - 160
src/core/tools/helpers/__tests__/truncateDefinitions.spec.ts

@@ -1,160 +0,0 @@
-import { describe, it, expect } from "vitest"
-import { truncateDefinitionsToLineLimit } from "../truncateDefinitions"
-
-describe("truncateDefinitionsToLineLimit", () => {
-	it("should not truncate when maxReadFileLine is -1 (no limit)", () => {
-		const definitions = `# test.ts
-10--20 | function foo() {
-30--40 | function bar() {
-50--60 | function baz() {`
-
-		const result = truncateDefinitionsToLineLimit(definitions, -1)
-		expect(result).toBe(definitions)
-	})
-
-	it("should not truncate when maxReadFileLine is 0 (definitions only mode)", () => {
-		const definitions = `# test.ts
-10--20 | function foo() {
-30--40 | function bar() {
-50--60 | function baz() {`
-
-		const result = truncateDefinitionsToLineLimit(definitions, 0)
-		expect(result).toBe(definitions)
-	})
-
-	it("should truncate definitions beyond the line limit", () => {
-		const definitions = `# test.ts
-10--20 | function foo() {
-30--40 | function bar() {
-50--60 | function baz() {`
-
-		const result = truncateDefinitionsToLineLimit(definitions, 25)
-		const expected = `# test.ts
-10--20 | function foo() {`
-
-		expect(result).toBe(expected)
-	})
-
-	it("should include definitions that start within limit even if they end beyond it", () => {
-		const definitions = `# test.ts
-10--50 | function foo() {
-60--80 | function bar() {`
-
-		const result = truncateDefinitionsToLineLimit(definitions, 30)
-		const expected = `# test.ts
-10--50 | function foo() {`
-
-		expect(result).toBe(expected)
-	})
-
-	it("should handle single-line definitions", () => {
-		const definitions = `# test.ts
-10 | const foo = 1
-20 | const bar = 2
-30 | const baz = 3`
-
-		const result = truncateDefinitionsToLineLimit(definitions, 25)
-		const expected = `# test.ts
-10 | const foo = 1
-20 | const bar = 2`
-
-		expect(result).toBe(expected)
-	})
-
-	it("should preserve header line when all definitions are beyond limit", () => {
-		const definitions = `# test.ts
-100--200 | function foo() {`
-
-		const result = truncateDefinitionsToLineLimit(definitions, 50)
-		const expected = `# test.ts`
-
-		expect(result).toBe(expected)
-	})
-
-	it("should handle empty definitions", () => {
-		const definitions = `# test.ts`
-
-		const result = truncateDefinitionsToLineLimit(definitions, 50)
-		expect(result).toBe(definitions)
-	})
-
-	it("should handle definitions without header", () => {
-		const definitions = `10--20 | function foo() {
-30--40 | function bar() {`
-
-		const result = truncateDefinitionsToLineLimit(definitions, 25)
-		const expected = `10--20 | function foo() {`
-
-		expect(result).toBe(expected)
-	})
-
-	it("should not preserve empty lines (only definition lines)", () => {
-		const definitions = `# test.ts
-10--20 | function foo() {
-
-30--40 | function bar() {`
-
-		const result = truncateDefinitionsToLineLimit(definitions, 25)
-		const expected = `# test.ts
-10--20 | function foo() {`
-
-		expect(result).toBe(expected)
-	})
-
-	it("should handle mixed single and range definitions", () => {
-		const definitions = `# test.ts
-5 | const x = 1
-10--20 | function foo() {
-25 | const y = 2
-30--40 | function bar() {`
-
-		const result = truncateDefinitionsToLineLimit(definitions, 26)
-		const expected = `# test.ts
-5 | const x = 1
-10--20 | function foo() {
-25 | const y = 2`
-
-		expect(result).toBe(expected)
-	})
-
-	it("should handle definitions at exactly the limit", () => {
-		const definitions = `# test.ts
-10--20 | function foo() {
-30--40 | function bar() {
-50--60 | function baz() {`
-
-		const result = truncateDefinitionsToLineLimit(definitions, 30)
-		const expected = `# test.ts
-10--20 | function foo() {
-30--40 | function bar() {`
-
-		expect(result).toBe(expected)
-	})
-
-	it("should handle definitions with leading whitespace", () => {
-		const definitions = `# test.ts
-	 10--20 | function foo() {
-	 30--40 | function bar() {
-	 50--60 | function baz() {`
-
-		const result = truncateDefinitionsToLineLimit(definitions, 25)
-		const expected = `# test.ts
-	 10--20 | function foo() {`
-
-		expect(result).toBe(expected)
-	})
-
-	it("should handle definitions with mixed whitespace patterns", () => {
-		const definitions = `# test.ts
-10--20 | function foo() {
-	 30--40 | function bar() {
-	50--60 | function baz() {`
-
-		const result = truncateDefinitionsToLineLimit(definitions, 35)
-		const expected = `# test.ts
-10--20 | function foo() {
-	 30--40 | function bar() {`
-
-		expect(result).toBe(expected)
-	})
-})

+ 0 - 9
src/core/tools/helpers/fileTokenBudget.ts

@@ -1,9 +0,0 @@
-// Re-export the new incremental token-based file reader
-export { readFileWithTokenBudget } from "../../../integrations/misc/read-file-with-budget"
-export type { ReadWithBudgetResult, ReadWithBudgetOptions } from "../../../integrations/misc/read-file-with-budget"
-
-/**
- * Percentage of available context to reserve for file reading.
- * The remaining percentage is reserved for the model's response and overhead.
- */
-export const FILE_READ_BUDGET_PERCENT = 0.6 // 60% for file, 40% for response

+ 0 - 44
src/core/tools/helpers/truncateDefinitions.ts

@@ -1,44 +0,0 @@
-/**
- * Truncate code definitions to only include those within the line limit
- * @param definitions - The full definitions string from parseSourceCodeDefinitionsForFile
- * @param maxReadFileLine - Maximum line number to include (-1 for no limit, 0 for definitions only)
- * @returns Truncated definitions string
- */
-export function truncateDefinitionsToLineLimit(definitions: string, maxReadFileLine: number): string {
-	// If no limit or definitions-only mode (0), return as-is
-	if (maxReadFileLine <= 0) {
-		return definitions
-	}
-
-	const lines = definitions.split("\n")
-	const result: string[] = []
-	let startIndex = 0
-
-	// Keep the header line (e.g., "# filename.ts")
-	if (lines.length > 0 && lines[0].startsWith("#")) {
-		result.push(lines[0])
-		startIndex = 1
-	}
-
-	// Process definition lines
-	for (let i = startIndex; i < lines.length; i++) {
-		const line = lines[i]
-
-		// Match definition format: "startLine--endLine | content" or "lineNumber | content"
-		// Allow optional leading whitespace to handle indented output or CRLF artifacts
-		const rangeMatch = line.match(/^\s*(\d+)(?:--(\d+))?\s*\|/)
-
-		if (rangeMatch) {
-			const startLine = parseInt(rangeMatch[1], 10)
-
-			// Only include definitions that start within the truncated range
-			if (startLine <= maxReadFileLine) {
-				result.push(line)
-			}
-		}
-		// Note: We don't preserve empty lines or other non-definition content
-		// as they're not part of the actual code definitions
-	}
-
-	return result.join("\n")
-}

+ 179 - 75
src/core/webview/ClineProvider.ts

@@ -146,8 +146,10 @@ export class ClineProvider
 	private taskCreationCallback: (task: Task) => void
 	private taskEventListeners: WeakMap<Task, Array<() => void>> = new WeakMap()
 	private currentWorkspacePath: string | undefined
+	private _disposed = false
 
 	private recentTasksCache?: string[]
+	private taskHistoryWriteLock: Promise<void> = Promise.resolve()
 	private pendingOperations: Map<string, PendingEditOperation> = new Map()
 	private static readonly PENDING_OPERATION_TIMEOUT_MS = 30000 // 30 seconds
 
@@ -458,7 +460,7 @@ export class ClineProvider
 
 	// Removes and destroys the top Cline instance (the current finished task),
 	// activating the previous one (resuming the parent task).
-	async removeClineFromStack() {
+	async removeClineFromStack(options?: { skipDelegationRepair?: boolean }) {
 		if (this.clineStack.length === 0) {
 			return
 		}
@@ -467,6 +469,11 @@ export class ClineProvider
 		let task = this.clineStack.pop()
 
 		if (task) {
+			// Capture delegation metadata before abort/dispose, since abortTask(true)
+			// is async and the task reference is cleared afterwards.
+			const childTaskId = task.taskId
+			const parentTaskId = task.parentTaskId
+
 			task.emit(RooCodeEventName.TaskUnfocused)
 
 			try {
@@ -490,6 +497,37 @@ export class ClineProvider
 			// Make sure no reference kept, once promises end it will be
 			// garbage collected.
 			task = undefined
+
+			// Delegation-aware parent metadata repair:
+			// If the popped task was a delegated child, repair the parent's metadata
+			// so it transitions from "delegated" back to "active" and becomes resumable
+			// from the task history list.
+			// Skip when called from delegateParentAndOpenChild() during nested delegation
+			// transitions (A→B→C), where the caller intentionally replaces the active
+			// child and will update the parent to point at the new child.
+			if (parentTaskId && childTaskId && !options?.skipDelegationRepair) {
+				try {
+					const { historyItem: parentHistory } = await this.getTaskWithId(parentTaskId)
+
+					if (parentHistory.status === "delegated" && parentHistory.awaitingChildId === childTaskId) {
+						await this.updateTaskHistory({
+							...parentHistory,
+							status: "active",
+							awaitingChildId: undefined,
+						})
+						this.log(
+							`[ClineProvider#removeClineFromStack] Repaired parent ${parentTaskId} metadata: delegated → active (child ${childTaskId} removed)`,
+						)
+					}
+				} catch (err) {
+					// Non-fatal: log but do not block the pop operation.
+					this.log(
+						`[ClineProvider#removeClineFromStack] Failed to repair parent metadata for ${parentTaskId} (non-fatal): ${
+							err instanceof Error ? err.message : String(err)
+						}`,
+					)
+				}
+			}
 		}
 	}
 
@@ -582,6 +620,11 @@ export class ClineProvider
 	}
 
 	async dispose() {
+		if (this._disposed) {
+			return
+		}
+
+		this._disposed = true
 		this.log("Disposing ClineProvider...")
 
 		// Clear all tasks from the stack.
@@ -1080,7 +1123,15 @@ export class ClineProvider
 	}
 
 	public async postMessageToWebview(message: ExtensionMessage) {
-		await this.view?.webview.postMessage(message)
+		if (this._disposed) {
+			return
+		}
+
+		try {
+			await this.view?.webview.postMessage(message)
+		} catch {
+			// View disposed, drop message silently
+		}
 	}
 
 	private async getHMRHtmlContent(webview: vscode.Webview): Promise<string> {
@@ -1666,31 +1717,40 @@ export class ClineProvider
 		const history = this.getGlobalState("taskHistory") ?? []
 		const historyItem = history.find((item) => item.id === id)
 
-		if (historyItem) {
-			const { getTaskDirectoryPath } = await import("../../utils/storage")
-			const globalStoragePath = this.contextProxy.globalStorageUri.fsPath
-			const taskDirPath = await getTaskDirectoryPath(globalStoragePath, id)
-			const apiConversationHistoryFilePath = path.join(taskDirPath, GlobalFileNames.apiConversationHistory)
-			const uiMessagesFilePath = path.join(taskDirPath, GlobalFileNames.uiMessages)
-			const fileExists = await fileExistsAtPath(apiConversationHistoryFilePath)
-
-			if (fileExists) {
-				const apiConversationHistory = JSON.parse(await fs.readFile(apiConversationHistoryFilePath, "utf8"))
-
-				return {
-					historyItem,
-					taskDirPath,
-					apiConversationHistoryFilePath,
-					uiMessagesFilePath,
-					apiConversationHistory,
-				}
+		if (!historyItem) {
+			throw new Error("Task not found")
+		}
+
+		const { getTaskDirectoryPath } = await import("../../utils/storage")
+		const globalStoragePath = this.contextProxy.globalStorageUri.fsPath
+		const taskDirPath = await getTaskDirectoryPath(globalStoragePath, id)
+		const apiConversationHistoryFilePath = path.join(taskDirPath, GlobalFileNames.apiConversationHistory)
+		const uiMessagesFilePath = path.join(taskDirPath, GlobalFileNames.uiMessages)
+		const fileExists = await fileExistsAtPath(apiConversationHistoryFilePath)
+
+		let apiConversationHistory: Anthropic.MessageParam[] = []
+
+		if (fileExists) {
+			try {
+				apiConversationHistory = JSON.parse(await fs.readFile(apiConversationHistoryFilePath, "utf8"))
+			} catch (error) {
+				console.warn(
+					`[getTaskWithId] api_conversation_history.json corrupted for task ${id}, returning empty history: ${error instanceof Error ? error.message : String(error)}`,
+				)
 			}
+		} else {
+			console.warn(
+				`[getTaskWithId] api_conversation_history.json missing for task ${id}, returning empty history`,
+			)
 		}
 
-		// if we tried to get a task that doesn't exist, remove it from state
-		// FIXME: this seems to happen sometimes when the json file doesnt save to disk for some reason
-		await this.deleteTaskFromState(id)
-		throw new Error("Task not found")
+		return {
+			historyItem,
+			taskDirPath,
+			apiConversationHistoryFilePath,
+			uiMessagesFilePath,
+			apiConversationHistory,
+		}
 	}
 
 	async getTaskWithAggregatedCosts(taskId: string): Promise<{
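The `getTaskWithId` refactor above replaces "delete the task and throw" with a tolerant read: a missing or corrupted history file degrades to an empty array plus a warning, rather than destroying state. A standalone sketch of that pattern, under the assumption that the file holds a JSON array (the function name and messages are illustrative):

```typescript
import * as fs from "fs"

// Tolerant read: prefer an empty history over throwing or mutating state
// when the on-disk JSON is missing or corrupted.
function readHistoryFile(filePath: string): unknown[] {
	if (!fs.existsSync(filePath)) {
		console.warn(`history file missing at ${filePath}, returning empty history`)
		return []
	}
	try {
		const parsed = JSON.parse(fs.readFileSync(filePath, "utf8"))
		// Guard the shape too: a non-array payload is treated as corruption.
		return Array.isArray(parsed) ? parsed : []
	} catch (error) {
		console.warn(`history file corrupted at ${filePath}: ${error}`)
		return []
	}
}
```

The design choice mirrors the removed FIXME: since the JSON sometimes fails to reach disk, a recoverable empty history is safer than `deleteTaskFromState`.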
@@ -1787,10 +1847,12 @@ export class ClineProvider
 			}
 
 			// Delete all tasks from state in one batch
-			const taskHistory = this.getGlobalState("taskHistory") ?? []
-			const updatedTaskHistory = taskHistory.filter((task) => !allIdsToDelete.includes(task.id))
-			await this.updateGlobalState("taskHistory", updatedTaskHistory)
-			this.recentTasksCache = undefined
+			await this.withTaskHistoryLock(async () => {
+				const taskHistory = this.getGlobalState("taskHistory") ?? []
+				const updatedTaskHistory = taskHistory.filter((task) => !allIdsToDelete.includes(task.id))
+				await this.updateGlobalState("taskHistory", updatedTaskHistory)
+				this.recentTasksCache = undefined
+			})
 
 			// Delete associated shadow repositories or branches and task directories
 			const globalStorageDir = this.contextProxy.globalStorageUri.fsPath
@@ -1831,10 +1893,12 @@ export class ClineProvider
 	}
 
 	async deleteTaskFromState(id: string) {
-		const taskHistory = this.getGlobalState("taskHistory") ?? []
-		const updatedTaskHistory = taskHistory.filter((task) => task.id !== id)
-		await this.updateGlobalState("taskHistory", updatedTaskHistory)
-		this.recentTasksCache = undefined
+		await this.withTaskHistoryLock(async () => {
+			const taskHistory = this.getGlobalState("taskHistory") ?? []
+			const updatedTaskHistory = taskHistory.filter((task) => task.id !== id)
+			await this.updateGlobalState("taskHistory", updatedTaskHistory)
+			this.recentTasksCache = undefined
+		})
 		await this.postStateToWebview()
 	}
 
@@ -2061,7 +2125,6 @@ export class ClineProvider
 			showRooIgnoredFiles,
 			enableSubfolderRules,
 			language,
-			maxReadFileLine,
 			maxImageFileSize,
 			maxTotalImageSize,
 			historyPreviewCollapsed,
@@ -2073,7 +2136,6 @@ export class ClineProvider
 			publicSharingEnabled,
 			organizationAllowList,
 			organizationSettingsVersion,
-			maxConcurrentFileReads,
 			customCondensingPrompt,
 			codebaseIndexConfig,
 			codebaseIndexModels,
@@ -2200,10 +2262,8 @@ export class ClineProvider
 			enableSubfolderRules: enableSubfolderRules ?? false,
 			language: language ?? formatLanguage(vscode.env.language),
 			renderContext: this.renderContext,
-			maxReadFileLine: maxReadFileLine ?? -1,
 			maxImageFileSize: maxImageFileSize ?? 5,
 			maxTotalImageSize: maxTotalImageSize ?? 20,
-			maxConcurrentFileReads: maxConcurrentFileReads ?? 5,
 			settingsImportedAt: this.settingsImportedAt,
 			historyPreviewCollapsed: historyPreviewCollapsed ?? false,
 			reasoningBlockCollapsed: reasoningBlockCollapsed ?? true,
@@ -2435,10 +2495,8 @@ export class ClineProvider
 			telemetrySetting: stateValues.telemetrySetting || "unset",
 			showRooIgnoredFiles: stateValues.showRooIgnoredFiles ?? false,
 			enableSubfolderRules: stateValues.enableSubfolderRules ?? false,
-			maxReadFileLine: stateValues.maxReadFileLine ?? -1,
 			maxImageFileSize: stateValues.maxImageFileSize ?? 5,
 			maxTotalImageSize: stateValues.maxTotalImageSize ?? 20,
-			maxConcurrentFileReads: stateValues.maxConcurrentFileReads ?? 5,
 			historyPreviewCollapsed: stateValues.historyPreviewCollapsed ?? false,
 			reasoningBlockCollapsed: stateValues.reasoningBlockCollapsed ?? true,
 			enterBehavior: stateValues.enterBehavior ?? "send",
@@ -2506,6 +2564,19 @@ export class ClineProvider
 		}
 	}
 
+	/**
+	 * Serializes all read-modify-write operations on taskHistory to prevent
+	 * concurrent interleaving that can cause entries to vanish.
+	 */
+	private withTaskHistoryLock<T>(fn: () => Promise<T>): Promise<T> {
+		const result = this.taskHistoryWriteLock.then(fn, fn) // run even if previous write errored
+		this.taskHistoryWriteLock = result.then(
+			() => {},
+			() => {},
+		) // swallow for chain continuity
+		return result
+	}
+
 	/**
 	 * Updates a task in the task history and optionally broadcasts the updated history to the webview.
 	 * @param item The history item to update or add
@@ -2513,34 +2584,36 @@ export class ClineProvider
 	 * @returns The updated task history array
 	 */
 	async updateTaskHistory(item: HistoryItem, options: { broadcast?: boolean } = {}): Promise<HistoryItem[]> {
-		const { broadcast = true } = options
-		const history = (this.getGlobalState("taskHistory") as HistoryItem[] | undefined) || []
-		const existingItemIndex = history.findIndex((h) => h.id === item.id)
-		const wasExisting = existingItemIndex !== -1
-
-		if (wasExisting) {
-			// Preserve existing metadata (e.g., delegation fields) unless explicitly overwritten.
-			// This prevents loss of status/awaitingChildId/delegatedToId when tasks are reopened,
-			// terminated, or when routine message persistence occurs.
-			history[existingItemIndex] = {
-				...history[existingItemIndex],
-				...item,
+		return this.withTaskHistoryLock(async () => {
+			const { broadcast = true } = options
+			const history = (this.getGlobalState("taskHistory") as HistoryItem[] | undefined) || []
+			const existingItemIndex = history.findIndex((h) => h.id === item.id)
+			const wasExisting = existingItemIndex !== -1
+
+			if (wasExisting) {
+				// Preserve existing metadata (e.g., delegation fields) unless explicitly overwritten.
+				// This prevents loss of status/awaitingChildId/delegatedToId when tasks are reopened,
+				// terminated, or when routine message persistence occurs.
+				history[existingItemIndex] = {
+					...history[existingItemIndex],
+					...item,
+				}
+			} else {
+				history.push(item)
 			}
-		} else {
-			history.push(item)
-		}
 
-		await this.updateGlobalState("taskHistory", history)
-		this.recentTasksCache = undefined
+			await this.updateGlobalState("taskHistory", history)
+			this.recentTasksCache = undefined
 
-		// Broadcast the updated history to the webview if requested.
-		// Prefer per-item updates to avoid repeatedly cloning/sending the full history.
-		if (broadcast && this.isViewLaunched) {
-			const updatedItem = wasExisting ? history[existingItemIndex] : item
-			await this.postMessageToWebview({ type: "taskHistoryItemUpdated", taskHistoryItem: updatedItem })
-		}
+			// Broadcast the updated history to the webview if requested.
+			// Prefer per-item updates to avoid repeatedly cloning/sending the full history.
+			if (broadcast && this.isViewLaunched) {
+				const updatedItem = wasExisting ? history[existingItemIndex] : item
+				await this.postMessageToWebview({ type: "taskHistoryItemUpdated", taskHistoryItem: updatedItem })
+			}
 
-		return history
+			return history
+		})
 	}
 
 	/**
@@ -3197,7 +3270,21 @@ export class ClineProvider
 		//    recursivelyMakeClineRequests BEFORE tools start executing. We only need to
 		//    flush the pending user message with tool_results.
 		try {
-			await parent.flushPendingToolResultsToHistory()
+			const flushSuccess = await parent.flushPendingToolResultsToHistory()
+
+			if (!flushSuccess) {
+				console.warn(`[delegateParentAndOpenChild] Flush failed for parent ${parentTaskId}, retrying...`)
+				const retrySuccess = await parent.retrySaveApiConversationHistory()
+
+				if (!retrySuccess) {
+					console.error(
+						`[delegateParentAndOpenChild] CRITICAL: Parent ${parentTaskId} API history not persisted to disk. Child return may produce stale state.`,
+					)
+					vscode.window.showWarningMessage(
+						"Warning: Parent task state could not be saved. The parent task may lose recent context when resumed.",
+					)
+				}
+			}
 		} catch (error) {
 			this.log(
 				`[delegateParentAndOpenChild] Error flushing pending tool results (non-fatal): ${
@@ -3210,7 +3297,7 @@ export class ClineProvider
 		//    This ensures we never have >1 tasks open at any time during delegation.
 		//    Await abort completion to ensure clean disposal and prevent unhandled rejections.
 		try {
-			await this.removeClineFromStack()
+			await this.removeClineFromStack({ skipDelegationRepair: true })
 		} catch (error) {
 			this.log(
 				`[delegateParentAndOpenChild] Error during parent disposal (non-fatal): ${
@@ -3238,12 +3325,20 @@ export class ClineProvider
 		// Pass initialStatus: "active" to ensure the child task's historyItem is created
 		// with status from the start, avoiding race conditions where the task might
 		// call attempt_completion before status is persisted separately.
+		//
+		// Pass startTask: false to prevent the child from beginning its task loop
+		// (and writing to globalState via saveClineMessages → updateTaskHistory)
+		// before we persist the parent's delegation metadata in step 5.
+		// Without this, the child's fire-and-forget startTask() races with step 5,
+		// and the last writer to globalState overwrites the other's changes—
+		// causing the parent's delegation fields to be lost.
 		const child = await this.createTask(message, undefined, parent as any, {
 			initialTodos,
 			initialStatus: "active",
+			startTask: false,
 		})
 
-		// 5) Persist parent delegation metadata
+		// 5) Persist parent delegation metadata BEFORE the child starts writing.
 		try {
 			const { historyItem } = await this.getTaskWithId(parentTaskId)
 			const childIds = Array.from(new Set([...(historyItem.childIds ?? []), child.taskId]))
@@ -3263,7 +3358,10 @@ export class ClineProvider
 			)
 		}
 
-		// 6) Emit TaskDelegated (provider-level)
+		// 6) Start the child task now that parent metadata is safely persisted.
+		child.start()
+
+		// 7) Emit TaskDelegated (provider-level)
 		try {
 			this.emit(RooCodeEventName.TaskDelegated, parentTaskId, child.taskId)
 		} catch {
@@ -3397,7 +3495,19 @@ export class ClineProvider
 
 		await saveApiMessages({ messages: parentApiMessages as any, taskId: parentTaskId, globalStoragePath })
 
-		// 3) Update child metadata to "completed" status
+		// 3) Close child instance if still open (single-open-task invariant).
+		//    This MUST happen BEFORE updating the child's status to "completed" because
+		//    removeClineFromStack() → abortTask(true) → saveClineMessages() writes
+		//    the historyItem with initialStatus (typically "active"), which would
+		//    overwrite a "completed" status set earlier.
+		const current = this.getCurrentTask()
+		if (current?.taskId === childTaskId) {
+			await this.removeClineFromStack()
+		}
+
+		// 4) Update child metadata to "completed" status.
+		//    This runs after the abort so it overwrites the stale "active" status
+		//    that saveClineMessages() may have written during step 3.
 		try {
 			const { historyItem: childHistory } = await this.getTaskWithId(childTaskId)
 			await this.updateTaskHistory({
@@ -3412,7 +3522,7 @@ export class ClineProvider
 			)
 		}
 
-		// 4) Update parent metadata and persist BEFORE emitting completion event
+		// 5) Update parent metadata and persist BEFORE emitting completion event
 		const childIds = Array.from(new Set([...(historyItem.childIds ?? []), childTaskId]))
 		const updatedHistory: typeof historyItem = {
 			...historyItem,
@@ -3424,19 +3534,13 @@ export class ClineProvider
 		}
 		await this.updateTaskHistory(updatedHistory)
 
-		// 5) Emit TaskDelegationCompleted (provider-level)
+		// 6) Emit TaskDelegationCompleted (provider-level)
 		try {
 			this.emit(RooCodeEventName.TaskDelegationCompleted, parentTaskId, childTaskId, completionResultSummary)
 		} catch {
 			// non-fatal
 		}
 
-		// 6) Close child instance if still open (single-open-task invariant)
-		const current = this.getCurrentTask()
-		if (current?.taskId === childTaskId) {
-			await this.removeClineFromStack()
-		}
-
 		// 7) Reopen the parent from history as the sole active task (restores saved mode)
 		//    IMPORTANT: startTask=false to suppress resume-from-history ask scheduling
 		const parentInstance = await this.createTaskWithHistoryItem(updatedHistory, { startTask: false })

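The `withTaskHistoryLock` change above is a promise-chain mutex: each caller queues behind the previous write, and a rejected write never wedges the chain. A minimal, dependency-free sketch of the same pattern (class and variable names here are illustrative, not the extension's API):

```typescript
// Promise-chain mutex: each operation is queued behind the previous one,
// and a rejected operation never blocks the next.
class WriteQueue {
	private tail: Promise<void> = Promise.resolve()

	run<T>(fn: () => Promise<T>): Promise<T> {
		// Run fn whether the previous operation resolved or rejected.
		const result = this.tail.then(fn, fn)
		// Swallow fn's own error for chain continuity; callers still see it via `result`.
		this.tail = result.then(
			() => {},
			() => {},
		)
		return result
	}
}

// Demo: two serialized read-modify-write cycles both survive.
const queue = new WriteQueue()
let history: string[] = []

function addEntry(id: string): Promise<string[]> {
	return queue.run(async () => {
		const snapshot = [...history] // read
		await new Promise((r) => setTimeout(r, 10)) // simulated async persistence
		snapshot.push(id) // modify
		history = snapshot // write
		return history
	})
}

Promise.all([addEntry("a"), addEntry("b")]).then(() => {
	console.log(history) // ["a", "b"], nothing lost
})
```

The `then(fn, fn)` detail mirrors the "run even if previous write errored" comment in the diff: without it, one failed `updateGlobalState` call would deadlock every later history write.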
+ 87 - 1
src/core/webview/__tests__/ClineProvider.spec.ts

@@ -327,6 +327,7 @@ vi.mock("@roo-code/cloud", () => ({
 		get instance() {
 			return {
 				isAuthenticated: vi.fn().mockReturnValue(false),
+				off: vi.fn(),
 			}
 		},
 	},
@@ -568,7 +569,6 @@ describe("ClineProvider", () => {
 			showRooIgnoredFiles: false,
 			enableSubfolderRules: false,
 			renderContext: "sidebar",
-			maxReadFileLine: 500,
 			maxImageFileSize: 5,
 			maxTotalImageSize: 20,
 			cloudUserInfo: null,
@@ -598,6 +598,43 @@ describe("ClineProvider", () => {
 		expect(mockPostMessage).toHaveBeenCalledWith(message)
 	})
 
+	test("postMessageToWebview does not throw when webview is disposed", async () => {
+		await provider.resolveWebviewView(mockWebviewView)
+
+		// Simulate postMessage throwing after webview disposal
+		mockPostMessage.mockRejectedValueOnce(new Error("Webview is disposed"))
+
+		const message: ExtensionMessage = { type: "action", action: "chatButtonClicked" }
+
+		// Should not throw
+		await expect(provider.postMessageToWebview(message)).resolves.toBeUndefined()
+	})
+
+	test("postMessageToWebview skips postMessage after dispose", async () => {
+		await provider.resolveWebviewView(mockWebviewView)
+
+		await provider.dispose()
+		mockPostMessage.mockClear()
+
+		const message: ExtensionMessage = { type: "action", action: "chatButtonClicked" }
+		await provider.postMessageToWebview(message)
+
+		expect(mockPostMessage).not.toHaveBeenCalled()
+	})
+
+	test("dispose is idempotent — second call is a no-op", async () => {
+		await provider.resolveWebviewView(mockWebviewView)
+
+		await provider.dispose()
+		await provider.dispose()
+
+		// dispose body runs only once: log "Disposing ClineProvider..." appears once
+		const disposeCalls = (mockOutputChannel.appendLine as ReturnType<typeof vi.fn>).mock.calls.filter(
+			([msg]) => typeof msg === "string" && msg.includes("Disposing ClineProvider..."),
+		)
+		expect(disposeCalls).toHaveLength(1)
+	})
+
 	test("handles webviewDidLaunch message", async () => {
 		await provider.resolveWebviewView(mockWebviewView)
 
@@ -3771,4 +3808,53 @@ describe("ClineProvider - Comprehensive Edit/Delete Edge Cases", () => {
 			})
 		})
 	})
+
+	describe("getTaskWithId", () => {
+		it("returns empty apiConversationHistory when file is missing", async () => {
+			const historyItem = { id: "missing-api-file-task", task: "test task", ts: Date.now() }
+			vi.mocked(mockContext.globalState.get).mockImplementation((key: string) => {
+				if (key === "taskHistory") {
+					return [historyItem]
+				}
+				return undefined
+			})
+
+			const deleteTaskSpy = vi.spyOn(provider, "deleteTaskFromState")
+
+			const result = await (provider as any).getTaskWithId("missing-api-file-task")
+
+			expect(result.historyItem).toEqual(historyItem)
+			expect(result.apiConversationHistory).toEqual([])
+			expect(deleteTaskSpy).not.toHaveBeenCalled()
+		})
+
+		it("returns empty apiConversationHistory when file contains invalid JSON", async () => {
+			const historyItem = { id: "corrupt-api-task", task: "test task", ts: Date.now() }
+			vi.mocked(mockContext.globalState.get).mockImplementation((key: string) => {
+				if (key === "taskHistory") {
+					return [historyItem]
+				}
+				return undefined
+			})
+
+			// Make fileExistsAtPath return true so the read path is exercised
+			const fsUtils = await import("../../../utils/fs")
+			vi.spyOn(fsUtils, "fileExistsAtPath").mockResolvedValue(true)
+
+			// Make readFile return corrupted JSON
+			const fsp = await import("fs/promises")
+			vi.mocked(fsp.readFile).mockResolvedValueOnce("{not valid json!!!" as never)
+
+			const deleteTaskSpy = vi.spyOn(provider, "deleteTaskFromState")
+
+			const result = await (provider as any).getTaskWithId("corrupt-api-task")
+
+			expect(result.historyItem).toEqual(historyItem)
+			expect(result.apiConversationHistory).toEqual([])
+			expect(deleteTaskSpy).not.toHaveBeenCalled()
+
+			// Restore the spy
+			vi.mocked(fsUtils.fileExistsAtPath).mockRestore()
+		})
+	})
 })
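
The two `getTaskWithId` tests above pin down the new fallback behavior: a missing or corrupt API-history file now degrades to an empty array instead of triggering `deleteTaskFromState`. A rough sketch of that read path (the helper name is hypothetical, not the provider's actual method):

```typescript
import { promises as fs } from "fs"

// Tolerant read: any failure (missing file, invalid JSON, non-array shape)
// yields an empty history rather than destroying task state.
async function readApiConversationHistory(filePath: string): Promise<unknown[]> {
	try {
		const raw = await fs.readFile(filePath, "utf8")
		const parsed = JSON.parse(raw)
		return Array.isArray(parsed) ? parsed : []
	} catch {
		return []
	}
}

readApiConversationHistory("/no/such/dir/api_history.json").then((h) => {
	console.log(h.length) // 0
})
```

This matches the deletion of the old `FIXME`/`deleteTaskFromState` branch at the top of the diff: a transient disk hiccup no longer erases the task from history.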

+ 161 - 0
src/core/webview/__tests__/ClineProvider.taskHistory.spec.ts

@@ -415,6 +415,74 @@ describe("ClineProvider Task History Synchronization", () => {
 			expect(taskHistoryItemUpdatedCalls.length).toBe(0)
 		})
 
+		it("preserves delegated metadata on partial update unless explicitly overwritten (UTH-02)", async () => {
+			await provider.resolveWebviewView(mockWebviewView)
+			provider.isViewLaunched = true
+
+			const initial = createHistoryItem({
+				id: "task-delegated-metadata",
+				task: "Delegated task",
+				status: "delegated",
+				delegatedToId: "child-1",
+				awaitingChildId: "child-1",
+				childIds: ["child-1"],
+			})
+
+			await provider.updateTaskHistory(initial, { broadcast: false })
+
+			// Partial update intentionally omits delegated metadata fields.
+			const partialUpdate: HistoryItem = {
+				...createHistoryItem({ id: "task-delegated-metadata", task: "Delegated task (updated)" }),
+				status: "active",
+			}
+
+			const updatedHistory = await provider.updateTaskHistory(partialUpdate, { broadcast: false })
+			const updatedItem = updatedHistory.find((item) => item.id === "task-delegated-metadata")
+
+			expect(updatedItem).toBeDefined()
+			expect(updatedItem?.status).toBe("active")
+			expect(updatedItem?.delegatedToId).toBe("child-1")
+			expect(updatedItem?.awaitingChildId).toBe("child-1")
+			expect(updatedItem?.childIds).toEqual(["child-1"])
+		})
+
+		it("invalidates recentTasksCache on updateTaskHistory (UTH-04)", async () => {
+			const workspace = provider.cwd
+			const tsBase = Date.now()
+
+			await provider.updateTaskHistory(
+				createHistoryItem({
+					id: "cache-seed",
+					task: "Cache seed",
+					workspace,
+					ts: tsBase,
+				}),
+				{ broadcast: false },
+			)
+
+			const initialRecent = provider.getRecentTasks()
+			expect(initialRecent).toContain("cache-seed")
+
+			// Prime cache and verify internal cache is set.
+			expect((provider as unknown as { recentTasksCache?: string[] }).recentTasksCache).toEqual(initialRecent)
+
+			await provider.updateTaskHistory(
+				createHistoryItem({
+					id: "cache-new",
+					task: "Cache new",
+					workspace,
+					ts: tsBase + 1,
+				}),
+				{ broadcast: false },
+			)
+
+			// Direct assertion for invalidation side-effect.
+			expect((provider as unknown as { recentTasksCache?: string[] }).recentTasksCache).toBeUndefined()
+
+			const recomputedRecent = provider.getRecentTasks()
+			expect(recomputedRecent).toContain("cache-new")
+		})
+
 		it("updates existing task in history", async () => {
 			await provider.resolveWebviewView(mockWebviewView)
 			provider.isViewLaunched = true
@@ -592,4 +660,97 @@ describe("ClineProvider Task History Synchronization", () => {
 			expect(state.taskHistory.some((item: HistoryItem) => item.workspace === "/different/workspace")).toBe(true)
 		})
 	})
+
+	describe("taskHistory write lock (mutex)", () => {
+		it("serializes concurrent updateTaskHistory calls so no entries are lost", async () => {
+			await provider.resolveWebviewView(mockWebviewView)
+
+			// Fire 5 concurrent updateTaskHistory calls
+			const items = Array.from({ length: 5 }, (_, i) =>
+				createHistoryItem({ id: `concurrent-${i}`, task: `Task ${i}` }),
+			)
+
+			await Promise.all(items.map((item) => provider.updateTaskHistory(item, { broadcast: false })))
+
+			// All 5 entries must survive
+			const history = (provider as any).contextProxy.getGlobalState("taskHistory") as HistoryItem[]
+			const ids = history.map((h: HistoryItem) => h.id)
+			for (const item of items) {
+				expect(ids).toContain(item.id)
+			}
+			expect(history.length).toBe(5)
+		})
+
+		it("serializes concurrent update and deleteTaskFromState so they don't corrupt each other", async () => {
+			await provider.resolveWebviewView(mockWebviewView)
+
+			// Seed with two items
+			const keep = createHistoryItem({ id: "keep-me", task: "Keep" })
+			const remove = createHistoryItem({ id: "remove-me", task: "Remove" })
+			await provider.updateTaskHistory(keep, { broadcast: false })
+			await provider.updateTaskHistory(remove, { broadcast: false })
+
+			// Concurrently: add a new item AND delete "remove-me"
+			const newItem = createHistoryItem({ id: "new-item", task: "New" })
+			await Promise.all([
+				provider.updateTaskHistory(newItem, { broadcast: false }),
+				provider.deleteTaskFromState("remove-me"),
+			])
+
+			const history = (provider as any).contextProxy.getGlobalState("taskHistory") as HistoryItem[]
+			const ids = history.map((h: HistoryItem) => h.id)
+			expect(ids).toContain("keep-me")
+			expect(ids).toContain("new-item")
+			expect(ids).not.toContain("remove-me")
+		})
+
+		it("does not block subsequent writes when a previous write errors", async () => {
+			await provider.resolveWebviewView(mockWebviewView)
+
+			// Temporarily make updateGlobalState throw
+			const origUpdateGlobalState = (provider as any).updateGlobalState.bind(provider)
+			let callCount = 0
+			;(provider as any).updateGlobalState = vi.fn().mockImplementation((...args: unknown[]) => {
+				callCount++
+				if (callCount === 1) {
+					return Promise.reject(new Error("simulated write failure"))
+				}
+				return origUpdateGlobalState(...args)
+			})
+
+			// First call should fail
+			const item1 = createHistoryItem({ id: "fail-item", task: "Fail" })
+			await expect(provider.updateTaskHistory(item1, { broadcast: false })).rejects.toThrow(
+				"simulated write failure",
+			)
+
+			// Second call should still succeed (lock not stuck)
+			const item2 = createHistoryItem({ id: "ok-item", task: "OK" })
+			const result = await provider.updateTaskHistory(item2, { broadcast: false })
+			expect(result.some((h) => h.id === "ok-item")).toBe(true)
+		})
+
+		it("serializes concurrent updates to the same item preserving the last write", async () => {
+			await provider.resolveWebviewView(mockWebviewView)
+
+			const base = createHistoryItem({ id: "race-item", task: "Original" })
+			await provider.updateTaskHistory(base, { broadcast: false })
+
+			// Fire two concurrent updates to the same item
+			await Promise.all([
+				provider.updateTaskHistory(createHistoryItem({ id: "race-item", task: "Original", tokensIn: 111 }), {
+					broadcast: false,
+				}),
+				provider.updateTaskHistory(createHistoryItem({ id: "race-item", task: "Original", tokensIn: 222 }), {
+					broadcast: false,
+				}),
+			])
+
+			const history = (provider as any).contextProxy.getGlobalState("taskHistory") as HistoryItem[]
+			const item = history.find((h: HistoryItem) => h.id === "race-item")
+			expect(item).toBeDefined()
+			// The second write (tokensIn: 222) should be the last one since writes are serialized
+			expect(item!.tokensIn).toBe(222)
+		})
+	})
 })
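
The mutex tests above guard against a classic lost update: two unserialized read-modify-write cycles that both read the same snapshot, so the last writer clobbers the other. A dependency-free illustration of the anomaly (not project code):

```typescript
// Without serialization, concurrent read-modify-write cycles drop entries:
// both read the same snapshot, and the last writer wins.
let store: string[] = []

async function unsafeAdd(id: string): Promise<void> {
	const snapshot = [...store] // read
	await new Promise((r) => setTimeout(r, 10)) // async gap (e.g. a disk write)
	snapshot.push(id) // modify
	store = snapshot // write: may clobber a concurrent writer
}

async function demo(): Promise<void> {
	await Promise.all([unsafeAdd("a"), unsafeAdd("b")])
	console.log(store.length) // 1, not 2: one entry was lost
}
demo()
```

Routing both writers through the `withTaskHistoryLock` queue removes the interleaving, which is exactly what the "serializes concurrent updateTaskHistory calls" test asserts.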

+ 0 - 2
src/core/webview/__tests__/generateSystemPrompt.browser-capability.spec.ts

@@ -62,8 +62,6 @@ function makeProviderStub() {
 			experiments: {},
 			browserToolEnabled: true, // critical: enabled in settings
 			language: "en",
-			maxReadFileLine: -1,
-			maxConcurrentFileReads: 5,
 		}),
 	} as any
 }

+ 0 - 4
src/core/webview/generateSystemPrompt.ts

@@ -19,8 +19,6 @@ export const generateSystemPrompt = async (provider: ClineProvider, message: Web
 		experiments,
 		browserToolEnabled,
 		language,
-		maxReadFileLine,
-		maxConcurrentFileReads,
 		enableSubfolderRules,
 	} = await provider.getState()
 
@@ -70,9 +68,7 @@ export const generateSystemPrompt = async (provider: ClineProvider, message: Web
 		experiments,
 		language,
 		rooIgnoreInstructions,
-		maxReadFileLine !== -1,
 		{
-			maxConcurrentFileReads: maxConcurrentFileReads ?? 5,
 			todoListEnabled: apiConfiguration?.todoListEnabled ?? true,
 			useAgentRules: vscode.workspace.getConfiguration(Package.name).get<boolean>("useAgentRules") ?? true,
 			enableSubfolderRules: enableSubfolderRules ?? false,

+ 11 - 5
src/core/webview/webviewMessageHandler.ts

@@ -490,12 +490,18 @@ export const webviewMessageHandler = async (
 						if (!checkExistKey(listApiConfig[0])) {
 							const { apiConfiguration } = await provider.getState()
 
-							await provider.providerSettingsManager.saveConfig(
-								listApiConfig[0].name ?? "default",
-								apiConfiguration,
-							)
+							// Only save if the current configuration has meaningful settings
+							// (e.g., API keys). This prevents saving a default "anthropic"
+							// fallback when no real config exists, which can happen during
+							// CLI initialization before provider settings are applied.
+							if (checkExistKey(apiConfiguration)) {
+								await provider.providerSettingsManager.saveConfig(
+									listApiConfig[0].name ?? "default",
+									apiConfiguration,
+								)
 
-							listApiConfig[0].apiProvider = apiConfiguration.apiProvider
+								listApiConfig[0].apiProvider = apiConfiguration.apiProvider
+							}
 						}
 					}
 

+ 12 - 7
src/extension.ts

@@ -1,15 +1,20 @@
 import * as vscode from "vscode"
 import * as dotenvx from "@dotenvx/dotenvx"
+import * as fs from "fs"
 import * as path from "path"
 
 // Load environment variables from .env file
-try {
-	// Specify path to .env file in the project root directory
-	const envPath = path.join(__dirname, "..", ".env")
-	dotenvx.config({ path: envPath })
-} catch (e) {
-	// Silently handle environment loading errors
-	console.warn("Failed to load environment variables:", e)
+// The extension-level .env is optional (not shipped in production builds).
+// Avoid calling dotenvx when the file doesn't exist, otherwise dotenvx emits
+// a noisy [MISSING_ENV_FILE] error to the extension host console.
+const envPath = path.join(__dirname, "..", ".env")
+if (fs.existsSync(envPath)) {
+	try {
+		dotenvx.config({ path: envPath })
+	} catch (e) {
+		// Best-effort only: never fail extension activation due to optional env loading.
+		console.warn("Failed to load environment variables:", e)
+	}
 }
 
 import type { CloudUserInfo, AuthState } from "@roo-code/types"

+ 1 - 0
src/extension/__tests__/api-send-message.spec.ts

@@ -28,6 +28,7 @@ describe("API - SendMessage Command", () => {
 			postMessageToWebview: mockPostMessageToWebview,
 			on: vi.fn(),
 			getCurrentTaskStack: vi.fn().mockReturnValue([]),
+			getCurrentTask: vi.fn().mockReturnValue(undefined),
 			viewLaunched: true,
 		} as unknown as ClineProvider
 

+ 57 - 25
src/extension/api.ts

@@ -4,6 +4,7 @@ import * as path from "path"
 import * as os from "os"
 
 import * as vscode from "vscode"
+import pWaitFor from "p-wait-for"
 
 import {
 	type RooCodeAPI,
@@ -30,7 +31,6 @@ export class API extends EventEmitter<RooCodeEvents> implements RooCodeAPI {
 	private readonly sidebarProvider: ClineProvider
 	private readonly context: vscode.ExtensionContext
 	private readonly ipc?: IpcServer
-	private readonly taskMap = new Map<string, ClineProvider>()
 	private readonly log: (...args: unknown[]) => void
 	private logfile?: string
 
@@ -65,35 +65,37 @@ export class API extends EventEmitter<RooCodeEvents> implements RooCodeAPI {
 			ipc.listen()
 			this.log(`[API] ipc server started: socketPath=${socketPath}, pid=${process.pid}, ppid=${process.ppid}`)
 
-			ipc.on(IpcMessageType.TaskCommand, async (_clientId, { commandName, data }) => {
-				switch (commandName) {
+			ipc.on(IpcMessageType.TaskCommand, async (_clientId, command) => {
+				switch (command.commandName) {
 					case TaskCommandName.StartNewTask:
-						this.log(`[API] StartNewTask -> ${data.text}, ${JSON.stringify(data.configuration)}`)
-						await this.startNewTask(data)
+						this.log(
+							`[API] StartNewTask -> ${command.data.text}, ${JSON.stringify(command.data.configuration)}`,
+						)
+						await this.startNewTask(command.data)
 						break
 					case TaskCommandName.CancelTask:
-						this.log(`[API] CancelTask -> ${data}`)
-						await this.cancelTask(data)
+						this.log(`[API] CancelTask`)
+						await this.cancelCurrentTask()
 						break
 					case TaskCommandName.CloseTask:
-						this.log(`[API] CloseTask -> ${data}`)
+						this.log(`[API] CloseTask`)
 						await vscode.commands.executeCommand("workbench.action.files.saveFiles")
 						await vscode.commands.executeCommand("workbench.action.closeWindow")
 						break
 					case TaskCommandName.ResumeTask:
-						this.log(`[API] ResumeTask -> ${data}`)
+						this.log(`[API] ResumeTask -> ${command.data}`)
 						try {
-							await this.resumeTask(data)
+							await this.resumeTask(command.data)
 						} catch (error) {
 							const errorMessage = error instanceof Error ? error.message : String(error)
-							this.log(`[API] ResumeTask failed for taskId ${data}: ${errorMessage}`)
+							this.log(`[API] ResumeTask failed for taskId ${command.data}: ${errorMessage}`)
 							// Don't rethrow - we want to prevent IPC server crashes
 							// The error is logged for debugging purposes
 						}
 						break
 					case TaskCommandName.SendMessage:
-						this.log(`[API] SendMessage -> ${data.text}`)
-						await this.sendMessage(data.text, data.images)
+						this.log(`[API] SendMessage -> ${command.data.text}`)
+						await this.sendMessage(command.data.text, command.data.images)
 						break
 				}
 			})
@@ -153,9 +155,19 @@ export class API extends EventEmitter<RooCodeEvents> implements RooCodeAPI {
 	}
 
 	public async resumeTask(taskId: string): Promise<void> {
+		await vscode.commands.executeCommand(`${Package.name}.SidebarProvider.focus`)
+		await this.waitForWebviewLaunch(5_000)
+
 		const { historyItem } = await this.sidebarProvider.getTaskWithId(taskId)
 		await this.sidebarProvider.createTaskWithHistoryItem(historyItem)
-		await this.sidebarProvider.postMessageToWebview({ type: "action", action: "chatButtonClicked" })
+
+		if (this.sidebarProvider.viewLaunched) {
+			await this.sidebarProvider.postMessageToWebview({ type: "action", action: "chatButtonClicked" })
+		} else {
+			this.log(
+				`[API#resumeTask] webview not launched after resume for task ${taskId}; continuing in headless mode`,
+			)
+		}
 	}
 
 	public async isTaskInHistory(taskId: string): Promise<boolean> {
@@ -181,16 +193,22 @@ export class API extends EventEmitter<RooCodeEvents> implements RooCodeAPI {
 		await this.sidebarProvider.cancelTask()
 	}
 
-	public async cancelTask(taskId: string) {
-		const provider = this.taskMap.get(taskId)
+	public async sendMessage(text?: string, images?: string[]) {
+		const currentTask = this.sidebarProvider.getCurrentTask()
+
+		// In headless/sandbox flows the webview may not be launched, so routing
+		// through invoke=sendMessage drops the message. Deliver directly to the
+		// task ask-response channel instead.
+		if (!this.sidebarProvider.viewLaunched) {
+			if (!currentTask) {
+				this.log("[API#sendMessage] no current task in headless mode; message dropped")
+				return
+			}
 
-		if (provider) {
-			await provider.cancelTask()
-			this.taskMap.delete(taskId)
+			await currentTask.submitUserMessage(text ?? "", images)
+			return
 		}
-	}
 
-	public async sendMessage(text?: string, images?: string[]) {
 		await this.sidebarProvider.postMessageToWebview({ type: "invoke", invoke: "sendMessage", text, images })
 	}
 
@@ -206,13 +224,26 @@ export class API extends EventEmitter<RooCodeEvents> implements RooCodeAPI {
 		return this.sidebarProvider.viewLaunched
 	}
 
+	private async waitForWebviewLaunch(timeoutMs: number): Promise<boolean> {
+		try {
+			await pWaitFor(() => this.sidebarProvider.viewLaunched, {
+				timeout: timeoutMs,
+				interval: 50,
+			})
+
+			return true
+		} catch {
+			this.log(`[API#waitForWebviewLaunch] webview did not launch within ${timeoutMs}ms`)
+			return false
+		}
+	}
+
 	private registerListeners(provider: ClineProvider) {
 		provider.on(RooCodeEventName.TaskCreated, (task) => {
 			// Task Lifecycle
 
 			task.on(RooCodeEventName.TaskStarted, async () => {
 				this.emit(RooCodeEventName.TaskStarted, task.taskId)
-				this.taskMap.set(task.taskId, provider)
 				await this.fileLog(`[${new Date().toISOString()}] taskStarted -> ${task.taskId}\n`)
 			})
 
@@ -221,8 +252,6 @@ export class API extends EventEmitter<RooCodeEvents> implements RooCodeAPI {
 					isSubtask: !!task.parentTaskId,
 				})
 
-				this.taskMap.delete(task.taskId)
-
 				await this.fileLog(
 					`[${new Date().toISOString()}] taskCompleted -> ${task.taskId} | ${JSON.stringify(tokenUsage, null, 2)} | ${JSON.stringify(toolUsage, null, 2)}\n`,
 				)
@@ -230,7 +259,6 @@ export class API extends EventEmitter<RooCodeEvents> implements RooCodeAPI {
 
 			task.on(RooCodeEventName.TaskAborted, () => {
 				this.emit(RooCodeEventName.TaskAborted, task.taskId)
-				this.taskMap.delete(task.taskId)
 			})
 
 			task.on(RooCodeEventName.TaskFocused, () => {
@@ -301,6 +329,10 @@ export class API extends EventEmitter<RooCodeEvents> implements RooCodeAPI {
 				this.emit(RooCodeEventName.TaskAskResponded, task.taskId)
 			})
 
+			task.on(RooCodeEventName.QueuedMessagesUpdated, (taskId, messages) => {
+				this.emit(RooCodeEventName.QueuedMessagesUpdated, taskId, messages)
+			})
+
 			// Task Analytics
 
 			task.on(RooCodeEventName.TaskToolFailed, (taskId, tool, error) => {

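`waitForWebviewLaunch` above leans on `p-wait-for`; the underlying polling loop is simple enough to sketch without the dependency (timings and names are illustrative):

```typescript
// Poll `condition` every `intervalMs` until it holds or the deadline passes.
async function waitFor(condition: () => boolean, timeoutMs: number, intervalMs = 50): Promise<boolean> {
	const deadline = Date.now() + timeoutMs
	while (!condition()) {
		if (Date.now() >= deadline) return false
		await new Promise((r) => setTimeout(r, intervalMs))
	}
	return true
}

// Usage mirrors the resumeTask change: wait for a launch flag,
// then degrade gracefully (headless mode) if the timeout elapses.
async function demo(): Promise<void> {
	let launched = false
	setTimeout(() => { launched = true }, 30)
	const ok = await waitFor(() => launched, 1_000, 10)
	console.log(ok) // true
}
demo()
```

Returning a boolean instead of throwing is what lets `resumeTask` fall through to the "continuing in headless mode" branch rather than failing the IPC command.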
+ 0 - 221
src/integrations/misc/__tests__/extract-text-large-files.spec.ts

@@ -1,221 +0,0 @@
-// npx vitest run integrations/misc/__tests__/extract-text-large-files.spec.ts
-
-import * as fs from "fs/promises"
-
-import { extractTextFromFile } from "../extract-text"
-import { countFileLines } from "../line-counter"
-import { readLines } from "../read-lines"
-import { isBinaryFile } from "isbinaryfile"
-
-// Mock all dependencies
-vi.mock("fs/promises")
-vi.mock("../line-counter")
-vi.mock("../read-lines")
-vi.mock("isbinaryfile")
-
-describe("extractTextFromFile - Large File Handling", () => {
-	// Type the mocks
-	const mockedFs = vi.mocked(fs)
-	const mockedCountFileLines = vi.mocked(countFileLines)
-	const mockedReadLines = vi.mocked(readLines)
-	const mockedIsBinaryFile = vi.mocked(isBinaryFile)
-
-	beforeEach(() => {
-		vi.clearAllMocks()
-		// Set default mock behavior
-		mockedFs.access.mockResolvedValue(undefined)
-		mockedIsBinaryFile.mockResolvedValue(false)
-	})
-
-	it("should truncate files that exceed maxReadFileLine limit", async () => {
-		const largeFileContent = Array(150)
-			.fill(null)
-			.map((_, i) => `Line ${i + 1}: This is a test line with some content`)
-			.join("\n")
-
-		mockedCountFileLines.mockResolvedValue(150)
-		mockedReadLines.mockResolvedValue(
-			Array(100)
-				.fill(null)
-				.map((_, i) => `Line ${i + 1}: This is a test line with some content`)
-				.join("\n"),
-		)
-
-		const result = await extractTextFromFile("/test/large-file.ts", 100)
-
-		// Should only include first 100 lines with line numbers
-		expect(result).toContain("  1 | Line 1: This is a test line with some content")
-		expect(result).toContain("100 | Line 100: This is a test line with some content")
-		expect(result).not.toContain("101 | Line 101: This is a test line with some content")
-
-		// Should include truncation message
-		expect(result).toContain(
-			"[File truncated: showing 100 of 150 total lines. The file is too large and may exhaust the context window if read in full.]",
-		)
-	})
-
-	it("should not truncate files within the maxReadFileLine limit", async () => {
-		const smallFileContent = Array(50)
-			.fill(null)
-			.map((_, i) => `Line ${i + 1}: This is a test line`)
-			.join("\n")
-
-		mockedCountFileLines.mockResolvedValue(50)
-		mockedFs.readFile.mockResolvedValue(smallFileContent as any)
-
-		const result = await extractTextFromFile("/test/small-file.ts", 100)
-
-		// Should include all lines with line numbers
-		expect(result).toContain(" 1 | Line 1: This is a test line")
-		expect(result).toContain("50 | Line 50: This is a test line")
-
-		// Should not include truncation message
-		expect(result).not.toContain("[File truncated:")
-	})
-
-	it("should handle files with exactly maxReadFileLine lines", async () => {
-		const exactFileContent = Array(100)
-			.fill(null)
-			.map((_, i) => `Line ${i + 1}`)
-			.join("\n")
-
-		mockedCountFileLines.mockResolvedValue(100)
-		mockedFs.readFile.mockResolvedValue(exactFileContent as any)
-
-		const result = await extractTextFromFile("/test/exact-file.ts", 100)
-
-		// Should include all lines with line numbers
-		expect(result).toContain("  1 | Line 1")
-		expect(result).toContain("100 | Line 100")
-
-		// Should not include truncation message
-		expect(result).not.toContain("[File truncated:")
-	})
-
-	it("should handle undefined maxReadFileLine by not truncating", async () => {
-		const largeFileContent = Array(200)
-			.fill(null)
-			.map((_, i) => `Line ${i + 1}`)
-			.join("\n")
-
-		mockedFs.readFile.mockResolvedValue(largeFileContent as any)
-
-		const result = await extractTextFromFile("/test/large-file.ts", undefined)
-
-		// Should include all lines with line numbers when maxReadFileLine is undefined
-		expect(result).toContain("  1 | Line 1")
-		expect(result).toContain("200 | Line 200")
-
-		// Should not include truncation message
-		expect(result).not.toContain("[File truncated:")
-	})
-
-	it("should handle empty files", async () => {
-		mockedFs.readFile.mockResolvedValue("" as any)
-
-		const result = await extractTextFromFile("/test/empty-file.ts", 100)
-
-		expect(result).toBe("")
-		expect(result).not.toContain("[File truncated:")
-	})
-
-	it("should handle files with only newlines", async () => {
-		const newlineOnlyContent = "\n\n\n\n\n"
-
-		mockedCountFileLines.mockResolvedValue(6) // 5 newlines = 6 lines
-		mockedReadLines.mockResolvedValue("\n\n")
-
-		const result = await extractTextFromFile("/test/newline-file.ts", 3)
-
-		// Should truncate at line 3
-		expect(result).toContain("[File truncated: showing 3 of 6 total lines")
-	})
-
-	it("should handle very large files efficiently", async () => {
-		// Simulate a 10,000 line file
-		mockedCountFileLines.mockResolvedValue(10000)
-		mockedReadLines.mockResolvedValue(
-			Array(500)
-				.fill(null)
-				.map((_, i) => `Line ${i + 1}: Some content here`)
-				.join("\n"),
-		)
-
-		const result = await extractTextFromFile("/test/very-large-file.ts", 500)
-
-		// Should only include first 500 lines with line numbers
-		expect(result).toContain("  1 | Line 1: Some content here")
-		expect(result).toContain("500 | Line 500: Some content here")
-		expect(result).not.toContain("501 | Line 501: Some content here")
-
-		// Should show truncation message
-		expect(result).toContain("[File truncated: showing 500 of 10000 total lines")
-	})
-
-	it("should handle maxReadFileLine of 0 by throwing an error", async () => {
-		const fileContent = "Line 1\nLine 2\nLine 3"
-
-		mockedFs.readFile.mockResolvedValue(fileContent as any)
-
-		// maxReadFileLine of 0 should throw an error
-		await expect(extractTextFromFile("/test/file.ts", 0)).rejects.toThrow(
-			"Invalid maxReadFileLine: 0. Must be a positive integer or -1 for unlimited.",
-		)
-	})
-
-	it("should handle negative maxReadFileLine by treating as undefined", async () => {
-		const fileContent = "Line 1\nLine 2\nLine 3"
-
-		mockedFs.readFile.mockResolvedValue(fileContent as any)
-
-		const result = await extractTextFromFile("/test/file.ts", -1)
-
-		// Should include all content with line numbers when negative
-		expect(result).toContain("1 | Line 1")
-		expect(result).toContain("2 | Line 2")
-		expect(result).toContain("3 | Line 3")
-		expect(result).not.toContain("[File truncated:")
-	})
-
-	it("should preserve file content structure when truncating", async () => {
-		const structuredContent = [
-			"function example() {",
-			"  const x = 1;",
-			"  const y = 2;",
-			"  return x + y;",
-			"}",
-			"",
-			"// More code below",
-		].join("\n")
-
-		mockedCountFileLines.mockResolvedValue(7)
-		mockedReadLines.mockResolvedValue(["function example() {", "  const x = 1;", "  const y = 2;"].join("\n"))
-
-		const result = await extractTextFromFile("/test/structured.ts", 3)
-
-		// Should preserve the first 3 lines with line numbers
-		expect(result).toContain("1 | function example() {")
-		expect(result).toContain("2 |   const x = 1;")
-		expect(result).toContain("3 |   const y = 2;")
-		expect(result).not.toContain("4 |   return x + y;")
-
-		// Should include truncation info
-		expect(result).toContain("[File truncated: showing 3 of 7 total lines")
-	})
-
-	it("should handle binary files by throwing an error", async () => {
-		mockedIsBinaryFile.mockResolvedValue(true)
-
-		await expect(extractTextFromFile("/test/binary.bin", 100)).rejects.toThrow(
-			"Cannot read text for file type: .bin",
-		)
-	})
-
-	it("should handle file not found errors", async () => {
-		mockedFs.access.mockRejectedValue(new Error("ENOENT"))
-
-		await expect(extractTextFromFile("/test/nonexistent.ts", 100)).rejects.toThrow(
-			"File not found: /test/nonexistent.ts",
-		)
-	})
-})
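The removed spec above exercised a truncation contract: number the first `maxReadFileLine` lines with right-aligned line numbers, then append a "[File truncated: showing N of M total lines...]" notice. A minimal self-contained sketch of that behavior (the helper names here are illustrative, not the extension's real `extract-text` implementation):

```typescript
// Right-align line numbers to the width of the largest number shown.
function addLineNumbers(lines: string[], startAt = 1): string {
	const width = String(startAt + lines.length - 1).length
	return lines.map((line, i) => `${String(startAt + i).padStart(width)} | ${line}`).join("\n")
}

// Show at most maxLines lines; append the truncation notice when cut short.
function renderTruncated(allLines: string[], maxLines: number): string {
	if (allLines.length <= maxLines) {
		return addLineNumbers(allLines)
	}
	const shown = addLineNumbers(allLines.slice(0, maxLines))
	return (
		shown +
		`\n\n[File truncated: showing ${maxLines} of ${allLines.length} total lines. ` +
		`The file is too large and may exhaust the context window if read in full.]`
	)
}
```

With a 150-line input and `maxLines = 100`, the output contains `  1 | Line 1` through `100 | Line 100` plus the notice, matching the assertions the deleted tests made.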

+ 639 - 0
src/integrations/misc/__tests__/indentation-reader.spec.ts

@@ -0,0 +1,639 @@
+import { describe, it, expect } from "vitest"
+import {
+	parseLines,
+	formatWithLineNumbers,
+	readWithIndentation,
+	readWithSlice,
+	computeEffectiveIndents,
+	type LineRecord,
+	type IndentationReadResult,
+} from "../indentation-reader"
+
+// ─── Test Fixtures ────────────────────────────────────────────────────────────
+
+const PYTHON_CODE = `#!/usr/bin/env python3
+"""Module docstring."""
+import os
+import sys
+from typing import List
+
+class Calculator:
+    """A simple calculator class."""
+    
+    def __init__(self, value: int = 0):
+        self.value = value
+    
+    def add(self, n: int) -> int:
+        """Add a number."""
+        self.value += n
+        return self.value
+    
+    def subtract(self, n: int) -> int:
+        """Subtract a number."""
+        self.value -= n
+        return self.value
+    
+    def reset(self):
+        """Reset to zero."""
+        self.value = 0
+
+def main():
+    calc = Calculator()
+    calc.add(5)
+    print(calc.value)
+
+if __name__ == "__main__":
+    main()
+`
+
+const TYPESCRIPT_CODE = `import { something } from "./module"
+import type { SomeType } from "./types"
+
+// Constants
+const MAX_VALUE = 100
+
+interface Config {
+    name: string
+    value: number
+}
+
+class Handler {
+    private config: Config
+
+    constructor(config: Config) {
+        this.config = config
+    }
+
+    process(input: string): string {
+        // Process the input
+        const result = input.toUpperCase()
+        if (result.length > MAX_VALUE) {
+            return result.slice(0, MAX_VALUE)
+        }
+        return result
+    }
+
+    validate(data: unknown): boolean {
+        if (typeof data !== "string") {
+            return false
+        }
+        return data.length > 0
+    }
+}
+
+export function createHandler(config: Config): Handler {
+    return new Handler(config)
+}
+`
+
+const SIMPLE_CODE = `function outer() {
+    function inner() {
+        console.log("hello")
+    }
+    inner()
+}
+`
+
+const CODE_WITH_BLANKS = `class Example:
+    def method_one(self):
+        x = 1
+        
+        y = 2
+        
+        return x + y
+    
+    def method_two(self):
+        return 42
+`
+
+// ─── parseLines Tests ─────────────────────────────────────────────────────────
+
+describe("parseLines", () => {
+	it("should parse lines with correct line numbers", () => {
+		const content = "line1\nline2\nline3"
+		const lines = parseLines(content)
+
+		expect(lines).toHaveLength(3)
+		expect(lines[0].lineNumber).toBe(1)
+		expect(lines[1].lineNumber).toBe(2)
+		expect(lines[2].lineNumber).toBe(3)
+	})
+
+	it("should calculate indentation levels correctly", () => {
+		const content = "no indent\n    one level\n        two levels\n\t\ttab indent"
+		const lines = parseLines(content)
+
+		expect(lines[0].indentLevel).toBe(0)
+		expect(lines[1].indentLevel).toBe(1) // 4 spaces = 1 level
+		expect(lines[2].indentLevel).toBe(2) // 8 spaces = 2 levels
+		expect(lines[3].indentLevel).toBe(2) // 2 tabs = 2 levels (tabs = 4 spaces each)
+	})
+
+	it("should identify blank lines", () => {
+		const content = "content\n\n   \nmore content"
+		const lines = parseLines(content)
+
+		expect(lines[0].isBlank).toBe(false)
+		expect(lines[1].isBlank).toBe(true) // empty
+		expect(lines[2].isBlank).toBe(true) // whitespace only
+		expect(lines[3].isBlank).toBe(false)
+	})
+
+	it("should identify block starts (Python style)", () => {
+		const content = "def foo():\n    pass\nclass Bar:\n    pass"
+		const lines = parseLines(content)
+
+		expect(lines[0].isBlockStart).toBe(true) // def foo():
+		expect(lines[1].isBlockStart).toBe(false) // pass
+		expect(lines[2].isBlockStart).toBe(true) // class Bar:
+	})
+
+	it("should identify block starts (C-style)", () => {
+		const content = "function foo() {\n    return\n}\nif (x) {"
+		const lines = parseLines(content)
+
+		expect(lines[0].isBlockStart).toBe(true) // function foo() {
+		expect(lines[1].isBlockStart).toBe(false) // return
+		expect(lines[2].isBlockStart).toBe(false) // }
+		expect(lines[3].isBlockStart).toBe(true) // if (x) {
+	})
+
+	it("should handle empty content", () => {
+		const lines = parseLines("")
+		expect(lines).toHaveLength(1)
+		expect(lines[0].isBlank).toBe(true)
+	})
+})
+
+// ─── computeEffectiveIndents Tests ────────────────────────────────────────────
+
+describe("computeEffectiveIndents", () => {
+	it("should return same indents for non-blank lines", () => {
+		const content = "line1\n    line2\n        line3"
+		const lines = parseLines(content)
+		const effective = computeEffectiveIndents(lines)
+
+		expect(effective[0]).toBe(0)
+		expect(effective[1]).toBe(1)
+		expect(effective[2]).toBe(2)
+	})
+
+	it("should inherit previous indent for blank lines", () => {
+		const content = "line1\n    line2\n\n    line3"
+		const lines = parseLines(content)
+		const effective = computeEffectiveIndents(lines)
+
+		expect(effective[0]).toBe(0) // line1
+		expect(effective[1]).toBe(1) // line2 (indent 1)
+		expect(effective[2]).toBe(1) // blank line inherits from line2
+		expect(effective[3]).toBe(1) // line3
+	})
+
+	it("should handle multiple consecutive blank lines", () => {
+		const content = "    start\n\n\n\n    end"
+		const lines = parseLines(content)
+		const effective = computeEffectiveIndents(lines)
+
+		expect(effective[0]).toBe(1) // start
+		expect(effective[1]).toBe(1) // blank inherits
+		expect(effective[2]).toBe(1) // blank inherits
+		expect(effective[3]).toBe(1) // blank inherits
+		expect(effective[4]).toBe(1) // end
+	})
+
+	it("should handle blank line at start", () => {
+		const content = "\n    content"
+		const lines = parseLines(content)
+		const effective = computeEffectiveIndents(lines)
+
+		expect(effective[0]).toBe(0) // blank at start has no previous, defaults to 0
+		expect(effective[1]).toBe(1) // content
+	})
+})
+
+// ─── formatWithLineNumbers Tests ──────────────────────────────────────────────
+
+describe("formatWithLineNumbers", () => {
+	it("should format lines with line numbers", () => {
+		const lines: LineRecord[] = [
+			{ lineNumber: 1, content: "first", indentLevel: 0, isBlank: false, isBlockStart: false },
+			{ lineNumber: 2, content: "second", indentLevel: 0, isBlank: false, isBlockStart: false },
+		]
+
+		const result = formatWithLineNumbers(lines)
+		expect(result).toBe("1 | first\n2 | second")
+	})
+
+	it("should pad line numbers for alignment", () => {
+		const lines: LineRecord[] = [
+			{ lineNumber: 1, content: "a", indentLevel: 0, isBlank: false, isBlockStart: false },
+			{ lineNumber: 10, content: "b", indentLevel: 0, isBlank: false, isBlockStart: false },
+			{ lineNumber: 100, content: "c", indentLevel: 0, isBlank: false, isBlockStart: false },
+		]
+
+		const result = formatWithLineNumbers(lines)
+		expect(result).toBe("  1 | a\n 10 | b\n100 | c")
+	})
+
+	it("should truncate long lines", () => {
+		const longLine = "x".repeat(600)
+		const lines: LineRecord[] = [
+			{ lineNumber: 1, content: longLine, indentLevel: 0, isBlank: false, isBlockStart: false },
+		]
+
+		const result = formatWithLineNumbers(lines, 100)
+		expect(result.length).toBeLessThan(longLine.length)
+		expect(result).toContain("...")
+	})
+
+	it("should handle empty array", () => {
+		const result = formatWithLineNumbers([])
+		expect(result).toBe("")
+	})
+})
+
+// ─── readWithSlice Tests ──────────────────────────────────────────────────────
+
+describe("readWithSlice", () => {
+	it("should read from beginning with default offset", () => {
+		const result = readWithSlice(SIMPLE_CODE, 0, 10)
+
+		expect(result.totalLines).toBe(7) // 6 lines + empty trailing
+		expect(result.returnedLines).toBe(7)
+		expect(result.wasTruncated).toBe(false)
+		expect(result.content).toContain("1 | function outer()")
+	})
+
+	it("should respect offset parameter", () => {
+		const result = readWithSlice(SIMPLE_CODE, 2, 10)
+
+		expect(result.content).not.toContain("function outer()")
+		expect(result.content).toContain("console.log")
+		expect(result.includedRanges[0][0]).toBe(3) // 1-based, offset 2 = line 3
+	})
+
+	it("should respect limit parameter", () => {
+		const result = readWithSlice(TYPESCRIPT_CODE, 0, 5)
+
+		expect(result.returnedLines).toBe(5)
+		expect(result.wasTruncated).toBe(true)
+	})
+
+	it("should handle offset beyond file end", () => {
+		const result = readWithSlice(SIMPLE_CODE, 1000, 10)
+
+		expect(result.returnedLines).toBe(0)
+		expect(result.content).toContain("Error")
+	})
+
+	it("should handle negative offset", () => {
+		const result = readWithSlice(SIMPLE_CODE, -5, 10)
+
+		// Should normalize to 0
+		expect(result.includedRanges[0][0]).toBe(1)
+	})
+})
+
+// ─── readWithIndentation Tests ────────────────────────────────────────────────
+
+describe("readWithIndentation", () => {
+	describe("basic block extraction", () => {
+		it("should extract content around the anchor line", () => {
+			const result = readWithIndentation(PYTHON_CODE, {
+				anchorLine: 15, // Inside add() method
+				maxLevels: 0, // unlimited
+				includeHeader: false,
+				includeSiblings: false,
+			})
+
+			expect(result.content).toContain("def add")
+			expect(result.content).toContain("self.value += n")
+			expect(result.content).toContain("return self.value")
+		})
+
+		it("should handle anchor at first line", () => {
+			const result = readWithIndentation(SIMPLE_CODE, {
+				anchorLine: 1,
+				maxLevels: 0,
+				includeHeader: false,
+			})
+
+			expect(result.returnedLines).toBeGreaterThan(0)
+			expect(result.content).toContain("function outer()")
+		})
+
+		it("should handle anchor at last line", () => {
+			const lines = PYTHON_CODE.trim().split("\n").length
+			const result = readWithIndentation(PYTHON_CODE, {
+				anchorLine: lines,
+				maxLevels: 0,
+				includeHeader: false,
+			})
+
+			expect(result.returnedLines).toBeGreaterThan(0)
+		})
+	})
+
+	describe("max_levels behavior", () => {
+		it("should include all content when maxLevels=0 (unlimited)", () => {
+			const result = readWithIndentation(SIMPLE_CODE, {
+				anchorLine: 3, // Inside inner()
+				maxLevels: 0,
+				includeHeader: false,
+				includeSiblings: false,
+			})
+
+			// With unlimited levels, should get the whole file
+			expect(result.content).toContain("function outer()")
+			expect(result.content).toContain("function inner()")
+			expect(result.content).toContain("console.log")
+		})
+
+		it("should limit expansion when maxLevels > 0", () => {
+			const result = readWithIndentation(SIMPLE_CODE, {
+				anchorLine: 3, // Inside inner()
+				maxLevels: 1,
+				includeHeader: false,
+				includeSiblings: false,
+			})
+
+			// With 1 level, should include inner() context but may not reach outer()
+			expect(result.content).toContain("console.log")
+		})
+
+		it("should handle deeply nested code with unlimited levels", () => {
+			const result = readWithIndentation(PYTHON_CODE, {
+				anchorLine: 15, // Inside add() method body
+				maxLevels: 0, // unlimited
+				includeHeader: false,
+				includeSiblings: false,
+			})
+
+			// Should expand to include class context
+			expect(result.content).toContain("class Calculator")
+		})
+	})
+
+	describe("sibling blocks", () => {
+		it("should exclude siblings when includeSiblings is false", () => {
+			const result = readWithIndentation(PYTHON_CODE, {
+				anchorLine: 15, // Inside add() method
+				maxLevels: 1,
+				includeSiblings: false,
+				includeHeader: false,
+			})
+
+			// Should focus on add() but not include subtract() or other siblings
+			expect(result.content).toContain("def add")
+		})
+
+		it("should include siblings when includeSiblings is true", () => {
+			const result = readWithIndentation(PYTHON_CODE, {
+				anchorLine: 15, // Inside add() method
+				maxLevels: 1,
+				includeSiblings: true,
+				includeHeader: false,
+			})
+
+			// Should include sibling methods
+			expect(result.content).toContain("def add")
+			// May include other siblings depending on limit
+		})
+	})
+
+	describe("file header (includeHeader option)", () => {
+		it("should allow comment lines at min indent when includeHeader is true", () => {
+			// The Codex algorithm's includeHeader option allows comment lines at the
+			// minimum indent level to be included during upward expansion.
+			// This is different from prepending the file's import header.
+			const result = readWithIndentation(PYTHON_CODE, {
+				anchorLine: 15,
+				maxLevels: 0, // unlimited - will expand to indent 0
+				includeHeader: true,
+				includeSiblings: false,
+			})
+
+			// With unlimited levels, bidirectional expansion will include content
+			// at indent level 0. includeHeader allows comment lines to be included.
+			expect(result.returnedLines).toBeGreaterThan(0)
+			expect(result.content).toContain("def add")
+		})
+
+		it("should expand to top-level content with maxLevels=0", () => {
+			const result = readWithIndentation(PYTHON_CODE, {
+				anchorLine: 15,
+				maxLevels: 0, // unlimited
+				includeHeader: false,
+				includeSiblings: false,
+			})
+
+			// With unlimited levels, expansion goes to indent 0
+			// which includes the class definition
+			expect(result.content).toContain("class Calculator")
+		})
+
+		it("should include class content when anchored inside a method", () => {
+			const result = readWithIndentation(TYPESCRIPT_CODE, {
+				anchorLine: 20, // Inside Handler class
+				maxLevels: 0,
+				includeHeader: true,
+				includeSiblings: false,
+			})
+
+			// Should include class context
+			expect(result.content).toContain("class Handler")
+		})
+	})
+
+	describe("line limit and max_lines", () => {
+		it("should truncate output when exceeding limit", () => {
+			const result = readWithIndentation(TYPESCRIPT_CODE, {
+				anchorLine: 15,
+				maxLevels: 0,
+				includeHeader: true,
+				includeSiblings: true,
+				limit: 10,
+			})
+
+			expect(result.returnedLines).toBeLessThanOrEqual(10)
+			expect(result.wasTruncated).toBe(true)
+		})
+
+		it("should not truncate when under limit", () => {
+			const result = readWithIndentation(SIMPLE_CODE, {
+				anchorLine: 3,
+				maxLevels: 1,
+				includeHeader: false,
+				limit: 100,
+			})
+
+			expect(result.wasTruncated).toBe(false)
+		})
+
+		it("should respect maxLines as separate hard cap", () => {
+			const result = readWithIndentation(TYPESCRIPT_CODE, {
+				anchorLine: 20,
+				maxLevels: 0,
+				includeHeader: true,
+				includeSiblings: true,
+				limit: 100,
+				maxLines: 5, // Hard cap at 5
+			})
+
+			expect(result.returnedLines).toBeLessThanOrEqual(5)
+		})
+
+		it("should use min of limit and maxLines", () => {
+			const result = readWithIndentation(TYPESCRIPT_CODE, {
+				anchorLine: 20,
+				maxLevels: 0,
+				includeHeader: true,
+				includeSiblings: true,
+				limit: 3, // More restrictive than maxLines
+				maxLines: 10,
+			})
+
+			expect(result.returnedLines).toBeLessThanOrEqual(3)
+		})
+	})
+
+	describe("blank line handling", () => {
+		it("should treat blank lines with inherited indentation", () => {
+			const result = readWithIndentation(CODE_WITH_BLANKS, {
+				anchorLine: 4, // blank line inside method_one
+				maxLevels: 1,
+				includeHeader: false,
+				includeSiblings: false,
+			})
+
+			// Blank line should inherit previous indent and be included in expansion
+			expect(result.returnedLines).toBeGreaterThan(0)
+		})
+
+		it("should trim empty lines from edges of result", () => {
+			const result = readWithIndentation(CODE_WITH_BLANKS, {
+				anchorLine: 3, // x = 1
+				maxLevels: 1,
+				includeHeader: false,
+				includeSiblings: false,
+			})
+
+			// Check that result doesn't start or end with blank lines
+			const lines = result.content.split("\n")
+			if (lines.length > 0) {
+				const firstLine = lines[0]
+				const lastLine = lines[lines.length - 1]
+				// Lines should have content after the line number prefix
+				expect(firstLine).toMatch(/\d+\s*\|/)
+				expect(lastLine).toMatch(/\d+\s*\|/)
+			}
+		})
+	})
+
+	describe("error handling", () => {
+		it("should handle invalid anchor line (too low)", () => {
+			const result = readWithIndentation(PYTHON_CODE, {
+				anchorLine: 0,
+				maxLevels: 1,
+			})
+
+			expect(result.content).toContain("Error")
+			expect(result.returnedLines).toBe(0)
+		})
+
+		it("should handle invalid anchor line (too high)", () => {
+			const result = readWithIndentation(PYTHON_CODE, {
+				anchorLine: 9999,
+				maxLevels: 1,
+			})
+
+			expect(result.content).toContain("Error")
+			expect(result.returnedLines).toBe(0)
+		})
+	})
+
+	describe("bidirectional expansion", () => {
+		it("should expand both up and down from anchor", () => {
+			const result = readWithIndentation(SIMPLE_CODE, {
+				anchorLine: 3, // console.log("hello") - in the middle
+				maxLevels: 0,
+				includeHeader: false,
+				includeSiblings: false,
+				limit: 10,
+			})
+
+			// Should include lines both before and after anchor
+			expect(result.content).toContain("function inner()")
+			expect(result.content).toContain("console.log")
+		})
+
+		it("should return single line when limit is 1", () => {
+			const result = readWithIndentation(SIMPLE_CODE, {
+				anchorLine: 3,
+				maxLevels: 0,
+				includeHeader: false,
+				includeSiblings: false,
+				limit: 1,
+			})
+
+			expect(result.returnedLines).toBe(1)
+			expect(result.content).toContain("console.log")
+		})
+
+		it("should stop expansion when hitting lower indent", () => {
+			const result = readWithIndentation(PYTHON_CODE, {
+				anchorLine: 15, // Inside add() method body (return self.value)
+				maxLevels: 2, // Only go up 2 levels from anchor indent
+				includeHeader: false,
+				includeSiblings: false,
+			})
+
+			// Should include method but respect maxLevels
+			expect(result.content).toContain("def add")
+		})
+	})
+
+	describe("real-world scenarios", () => {
+		it("should extract a function with its context", () => {
+			const result = readWithIndentation(TYPESCRIPT_CODE, {
+				anchorLine: 37, // Inside createHandler function body (return statement)
+				maxLevels: 0,
+				includeHeader: true,
+				includeSiblings: false,
+			})
+
+			expect(result.content).toContain("export function createHandler")
+			expect(result.content).toContain("return new Handler")
+		})
+
+		it("should extract a class method with class context", () => {
+			const result = readWithIndentation(TYPESCRIPT_CODE, {
+				anchorLine: 19, // Inside process() method
+				maxLevels: 1,
+				includeHeader: false,
+				includeSiblings: false,
+			})
+
+			expect(result.content).toContain("process(input: string)")
+		})
+	})
+
+	describe("includedRanges", () => {
+		it("should return correct contiguous range", () => {
+			const result = readWithIndentation(SIMPLE_CODE, {
+				anchorLine: 3,
+				maxLevels: 0,
+				includeHeader: false,
+				includeSiblings: false,
+				limit: 10,
+			})
+
+			expect(result.includedRanges.length).toBeGreaterThan(0)
+			// Each range should be [start, end] with start <= end
+			for (const [start, end] of result.includedRanges) {
+				expect(start).toBeLessThanOrEqual(end)
+				expect(start).toBeGreaterThan(0)
+			}
+		})
+	})
+})
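The new spec's fixtures pin down the indentation model: four columns of leading whitespace equal one indent level, a tab counts as four spaces, and blank lines inherit the previous line's effective level. A minimal sketch of those two rules, assuming the same column math as the tests (this is not the `indentation-reader` module itself):

```typescript
// One indent level per 4 columns of leading whitespace; tabs count as 4 columns.
function indentLevel(line: string): number {
	let cols = 0
	for (const ch of line) {
		if (ch === " ") cols += 1
		else if (ch === "\t") cols += 4
		else break
	}
	return Math.floor(cols / 4)
}

// Blank (or whitespace-only) lines inherit the previous line's level,
// so they never break a block during bidirectional expansion.
function effectiveIndents(lines: string[]): number[] {
	const result: number[] = []
	let prev = 0
	for (const line of lines) {
		const level = line.trim() === "" ? prev : indentLevel(line)
		result.push(level)
		prev = level
	}
	return result
}
```

This is why `"\t\ttab indent"` asserts level 2 and why the blank line between two 4-space-indented lines reports level 1 rather than 0.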

+ 0 - 147
src/integrations/misc/__tests__/read-file-tool.spec.ts

@@ -1,147 +0,0 @@
-// npx vitest run integrations/misc/__tests__/read-file-tool.spec.ts
-
-import type { Mock } from "vitest"
-import * as path from "path"
-import { countFileLines } from "../line-counter"
-import { readLines } from "../read-lines"
-import { extractTextFromFile, addLineNumbers } from "../extract-text"
-
-// Mock the required functions
-vitest.mock("../line-counter")
-vitest.mock("../read-lines")
-vitest.mock("../extract-text")
-
-describe("read_file tool with maxReadFileLine setting", () => {
-	// Mock original implementation first to use in tests
-	let originalCountFileLines: any
-	let originalReadLines: any
-	let originalExtractTextFromFile: any
-	let originalAddLineNumbers: any
-
-	beforeEach(async () => {
-		// Import actual implementations
-		originalCountFileLines = ((await vitest.importActual("../line-counter")) as any).countFileLines
-		originalReadLines = ((await vitest.importActual("../read-lines")) as any).readLines
-		originalExtractTextFromFile = ((await vitest.importActual("../extract-text")) as any).extractTextFromFile
-		originalAddLineNumbers = ((await vitest.importActual("../extract-text")) as any).addLineNumbers
-
-		vitest.resetAllMocks()
-		// Reset mocks to simulate original behavior
-		;(countFileLines as Mock).mockImplementation(originalCountFileLines)
-		;(readLines as Mock).mockImplementation(originalReadLines)
-		;(extractTextFromFile as Mock).mockImplementation(originalExtractTextFromFile)
-		;(addLineNumbers as Mock).mockImplementation(originalAddLineNumbers)
-	})
-
-	// Test for the case when file size is smaller than maxReadFileLine
-	it("should read entire file when line count is less than maxReadFileLine", async () => {
-		// Mock necessary functions
-		;(countFileLines as Mock).mockResolvedValue(100)
-		;(extractTextFromFile as Mock).mockResolvedValue("Small file content")
-
-		// Create mock implementation that would simulate the behavior
-		// Note: We're not testing the Cline class directly as it would be too complex
-		// We're testing the logic flow that would happen in the read_file implementation
-
-		const filePath = path.resolve("/test", "smallFile.txt")
-		const maxReadFileLine = 500
-
-		// Check line count
-		const lineCount = await countFileLines(filePath)
-		expect(lineCount).toBeLessThan(maxReadFileLine)
-
-		// Should use extractTextFromFile for small files
-		if (lineCount < maxReadFileLine) {
-			await extractTextFromFile(filePath)
-		}
-
-		expect(extractTextFromFile).toHaveBeenCalledWith(filePath)
-		expect(readLines).not.toHaveBeenCalled()
-	})
-
-	// Test for the case when file size is larger than maxReadFileLine
-	it("should truncate file when line count exceeds maxReadFileLine", async () => {
-		// Mock necessary functions
-		;(countFileLines as Mock).mockResolvedValue(5000)
-		;(readLines as Mock).mockResolvedValue("First 500 lines of large file")
-		;(addLineNumbers as Mock).mockReturnValue("1 | First line\n2 | Second line\n...")
-
-		const filePath = path.resolve("/test", "largeFile.txt")
-		const maxReadFileLine = 500
-
-		// Check line count
-		const lineCount = await countFileLines(filePath)
-		expect(lineCount).toBeGreaterThan(maxReadFileLine)
-
-		// Should use readLines for large files
-		if (lineCount > maxReadFileLine) {
-			const content = await readLines(filePath, maxReadFileLine - 1, 0)
-			const numberedContent = addLineNumbers(content)
-
-			// Verify the truncation message is shown (simulated)
-			const truncationMsg = `\n\n[File truncated: showing ${maxReadFileLine} of ${lineCount} total lines]`
-			const fullResult = numberedContent + truncationMsg
-
-			expect(fullResult).toContain("File truncated")
-		}
-
-		expect(readLines).toHaveBeenCalledWith(filePath, maxReadFileLine - 1, 0)
-		expect(addLineNumbers).toHaveBeenCalled()
-		expect(extractTextFromFile).not.toHaveBeenCalled()
-	})
-
-	// Test for the case when the file is a source code file
-	it("should add source code file type info for large source code files", async () => {
-		// Mock necessary functions
-		;(countFileLines as Mock).mockResolvedValue(5000)
-		;(readLines as Mock).mockResolvedValue("First 500 lines of large JavaScript file")
-		;(addLineNumbers as Mock).mockReturnValue('1 | const foo = "bar";\n2 | function test() {...')
-
-		const filePath = path.resolve("/test", "largeFile.js")
-		const maxReadFileLine = 500
-
-		// Check line count
-		const lineCount = await countFileLines(filePath)
-		expect(lineCount).toBeGreaterThan(maxReadFileLine)
-
-		// Check if the file is a source code file
-		const fileExt = path.extname(filePath).toLowerCase()
-		const isSourceCode = [
-			".js",
-			".ts",
-			".jsx",
-			".tsx",
-			".py",
-			".java",
-			".c",
-			".cpp",
-			".cs",
-			".go",
-			".rb",
-			".php",
-			".swift",
-			".rs",
-		].includes(fileExt)
-		expect(isSourceCode).toBeTruthy()
-
-		// Should use readLines for large files
-		if (lineCount > maxReadFileLine) {
-			const content = await readLines(filePath, maxReadFileLine - 1, 0)
-			const numberedContent = addLineNumbers(content)
-
-			// Verify the truncation message and source code message are shown (simulated)
-			let truncationMsg = `\n\n[File truncated: showing ${maxReadFileLine} of ${lineCount} total lines]`
-			if (isSourceCode) {
-				truncationMsg +=
-					"\n\nThis appears to be a source code file. Consider using list_code_definition_names to understand its structure."
-			}
-			const fullResult = numberedContent + truncationMsg
-
-			expect(fullResult).toContain("source code file")
-			expect(fullResult).toContain("list_code_definition_names")
-		}
-
-		expect(readLines).toHaveBeenCalledWith(filePath, maxReadFileLine - 1, 0)
-		expect(addLineNumbers).toHaveBeenCalled()
-	})
-})
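The next deleted spec covers `readFileWithTokenBudget`, which reads until a token budget would be exceeded and reports whether the read was complete. A hypothetical sketch of that contract, using a crude 1-token-per-4-characters estimate (an assumption for illustration; the extension uses its own token counting):

```typescript
interface BudgetReadResult {
	content: string
	lineCount: number
	tokenCount: number
	complete: boolean
}

// Rough token estimate: ~4 characters per token (assumed, not the real tokenizer).
function estimateTokens(text: string): number {
	return Math.ceil(text.length / 4)
}

// Accumulate whole lines until the next line would blow the budget;
// complete=false signals the caller that the file was cut short.
function readWithTokenBudget(fileText: string, budgetTokens: number): BudgetReadResult {
	const lines = fileText.split("\n")
	const kept: string[] = []
	let tokens = 0
	for (const line of lines) {
		const cost = estimateTokens(line + "\n")
		if (tokens + cost > budgetTokens && kept.length > 0) {
			return { content: kept.join("\n"), lineCount: kept.length, tokenCount: tokens, complete: false }
		}
		kept.push(line)
		tokens += cost
	}
	return { content: kept.join("\n"), lineCount: kept.length, tokenCount: tokens, complete: true }
}
```

A small file under budget returns `complete: true` with all lines; a large file with a tiny budget returns a prefix and `complete: false`, which mirrors the "stops reading when token budget reached" case below.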

+ 0 - 321
src/integrations/misc/__tests__/read-file-with-budget.spec.ts

@@ -1,321 +0,0 @@
-import fs from "fs/promises"
-import path from "path"
-import os from "os"
-import { readFileWithTokenBudget } from "../read-file-with-budget"
-
-describe("readFileWithTokenBudget", () => {
-	let tempDir: string
-
-	beforeEach(async () => {
-		// Create a temporary directory for test files
-		tempDir = path.join(os.tmpdir(), `read-file-budget-test-${Date.now()}`)
-		await fs.mkdir(tempDir, { recursive: true })
-	})
-
-	afterEach(async () => {
-		// Clean up temporary directory
-		await fs.rm(tempDir, { recursive: true, force: true })
-	})
-
-	describe("Basic functionality", () => {
-		test("reads entire small file when within budget", async () => {
-			const filePath = path.join(tempDir, "small.txt")
-			const content = "Line 1\nLine 2\nLine 3"
-			await fs.writeFile(filePath, content)
-
-			const result = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 1000, // Large budget
-			})
-
-			expect(result.content).toBe(content)
-			expect(result.lineCount).toBe(3)
-			expect(result.complete).toBe(true)
-			expect(result.tokenCount).toBeGreaterThan(0)
-			expect(result.tokenCount).toBeLessThan(1000)
-		})
-
-		test("returns correct token count", async () => {
-			const filePath = path.join(tempDir, "token-test.txt")
-			const content = "This is a test file with some content."
-			await fs.writeFile(filePath, content)
-
-			const result = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 1000,
-			})
-
-			// Token count should be reasonable (rough estimate: 1 token per 3-4 chars)
-			expect(result.tokenCount).toBeGreaterThan(5)
-			expect(result.tokenCount).toBeLessThan(20)
-		})
-
-		test("returns complete: true for files within budget", async () => {
-			const filePath = path.join(tempDir, "within-budget.txt")
-			const lines = Array.from({ length: 10 }, (_, i) => `Line ${i + 1}`)
-			await fs.writeFile(filePath, lines.join("\n"))
-
-			const result = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 1000,
-			})
-
-			expect(result.complete).toBe(true)
-			expect(result.lineCount).toBe(10)
-		})
-	})
-
-	describe("Truncation behavior", () => {
-		test("stops reading when token budget reached", async () => {
-			const filePath = path.join(tempDir, "large.txt")
-			// Create a file with many lines
-			const lines = Array.from({ length: 1000 }, (_, i) => `This is line number ${i + 1} with some content`)
-			await fs.writeFile(filePath, lines.join("\n"))
-
-			const result = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 50, // Small budget
-			})
-
-			expect(result.complete).toBe(false)
-			expect(result.lineCount).toBeLessThan(1000)
-			expect(result.lineCount).toBeGreaterThan(0)
-			expect(result.tokenCount).toBeLessThanOrEqual(50)
-		})
-
-		test("returns complete: false when truncated", async () => {
-			const filePath = path.join(tempDir, "truncated.txt")
-			const lines = Array.from({ length: 500 }, (_, i) => `Line ${i + 1}`)
-			await fs.writeFile(filePath, lines.join("\n"))
-
-			const result = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 20,
-			})
-
-			expect(result.complete).toBe(false)
-			expect(result.tokenCount).toBeLessThanOrEqual(20)
-		})
-
-		test("content ends at line boundary (no partial lines)", async () => {
-			const filePath = path.join(tempDir, "line-boundary.txt")
-			const lines = Array.from({ length: 100 }, (_, i) => `Line ${i + 1}`)
-			await fs.writeFile(filePath, lines.join("\n"))
-
-			const result = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 30,
-			})
-
-			// Content should not end mid-line
-			const contentLines = result.content.split("\n")
-			expect(contentLines.length).toBe(result.lineCount)
-			// Last line should be complete (not cut off)
-			expect(contentLines[contentLines.length - 1]).toMatch(/^Line \d+$/)
-		})
-
-		test("works with different chunk sizes", async () => {
-			const filePath = path.join(tempDir, "chunks.txt")
-			const lines = Array.from({ length: 1000 }, (_, i) => `Line ${i + 1}`)
-			await fs.writeFile(filePath, lines.join("\n"))
-
-			// Test with small chunk size
-			const result1 = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 50,
-				chunkLines: 10,
-			})
-
-			// Test with large chunk size
-			const result2 = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 50,
-				chunkLines: 500,
-			})
-
-			// Both should truncate, but may differ slightly in exact line count
-			expect(result1.complete).toBe(false)
-			expect(result2.complete).toBe(false)
-			expect(result1.tokenCount).toBeLessThanOrEqual(50)
-			expect(result2.tokenCount).toBeLessThanOrEqual(50)
-		})
-	})
-
-	describe("Edge cases", () => {
-		test("handles empty file", async () => {
-			const filePath = path.join(tempDir, "empty.txt")
-			await fs.writeFile(filePath, "")
-
-			const result = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 100,
-			})
-
-			expect(result.content).toBe("")
-			expect(result.lineCount).toBe(0)
-			expect(result.tokenCount).toBe(0)
-			expect(result.complete).toBe(true)
-		})
-
-		test("handles single line file", async () => {
-			const filePath = path.join(tempDir, "single-line.txt")
-			await fs.writeFile(filePath, "Single line content")
-
-			const result = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 100,
-			})
-
-			expect(result.content).toBe("Single line content")
-			expect(result.lineCount).toBe(1)
-			expect(result.complete).toBe(true)
-		})
-
-		test("handles budget of 0 tokens", async () => {
-			const filePath = path.join(tempDir, "zero-budget.txt")
-			await fs.writeFile(filePath, "Line 1\nLine 2\nLine 3")
-
-			const result = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 0,
-			})
-
-			expect(result.content).toBe("")
-			expect(result.lineCount).toBe(0)
-			expect(result.tokenCount).toBe(0)
-			expect(result.complete).toBe(false)
-		})
-
-		test("handles very small budget (fewer tokens than first line)", async () => {
-			const filePath = path.join(tempDir, "tiny-budget.txt")
-			const longLine = "This is a very long line with lots of content that will exceed a tiny token budget"
-			await fs.writeFile(filePath, `${longLine}\nLine 2\nLine 3`)
-
-			const result = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 2, // Very small budget
-			})
-
-			// Should return empty since first line exceeds budget
-			expect(result.content).toBe("")
-			expect(result.lineCount).toBe(0)
-			expect(result.complete).toBe(false)
-		})
-
-		test("throws error for non-existent file", async () => {
-			const filePath = path.join(tempDir, "does-not-exist.txt")
-
-			await expect(
-				readFileWithTokenBudget(filePath, {
-					budgetTokens: 100,
-				}),
-			).rejects.toThrow("File not found")
-		})
-
-		test("handles file with no trailing newline", async () => {
-			const filePath = path.join(tempDir, "no-trailing-newline.txt")
-			await fs.writeFile(filePath, "Line 1\nLine 2\nLine 3")
-
-			const result = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 1000,
-			})
-
-			expect(result.content).toBe("Line 1\nLine 2\nLine 3")
-			expect(result.lineCount).toBe(3)
-			expect(result.complete).toBe(true)
-		})
-
-		test("handles file with trailing newline", async () => {
-			const filePath = path.join(tempDir, "trailing-newline.txt")
-			await fs.writeFile(filePath, "Line 1\nLine 2\nLine 3\n")
-
-			const result = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 1000,
-			})
-
-			expect(result.content).toBe("Line 1\nLine 2\nLine 3")
-			expect(result.lineCount).toBe(3)
-			expect(result.complete).toBe(true)
-		})
-	})
-
-	describe("Token counting accuracy", () => {
-		test("returned tokenCount matches actual tokens in content", async () => {
-			const filePath = path.join(tempDir, "accuracy.txt")
-			const content = "Hello world\nThis is a test\nWith some content"
-			await fs.writeFile(filePath, content)
-
-			const result = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 1000,
-			})
-
-			// Verify the token count is reasonable
-			// Rough estimate: 1 token per 3-4 characters
-			const minExpected = Math.floor(content.length / 5)
-			const maxExpected = Math.ceil(content.length / 2)
-
-			expect(result.tokenCount).toBeGreaterThanOrEqual(minExpected)
-			expect(result.tokenCount).toBeLessThanOrEqual(maxExpected)
-		})
-
-		test("handles special characters correctly", async () => {
-			const filePath = path.join(tempDir, "special-chars.txt")
-			const content = "Special chars: @#$%^&*()\nUnicode: 你好世界\nEmoji: 😀🎉"
-			await fs.writeFile(filePath, content)
-
-			const result = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 1000,
-			})
-
-			expect(result.content).toBe(content)
-			expect(result.tokenCount).toBeGreaterThan(0)
-			expect(result.complete).toBe(true)
-		})
-
-		test("handles code content", async () => {
-			const filePath = path.join(tempDir, "code.ts")
-			const code = `function hello(name: string): string {\n  return \`Hello, \${name}!\`\n}`
-			await fs.writeFile(filePath, code)
-
-			const result = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 1000,
-			})
-
-			expect(result.content).toBe(code)
-			expect(result.tokenCount).toBeGreaterThan(0)
-			expect(result.complete).toBe(true)
-		})
-	})
-
-	describe("Performance", () => {
-		test("handles large files efficiently", async () => {
-			const filePath = path.join(tempDir, "large-file.txt")
-			// Create a 1MB file
-			const lines = Array.from({ length: 10000 }, (_, i) => `Line ${i + 1} with some additional content`)
-			await fs.writeFile(filePath, lines.join("\n"))
-
-			const startTime = Date.now()
-
-			const result = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 100,
-			})
-
-			const endTime = Date.now()
-			const duration = endTime - startTime
-
-			// Should complete in reasonable time (less than 5 seconds)
-			expect(duration).toBeLessThan(5000)
-			expect(result.complete).toBe(false)
-			expect(result.tokenCount).toBeLessThanOrEqual(100)
-		})
-
-		test("early exits when budget is reached", async () => {
-			const filePath = path.join(tempDir, "early-exit.txt")
-			// Create a very large file
-			const lines = Array.from({ length: 50000 }, (_, i) => `Line ${i + 1}`)
-			await fs.writeFile(filePath, lines.join("\n"))
-
-			const startTime = Date.now()
-
-			const result = await readFileWithTokenBudget(filePath, {
-				budgetTokens: 50, // Small budget should trigger early exit
-			})
-
-			const endTime = Date.now()
-			const duration = endTime - startTime
-
-			// Should be much faster than reading entire file (less than 2 seconds)
-			expect(duration).toBeLessThan(2000)
-			expect(result.complete).toBe(false)
-			expect(result.lineCount).toBeLessThan(50000)
-		})
-	})
-})

+ 58 - 34
src/integrations/misc/extract-text.ts

@@ -5,8 +5,8 @@ import mammoth from "mammoth"
 import fs from "fs/promises"
 import { isBinaryFile } from "isbinaryfile"
 import { extractTextFromXLSX } from "./extract-text-from-xlsx"
-import { countFileLines } from "./line-counter"
-import { readLines } from "./read-lines"
+import { readWithSlice } from "./indentation-reader"
+import { DEFAULT_LINE_LIMIT } from "../../core/prompts/tools/native-tools/read_file"
 
 async function extractTextFromPDF(filePath: string): Promise<string> {
 	const dataBuffer = await fs.readFile(filePath)
@@ -51,26 +51,34 @@ export function getSupportedBinaryFormats(): string[] {
 }
 
 /**
- * Extracts text content from a file, with support for various formats including PDF, DOCX, XLSX, and plain text.
- * For large text files, can limit the number of lines read to prevent context exhaustion.
+ * Result of extracting text with metadata about truncation
+ */
+export interface ExtractTextResult {
+	/** The extracted content with line numbers */
+	content: string
+	/** Total lines in the file */
+	totalLines: number
+	/** Lines actually returned */
+	returnedLines: number
+	/** Whether output was truncated */
+	wasTruncated: boolean
+	/** Line range shown [start, end] (1-based) */
+	linesShown?: [number, number]
+}
+
+/**
+ * Extracts text content from a file with truncation support.
+ * Returns structured result with metadata about truncation.
  *
  * @param filePath - Path to the file to extract text from
- * @param maxReadFileLine - Maximum number of lines to read from text files.
- *                          Use UNLIMITED_LINES (-1) or undefined for no limit.
- *                          Must be a positive integer or UNLIMITED_LINES.
- * @returns Promise resolving to the extracted text content with line numbers
- * @throws {Error} If file not found, unsupported format, or invalid parameters
+ * @param limit - Maximum lines to return (default: 2000)
+ * @returns Promise resolving to extracted text with metadata
+ * @throws {Error} If file not found or unsupported binary format
  */
-export async function extractTextFromFile(filePath: string, maxReadFileLine?: number): Promise<string> {
-	// Validate maxReadFileLine parameter
-	if (maxReadFileLine !== undefined && maxReadFileLine !== -1) {
-		if (!Number.isInteger(maxReadFileLine) || maxReadFileLine < 1) {
-			throw new Error(
-				`Invalid maxReadFileLine: ${maxReadFileLine}. Must be a positive integer or -1 for unlimited.`,
-			)
-		}
-	}
-
+export async function extractTextFromFileWithMetadata(
+	filePath: string,
+	limit: number = DEFAULT_LINE_LIMIT,
+): Promise<ExtractTextResult> {
 	try {
 		await fs.access(filePath)
 	} catch (error) {
@@ -82,33 +90,49 @@ export async function extractTextFromFile(filePath: string, maxReadFileLine?: nu
 	// Check if we have a specific extractor for this format
 	const extractor = SUPPORTED_BINARY_FORMATS[fileExtension as keyof typeof SUPPORTED_BINARY_FORMATS]
 	if (extractor) {
-		return extractor(filePath)
+		// For binary formats, extract and count lines
+		const content = await extractor(filePath)
+		const lines = content.split("\n")
+		return {
+			content,
+			totalLines: lines.length,
+			returnedLines: lines.length,
+			wasTruncated: false,
+		}
 	}
 
 	// Handle other files
 	const isBinary = await isBinaryFile(filePath).catch(() => false)
 
 	if (!isBinary) {
-		// Check if we need to apply line limit
-		if (maxReadFileLine !== undefined && maxReadFileLine !== -1) {
-			const totalLines = await countFileLines(filePath)
-			if (totalLines > maxReadFileLine) {
-				// Read only up to maxReadFileLine (endLine is 0-based and inclusive)
-				const content = await readLines(filePath, maxReadFileLine - 1, 0)
-				const numberedContent = addLineNumbers(content)
-				return (
-					numberedContent +
-					`\n\n[File truncated: showing ${maxReadFileLine} of ${totalLines} total lines. The file is too large and may exhaust the context window if read in full.]`
-				)
-			}
+		const rawContent = await fs.readFile(filePath, "utf8")
+		const result = readWithSlice(rawContent, 0, limit)
+
+		return {
+			content: result.content,
+			totalLines: result.totalLines,
+			returnedLines: result.returnedLines,
+			wasTruncated: result.wasTruncated,
+			linesShown: result.includedRanges.length > 0 ? result.includedRanges[0] : undefined,
 		}
-		// Read the entire file if no limit or file is within limit
-		return addLineNumbers(await fs.readFile(filePath, "utf8"))
 	} else {
 		throw new Error(`Cannot read text for file type: ${fileExtension}`)
 	}
 }
 
+/**
+ * Extracts text content from a file, with support for various formats including PDF, DOCX, XLSX, and plain text.
+ * Large files are truncated to DEFAULT_LINE_LIMIT lines.
+ *
+ * @param filePath - Path to the file to extract text from
+ * @returns Promise resolving to the extracted text content with line numbers
+ * @throws {Error} If file not found or unsupported binary format
+ */
+export async function extractTextFromFile(filePath: string): Promise<string> {
+	const result = await extractTextFromFileWithMetadata(filePath)
+	return result.content
+}
+
 export function addLineNumbers(content: string, startLine: number = 1): string {
 	// If content is empty, return empty string - empty files should not have line numbers
 	// If content is empty but startLine > 1, return "startLine | " because we know the file is not empty

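The extract-text changes above route all text output through line-numbered formatting. A minimal standalone sketch of the `N | content` convention (the `numberLines` helper here is illustrative only, not the `addLineNumbers` implementation from the diff, and it omits the empty-file-with-offset special case):

```typescript
// Illustrative sketch of the "N | content" numbering convention used in the
// reader output. Numbers are right-padded to the width of the largest line number.
function numberLines(content: string, startLine: number = 1): string {
	// Simplification: empty content always yields an empty string here.
	if (content.length === 0) return ""
	const lines = content.split("\n")
	const width = String(startLine + lines.length - 1).length
	return lines
		.map((line, i) => `${String(startLine + i).padStart(width, " ")} | ${line}`)
		.join("\n")
}

console.log(numberLines('const foo = "bar"\nfunction test() {}'))
// 1 | const foo = "bar"
// 2 | function test() {}
```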
+ 469 - 0
src/integrations/misc/indentation-reader.ts

@@ -0,0 +1,469 @@
+/**
+ * Indentation-based semantic code block extraction.
+ *
+ * Inspired by Codex's indentation mode, this module extracts meaningful code blocks
+ * based on indentation hierarchy rather than arbitrary line ranges.
+ *
+ * The algorithm uses bidirectional expansion from an anchor line:
+ * 1. Parse the file to determine indentation level of each line
+ * 2. Compute effective indents (blank lines inherit previous non-blank line's indent)
+ * 3. Expand up and down from anchor simultaneously
+ * 4. Apply sibling exclusion counters to limit scope
+ * 5. Trim empty lines from edges
+ * 6. Apply line limit
+ */
+
+import {
+	DEFAULT_LINE_LIMIT,
+	DEFAULT_MAX_LEVELS,
+	MAX_LINE_LENGTH,
+} from "../../core/prompts/tools/native-tools/read_file"
+
+// ─── Types ────────────────────────────────────────────────────────────────────
+
+export interface LineRecord {
+	/** 1-based line number */
+	lineNumber: number
+	/** Original line content */
+	content: string
+	/** Computed indentation level (number of leading whitespace units) */
+	indentLevel: number
+	/** Whether this line is blank (empty or whitespace only) */
+	isBlank: boolean
+	/** Whether this line starts a new block (has content followed by colon, brace, etc.) */
+	isBlockStart: boolean
+}
+
+export interface IndentationReadOptions {
+	/** 1-based anchor line number */
+	anchorLine: number
+	/** Maximum indentation levels to include above anchor (0 = unlimited, default: 0) */
+	maxLevels?: number
+	/** Include sibling blocks at the same indentation level (default: false) */
+	includeSiblings?: boolean
+	/** Include file header content (imports, comments at top) (default: true) */
+	includeHeader?: boolean
+	/** Maximum lines to return from bidirectional expansion (default: 2000) */
+	limit?: number
+	/** Hard cap on lines returned, separate from limit (optional) */
+	maxLines?: number
+}
+
+export interface IndentationReadResult {
+	/** The extracted content with line numbers */
+	content: string
+	/** Line ranges that were included [start, end] tuples (1-based) */
+	includedRanges: Array<[number, number]>
+	/** Total lines in the file */
+	totalLines: number
+	/** Lines actually returned */
+	returnedLines: number
+	/** Whether output was truncated due to limit */
+	wasTruncated: boolean
+}
+
+// ─── Constants ────────────────────────────────────────────────────────────────
+
+/** Indentation unit size (spaces) */
+const INDENT_SIZE = 4
+
+/** Tab width for indent measurement (Codex standard) */
+const TAB_WIDTH = 4
+
+/** Patterns that indicate a block start */
+const BLOCK_START_PATTERNS = [
+	/:\s*$/, // Python-style (def foo():)
+	/\{\s*$/, // C-style opening brace
+	/=>\s*\{?\s*$/, // Arrow functions
+	/\bthen\s*$/, // Lua/some languages
+	/\bdo\s*$/, // Ruby, Lua
+]
+
+/** Patterns for file header lines (imports, comments, etc.) */
+const HEADER_PATTERNS = [
+	/^import\s/, // ES6 imports
+	/^from\s.*import/, // Python imports
+	/^const\s.*=\s*require/, // CommonJS requires
+	/^#!/, // Shebang
+	/^\/\*/, // Block comment start
+	/^\*/, // Block comment continuation
+	/^\s*\*\//, // Block comment end
+	/^\/\//, // Line comment
+	/^#(?!include)/, // Python/shell comment (not C #include)
+	/^"""/, // Python docstring
+	/^'''/, // Python docstring
+	/^use\s/, // Rust use
+	/^package\s/, // Go/Java package
+	/^require\s/, // Lua require
+	/^@/, // Decorators (Python, TypeScript)
+	/^"use\s/, // "use strict", "use client"
+]
+
+/** Comment prefixes for header detection (Codex standard) */
+const COMMENT_PREFIXES = ["#", "//", "--", "/*", "*", "'''", '"""']
+
+// ─── Core Functions ───────────────────────────────────────────────────────────
+
+/**
+ * Parse a file's lines into LineRecord objects with indentation information.
+ */
+export function parseLines(content: string): LineRecord[] {
+	const lines = content.split("\n")
+	return lines.map((line, index) => {
+		const trimmed = line.trimStart()
+		const leadingWhitespace = line.length - trimmed.length
+
+		// Calculate indent in spaces (tabs = TAB_WIDTH spaces each)
+		let indentSpaces = 0
+		for (let i = 0; i < leadingWhitespace; i++) {
+			if (line[i] === "\t") {
+				indentSpaces += TAB_WIDTH
+			} else {
+				indentSpaces += 1
+			}
+		}
+		// Convert to indent level (number of INDENT_SIZE units)
+		const indentLevel = Math.floor(indentSpaces / INDENT_SIZE)
+
+		const isBlank = trimmed.length === 0
+		const isBlockStart = !isBlank && BLOCK_START_PATTERNS.some((pattern) => pattern.test(line))
+
+		return {
+			lineNumber: index + 1,
+			content: line,
+			indentLevel,
+			isBlank,
+			isBlockStart,
+		}
+	})
+}
+
+/**
+ * Compute effective indents where blank lines inherit the previous non-blank line's indent.
+ * This matches the Codex algorithm behavior.
+ */
+export function computeEffectiveIndents(lines: LineRecord[]): number[] {
+	const effective: number[] = []
+	let previousIndent = 0
+
+	for (const line of lines) {
+		if (line.isBlank) {
+			effective.push(previousIndent)
+		} else {
+			previousIndent = line.indentLevel
+			effective.push(previousIndent)
+		}
+	}
+	return effective
+}
+
+/**
+ * Check if a line is a comment (for include_header behavior).
+ */
+function isComment(line: LineRecord): boolean {
+	const trimmed = line.content.trim()
+	return COMMENT_PREFIXES.some((prefix) => trimmed.startsWith(prefix))
+}
+
+/**
+ * Trim empty lines from the front and back of a line array.
+ */
+function trimEmptyLines(lines: LineRecord[]): void {
+	// Trim from front
+	while (lines.length > 0 && lines[0].isBlank) {
+		lines.shift()
+	}
+	// Trim from back
+	while (lines.length > 0 && lines[lines.length - 1].isBlank) {
+		lines.pop()
+	}
+}
+
+/**
+ * Find the file header (imports, top-level comments, etc.).
+ * Returns the end index of the header section.
+ */
+function findHeaderEnd(lines: LineRecord[]): number {
+	let lastHeaderIdx = -1
+	let inBlockComment = false
+
+	for (let i = 0; i < lines.length; i++) {
+		const line = lines[i]
+		const trimmed = line.content.trim()
+
+		// Track block comments
+		if (trimmed.startsWith("/*")) inBlockComment = true
+		if (trimmed.endsWith("*/")) {
+			inBlockComment = false
+			lastHeaderIdx = i
+			continue
+		}
+		if (inBlockComment) {
+			lastHeaderIdx = i
+			continue
+		}
+
+		// Check if this is a header line
+		if (line.isBlank) {
+			// Blank lines are part of header if we haven't seen content yet
+			if (lastHeaderIdx === i - 1) {
+				lastHeaderIdx = i
+			}
+			continue
+		}
+
+		const isHeader = HEADER_PATTERNS.some((pattern) => pattern.test(trimmed))
+		if (isHeader) {
+			lastHeaderIdx = i
+		} else if (line.indentLevel === 0) {
+			// Hit first non-header top-level content
+			break
+		}
+	}
+
+	return lastHeaderIdx
+}
+
+/**
+ * Format lines with line numbers, applying truncation to long lines.
+ */
+export function formatWithLineNumbers(lines: LineRecord[], maxLineLength: number = MAX_LINE_LENGTH): string {
+	if (lines.length === 0) return ""
+	const maxLineNumWidth = String(lines[lines.length - 1]?.lineNumber || 1).length
+
+	return lines
+		.map((line) => {
+			const lineNum = String(line.lineNumber).padStart(maxLineNumWidth, " ")
+			let content = line.content
+
+			// Truncate long lines
+			if (content.length > maxLineLength) {
+				content = content.substring(0, maxLineLength - 3) + "..."
+			}
+
+			return `${lineNum} | ${content}`
+		})
+		.join("\n")
+}
+
+/**
+ * Convert a contiguous array of LineRecords into merged ranges for output.
+ */
+function computeIncludedRanges(lines: LineRecord[]): Array<[number, number]> {
+	if (lines.length === 0) return []
+
+	const ranges: Array<[number, number]> = []
+	let rangeStart = lines[0].lineNumber
+	let rangeEnd = lines[0].lineNumber
+
+	for (let i = 1; i < lines.length; i++) {
+		const lineNum = lines[i].lineNumber
+		if (lineNum === rangeEnd + 1) {
+			// Contiguous
+			rangeEnd = lineNum
+		} else {
+			// Gap - save current range and start new one
+			ranges.push([rangeStart, rangeEnd])
+			rangeStart = lineNum
+			rangeEnd = lineNum
+		}
+	}
+	// Don't forget the last range
+	ranges.push([rangeStart, rangeEnd])
+
+	return ranges
+}
+
+// ─── Main Export ──────────────────────────────────────────────────────────────
+
+/**
+ * Read a file using indentation-based semantic extraction (Codex algorithm).
+ *
+ * Uses bidirectional expansion from the anchor line with sibling exclusion counters.
+ *
+ * @param content - The file content to process
+ * @param options - Extraction options
+ * @returns The extracted content with metadata
+ */
+export function readWithIndentation(content: string, options: IndentationReadOptions): IndentationReadResult {
+	const {
+		anchorLine,
+		maxLevels = DEFAULT_MAX_LEVELS,
+		includeSiblings = false,
+		includeHeader = true,
+		limit = DEFAULT_LINE_LIMIT,
+		maxLines,
+	} = options
+
+	const lines = parseLines(content)
+	const totalLines = lines.length
+
+	// Validate anchor line
+	if (anchorLine < 1 || anchorLine > totalLines) {
+		return {
+			content: `Error: anchor_line ${anchorLine} is out of range (1-${totalLines})`,
+			includedRanges: [],
+			totalLines,
+			returnedLines: 0,
+			wasTruncated: false,
+		}
+	}
+
+	const anchorIdx = anchorLine - 1 // Convert to 0-based
+	const effectiveIndents = computeEffectiveIndents(lines)
+	const anchorIndent = effectiveIndents[anchorIdx]
+
+	// Calculate minimum indent threshold
+	// maxLevels = 0 means unlimited (minIndent = 0)
+	// maxLevels > 0 means limit to that many levels above anchor
+	let minIndent: number
+	if (maxLevels === 0) {
+		minIndent = 0
+	} else {
+		// Each "level" is INDENT_SIZE spaces worth of indentation
+		// We subtract maxLevels from the anchor's indent level
+		minIndent = Math.max(0, anchorIndent - maxLevels)
+	}
+
+	// Calculate final limit (use maxLines as hard cap if provided)
+	const guardLimit = maxLines ?? limit
+	const finalLimit = Math.min(limit, guardLimit, totalLines)
+
+	// Edge case: if limit is 1, just return the anchor line
+	if (finalLimit === 1) {
+		const singleLine = [lines[anchorIdx]]
+		return {
+			content: formatWithLineNumbers(singleLine),
+			includedRanges: [[anchorLine, anchorLine]],
+			totalLines,
+			returnedLines: 1,
+			wasTruncated: totalLines > 1,
+		}
+	}
+
+	// Bidirectional expansion from anchor (Codex algorithm)
+	const result: LineRecord[] = [lines[anchorIdx]]
+	let i = anchorIdx - 1 // Up cursor
+	let j = anchorIdx + 1 // Down cursor
+	let iMinCount = 0 // Count of min-indent lines seen going up
+	let jMinCount = 0 // Count of min-indent lines seen going down
+
+	while (result.length < finalLimit) {
+		let progressed = false
+
+		// Expand upward
+		if (i >= 0 && effectiveIndents[i] >= minIndent) {
+			result.unshift(lines[i])
+			progressed = true
+
+			// Handle sibling exclusion at min indent
+			if (effectiveIndents[i] === minIndent && !includeSiblings) {
+				const allowHeader = includeHeader && isComment(lines[i])
+				const canTake = allowHeader || iMinCount === 0
+
+				if (canTake) {
+					iMinCount++
+				} else {
+					// Reject this line - remove it and stop expanding up
+					result.shift()
+					progressed = false
+					i = -1 // Stop expanding up
+				}
+			}
+
+			if (i >= 0) i--
+		} else if (i >= 0) {
+			i = -1 // Stop expanding up (hit lower indent)
+		}
+
+		if (result.length >= finalLimit) break
+
+		// Expand downward
+		if (j < lines.length && effectiveIndents[j] >= minIndent) {
+			result.push(lines[j])
+			progressed = true
+
+			// Handle sibling exclusion at min indent
+			if (effectiveIndents[j] === minIndent && !includeSiblings) {
+				if (jMinCount > 0) {
+					// Already saw one min-indent block going down, reject this
+					result.pop()
+					progressed = false
+					j = lines.length // Stop expanding down
+				}
+				jMinCount++
+			}
+
+			if (j < lines.length) j++
+		} else if (j < lines.length) {
+			j = lines.length // Stop expanding down (hit lower indent)
+		}
+
+		if (!progressed) break
+	}
+
+	// Trim leading/trailing empty lines
+	trimEmptyLines(result)
+
+	// Check if we were truncated
+	const wasTruncated = result.length >= finalLimit || i >= 0 || j < lines.length
+
+	// Format output
+	const formattedContent = formatWithLineNumbers(result)
+
+	// Compute included ranges
+	const includedRanges = computeIncludedRanges(result)
+
+	return {
+		content: formattedContent,
+		includedRanges,
+		totalLines,
+		returnedLines: result.length,
+		wasTruncated: wasTruncated && result.length < totalLines,
+	}
+}
+
+/**
+ * Simple slice mode reading - read lines with offset/limit.
+ *
+ * @param content - The file content to process
+ * @param offset - 0-based line offset to start from (default: 0)
+ * @param limit - Maximum lines to return (default: 2000)
+ * @returns The extracted content with metadata
+ */
+export function readWithSlice(
+	content: string,
+	offset: number = 0,
+	limit: number = DEFAULT_LINE_LIMIT,
+): IndentationReadResult {
+	const lines = parseLines(content)
+	const totalLines = lines.length
+
+	// Validate offset
+	if (offset < 0) offset = 0
+	if (offset >= totalLines) {
+		return {
+			content: `Error: offset ${offset} is beyond file end (${totalLines} lines)`,
+			includedRanges: [],
+			totalLines,
+			returnedLines: 0,
+			wasTruncated: false,
+		}
+	}
+
+	// Slice lines
+	const endIdx = Math.min(offset + limit, totalLines)
+	const selectedLines = lines.slice(offset, endIdx)
+	const wasTruncated = endIdx < totalLines
+
+	// Format output
+	const formattedContent = formatWithLineNumbers(selectedLines)
+
+	return {
+		content: formattedContent,
+		includedRanges: [[offset + 1, endIdx]], // 1-based
+		totalLines,
+		returnedLines: selectedLines.length,
+		wasTruncated,
+	}
+}

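The bidirectional expansion in `readWithIndentation` hinges on step 2 of the algorithm: computing effective indents so blank lines don't break a block. A minimal standalone sketch of that step, assuming 4-space indent units and tab width 4 as in the constants above (`effectiveIndentLevels` is an illustrative helper, not an export of the module):

```typescript
// Sketch of the effective-indent computation: blank lines inherit the previous
// non-blank line's indent level; tabs count as 4 spaces; one level = 4 spaces.
const TAB_WIDTH = 4
const INDENT_SIZE = 4

function effectiveIndentLevels(content: string): number[] {
	let previous = 0
	return content.split("\n").map((line) => {
		const trimmed = line.trimStart()
		if (trimmed.length === 0) return previous // blank line inherits prior indent
		let spaces = 0
		for (const ch of line.slice(0, line.length - trimmed.length)) {
			spaces += ch === "\t" ? TAB_WIDTH : 1
		}
		previous = Math.floor(spaces / INDENT_SIZE)
		return previous
	})
}

const sample = "def f():\n    x = 1\n\n    return x"
console.log(effectiveIndentLevels(sample)) // [0, 1, 1, 1]
```

Because the blank line between `x = 1` and `return x` reports level 1 rather than 0, the expansion loop keeps walking past it instead of treating it as the end of the function body.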
+ 0 - 182
src/integrations/misc/read-file-with-budget.ts

@@ -1,182 +0,0 @@
-import { createReadStream } from "fs"
-import fs from "fs/promises"
-import { createInterface } from "readline"
-import { countTokens } from "../../utils/countTokens"
-import { Anthropic } from "@anthropic-ai/sdk"
-
-export interface ReadWithBudgetResult {
-	/** The content read up to the token budget */
-	content: string
-	/** Actual token count of returned content */
-	tokenCount: number
-	/** Total lines in the returned content */
-	lineCount: number
-	/** Whether the entire file was read (false if truncated) */
-	complete: boolean
-}
-
-export interface ReadWithBudgetOptions {
-	/** Maximum tokens allowed. Required. */
-	budgetTokens: number
-	/** Number of lines to buffer before token counting (default: 256) */
-	chunkLines?: number
-}
-
-/**
- * Reads a file while incrementally counting tokens, stopping when budget is reached.
- *
- * Unlike validateFileTokenBudget + extractTextFromFile, this is a single-pass
- * operation that returns the actual content up to the token limit.
- *
- * @param filePath - Path to the file to read
- * @param options - Budget and chunking options
- * @returns Content read, token count, and completion status
- */
-export async function readFileWithTokenBudget(
-	filePath: string,
-	options: ReadWithBudgetOptions,
-): Promise<ReadWithBudgetResult> {
-	const { budgetTokens, chunkLines = 256 } = options
-
-	// Verify file exists
-	try {
-		await fs.access(filePath)
-	} catch {
-		throw new Error(`File not found: ${filePath}`)
-	}
-
-	return new Promise((resolve, reject) => {
-		let content = ""
-		let lineCount = 0
-		let tokenCount = 0
-		let lineBuffer: string[] = []
-		let complete = true
-		let isProcessing = false
-		let shouldClose = false
-
-		const readStream = createReadStream(filePath)
-		const rl = createInterface({
-			input: readStream,
-			crlfDelay: Infinity,
-		})
-
-		const processBuffer = async (): Promise<boolean> => {
-			if (lineBuffer.length === 0) return true
-
-			const bufferText = lineBuffer.join("\n")
-			const currentBuffer = [...lineBuffer]
-			lineBuffer = []
-
-			// Count tokens for this chunk
-			let chunkTokens: number
-			try {
-				const contentBlocks: Anthropic.Messages.ContentBlockParam[] = [{ type: "text", text: bufferText }]
-				chunkTokens = await countTokens(contentBlocks)
-			} catch {
-				// Fallback: conservative estimate (2 chars per token)
-				chunkTokens = Math.ceil(bufferText.length / 2)
-			}
-
-			// Check if adding this chunk would exceed budget
-			if (tokenCount + chunkTokens > budgetTokens) {
-				// Need to find cutoff within this chunk using binary search
-				let low = 0
-				let high = currentBuffer.length
-				let bestFit = 0
-				let bestTokens = 0
-
-				while (low < high) {
-					const mid = Math.floor((low + high + 1) / 2)
-					const testContent = currentBuffer.slice(0, mid).join("\n")
-					let testTokens: number
-					try {
-						const blocks: Anthropic.Messages.ContentBlockParam[] = [{ type: "text", text: testContent }]
-						testTokens = await countTokens(blocks)
-					} catch {
-						testTokens = Math.ceil(testContent.length / 2)
-					}
-
-					if (tokenCount + testTokens <= budgetTokens) {
-						bestFit = mid
-						bestTokens = testTokens
-						low = mid
-					} else {
-						high = mid - 1
-					}
-				}
-
-				// Add best fit lines
-				if (bestFit > 0) {
-					const fitContent = currentBuffer.slice(0, bestFit).join("\n")
-					content += (content.length > 0 ? "\n" : "") + fitContent
-					tokenCount += bestTokens
-					lineCount += bestFit
-				}
-				complete = false
-				return false
-			}
-
-			// Entire chunk fits - add it all
-			content += (content.length > 0 ? "\n" : "") + bufferText
-			tokenCount += chunkTokens
-			lineCount += currentBuffer.length
-			return true
-		}
-
-		rl.on("line", (line) => {
-			lineBuffer.push(line)
-
-			if (lineBuffer.length >= chunkLines && !isProcessing) {
-				isProcessing = true
-				rl.pause()
-
-				processBuffer()
-					.then((continueReading) => {
-						isProcessing = false
-						if (!continueReading) {
-							shouldClose = true
-							rl.close()
-							readStream.destroy()
-						} else if (!shouldClose) {
-							rl.resume()
-						}
-					})
-					.catch((err) => {
-						isProcessing = false
-						shouldClose = true
-						rl.close()
-						readStream.destroy()
-						reject(err)
-					})
-			}
-		})
-
-		rl.on("close", async () => {
-			// Wait for any ongoing processing with timeout
-			const maxWaitTime = 30000 // 30 seconds
-			const startWait = Date.now()
-			while (isProcessing) {
-				if (Date.now() - startWait > maxWaitTime) {
-					reject(new Error("Timeout waiting for buffer processing to complete"))
-					return
-				}
-				await new Promise((r) => setTimeout(r, 10))
-			}
-
-			// Process remaining buffer
-			if (!shouldClose) {
-				try {
-					await processBuffer()
-				} catch (err) {
-					reject(err)
-					return
-				}
-			}
-
-			resolve({ content, tokenCount, lineCount, complete })
-		})
-
-		rl.on("error", reject)
-		readStream.on("error", reject)
-	})
-}
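The removed helper's cutoff step — binary-searching for the largest prefix of buffered lines whose token count still fits the budget — can be sketched on its own. This is an illustrative sketch, not the extension's code: `estimateTokens` is a stand-in for the real async `countTokens` call, using the same chars/2 fallback heuristic the removed function used.

```typescript
// Stand-in for the real countTokens; rough chars/2 heuristic, matching
// the fallback in the removed readFileWithTokenBudget.
function estimateTokens(text: string): number {
	return Math.ceil(text.length / 2)
}

/**
 * Binary-search the largest prefix of `lines` whose joined text stays
 * within `budget` tokens. Returns how many lines fit.
 */
function largestFittingPrefix(lines: string[], budget: number): number {
	let low = 0
	let high = lines.length
	while (low < high) {
		// Bias the midpoint upward so the loop terminates when high = low + 1.
		const mid = Math.floor((low + high + 1) / 2)
		if (estimateTokens(lines.slice(0, mid).join("\n")) <= budget) {
			low = mid
		} else {
			high = mid - 1
		}
	}
	return low
}
```

Because token counts grow monotonically with the prefix length, the search needs only O(log n) counting calls per chunk instead of one per line.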

+ 20 - 1
src/services/code-index/__tests__/service-factory.spec.ts

@@ -286,7 +286,7 @@ describe("CodeIndexServiceFactory", () => {
 			// Arrange
 			const testConfig = {
 				embedderProvider: "gemini",
-				modelId: "text-embedding-004",
+				modelId: "gemini-embedding-001",
 				geminiOptions: {
 					apiKey: "test-gemini-api-key",
 				},
@@ -297,6 +297,25 @@ describe("CodeIndexServiceFactory", () => {
 			factory.createEmbedder()
 
 			// Assert
+			expect(MockedGeminiEmbedder).toHaveBeenCalledWith("test-gemini-api-key", "gemini-embedding-001")
+		})
+
+		it("should pass deprecated text-embedding-004 modelId to GeminiEmbedder (migration happens inside GeminiEmbedder)", () => {
+			// Arrange - service-factory passes the config modelId directly;
+			// GeminiEmbedder handles the migration internally
+			const testConfig = {
+				embedderProvider: "gemini",
+				modelId: "text-embedding-004",
+				geminiOptions: {
+					apiKey: "test-gemini-api-key",
+				},
+			}
+			mockConfigManager.getConfig.mockReturnValue(testConfig as any)
+
+			// Act
+			factory.createEmbedder()
+
+			// Assert - factory passes the original modelId; GeminiEmbedder migrates it internally
 			expect(MockedGeminiEmbedder).toHaveBeenCalledWith("test-gemini-api-key", "text-embedding-004")
 		})
 

+ 22 - 5
src/services/code-index/embedders/__tests__/gemini.spec.ts

@@ -44,7 +44,7 @@ describe("GeminiEmbedder", () => {
 		it("should create an instance with specified model", () => {
 			// Arrange
 			const apiKey = "test-gemini-api-key"
-			const modelId = "text-embedding-004"
+			const modelId = "gemini-embedding-001"
 
 			// Act
 			embedder = new GeminiEmbedder(apiKey, modelId)
@@ -53,7 +53,24 @@ describe("GeminiEmbedder", () => {
 			expect(MockedOpenAICompatibleEmbedder).toHaveBeenCalledWith(
 				"https://generativelanguage.googleapis.com/v1beta/openai/",
 				apiKey,
-				"text-embedding-004",
+				"gemini-embedding-001",
+				2048,
+			)
+		})
+
+		it("should migrate deprecated text-embedding-004 to gemini-embedding-001", () => {
+			// Arrange
+			const apiKey = "test-gemini-api-key"
+			const deprecatedModelId = "text-embedding-004"
+
+			// Act
+			embedder = new GeminiEmbedder(apiKey, deprecatedModelId)
+
+			// Assert - should be migrated to gemini-embedding-001
+			expect(MockedOpenAICompatibleEmbedder).toHaveBeenCalledWith(
+				"https://generativelanguage.googleapis.com/v1beta/openai/",
+				apiKey,
+				"gemini-embedding-001",
 				2048,
 			)
 		})
@@ -109,8 +126,8 @@ describe("GeminiEmbedder", () => {
 			})
 
 			it("should use provided model parameter when specified", async () => {
-				// Arrange
-				embedder = new GeminiEmbedder("test-api-key", "text-embedding-004")
+				// Arrange - even with deprecated model in constructor, the runtime parameter takes precedence
+				embedder = new GeminiEmbedder("test-api-key", "gemini-embedding-001")
 				const texts = ["test text 1", "test text 2"]
 				const mockResponse = {
 					embeddings: [
@@ -120,7 +137,7 @@ describe("GeminiEmbedder", () => {
 				}
 				mockCreateEmbeddings.mockResolvedValue(mockResponse)
 
-				// Act
+				// Act - specify a different model at runtime
 				const result = await embedder.createEmbeddings(texts, "gemini-embedding-001")
 
 				// Assert

+ 25 - 4
src/services/code-index/embedders/gemini.ts

@@ -10,15 +10,33 @@ import { TelemetryService } from "@roo-code/telemetry"
  * with configuration for Google's Gemini embedding API.
  *
  * Supported models:
- * - text-embedding-004 (dimension: 768)
- * - gemini-embedding-001 (dimension: 2048)
+ * - gemini-embedding-001 (dimension: 3072)
+ *
+ * Note: text-embedding-004 has been deprecated and is automatically
+ * migrated to gemini-embedding-001 for backward compatibility.
  */
 export class GeminiEmbedder implements IEmbedder {
 	private readonly openAICompatibleEmbedder: OpenAICompatibleEmbedder
 	private static readonly GEMINI_BASE_URL = "https://generativelanguage.googleapis.com/v1beta/openai/"
 	private static readonly DEFAULT_MODEL = "gemini-embedding-001"
+	/**
+	 * Deprecated models that are automatically migrated to their replacements.
+	 * Users with these models configured will be silently migrated without interruption.
+	 */
+	private static readonly DEPRECATED_MODEL_MIGRATIONS: Record<string, string> = {
+		"text-embedding-004": "gemini-embedding-001",
+	}
 	private readonly modelId: string
 
+	/**
+	 * Migrates deprecated model IDs to their replacements.
+	 * @param modelId The model ID to potentially migrate
+	 * @returns The migrated model ID, or the original if no migration is needed
+	 */
+	private static migrateModelId(modelId: string): string {
+		return GeminiEmbedder.DEPRECATED_MODEL_MIGRATIONS[modelId] ?? modelId
+	}
+
 	/**
 	 * Creates a new Gemini embedder
 	 * @param apiKey The Gemini API key for authentication
@@ -29,8 +47,11 @@ export class GeminiEmbedder implements IEmbedder {
 			throw new Error(t("embeddings:validation.apiKeyRequired"))
 		}
 
-		// Use provided model or default
-		this.modelId = modelId || GeminiEmbedder.DEFAULT_MODEL
+		// Migrate deprecated models to their replacements silently
+		const migratedModelId = modelId ? GeminiEmbedder.migrateModelId(modelId) : undefined
+
+		// Use provided model (after migration) or default
+		this.modelId = migratedModelId || GeminiEmbedder.DEFAULT_MODEL
 
 		// Create an OpenAI Compatible embedder with Gemini's configuration
 		this.openAICompatibleEmbedder = new OpenAICompatibleEmbedder(
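The migration pattern this hunk adds is a plain lookup table with a `??` fall-through: deprecated IDs map to their replacements, and anything else passes through untouched. A minimal standalone sketch of that logic, mirroring the values in the diff:

```typescript
// Deprecated Gemini embedding models mapped to their replacements,
// as added to GeminiEmbedder in this change.
const DEPRECATED_MODEL_MIGRATIONS: Record<string, string> = {
	"text-embedding-004": "gemini-embedding-001",
}

// Return the replacement for a deprecated model ID, or the original
// ID unchanged when no migration applies.
function migrateModelId(modelId: string): string {
	return DEPRECATED_MODEL_MIGRATIONS[modelId] ?? modelId
}
```

Keeping the table as data rather than an `if` chain means future deprecations are a one-line addition.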

+ 95 - 0
src/shared/__tests__/embeddingModels.spec.ts

@@ -0,0 +1,95 @@
+import { describe, it, expect } from "vitest"
+import {
+	getModelDimension,
+	getModelScoreThreshold,
+	getDefaultModelId,
+	EMBEDDING_MODEL_PROFILES,
+} from "../embeddingModels"
+
+describe("embeddingModels", () => {
+	describe("EMBEDDING_MODEL_PROFILES", () => {
+		it("should have gemini provider with gemini-embedding-001 model", () => {
+			const geminiProfiles = EMBEDDING_MODEL_PROFILES.gemini
+			expect(geminiProfiles).toBeDefined()
+			expect(geminiProfiles!["gemini-embedding-001"]).toBeDefined()
+			expect(geminiProfiles!["gemini-embedding-001"].dimension).toBe(3072)
+		})
+
+		it("should have deprecated text-embedding-004 in gemini profiles for backward compatibility", () => {
+			// This is critical for backward compatibility:
+			// Users with text-embedding-004 configured need dimension lookup to work
+			// even though the model is migrated to gemini-embedding-001 in GeminiEmbedder
+			const geminiProfiles = EMBEDDING_MODEL_PROFILES.gemini
+			expect(geminiProfiles).toBeDefined()
+			expect(geminiProfiles!["text-embedding-004"]).toBeDefined()
+			expect(geminiProfiles!["text-embedding-004"].dimension).toBe(3072)
+		})
+	})
+
+	describe("getModelDimension", () => {
+		it("should return dimension for gemini-embedding-001", () => {
+			const dimension = getModelDimension("gemini", "gemini-embedding-001")
+			expect(dimension).toBe(3072)
+		})
+
+		it("should return dimension for deprecated text-embedding-004", () => {
+			// This ensures createVectorStore() works for users with text-embedding-004 configured
+			// The dimension should be 3072 (matching gemini-embedding-001) because:
+			// 1. GeminiEmbedder migrates text-embedding-004 to gemini-embedding-001
+			// 2. gemini-embedding-001 produces 3072-dimensional embeddings
+			// 3. Vector store dimension must match the actual embedding dimension
+			const dimension = getModelDimension("gemini", "text-embedding-004")
+			expect(dimension).toBe(3072)
+		})
+
+		it("should return undefined for unknown model", () => {
+			const dimension = getModelDimension("gemini", "unknown-model")
+			expect(dimension).toBeUndefined()
+		})
+
+		it("should return undefined for unknown provider", () => {
+			const dimension = getModelDimension("unknown-provider" as any, "some-model")
+			expect(dimension).toBeUndefined()
+		})
+
+		it("should return correct dimensions for openai models", () => {
+			expect(getModelDimension("openai", "text-embedding-3-small")).toBe(1536)
+			expect(getModelDimension("openai", "text-embedding-3-large")).toBe(3072)
+			expect(getModelDimension("openai", "text-embedding-ada-002")).toBe(1536)
+		})
+	})
+
+	describe("getModelScoreThreshold", () => {
+		it("should return score threshold for gemini-embedding-001", () => {
+			const threshold = getModelScoreThreshold("gemini", "gemini-embedding-001")
+			expect(threshold).toBe(0.4)
+		})
+
+		it("should return score threshold for deprecated text-embedding-004", () => {
+			const threshold = getModelScoreThreshold("gemini", "text-embedding-004")
+			expect(threshold).toBe(0.4)
+		})
+
+		it("should return undefined for unknown model", () => {
+			const threshold = getModelScoreThreshold("gemini", "unknown-model")
+			expect(threshold).toBeUndefined()
+		})
+	})
+
+	describe("getDefaultModelId", () => {
+		it("should return gemini-embedding-001 for gemini provider", () => {
+			const defaultModel = getDefaultModelId("gemini")
+			expect(defaultModel).toBe("gemini-embedding-001")
+		})
+
+		it("should return text-embedding-3-small for openai provider", () => {
+			const defaultModel = getDefaultModelId("openai")
+			expect(defaultModel).toBe("text-embedding-3-small")
+		})
+
+		it("should return codestral-embed-2505 for mistral provider", () => {
+			const defaultModel = getDefaultModelId("mistral")
+			expect(defaultModel).toBe("codestral-embed-2505")
+		})
+	})
+})

+ 3 - 1
src/shared/embeddingModels.ts

@@ -34,8 +34,10 @@ export const EMBEDDING_MODEL_PROFILES: EmbeddingModelProfiles = {
 		},
 	},
 	gemini: {
-		"text-embedding-004": { dimension: 768 },
 		"gemini-embedding-001": { dimension: 3072, scoreThreshold: 0.4 },
+		// Deprecated: text-embedding-004 is migrated to gemini-embedding-001 in GeminiEmbedder
+		// Kept here for backward-compatible dimension lookup in createVectorStore()
+		"text-embedding-004": { dimension: 3072, scoreThreshold: 0.4 },
 	},
 	mistral: {
 		"codestral-embed-2505": { dimension: 1536, scoreThreshold: 0.4 },
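The profile change above is subtle: the deprecated entry is kept, but its dimension is rewritten to 3072 so that `createVectorStore()` sizes the collection for the vectors the embedder will actually produce after migration. A simplified sketch of the lookup, with only the Gemini slice of the table:

```typescript
type EmbeddingModelProfile = { dimension: number; scoreThreshold?: number }

// Gemini slice of EMBEDDING_MODEL_PROFILES after this change: the
// deprecated model keeps an entry for backward-compatible lookup, but
// reports 3072 to match the gemini-embedding-001 vectors it now yields.
const geminiProfiles: Record<string, EmbeddingModelProfile> = {
	"gemini-embedding-001": { dimension: 3072, scoreThreshold: 0.4 },
	"text-embedding-004": { dimension: 3072, scoreThreshold: 0.4 },
}

function getDimension(modelId: string): number | undefined {
	return geminiProfiles[modelId]?.dimension
}
```

Had the old 768 entry been left in place, the vector store would have been created with the wrong dimensionality for migrated users.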

+ 36 - 6
src/shared/tools.ts

@@ -5,7 +5,6 @@ import type {
 	ToolProgressStatus,
 	ToolGroup,
 	ToolName,
-	FileEntry,
 	BrowserActionParams,
 	GenerateImageParams,
 } from "@roo-code/types"
@@ -66,7 +65,7 @@ export const toolParamNames = [
 	"todos",
 	"prompt",
 	"image",
-	"files", // Native protocol parameter for read_file
+	// read_file parameters (native protocol)
 	"operations", // search_and_replace parameter for multiple operations
 	"patch", // apply_patch parameter
 	"file_path", // search_replace and edit_file parameter
@@ -76,8 +75,18 @@ export const toolParamNames = [
 	"expected_replacements", // edit_file parameter for multiple occurrences
 	"artifact_id", // read_command_output parameter
 	"search", // read_command_output parameter for grep-like search
-	"offset", // read_command_output parameter for pagination
-	"limit", // read_command_output parameter for max bytes to return
+	"offset", // read_command_output and read_file parameter
+	"limit", // read_command_output and read_file parameter
+	// read_file indentation mode parameters
+	"indentation",
+	"anchor_line",
+	"max_levels",
+	"include_siblings",
+	"include_header",
+	"max_lines",
+	// read_file legacy format parameter (backward compatibility)
+	"files",
+	"line_ranges",
 ] as const
 
 export type ToolParamName = (typeof toolParamNames)[number]
@@ -88,7 +97,7 @@ export type ToolParamName = (typeof toolParamNames)[number]
  */
 export type NativeToolArgs = {
 	access_mcp_resource: { server_name: string; uri: string }
-	read_file: { files: FileEntry[] }
+	read_file: import("@roo-code/types").ReadFileToolParams
 	read_command_output: { artifact_id: string; search?: string; offset?: number; limit?: number }
 	attempt_completion: { result: string }
 	execute_command: { command: string; cwd?: string }
@@ -137,6 +146,11 @@ export interface ToolUse<TName extends ToolName = ToolName> {
 	partial: boolean
 	// nativeArgs is properly typed based on TName if it's in NativeToolArgs, otherwise never
 	nativeArgs?: TName extends keyof NativeToolArgs ? NativeToolArgs[TName] : never
+	/**
+	 * Flag indicating whether the tool call used a legacy/deprecated format.
+	 * Used for telemetry tracking to monitor migration from old formats.
+	 */
+	usedLegacyFormat?: boolean
 }
 
 /**
@@ -167,7 +181,23 @@ export interface ExecuteCommandToolUse extends ToolUse<"execute_command"> {
 
 export interface ReadFileToolUse extends ToolUse<"read_file"> {
 	name: "read_file"
-	params: Partial<Pick<Record<ToolParamName, string>, "args" | "path" | "start_line" | "end_line" | "files">>
+	params: Partial<
+		Pick<
+			Record<ToolParamName, string>,
+			| "args"
+			| "path"
+			| "start_line"
+			| "end_line"
+			| "mode"
+			| "offset"
+			| "limit"
+			| "indentation"
+			| "anchor_line"
+			| "max_levels"
+			| "include_siblings"
+			| "include_header"
+		>
+	>
 }
 
 export interface WriteToFileToolUse extends ToolUse<"write_to_file"> {

+ 24 - 52
src/utils/__tests__/json-schema.spec.ts

@@ -86,9 +86,9 @@ describe("normalizeToolSchema", () => {
 				type: "object",
 				properties: {
 					path: { type: "string" },
-					line_ranges: {
+					tags: {
 						type: ["array", "null"],
-						items: { type: "integer" },
+						items: { type: "string" },
 					},
 				},
 			},
@@ -104,8 +104,8 @@ describe("normalizeToolSchema", () => {
 				type: "object",
 				properties: {
 					path: { type: "string" },
-					line_ranges: {
-						anyOf: [{ type: "array", items: { type: "integer" } }, { type: "null" }],
+					tags: {
+						anyOf: [{ type: "array", items: { type: "string" } }, { type: "null" }],
 					},
 				},
 				additionalProperties: false,
@@ -123,7 +123,7 @@ describe("normalizeToolSchema", () => {
 						type: "object",
 						properties: {
 							path: { type: "string" },
-							line_ranges: {
+							ranges: {
 								type: ["array", "null"],
 								items: {
 									type: "array",
@@ -131,7 +131,7 @@ describe("normalizeToolSchema", () => {
 								},
 							},
 						},
-						required: ["path", "line_ranges"],
+						required: ["path", "ranges"],
 					},
 				},
 			},
@@ -144,7 +144,7 @@ describe("normalizeToolSchema", () => {
 		const filesItems = properties.files.items as Record<string, unknown>
 		const filesItemsProps = filesItems.properties as Record<string, Record<string, unknown>>
 		// Array-specific properties (items) should be moved inside the array variant
-		expect(filesItemsProps.line_ranges.anyOf).toEqual([
+		expect(filesItemsProps.ranges.anyOf).toEqual([
 			{ type: "array", items: { type: "array", items: { type: "integer" } } },
 			{ type: "null" },
 		])
@@ -224,60 +224,32 @@ describe("normalizeToolSchema", () => {
 		const input = {
 			type: "object",
 			properties: {
-				files: {
-					type: "array",
-					description: "List of files to read",
-					items: {
-						type: "object",
-						properties: {
-							path: {
-								type: "string",
-								description: "Path to the file",
-							},
-							line_ranges: {
-								type: ["array", "null"],
-								description: "Optional line ranges",
-								items: {
-									type: "array",
-									items: { type: "integer" },
-									minItems: 2,
-									maxItems: 2,
-								},
-							},
+				path: {
+					type: "string",
+					description: "Path to the file",
+				},
+				indentation: {
+					type: ["object", "null"],
+					properties: {
+						anchor_line: {
+							type: ["integer", "null"],
 						},
-						required: ["path", "line_ranges"],
-						additionalProperties: false,
 					},
-					minItems: 1,
 				},
 			},
-			required: ["files"],
+			required: ["path"],
 			additionalProperties: false,
 		}
 
 		const result = normalizeToolSchema(input)
 
-		// Verify the line_ranges was transformed with items inside the array variant
-		const files = (result.properties as Record<string, unknown>).files as Record<string, unknown>
-		const items = files.items as Record<string, unknown>
-		const props = items.properties as Record<string, Record<string, unknown>>
-		// Array-specific properties (items, minItems, maxItems) should be moved inside the array variant
-		expect(props.line_ranges.anyOf).toEqual([
-			{
-				type: "array",
-				items: {
-					type: "array",
-					items: { type: "integer" },
-					minItems: 2,
-					maxItems: 2,
-				},
-			},
-			{ type: "null" },
-		])
-		// items should NOT be at root level anymore
-		expect(props.line_ranges.items).toBeUndefined()
-		// Other properties are preserved at root level
-		expect(props.line_ranges.description).toBe("Optional line ranges")
+		// Verify nested nullable objects are transformed correctly
+		const props = result.properties as Record<string, Record<string, unknown>>
+		expect(props.indentation.anyOf).toEqual([{ type: "object" }, { type: "null" }])
+		expect(props.indentation.additionalProperties).toBe(false)
+		expect((props.indentation.properties as Record<string, unknown>).anchor_line).toEqual({
+			anyOf: [{ type: "integer" }, { type: "null" }],
+		})
 	})
 
 	describe("format field handling", () => {
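The transformation these updated tests exercise rewrites nullable unions like `type: ["array", "null"]` into an `anyOf` of the concrete type and `null`, relocating array-specific keywords such as `items` into the array variant. The sketch below is an illustrative reimplementation of that idea, not the project's `normalizeToolSchema` (which handles more keywords and recurses through nested schemas):

```typescript
type Schema = Record<string, unknown>

// Rewrite `type: ["X", "null"]` into `anyOf: [{ type: "X" }, { type: "null" }]`,
// moving `items` into the array variant as the tests above assert.
function normalizeNullableType(schema: Schema): Schema {
	const t = schema.type
	if (!Array.isArray(t) || t.length !== 2 || !t.includes("null")) return schema
	const concrete = t.find((x) => x !== "null") as string
	const { type: _type, items, ...rest } = schema as Schema & { items?: unknown }
	const variant: Schema = { type: concrete }
	if (concrete === "array" && items !== undefined) variant.items = items
	return { ...rest, anyOf: [variant, { type: "null" }] }
}
```

Non-nullable schemas pass through unchanged, so the normalizer is safe to apply unconditionally.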

+ 8 - 0
src/utils/__tests__/tool-id.spec.ts

@@ -47,6 +47,14 @@ describe("sanitizeToolUseId", () => {
 		it("should replace multiple invalid characters", () => {
 			expect(sanitizeToolUseId("mcp.server:tool/name")).toBe("mcp_server_tool_name")
 		})
+
+		it("should sanitize Gemini/OpenRouter function call IDs with dots and colons", () => {
+			// This is the exact pattern seen in PostHog errors where tool_result IDs
+			// didn't match tool_use IDs due to missing sanitization
+			expect(sanitizeToolUseId("functions.read_file:0")).toBe("functions_read_file_0")
+			expect(sanitizeToolUseId("functions.write_to_file:1")).toBe("functions_write_to_file_1")
+			expect(sanitizeToolUseId("read_file:0")).toBe("read_file_0")
+		})
 	})
 
 	describe("real-world MCP tool use ID patterns", () => {

+ 7 - 1
webview-ui/src/components/chat/ChatRow.tsx

@@ -600,7 +600,13 @@ export const ChatRowContent = ({
 							<ToolUseBlock>
 								<ToolUseBlockHeader
 									className="group"
-									onClick={() => vscode.postMessage({ type: "openFile", text: tool.content })}>
+									onClick={() =>
+										vscode.postMessage({
+											type: "openFile",
+											text: tool.content,
+											values: tool.startLine ? { line: tool.startLine } : undefined,
+										})
+									}>
 									{tool.path?.startsWith(".") && <span>.</span>}
 									<PathTooltip content={formatPathTooltip(tool.path, tool.reason)}>
 										<span className="whitespace-nowrap overflow-hidden text-ellipsis text-left mr-2 rtl">

+ 89 - 4
webview-ui/src/components/chat/ChatView.tsx

@@ -640,7 +640,13 @@ const ChatViewComponent: React.ForwardRefRenderFunction<ChatViewRef, ChatViewPro
 				// - Task is busy (sendingDisabled)
 				// - API request in progress (isStreaming)
 				// - Queue has items (preserve message order during drain)
-				if (sendingDisabled || isStreaming || messageQueue.length > 0) {
+				// - Command is running (command_output) - user's message should be queued for AI, not sent to terminal
+				if (
+					sendingDisabled ||
+					isStreaming ||
+					messageQueue.length > 0 ||
+					clineAskRef.current === "command_output"
+				) {
 					try {
 						console.log("queueMessage", text, images)
 						vscode.postMessage({ type: "queueMessage", text, images })
@@ -673,7 +679,6 @@ const ChatViewComponent: React.ForwardRefRenderFunction<ChatViewRef, ChatViewPro
 						case "tool":
 						case "browser_action_launch":
 						case "command": // User can provide feedback to a tool or command use.
-						case "command_output": // User can send input to command stdin.
 						case "use_mcp_server":
 						case "completion_result": // If this happens then the user has feedback for the completion result.
 						case "resume_task":
@@ -1164,7 +1169,76 @@ const ChatViewComponent: React.ForwardRefRenderFunction<ChatViewRef, ChatViewPro
 
 	const groupedMessages = useMemo(() => {
 		// Only filter out the launch ask and result messages - browser actions appear in chat
-		const result: ClineMessage[] = visibleMessages.filter((msg) => !isBrowserSessionMessage(msg))
+		const filtered: ClineMessage[] = visibleMessages.filter((msg) => !isBrowserSessionMessage(msg))
+
+		// Helper to check if a message is a read_file ask that should be batched
+		const isReadFileAsk = (msg: ClineMessage): boolean => {
+			if (msg.type !== "ask" || msg.ask !== "tool") return false
+			try {
+				const tool = JSON.parse(msg.text || "{}")
+				return tool.tool === "readFile" && !tool.batchFiles // Don't re-batch already batched
+			} catch {
+				return false
+			}
+		}
+
+		// Consolidate consecutive read_file ask messages into batches
+		const result: ClineMessage[] = []
+		let i = 0
+		while (i < filtered.length) {
+			const msg = filtered[i]
+
+			// Check if this starts a sequence of read_file asks
+			if (isReadFileAsk(msg)) {
+				// Collect all consecutive read_file asks
+				const batch: ClineMessage[] = [msg]
+				let j = i + 1
+				while (j < filtered.length && isReadFileAsk(filtered[j])) {
+					batch.push(filtered[j])
+					j++
+				}
+
+				if (batch.length > 1) {
+					// Create a synthetic batch message
+					const batchFiles = batch.map((batchMsg) => {
+						try {
+							const tool = JSON.parse(batchMsg.text || "{}")
+							return {
+								path: tool.path || "",
+								lineSnippet: tool.reason || "",
+								isOutsideWorkspace: tool.isOutsideWorkspace || false,
+								key: `${tool.path}${tool.reason ? ` (${tool.reason})` : ""}`,
+								content: tool.content || "",
+							}
+						} catch {
+							return { path: "", lineSnippet: "", key: "", content: "" }
+						}
+					})
+
+					// Use the first message as the base, but add batchFiles
+					const firstTool = JSON.parse(msg.text || "{}")
+					const syntheticMessage: ClineMessage = {
+						...msg,
+						text: JSON.stringify({
+							...firstTool,
+							batchFiles,
+						}),
+						// Store original messages for response handling
+						_batchedMessages: batch,
+					} as ClineMessage & { _batchedMessages: ClineMessage[] }
+
+					result.push(syntheticMessage)
+					i = j // Skip past all batched messages
+				} else {
+					// Single read_file ask, keep as-is
+					result.push(msg)
+					i++
+				}
+			} else {
+				result.push(msg)
+				i++
+			}
+		}
 
 		if (isCondensing) {
 			result.push({
@@ -1446,9 +1520,20 @@ const ChatViewComponent: React.ForwardRefRenderFunction<ChatViewRef, ChatViewPro
 
 	useImperativeHandle(ref, () => ({
 		acceptInput: () => {
+			const hasInput = inputValue.trim() || selectedImages.length > 0
+
+			// Special case: during command_output, queue the message instead of
+			// triggering the primary button action (which would lose the message)
+			if (clineAskRef.current === "command_output" && hasInput) {
+				vscode.postMessage({ type: "queueMessage", text: inputValue.trim(), images: selectedImages })
+				setInputValue("")
+				setSelectedImages([])
+				return
+			}
+
 			if (enableButtons && primaryButtonText) {
 				handlePrimaryButtonClick(inputValue, selectedImages)
-			} else if (!sendingDisabled && !isProfileDisabled && (inputValue.trim() || selectedImages.length > 0)) {
+			} else if (!sendingDisabled && !isProfileDisabled && hasInput) {
 				handleSendMessage(inputValue, selectedImages)
 			}
 		},
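The consolidation loop added to `groupedMessages` is a standard run-grouping pass: scan forward, collect a maximal run of batchable items, and emit either the single item or a synthesized batch. A generic sketch of that pattern (illustrative, not the component's code, which also builds the synthetic `batchFiles` payload):

```typescript
// Collapse maximal runs of consecutive items matching `isBatchable`
// via `merge`; single matches and non-matching items pass through.
function groupRuns<T>(items: T[], isBatchable: (x: T) => boolean, merge: (run: T[]) => T): T[] {
	const out: T[] = []
	let i = 0
	while (i < items.length) {
		if (!isBatchable(items[i])) {
			out.push(items[i])
			i++
			continue
		}
		// Extend j to the end of the consecutive batchable run.
		let j = i + 1
		while (j < items.length && isBatchable(items[j])) j++
		const run = items.slice(i, j)
		out.push(run.length > 1 ? merge(run) : run[0])
		i = j // Skip past the whole run.
	}
	return out
}
```

In the component, `isBatchable` is the `isReadFileAsk` predicate and `merge` builds the synthetic message carrying `batchFiles`, so only runs of two or more read_file asks are rewritten.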

+ 62 - 0
webview-ui/src/components/chat/__tests__/ChatView.spec.tsx

@@ -1081,6 +1081,68 @@ describe("ChatView - Message Queueing Tests", () => {
 			}),
 		)
 	})
+
+	it("queues messages during command_output state instead of losing them", async () => {
+		const { getByTestId } = renderChatView()
+
+		// Hydrate state with command_output ask (Proceed While Running state)
+		mockPostMessage({
+			clineMessages: [
+				{
+					type: "say",
+					say: "task",
+					ts: Date.now() - 2000,
+					text: "Initial task",
+				},
+				{
+					type: "ask",
+					ask: "command_output",
+					ts: Date.now(),
+					text: "",
+					partial: false, // Non-partial so buttons are enabled
+				},
+			],
+		})
+
+		// Wait for state to be updated - need to allow time for React effects to propagate
+		// (clineAsk state update -> clineAskRef.current update)
+		await waitFor(() => {
+			expect(getByTestId("chat-textarea")).toBeInTheDocument()
+		})
+
+		// Allow React effects to complete (clineAsk -> clineAskRef sync)
+		await act(async () => {
+			await new Promise((resolve) => setTimeout(resolve, 50))
+		})
+
+		// Clear message calls before simulating user input
+		vi.mocked(vscode.postMessage).mockClear()
+
+		// Simulate user typing and sending a message during command execution
+		const chatTextArea = getByTestId("chat-textarea")
+		const input = chatTextArea.querySelector("input")! as HTMLInputElement
+
+		await act(async () => {
+			fireEvent.change(input, { target: { value: "message during command execution" } })
+			fireEvent.keyDown(input, { key: "Enter", code: "Enter" })
+		})
+
+		// Verify that the message was queued (not lost via terminalOperation)
+		await waitFor(() => {
+			expect(vscode.postMessage).toHaveBeenCalledWith({
+				type: "queueMessage",
+				text: "message during command execution",
+				images: [],
+			})
+		})
+
+		// Verify it was NOT sent as terminalOperation (which would lose the message)
+		expect(vscode.postMessage).not.toHaveBeenCalledWith(
+			expect.objectContaining({
+				type: "terminalOperation",
+			}),
+		)
+	})
 })
 
 describe("ChatView - Context Condensing Indicator Tests", () => {

+ 0 - 68
webview-ui/src/components/settings/ContextManagementSettings.tsx

@@ -33,10 +33,8 @@ type ContextManagementSettingsProps = HTMLAttributes<HTMLDivElement> & {
 	maxWorkspaceFiles: number
 	showRooIgnoredFiles?: boolean
 	enableSubfolderRules?: boolean
-	maxReadFileLine?: number
 	maxImageFileSize?: number
 	maxTotalImageSize?: number
-	maxConcurrentFileReads?: number
 	profileThresholds?: Record<string, number>
 	includeDiagnosticMessages?: boolean
 	maxDiagnosticMessages?: number
@@ -53,10 +51,8 @@ type ContextManagementSettingsProps = HTMLAttributes<HTMLDivElement> & {
 		| "maxWorkspaceFiles"
 		| "showRooIgnoredFiles"
 		| "enableSubfolderRules"
-		| "maxReadFileLine"
 		| "maxImageFileSize"
 		| "maxTotalImageSize"
-		| "maxConcurrentFileReads"
 		| "profileThresholds"
 		| "includeDiagnosticMessages"
 		| "maxDiagnosticMessages"
@@ -76,10 +72,8 @@ export const ContextManagementSettings = ({
 	showRooIgnoredFiles,
 	enableSubfolderRules,
 	setCachedStateField,
-	maxReadFileLine,
 	maxImageFileSize,
 	maxTotalImageSize,
-	maxConcurrentFileReads,
 	profileThresholds = {},
 	includeDiagnosticMessages,
 	maxDiagnosticMessages,
@@ -218,29 +212,6 @@ export const ContextManagementSettings = ({
 					</div>
 				</SearchableSetting>
 
-				<SearchableSetting
-					settingId="context-max-concurrent-file-reads"
-					section="contextManagement"
-					label={t("settings:contextManagement.maxConcurrentFileReads.label")}>
-					<span className="block font-medium mb-1">
-						{t("settings:contextManagement.maxConcurrentFileReads.label")}
-					</span>
-					<div className="flex items-center gap-2">
-						<Slider
-							min={1}
-							max={100}
-							step={1}
-							value={[Math.max(1, maxConcurrentFileReads ?? 5)]}
-							onValueChange={([value]) => setCachedStateField("maxConcurrentFileReads", value)}
-							data-testid="max-concurrent-file-reads-slider"
-						/>
-						<span className="w-10 text-sm">{Math.max(1, maxConcurrentFileReads ?? 5)}</span>
-					</div>
-					<div className="text-vscode-descriptionForeground text-sm mt-1 mb-3">
-						{t("settings:contextManagement.maxConcurrentFileReads.description")}
-					</div>
-				</SearchableSetting>
-
 				<SearchableSetting
 					settingId="context-show-rooignored-files"
 					section="contextManagement"
@@ -275,45 +246,6 @@ export const ContextManagementSettings = ({
 					</div>
 				</SearchableSetting>
 
-				<SearchableSetting
-					settingId="context-max-read-file"
-					section="contextManagement"
-					label={t("settings:contextManagement.maxReadFile.label")}>
-					<div className="flex flex-col gap-2">
-						<span className="font-medium">{t("settings:contextManagement.maxReadFile.label")}</span>
-						<div className="flex items-center gap-4">
-							<Input
-								type="number"
-								pattern="-?[0-9]*"
-								className="w-24 bg-vscode-input-background text-vscode-input-foreground border border-vscode-input-border px-2 py-1 rounded text-right [appearance:textfield] [&::-webkit-outer-spin-button]:appearance-none [&::-webkit-inner-spin-button]:appearance-none disabled:opacity-50"
-								value={maxReadFileLine ?? -1}
-								min={-1}
-								onChange={(e) => {
-									const newValue = parseInt(e.target.value, 10)
-									if (!isNaN(newValue) && newValue >= -1) {
-										setCachedStateField("maxReadFileLine", newValue)
-									}
-								}}
-								onClick={(e) => e.currentTarget.select()}
-								data-testid="max-read-file-line-input"
-								disabled={maxReadFileLine === -1}
-							/>
-							<span>{t("settings:contextManagement.maxReadFile.lines")}</span>
-							<VSCodeCheckbox
-								checked={maxReadFileLine === -1}
-								onChange={(e: any) =>
-									setCachedStateField("maxReadFileLine", e.target.checked ? -1 : 500)
-								}
-								data-testid="max-read-file-always-full-checkbox">
-								{t("settings:contextManagement.maxReadFile.always_full_read")}
-							</VSCodeCheckbox>
-						</div>
-					</div>
-					<div className="text-vscode-descriptionForeground text-sm mt-2">
-						{t("settings:contextManagement.maxReadFile.description")}
-					</div>
-				</SearchableSetting>
-
 				<SearchableSetting
 					settingId="context-max-image-file-size"
 					section="contextManagement"

Some files were not shown because too many files changed in this diff