mirror of
https://github.com/siteboon/claudecodeui.git
synced 2026-04-22 21:41:29 +00:00
* feat: implement MCP provider registry and service
- Add provider registry to manage LLM providers (Claude, Codex, Cursor, Gemini).
- Create provider routes for MCP server operations (list, upsert, delete, run).
- Implement MCP service for handling server operations and validations.
- Introduce abstract provider class and MCP provider base for shared functionality.
- Add tests for MCP server operations across different providers and scopes.
- Define shared interfaces and types for MCP functionality.
- Implement utility functions for handling JSON config files and API responses.
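The JSON-config utilities mentioned above are not shown in this log; a rough sketch of what an upsert helper for a parsed config could look like (names and shape are illustrative assumptions, not the actual implementation):

```javascript
// Illustrative sketch only: merge one MCP server entry into a parsed JSON
// config object without mutating the input. The real utilities also handle
// reading and writing the config file itself.
function upsertMcpServer(config, name, server) {
  const mcpServers = { ...(config.mcpServers || {}) };
  mcpServers[name] = { ...mcpServers[name], ...server };
  return { ...config, mcpServers };
}
```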
* chore: remove dead code related to MCP server
* refactor: put /api/providers in index.js and remove /providers prefix from provider.routes.ts
* refactor(settings): move MCP server management into provider module
Extract MCP server settings out of the settings controller and agents tab into a
dedicated frontend MCP module. The settings UI now delegates MCP rendering and
behavior to a single module that only needs the selected provider and current
projects.
Changes:
- Add `src/components/mcp` as the single frontend MCP module
- Move MCP server list rendering into `McpServers`
- Move MCP add/edit modal into `McpServerFormModal`
- Move MCP API/state logic into `useMcpServers`
- Move MCP form state/validation logic into `useMcpServerForm`
- Add provider-specific MCP constants, types, and formatting helpers
- Use the unified `/api/providers/:provider/mcp/servers` API for all providers
- Support MCP management for Claude, Cursor, Codex, and Gemini
- Remove old settings-owned Claude/Codex MCP modal components
- Remove old provider-specific `McpServersContent` branching from settings
- Strip MCP server state, fetch, save, delete, and modal ownership from
`useSettingsController`
- Simplify agents settings props so MCP only receives `selectedProvider` and
`currentProjects`
- Keep Claude working-directory unsupported while preserving cwd support for
Cursor, Codex, and Gemini
- Add progressive MCP loading:
- render user/global scope first
- load project/local scopes in the background
- append project results as they resolve
- cache MCP lists briefly to avoid slow tab-switch refetches
- ignore stale async responses after provider switches
Verification:
- `npx eslint src/components/mcp`
- `npm run typecheck`
- `npm run build:client`
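The stale-response handling described above can be sketched with a simple request-token pattern (a hypothetical illustration, not the actual hook code):

```javascript
// Hypothetical sketch: every new fetch takes a fresh token; a response is
// applied only if its token is still the latest, so results from a previous
// provider are silently dropped after a provider switch.
function createStaleGuard() {
  let current = 0;
  return {
    next() { return ++current; },                   // call when starting a fetch
    isCurrent(token) { return token === current; }  // check before applying results
  };
}
```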
* fix(mcp): handle multiline text in the form for args, env, headers, and envVars
* feat(mcp): add global MCP server creation flow
Add a separate global MCP add path in the settings MCP module so users can create
one shared MCP server configuration across Claude, Cursor, Codex, and Gemini from
the same screen.
The provider-specific add flow is kept alongside it because the two actions have
different intent. A global MCP server must be constrained to the subset of
configuration that every provider can accept, while a provider-specific server can
still use that provider's own supported scopes, transports, and fields. Naming the
buttons "Add Global MCP Server" and "Add <Provider> MCP Server" makes that
distinction explicit without forcing users to infer it from the selected tab.
This also moves the explanatory copy to button hover text to keep the MCP toolbar
compact while still documenting the difference between global and provider-only
adds at the point of action.
Implementation details:
- Add global MCP form mode with shared user/project scopes and stdio/http transports.
- Submit global creates through `/api/providers/mcp/servers/global`.
- Reuse the existing MCP form modal with configurable scopes, transports, labels,
and descriptions instead of duplicating form logic.
- Disable provider-only fields for the global flow because those fields cannot be
safely written to every provider.
- Clear the MCP server cache globally after a global add because every provider tab
may have changed.
- Surface partial global add failures with provider-specific error messages.
Validation:
- `npx eslint src/components/mcp/view/McpServers.tsx`
- `npm run typecheck`
- `npm run build:client`
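Surfacing partial failures per provider might look roughly like this (a sketch under assumed names; the real flow goes through the `/api/providers/mcp/servers/global` endpoint):

```javascript
// Hypothetical sketch: attempt the add for every provider and collect
// per-provider errors instead of aborting on the first failure.
function addGlobalServer(providers, addForProvider) {
  const errors = [];
  for (const provider of providers) {
    try {
      addForProvider(provider);
    } catch (err) {
      errors.push({ provider, message: err.message });
    }
  }
  return { ok: errors.length === 0, errors };
}
```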
* feat: implement platform-specific provider visibility for cursor agent
* refactor(providers): centralize message handling in provider module
Move provider-specific normalizeMessage and fetchHistory logic out of the legacy
server/providers adapters and into the refactored provider classes so callers can
depend on the main provider contract instead of parallel adapter plumbing.
Add a providers service to resolve concrete providers through the registry and
delegate message normalization/history loading from realtime handlers and the
unified messages route. Add shared TypeScript message/history types and normalized
message helpers so provider implementations and callers use the same contract.
Remove the old adapter registry/files now that Claude, Codex, Cursor, and Gemini
implement the required behavior directly.
* refactor(providers): move auth status checks into provider runtimes
Move provider authentication status logic out of the CLI auth route so auth checks
live with the provider implementations that understand each provider's install
and credential model.
Add provider-specific auth runtime classes for Claude, Codex, Cursor, and Gemini,
and expose them through the shared provider contract as `provider.auth`. Add a
provider auth service that resolves providers through the registry and delegates
status checks via `auth.getStatus()`.
Keep the existing `/api/cli/<provider>/status` endpoints, but make them thin route
adapters over the new provider auth service. This removes duplicated route-local
credential parsing and makes auth status a first-class provider capability beside
MCP and message handling.
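In sketch form, the `auth.getStatus()` delegation could look like this (registry contents and status shape are illustrative assumptions):

```javascript
// Hypothetical sketch: the auth service resolves a provider through the
// registry and delegates the status check to its `auth` capability.
const providerRegistry = new Map([
  ['gemini', { auth: { getStatus: () => ({ installed: true, authenticated: false }) } }]
]);

function getAuthStatus(name) {
  const provider = providerRegistry.get(name);
  if (!provider) throw new Error(`Unknown provider: ${name}`);
  return provider.auth.getStatus();
}
```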
* refactor(providers): clarify provider auth and MCP naming
Rename provider auth/MCP contracts to remove the overloaded Runtime suffix so
the shared interfaces read as stable provider capabilities instead of execution
implementation details.
Add a consistent provider-first auth class naming convention by renaming
ClaudeAuthProvider, CodexAuthProvider, CursorAuthProvider, and GeminiAuthProvider
to ClaudeProviderAuth, CodexProviderAuth, CursorProviderAuth, and
GeminiProviderAuth.
This keeps the provider module API easier to scan and aligns auth naming with
the main provider ownership model.
* refactor(providers): move session message delegation into sessions service
Move provider-backed session history and message normalization calls out of the
generic providers service so the service name reflects the behavior it owns.
Add a dedicated sessions service for listing session-capable providers,
normalizing live provider events, and fetching persisted session history through
the provider registry. Update realtime handlers and the unified messages route to
depend on `sessionsService` instead of `providersService`.
This separates session message operations from other provider concerns such as
auth and MCP, keeping the provider services easier to navigate as the module
grows.
* refactor(providers): move auth status routes under provider API
Move provider authentication status endpoints out of the legacy `/api/cli` route
namespace so auth status is exposed through the same provider module that owns
provider auth and MCP behavior.
Add `GET /api/providers/:provider/auth/status` to the provider router and route
it through the provider auth service. Remove the old `cli-auth` route file and
`/api/cli` mount now that provider auth status is handled by the unified provider
API.
Update the frontend provider auth endpoint map to call the new provider-scoped
routes and rename the endpoint constant to reflect that it is no longer CLI
specific.
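A thin route adapter in this style might be sketched as follows (the handler shape and response envelope are assumptions, not the actual route code):

```javascript
// Hypothetical sketch: an Express-style handler that only extracts `:provider`
// and delegates to the auth service; no credential parsing happens in the route.
function makeAuthStatusHandler(authService) {
  return (req, res) => {
    const status = authService.getStatus(req.params.provider);
    res.json({ success: true, status });
  };
}
```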
* chore(api): remove unused backend endpoints after MCP audit
Remove legacy backend routes that no longer have frontend or internal
callers, including the old Claude/Codex MCP APIs, unused Cursor and Codex
helper endpoints, stale TaskMaster detection/next/initialize routes,
and unused command/project helpers.
This reduces duplicated MCP behavior now handled by the provider-based
MCP API, shrinks the exposed backend surface, and removes probe/service
code that only existed for deleted endpoints.
Add an MCP settings API audit document to capture the route-usage
analysis and explain why the legacy MCP endpoints were considered safe
to remove.
* refactor(providers): remove debug logging from Claude authentication status checks
* refactor(cursor): lazy-load better-sqlite3 and remove unused type definitions
* refactor(cursor): remove SSE from CursorMcpProvider constructor and error message
* refactor(auth): standardize API response structure and remove unused error handling
* refactor: make providers use dedicated session handling classes
* refactor: remove legacy provider selection UI and logic
* fix(server/providers): harden and correct session history normalization/pagination
Address correctness and safety issues in provider session adapters while
preserving existing normalized message shapes.
Claude sessions:
- Ensure user text content parts generate unique normalized message ids.
- Replace duplicate `${baseId}_text` ids with index-suffixed ids to avoid
collisions when one user message contains multiple text segments.
Cursor sessions:
- Add session id sanitization before constructing SQLite paths to prevent
path traversal via crafted session ids.
- Enforce containment by resolving the computed DB path and asserting it stays
under ~/.cursor/chats/<cwdId>.
- Refactor blob parsing to a two-pass flow: first build blobMap and collect
JSON blobs, then parse binary parent refs against the fully populated map.
- Fix pagination semantics so limit=0 returns an empty page instead of full
history, with consistent total/hasMore/offset/limit metadata.
Gemini sessions:
- Honor FetchHistoryOptions pagination by reading limit/offset and slicing
normalized history accordingly.
- Return consistent hasMore/offset/limit metadata for paged responses.
Validation:
- eslint passed for touched files.
- server TypeScript check passed (tsc --noEmit -p server/tsconfig.json).
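The corrected pagination semantics (limit=0 yields an empty page; total/hasMore/offset/limit stay consistent with the requested window) can be sketched as (a simplified illustration, not the adapter code itself):

```javascript
// Hypothetical sketch: slice normalized history by offset/limit, treating
// limit=0 as "empty page" rather than "full history", and report metadata
// consistent with the returned window.
function paginate(history, { limit = history.length, offset = 0 } = {}) {
  const messages = limit === 0 ? [] : history.slice(offset, offset + limit);
  return {
    messages,
    total: history.length,
    hasMore: offset + messages.length < history.length,
    offset,
    limit
  };
}
```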
---------
470 lines
19 KiB
JavaScript
import { spawn } from 'child_process';
import crossSpawn from 'cross-spawn';
import { promises as fs } from 'fs';
import path from 'path';
import os from 'os';
import sessionManager from './sessionManager.js';
import GeminiResponseHandler from './gemini-response-handler.js';
import { notifyRunFailed, notifyRunStopped } from './services/notification-orchestrator.js';
import { providerAuthService } from './modules/providers/services/provider-auth.service.js';
import { createNormalizedMessage } from './shared/utils.js';

// Use cross-spawn on Windows for correct .cmd resolution (same pattern as cursor-cli.js)
const spawnFunction = process.platform === 'win32' ? crossSpawn : spawn;

const activeGeminiProcesses = new Map(); // Track active processes by session ID

async function spawnGemini(command, options = {}, ws) {
  const { sessionId, projectPath, cwd, toolsSettings, permissionMode, images, sessionSummary } = options;
  let capturedSessionId = sessionId; // Track session ID throughout the process
  let sessionCreatedSent = false; // Track if we've already sent session-created event
  let assistantBlocks = []; // Accumulate the full response blocks including tools

  // Use tools settings passed from frontend, or defaults
  const settings = toolsSettings || {
    allowedTools: [],
    disallowedTools: [],
    skipPermissions: false
  };

  // Build Gemini CLI command - start with print/resume flags first
  const args = [];

  // Add prompt flag with command if we have a command
  if (command && command.trim()) {
    args.push('--prompt', command);
  }

  // If we have a sessionId, we want to resume
  if (sessionId) {
    const session = sessionManager.getSession(sessionId);
    if (session && session.cliSessionId) {
      args.push('--resume', session.cliSessionId);
    }
  }

  // Use cwd (the actual project directory) instead of projectPath (Gemini's metadata directory).
  // Clean the path by removing any non-printable characters.
  const workingDir = (cwd || projectPath || process.cwd()).replace(/[^\x20-\x7E]/g, '').trim();

  // Handle images by saving them to temporary files and passing paths to Gemini
  const tempImagePaths = [];
  let tempDir = null;
  if (images && images.length > 0) {
    try {
      // Create a temp directory inside the project so Gemini can access the files
      tempDir = path.join(workingDir, '.tmp', 'images', Date.now().toString());
      await fs.mkdir(tempDir, { recursive: true });

      // Save each image to a temp file
      for (const [index, image] of images.entries()) {
        // Extract base64 data and mime type
        const matches = image.data.match(/^data:([^;]+);base64,(.+)$/);
        if (!matches) {
          continue;
        }

        const [, mimeType, base64Data] = matches;
        const extension = mimeType.split('/')[1] || 'png';
        const filename = `image_${index}.${extension}`;
        const filepath = path.join(tempDir, filename);

        // Write base64 data to file
        await fs.writeFile(filepath, Buffer.from(base64Data, 'base64'));
        tempImagePaths.push(filepath);
      }

      // Include the full image paths in the prompt for Gemini to reference;
      // Gemini CLI can read images from file paths in the prompt
      if (tempImagePaths.length > 0 && command && command.trim()) {
        const imageNote = `\n\n[Images given: ${tempImagePaths.length} images are located at the following paths:]\n${tempImagePaths.map((p, i) => `${i + 1}. ${p}`).join('\n')}`;
        const modifiedCommand = command + imageNote;

        // Update the command in args
        const promptIndex = args.indexOf('--prompt');
        if (promptIndex !== -1 && args[promptIndex + 1] === command) {
          args[promptIndex + 1] = modifiedCommand;
        } else if (promptIndex !== -1) {
          // If we're using context, append the note to the full prompt
          args[promptIndex + 1] = args[promptIndex + 1] + imageNote;
        }
      }
    } catch (error) {
      console.error('Error processing images for Gemini:', error);
    }
  }

  // Add basic flags for Gemini
  if (options.debug) {
    args.push('--debug');
  }

  // Add MCP config flag only if MCP servers are configured
  try {
    const geminiConfigPath = path.join(os.homedir(), '.gemini.json');
    let hasMcpServers = false;

    try {
      await fs.access(geminiConfigPath);
      const geminiConfigRaw = await fs.readFile(geminiConfigPath, 'utf8');
      const geminiConfig = JSON.parse(geminiConfigRaw);

      // Check global MCP servers
      if (geminiConfig.mcpServers && Object.keys(geminiConfig.mcpServers).length > 0) {
        hasMcpServers = true;
      }

      // Check project-specific MCP servers
      if (!hasMcpServers && geminiConfig.geminiProjects) {
        const currentProjectPath = process.cwd();
        const projectConfig = geminiConfig.geminiProjects[currentProjectPath];
        if (projectConfig && projectConfig.mcpServers && Object.keys(projectConfig.mcpServers).length > 0) {
          hasMcpServers = true;
        }
      }
    } catch {
      // Ignore if the file doesn't exist or isn't parsable
    }

    if (hasMcpServers) {
      args.push('--mcp-config', geminiConfigPath);
    }
  } catch {
    // Ignore errors while probing the config
  }

  // Add model for all sessions (both new and resumed)
  const modelToUse = options.model || 'gemini-2.5-flash';
  args.push('--model', modelToUse);
  args.push('--output-format', 'stream-json');

  // Handle approval modes and allowed tools
  if (settings.skipPermissions || options.skipPermissions || permissionMode === 'yolo') {
    args.push('--yolo');
  } else if (permissionMode === 'auto_edit') {
    args.push('--approval-mode', 'auto_edit');
  } else if (permissionMode === 'plan') {
    args.push('--approval-mode', 'plan');
  }

  if (settings.allowedTools && settings.allowedTools.length > 0) {
    args.push('--allowed-tools', settings.allowedTools.join(','));
  }

  // Try to find gemini in PATH first, then fall back to the environment variable
  const geminiPath = process.env.GEMINI_PATH || 'gemini';
  console.log('Spawning Gemini CLI:', geminiPath, args.join(' '));
  console.log('Working directory:', workingDir);

  let spawnCmd = geminiPath;
  let spawnArgs = args;

  // On non-Windows platforms, wrap the execution in a shell to avoid ENOEXEC,
  // which happens when the target is a script lacking a shebang.
  if (os.platform() !== 'win32') {
    spawnCmd = 'sh';
    // Use exec to replace the shell process, ensuring signals hit gemini directly
    spawnArgs = ['-c', 'exec "$0" "$@"', geminiPath, ...args];
  }

  return new Promise((resolve, reject) => {
    const geminiProcess = spawnFunction(spawnCmd, spawnArgs, {
      cwd: workingDir,
      stdio: ['pipe', 'pipe', 'pipe'],
      env: { ...process.env } // Inherit all environment variables
    });

    // Store process reference for potential abort
    const processKey = capturedSessionId || sessionId || Date.now().toString();
    activeGeminiProcesses.set(processKey, geminiProcess);

    let terminalNotificationSent = false;
    let terminalFailureReason = null;

    const notifyTerminalState = ({ code = null, error = null } = {}) => {
      if (terminalNotificationSent) {
        return;
      }

      terminalNotificationSent = true;

      const finalSessionId = capturedSessionId || sessionId || processKey;
      if (code === 0 && !error) {
        notifyRunStopped({
          userId: ws?.userId || null,
          provider: 'gemini',
          sessionId: finalSessionId,
          sessionName: sessionSummary,
          stopReason: 'completed'
        });
        return;
      }

      notifyRunFailed({
        userId: ws?.userId || null,
        provider: 'gemini',
        sessionId: finalSessionId,
        sessionName: sessionSummary,
        error: error || terminalFailureReason || `Gemini CLI exited with code ${code}`
      });
    };

    // Attach temp file info to the process for cleanup later
    geminiProcess.tempImagePaths = tempImagePaths;
    geminiProcess.tempDir = tempDir;

    // Store sessionId on the process object for debugging
    geminiProcess.sessionId = processKey;

    // Close stdin to signal we're done sending input
    geminiProcess.stdin.end();

    // Add timeout handler
    const timeoutMs = 120000; // 120 seconds for slower models
    let timeout;

    const startTimeout = () => {
      if (timeout) clearTimeout(timeout);
      timeout = setTimeout(() => {
        const socketSessionId = typeof ws.getSessionId === 'function' ? ws.getSessionId() : (capturedSessionId || sessionId || processKey);
        terminalFailureReason = `Gemini CLI timeout - no response received for ${timeoutMs / 1000} seconds`;
        ws.send(createNormalizedMessage({ kind: 'error', content: terminalFailureReason, sessionId: socketSessionId, provider: 'gemini' }));
        try {
          geminiProcess.kill('SIGTERM');
        } catch {
          // Process may already have exited
        }
      }, timeoutMs);
    };

    startTimeout();

    // Save user message to session when starting
    if (command && capturedSessionId) {
      sessionManager.addMessage(capturedSessionId, 'user', command);
    }

    // Create response handler for NDJSON buffering
    let responseHandler;
    if (ws) {
      responseHandler = new GeminiResponseHandler(ws, {
        onContentFragment: (content) => {
          if (assistantBlocks.length > 0 && assistantBlocks[assistantBlocks.length - 1].type === 'text') {
            assistantBlocks[assistantBlocks.length - 1].text += content;
          } else {
            assistantBlocks.push({ type: 'text', text: content });
          }
        },
        onToolUse: (event) => {
          assistantBlocks.push({
            type: 'tool_use',
            id: event.tool_id,
            name: event.tool_name,
            input: event.parameters
          });
        },
        onToolResult: (event) => {
          if (capturedSessionId) {
            if (assistantBlocks.length > 0) {
              sessionManager.addMessage(capturedSessionId, 'assistant', [...assistantBlocks]);
              assistantBlocks = [];
            }
            sessionManager.addMessage(capturedSessionId, 'user', [{
              type: 'tool_result',
              tool_use_id: event.tool_id,
              content: event.output === undefined ? null : event.output,
              is_error: event.status === 'error'
            }]);
          }
        },
        onInit: (event) => {
          if (capturedSessionId) {
            const sess = sessionManager.getSession(capturedSessionId);
            if (sess && !sess.cliSessionId) {
              sess.cliSessionId = event.session_id;
              sessionManager.saveSession(capturedSessionId);
            }
          }
        }
      });
    }

    // Handle stdout
    geminiProcess.stdout.on('data', (data) => {
      const rawOutput = data.toString();
      startTimeout(); // Re-arm the timeout

      // For new sessions, create a session ID FIRST
      if (!sessionId && !sessionCreatedSent && !capturedSessionId) {
        capturedSessionId = `gemini_${Date.now()}`;
        sessionCreatedSent = true;

        // Create session in session manager
        sessionManager.createSession(capturedSessionId, cwd || process.cwd());

        // Save the user message now that we have a session ID
        if (command) {
          sessionManager.addMessage(capturedSessionId, 'user', command);
        }

        // Update the process key with the captured session ID
        if (processKey !== capturedSessionId) {
          activeGeminiProcesses.delete(processKey);
          activeGeminiProcesses.set(capturedSessionId, geminiProcess);
        }

        if (typeof ws.setSessionId === 'function') {
          ws.setSessionId(capturedSessionId);
        }

        ws.send(createNormalizedMessage({ kind: 'session_created', newSessionId: capturedSessionId, sessionId: capturedSessionId, provider: 'gemini' }));
      }

      if (responseHandler) {
        responseHandler.processData(rawOutput);
      } else if (rawOutput) {
        // No NDJSON response handler: accumulate raw output directly
        if (assistantBlocks.length > 0 && assistantBlocks[assistantBlocks.length - 1].type === 'text') {
          assistantBlocks[assistantBlocks.length - 1].text += rawOutput;
        } else {
          assistantBlocks.push({ type: 'text', text: rawOutput });
        }
        if (ws) {
          const socketSessionId = typeof ws.getSessionId === 'function' ? ws.getSessionId() : (capturedSessionId || sessionId);
          ws.send(createNormalizedMessage({ kind: 'stream_delta', content: rawOutput, sessionId: socketSessionId, provider: 'gemini' }));
        }
      }
    });

    // Handle stderr
    geminiProcess.stderr.on('data', (data) => {
      const errorMsg = data.toString();

      // Filter out deprecation warnings and the "Loaded cached credentials" message
      if (errorMsg.includes('[DEP0040]') ||
          errorMsg.includes('DeprecationWarning') ||
          errorMsg.includes('--trace-deprecation') ||
          errorMsg.includes('Loaded cached credentials')) {
        return;
      }

      const socketSessionId = typeof ws.getSessionId === 'function' ? ws.getSessionId() : (capturedSessionId || sessionId);
      ws.send(createNormalizedMessage({ kind: 'error', content: errorMsg, sessionId: socketSessionId, provider: 'gemini' }));
    });

    // Handle process completion
    geminiProcess.on('close', async (code) => {
      clearTimeout(timeout);

      // Flush any remaining buffered content
      if (responseHandler) {
        responseHandler.forceFlush();
        responseHandler.destroy();
      }

      // Clean up the process reference
      const finalSessionId = capturedSessionId || sessionId || processKey;
      activeGeminiProcesses.delete(finalSessionId);

      // Save the assistant response to the session if we have one
      if (finalSessionId && assistantBlocks.length > 0) {
        sessionManager.addMessage(finalSessionId, 'assistant', assistantBlocks);
      }

      ws.send(createNormalizedMessage({ kind: 'complete', exitCode: code, isNewSession: !sessionId && !!command, sessionId: finalSessionId, provider: 'gemini' }));

      // Clean up temporary image files, if any
      if (geminiProcess.tempImagePaths && geminiProcess.tempImagePaths.length > 0) {
        for (const imagePath of geminiProcess.tempImagePaths) {
          await fs.unlink(imagePath).catch(() => { });
        }
        if (geminiProcess.tempDir) {
          await fs.rm(geminiProcess.tempDir, { recursive: true, force: true }).catch(() => { });
        }
      }

      if (code === 0) {
        notifyTerminalState({ code });
        resolve();
      } else {
        // code 127 = shell "command not found" — check installation
        if (code === 127) {
          const installed = await providerAuthService.isProviderInstalled('gemini');
          if (!installed) {
            const socketSessionId = typeof ws.getSessionId === 'function' ? ws.getSessionId() : finalSessionId;
            ws.send(createNormalizedMessage({ kind: 'error', content: 'Gemini CLI is not installed. Please install it first: https://github.com/google-gemini/gemini-cli', sessionId: socketSessionId, provider: 'gemini' }));
          }
        }

        notifyTerminalState({
          code,
          error: code === null ? 'Gemini CLI process was terminated or timed out' : null
        });
        reject(new Error(code === null ? 'Gemini CLI process was terminated or timed out' : `Gemini CLI exited with code ${code}`));
      }
    });

    // Handle process errors
    geminiProcess.on('error', async (error) => {
      // Clean up the process reference on error
      const finalSessionId = capturedSessionId || sessionId || processKey;
      activeGeminiProcesses.delete(finalSessionId);

      // Check whether Gemini CLI is installed for a clearer error message
      const installed = await providerAuthService.isProviderInstalled('gemini');
      const errorContent = !installed
        ? 'Gemini CLI is not installed. Please install it first: https://github.com/google-gemini/gemini-cli'
        : error.message;

      const errorSessionId = typeof ws.getSessionId === 'function' ? ws.getSessionId() : finalSessionId;
      ws.send(createNormalizedMessage({ kind: 'error', content: errorContent, sessionId: errorSessionId, provider: 'gemini' }));
      notifyTerminalState({ error });

      reject(error);
    });
  });
}

function abortGeminiSession(sessionId) {
  let geminiProc = activeGeminiProcesses.get(sessionId);
  let processKey = sessionId;

  // The map may still be keyed by an older process key; fall back to scanning
  if (!geminiProc) {
    for (const [key, proc] of activeGeminiProcesses.entries()) {
      if (proc.sessionId === sessionId) {
        geminiProc = proc;
        processKey = key;
        break;
      }
    }
  }

  if (geminiProc) {
    try {
      geminiProc.kill('SIGTERM');
      setTimeout(() => {
        if (activeGeminiProcesses.has(processKey)) {
          try {
            geminiProc.kill('SIGKILL');
          } catch {
            // Process already gone
          }
        }
      }, 2000); // Wait 2 seconds before force kill

      return true;
    } catch (error) {
      return false;
    }
  }
  return false;
}

function isGeminiSessionActive(sessionId) {
  return activeGeminiProcesses.has(sessionId);
}

function getActiveGeminiSessions() {
  return Array.from(activeGeminiProcesses.keys());
}

export {
  spawnGemini,
  abortGeminiSession,
  isGeminiSessionActive,
  getActiveGeminiSessions
};