Mirror of https://github.com/siteboon/claudecodeui.git (synced 2026-05-01 18:28:38 +00:00)
Refactor provider/session architecture to be DB-driven, modular, and sessionId-first across backend and frontend (#715)
* refactor: remove unused exports
* refactor: remove unused fields from project and session objects
* refactor: rename session_names table and related code to sessions for clarity and consistency
* refactor(database): move db into typescript
- Implemented githubTokensDb for managing GitHub tokens with CRUD operations.
- Created notificationPreferencesDb to handle user notification preferences.
- Added projectsDb for project path management and related operations.
- Introduced pushSubscriptionsDb for managing browser push subscriptions.
- Developed scanStateDb to track the last scanned timestamp.
- Established sessionsDb for session management with CRUD functionalities.
- Created userDb for user management, including authentication and onboarding.
- Implemented vapidKeysDb for storing and managing VAPID keys.
feat(database): define schema for new database tables
- Added SQL schema definitions for users, API keys, user credentials, notification preferences, VAPID keys, push subscriptions, projects, sessions, scan state, and app configuration.
- Included necessary indexes for performance optimization.
refactor(shared): enhance type definitions and utility functions
- Updated shared types and interfaces for improved clarity and consistency.
- Added new types for credential management and provider-specific operations.
- Refined utility functions for better error handling and message normalization.
* feat: added session indexer logic
* perf(projects): lazy-load TaskMaster metadata per selected project
Why:
- /api/projects is a hot path (initial load, sidebar refresh, websocket sync).
- Scanning .taskmaster for every project on each call added avoidable fs I/O and payload size.
- TaskMaster metadata is only needed after selecting a specific project.
- Moving it to a project-scoped endpoint makes loading cost match user intent.
- The UI now hydrates TaskMaster state on selection and keeps it across refresh events.
- This prevents status flicker/regression while still removing global scan overhead.
- Selection fetches are sequence-guarded to block stale async responses on fast switching (see the sketch after this list).
- isManuallyAdded was removed from responses to keep the public project contract minimal.
- Project dumps now use incrementing snapshot files to preserve history for debugging.
What changed:
- Added GET /api/projects/:projectName/taskmaster and getProjectTaskMaster().
- Removed TaskMaster detection from bulk getProjects().
- Added api.projectTaskmaster(...) plus selection-time hydration in frontend contexts.
- Merged cached taskmaster values into refreshed project lists for continuity.
- Removed isManuallyAdded from manual project payloads.
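The sequence guard mentioned above can be as small as a monotonically increasing counter. A minimal sketch, assuming hypothetical names (loadTaskMasterForSelection, selectionSeq); only the endpoint path comes from the list above:

// Sketch of a sequence-guarded fetch; identifiers are illustrative,
// not the actual names in the frontend contexts.
let selectionSeq = 0;

async function loadTaskMasterForSelection(
  projectName: string,
  apply: (data: unknown) => void
): Promise<void> {
  const seq = ++selectionSeq; // tag this request with the current sequence number
  const res = await fetch(`/api/projects/${projectName}/taskmaster`);
  const data = await res.json();
  if (seq !== selectionSeq) return; // a newer selection started; drop this stale response
  apply(data); // safe: this response belongs to the latest selection
}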
* refactor: update import paths for database modules and remove legacy db.js and schema.js files
* refactor(projects): identify projects by DB projectId instead of folder-derived name
GET /api/projects used to scan ~/.claude/projects/ on every request, derive
each project's identity from the encoded folder name, and re-parse JSONL
files to build session lists. Using the folder-derived name as the project
identifier leaked the Claude CLI's on-disk encoding into every API route,
forced every downstream endpoint to re-resolve a real path via JSONL
'cwd' inspection, and made the project list endpoint O(projects x sessions)
on disk I/O.
This change switches the entire API surface to identify projects by the
stable primary key from the 'projects' table and drives the listing
straight from the DB:
- Add projectsDb.getProjectPathById as the canonical projectId -> path
resolver so routes no longer need to touch the filesystem to figure out
where a project lives.
- Rewrite getProjects so it reads the project list from the 'projects'
table and the per-project session list from the 'sessions' table (one
SELECT per project). No filesystem scanning happens for this endpoint
anymore, which removes the dependency on ~/.claude/projects existing,
on Cursor's MD5-hashed chat folders being discoverable, and on Codex's
JSONL history being on disk. Per the migration spec each session now
exposes 'summary' sourced from sessions.custom_name, 'messageCount' = 0
(message counting is not implemented), and sessionMeta.hasMore is
pinned to false since this endpoint doesn't drive session pagination.
- Introduce id-based wrappers (getSessionsById, renameProjectById,
deleteSessionById, deleteProjectById, getProjectTaskMasterById) so
every caller can pass projectId and resolve the real path through the
DB. renameProjectById also writes to projects.custom_project_name so
the DB-driven getProjects response reflects renames immediately; it
keeps project-config.json in sync for any legacy reader that still
consults the JSON file.
- Migrate every /api/projects/:projectName route in server/index.js,
server/routes/taskmaster.js, and server/routes/messages.js to
:projectId, and change server/routes/git.js so the 'project'
query/body parameter carries a projectId that is resolved through the
DB before any git command runs. TaskMaster WebSocket broadcasts emit
'projectId' for the same reason so the frontend can match
notifications against its current selection without another lookup.
- Delete helpers that existed only to feed the old getProjects path
(getCursorSessions, getGeminiCliSessions, getProjectTaskMaster) along
with their unused imports (better-sqlite3's Database,
applyCustomSessionNames). The legacy folder-name helpers (getSessions,
renameProject, deleteSession, deleteProject, extractProjectDirectory)
are kept as internal implementation details of the id-based wrappers
and of destructive cleanup / conversation search, but they are no
longer re-exported.
- searchConversations still walks JSONL to produce match snippets (that
data doesn't live in the DB), but it now includes the resolved
projectId in each result so the sidebar can cross-reference hits with
its already loaded project list without a second round-trip.
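Illustratively, a route under the new scheme resolves the real path once through the DB; this is a hypothetical handler body (the `/files` path is invented for the example), but authenticateToken and projectsDb.getProjectPathById are the real names from this change:

// Sketch: identity comes from the DB primary key, not a decoded folder name.
app.get('/api/projects/:projectId/files', authenticateToken, (req, res) => {
  const projectPath = projectsDb.getProjectPathById(req.params.projectId);
  if (!projectPath) {
    return res.status(404).json({ error: 'Project not found' });
  }
  // ...operate on projectPath; no ~/.claude/projects scan needed...
  res.json({ projectPath });
});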
Frontend migration:
- Project.name is replaced by Project.projectId in src/types/app.ts, and
ProjectSession.__projectName becomes __projectId so session tagging
and sidebar state keys stay aligned with the backend identifier.
Settings continues to use SettingsProject.name for legacy consumers,
but it is populated from projectId by normalizeProjectForSettings.
- All places that previously indexed per-project state by project.name
(sidebar expanded/starred/loading/deletingProjects sets,
additionalSessions map, projectHasMoreOverrides, starredProjects
localStorage, command history and draft-input localStorage,
TaskMaster caches) now key on projectId so state survives
display-name edits and is consistent across the app.
- src/utils/api.js renames every endpoint parameter to projectId, the
unified messages endpoint takes projectId in its query string, and
useSessionStore forwards projectId on fetchFromServer / fetchMore /
refreshFromServer. Git panel, file tree, code editor, PRD editor,
plugins context, MCP server flows and TaskMaster hooks are all
updated to pass projectId.
- DEFAULT_PROJECT_FOR_EMPTY_SHELL is updated to carry a 'default'
projectId sentinel so the empty-shell placeholder still satisfies the
Project contract.
Bug fix bundled in:
- sessionsDb.setName no longer bumps updated_at when a row already
exists. Renaming is a label change, not activity, so there is no
reason for it to reset 'last activity' in the sidebar. It also no
longer relies on SQLite's CURRENT_TIMESTAMP, which stores a naive
'YYYY-MM-DD HH:MM:SS' value that JavaScript parses as local time and
caused renamed sessions to appear shifted backwards by the client's
UTC offset. When an INSERT actually happens it now writes ISO-8601
UTC with a 'Z' suffix.
- buildSessionsByProviderFromDb normalizes any legacy naive timestamps
in the sessions table to ISO-8601 UTC on the way out so rows written
before this change also render correctly on the client.
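For illustration, the timestamp pitfall and the fix (a sketch; the exact statement and column list in sessionsDb.setName may differ):

// SQLite's CURRENT_TIMESTAMP yields a naive 'YYYY-MM-DD HH:MM:SS' string.
// JavaScript's Date parser treats that format as *local* time, which is why
// renamed sessions appeared shifted backwards by the client's UTC offset.
const naive = new Date('2025-01-01 12:00:00'); // parsed as local time (engine-dependent)

// Writing an explicit ISO-8601 UTC value removes the ambiguity:
const nowUtc = new Date().toISOString(); // e.g. '2025-01-01T12:00:00.000Z'
db.prepare(
  'INSERT INTO sessions (session_id, custom_name, created_at, updated_at) VALUES (?, ?, ?, ?)'
).run(sessionId, customName, nowUtc, nowUtc); // column list is illustrative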
Other cleanup:
- Removed the filesystem-first project-discovery comment block at the
top of server/projects.js and replaced it with a short note that
describes the new DB-driven flow and lists the few remaining
filesystem-dependent helpers (message reads, search, destructive
delete, manual project registration).
- server/modules/providers/index.ts is added as a small barrel so the
providers module exposes a stable public surface.
Made-with: Cursor
* refactor(projects): reorganize project-related logic into dedicated modules
* refactor(projects): rename getProjects to getProjectsWithSessions
* refactor: update import path for getProjectsWithSessions to include file extension
* refactor: use updated session watcher
In addition, the projects_updated websocket response now sends the sessionId instead
* refactor(websocket): move websocket logic to its own module
* refactor(sessions-watcher): remove redundant logging after session sync completion
* refactor(index.js): reorganize code structure
* refactor(index.js): fix import order
* refactor: remove unnecessary GitHub cloning logic from create-workspace endpoint
* refactor: modularize project services, and wizard create/clone flow
Restructure project creation, listing, GitHub clone progress, and TaskMaster
details behind a dedicated TypeScript module under server/modules/projects/,
and align the client wizard with a single path-based flow.
Server / routing
- Remove server/routes/projects.js and mount server/modules/projects/
projects.routes.ts at /api/projects (still behind authenticateToken).
- Drop duplicate handlers from server/index.js for GET /api/projects and
GET /api/projects/:projectId/taskmaster; those live on the new router.
- Import WORKSPACES_ROOT and validateWorkspacePath from shared utils in
index.js instead of the deleted projects route module.
Projects router (projects.routes.ts)
- GET /: list projects with sessions (existing snapshot behavior).
- POST /create-project: validate body, reject legacy workspaceType and
mixed clone fields, delegate to createProject service, return distinct
success copy when an archived path is reactivated.
- GET /clone-progress: Server-Sent Events for clone progress/complete/error;
requires authenticated user id for token resolution; wires startCloneProject.
- GET /:projectId/taskmaster: delegates to getProjectTaskMaster.
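A hedged sketch of the SSE wiring for the clone-progress route; the event names match the description above, but the startCloneProject callback shape is an assumption:

// Sketch of an SSE endpoint streaming clone progress/complete/error events.
router.get('/clone-progress', asyncHandler(async (req, res) => {
  const userId = (req as any).user?.id; // set by authenticateToken; needed for token resolution
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });
  const send = (event: string, data: unknown) =>
    res.write(`event: ${event}\ndata: ${JSON.stringify(data)}\n\n`);

  startCloneProject({
    userId,
    onProgress: (percent: number) => send('progress', { percent }),
    onComplete: (project: unknown) => { send('complete', { project }); res.end(); },
    onError: (message: string) => { send('error', { message }); res.end(); },
  });
}));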
Services (new)
- project-management.service.ts: path validation, workspace directory
creation, persistence via projectsDb.createProjectPath, mapping to API
project shape; surfaces AppError for validation, conflict, and not-found
cases; optional dependency injection for tests.
- project-clone.service.ts: validates workspace, resolves GitHub auth
(stored token or inline token), runs git clone with progress callbacks,
registers project via createProject on success; sanitizes errors and
supports cancellation; injectable dependencies for tests.
- projects-has-taskmaster.service.ts: moves TaskMaster detection and
normalization out of server/projects.js; resolve-by-id and public
getProjectTaskMaster with structured AppError responses.
Persistence and shared types
- projectsDb.createProjectPath now returns CreateProjectPathResult
(created | reactivated_archived | active_conflict) using INSERT … ON
CONFLICT with selective update when the row is archived; normalizes
display name from path or custom name; repository row typing moves to
shared ProjectRepositoryRow.
- getProjectPaths() returns only non-archived rows (isArchived = 0).
- shared/types.ts: ProjectRepositoryRow, CreateProjectPathResult/outcome,
WorkspacePathValidationResult.
- shared/utils.ts: WORKSPACES_ROOT, forbidden path lists, validateWorkspacePath,
asyncHandler for Express async routes.
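asyncHandler is the conventional wrapper that forwards rejected promises to Express error middleware; a typical implementation looks like the following sketch (not necessarily the exact code in shared/utils.ts):

import type { NextFunction, Request, RequestHandler, Response } from 'express';

// Wraps an async route handler so rejections reach Express error middleware
// instead of becoming unhandled promise rejections.
export const asyncHandler =
  (fn: (req: Request, res: Response, next: NextFunction) => Promise<unknown>): RequestHandler =>
  (req, res, next) => {
    Promise.resolve(fn(req, res, next)).catch(next);
  };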
Legacy cleanup
- server/projects.js: remove detectTaskMasterFolder, normalizeTaskMasterInfo,
and getProjectTaskMasterById (logic lives in the new service).
- server/routes/agent.js: register external API project paths with
projectsDb.createProjectPath instead of addProjectManually try/catch;
treat active_conflict as an existing registration and continue.
Tests
- Add Node test suites for project-management, project-clone, and
projects-has-taskmaster services; update projects.service test import
for renamed projects-with-sessions-fetch.service.ts.
Rename
- projects.service.ts → projects-with-sessions-fetch.service.ts;
re-export from modules/projects/index.ts.
Client (project creation wizard)
- Remove StepTypeSelection and workspaceType from form state and types;
wizard is two steps (configure path/GitHub auth, then review).
- createWorkspaceRequest → createProjectRequest; clone vs create-only
inferred from githubUrl (pathUtils / isCloneWorkflow).
- Adjust step indices, WizardProgress, StepConfiguration/Review,
WorkspacePathField, and src/utils/api.js as needed for the new API.
Docs
- Minor websocket README touch-up.
Net: ~1.6k insertions / ~0.9k deletions across 29 files; behavior is
centralized in typed services with explicit HTTP errors and test seams.
* refactor: remove loading sessions logic from sidebar
* refactor: move project rename to module
* refactor: move project deletion to module
* refactor: move project star state from localStorage to backend
* refactor: implement optimistic UI for project star state management
* feat: optimistic update for session watcher
* fix(projects-state): stop websocket message reprocessing loop
The websocket projects effect in useProjectsState could re-handle the same
latestMessage after local state writes triggered re-renders.
Under bursty websocket traffic, this created an update feedback cycle
that surfaced as 'Maximum update depth exceeded', often from Sidebar.
What changed:
- Added lastHandledMessageRef so each latestMessage object is handled once.
- Added an early return guard when the current message was already handled.
- Made projects updates idempotent by comparing previous and merged payloads
before calling setProjects.
Result:
- Breaks the effect -> state update -> effect re-entry cycle.
- Reduces redundant renders during rapid projects_updated traffic while
preserving normal project/session synchronization.
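A minimal sketch of the guard described above (mergeProjectsUpdate and areProjectsEqual are illustrative helper names; latestMessage and setProjects come from the surrounding hook):

import { useEffect, useRef } from 'react';

// Sketch: handle each websocket message object exactly once, and skip
// state writes that would not change anything.
const lastHandledMessageRef = useRef<unknown>(null);

useEffect(() => {
  if (!latestMessage) return;
  if (lastHandledMessageRef.current === latestMessage) return; // already handled
  lastHandledMessageRef.current = latestMessage;

  setProjects((prev) => {
    const merged = mergeProjectsUpdate(prev, latestMessage); // illustrative merge
    return areProjectsEqual(prev, merged) ? prev : merged;   // idempotent: no-op updates skipped
  });
}, [latestMessage]);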
* refactor: optimize project auto-expand logic
* refactor: move projects provider specific logic into respective session providers
* refactor: move rename and delete sessions to modules
* refactor: move fetching messages to module
* fix: remove unused var
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
* Potential fix for pull request finding 'Useless assignment to local variable'
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
* refactor(projects/sidebar): remove temp snapshot side-effects and simplify session metadata UX
Why this change was needed:
- Project listing had an implicit side effect: every fetch wrote a debug snapshot under `.tmp/project-dumps`.
That added unnecessary disk I/O to a hot path, introduced hidden runtime behavior, and created maintenance
overhead for code that was not part of product functionality.
- Keeping snapshot-specific exports/tests around made the projects module API broader than needed and coupled
tests to temporary/debug behavior instead of user-visible behavior.
- Codex sessions could remain stuck with a placeholder name (`Untitled Codex Session`) even after a real title
became available from newer sync data, which degraded session discoverability in the UI.
- Sidebar session rows showed duplicated provider branding and long-form relative times, which added visual noise
and reduced scan speed when many sessions are listed.
What changed:
- Removed temporary projects snapshot dumping from `projects-with-sessions-fetch.service.ts`:
- deleted snapshot types/helpers and file-write flow
- removed the write call from `getProjectsWithSessions`
- Removed snapshot-related surface area from `projects/index.ts`.
- Removed the snapshot-focused test `projects.service.test.ts` that only validated removed debug behavior.
- Updated `codex-session-synchronizer.provider.ts` to upgrade session names when an existing session still has
the placeholder title but a real parsed name is now available.
- Updated `SidebarSessionItem.tsx`:
- removed duplicate provider logo rendering in each session row
- moved age indicator to the right side
- made age indicator fade on hover to prioritize action controls
- switched to compact relative time format (`<1m`, `Xm`, `Xhr`, `Xd`) for faster list scanning
Outcome:
- Lower overhead and fewer hidden side effects in project fetches.
- Cleaner module boundaries in projects.
- Better Codex session naming consistency after sync.
- Cleaner sidebar density and clearer hover/action behavior.
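The compact relative-time format is a straightforward threshold ladder; a sketch (exact cutoffs assumed):

// Sketch of the compact relative-time formatter: '<1m', 'Xm', 'Xhr', 'Xd'.
function formatCompactAge(updatedAt: string | number, now: number = Date.now()): string {
  const minutes = Math.floor((now - new Date(updatedAt).getTime()) / 60_000);
  if (minutes < 1) return '<1m';
  if (minutes < 60) return `${minutes}m`;
  const hours = Math.floor(minutes / 60);
  if (hours < 24) return `${hours}hr`;
  return `${Math.floor(hours / 24)}d`;
}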
* refactor: implement pagination for project sessions loading
* refactor: move search to module
* fix: search performance
* refactor: add handling for internal Codex metadata in conversation search
* fix(migrations,projects,clone): normalize legacy schema before writes and harden conflict detection
Why
- Legacy installs can have a sessions table shape that predates provider/custom_name columns. Running migrateLegacySessionNames first caused its INSERT OR REPLACE INTO sessions (...) to target columns that may not exist and fail during startup migration.
- Some upgraded databases had projects.project_id as plain TEXT instead of a real PRIMARY KEY. That breaks assumptions used by id-based lookups and can allow invalid/duplicate identity semantics over time.
- projectsDb.createProjectPath inferred outcomes from row.isArchived, but the upsert path always returns the post-update row with isArchived=0, so archived-reactivation and fresh-create could be misclassified.
- git clone accepted user-controlled URLs directly in argv position, so inputs beginning with - could be interpreted as options instead of a repository argument.
What
- Added rebuildProjectsTableWithPrimaryKeySchema in migrations: detect table shape via getTableInfo('projects'), verify project_id has pk=1, and rebuild when missing.
- Rebuild flow now creates a canonical projects__new table (project_id TEXT PRIMARY KEY), copies rows with transformation, backfills empty ids via SQLITE_UUID_SQL, deduplicates conflicting ids/paths, then swaps tables inside a transaction.
- Replaced the prior addColumnToTableIfNotExists(...) + UPDATE project_id sequence with PK-aware detection/rebuild logic so legacy DBs converge to the required schema.
- Reordered the migration sequence to run rebuildSessionsTableWithProjectSchema before migrateLegacySessionNames, ensuring sessions is normalized before the legacy session_names merge writes execute.
- Updated projectsDb.createProjectPath to generate an attemptedId before insert, pass it into the prepared statement, and classify outcomes by comparing the returned row.project_id to attemptedId (created vs reactivated_archived), with a no-row result remaining active_conflict.
- Hardened clone execution by inserting -- before clone URL in git argv and rejecting normalized GitHub URLs that start with - in startCloneProject.
Tests
- Added integration coverage for projectsDb.createProjectPath branches: fresh insert, archived reactivation, and active conflict.
- Added clone service test for option-prefixed githubUrl rejection (INVALID_GITHUB_URL).
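A sketch of both hardenings (simplified; the real statement and helpers live in projects.db.ts and the clone service, and the upsert assumes a SQLite build with RETURNING support):

import { randomUUID } from 'crypto';
import type { Database } from 'better-sqlite3';

// Outcome classification: compare the returned project_id against the id we
// attempted to insert. RETURNING yields no row when the conflicting row is
// active and the conditional update does not fire.
function createProjectPath(db: Database, projectPath: string) {
  const attemptedId = randomUUID();
  const row = db.prepare(`
    INSERT INTO projects (project_id, project_path, isArchived) VALUES (?, ?, 0)
    ON CONFLICT(project_path) DO UPDATE SET isArchived = 0
      WHERE projects.isArchived = 1
    RETURNING project_id
  `).get(attemptedId, projectPath) as { project_id: string } | undefined;

  if (!row) return { outcome: 'active_conflict' as const };
  if (row.project_id === attemptedId) return { outcome: 'created' as const };
  return { outcome: 'reactivated_archived' as const }; // pre-existing id came back: archived row revived
}

// Clone hardening: reject option-looking URLs and terminate git's option
// parsing with '--' so the URL can never be read as a flag.
function cloneArgs(normalizedUrl: string, targetDir: string): string[] {
  if (normalizedUrl.startsWith('-')) throw new Error('INVALID_GITHUB_URL');
  return ['clone', '--progress', '--', normalizedUrl, targetDir];
}
// usage: spawn('git', cloneArgs(url, dir))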
* refactor(session-synchronizer): update last scanned timestamp based on synchronization results
* refactor: improve session limit and offset validation in provider routes
* refactor: normalize project paths across database and service modules
* refactor(database): make session id the primary key in sessions table
* fix(codex): preserve reasoning entries as thinking blocks
Codex history normalization was downgrading reasoning into plain assistant text
because of branch ordering, not because the raw data was missing.
Why this mattered:
- Codex reasoning JSONL entries are intentionally mapped to history items with
type thinking, but they also carry message.role assistant.
- normalizeHistoryEntry evaluated the assistant-role branch before the
thinking branch.
- As a result, reasoning content matched the assistant-text path first and was
emitted as kind text instead of kind thinking.
- This collapses semantic intent, so UI and downstream features that rely on
thinking blocks (separate rendering, filtering, and interpretation of model
thought process vs final answer) receive the wrong message kind.
What changed:
- Prioritized thinking detection (raw.type === thinking or raw.isReasoning)
before role-based assistant normalization.
- Kept a non-empty content guard for thinking payloads to avoid emitting empty
artifacts.
Impact:
- Reasoning entries from persisted Codex JSONL now remain thinking blocks
end-to-end.
- Regular assistant text normalization behavior remains unchanged.
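A sketch of the corrected branch order (the raw entry shape and the hasNonEmptyContent helper are assumptions; only the field names come from the description above):

// Sketch: thinking detection must precede the assistant-role branch, because
// Codex reasoning entries also carry message.role === 'assistant'.
function normalizeHistoryEntry(raw: any) {
  if ((raw.type === 'thinking' || raw.isReasoning) && hasNonEmptyContent(raw)) {
    return { kind: 'thinking', content: raw.content }; // preserved end-to-end
  }
  if (raw.message?.role === 'assistant') {
    return { kind: 'text', content: raw.message.content }; // unchanged behavior
  }
  // ...other entry kinds unchanged...
  return null;
}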
* refactor: remove dead code
* refactor: directly use getProjectPathById from projectsDb
* refactor: add gemini jsonl session support
server/modules/database/connection.ts (new file, 143 lines)
@@ -0,0 +1,143 @@
/**
 * Database connection management.
 *
 * Owns the single SQLite connection used across all repositories.
 * Handles path resolution, directory creation, legacy database migration,
 * and eager app_config bootstrap so the auth middleware can read the
 * JWT secret before the full schema is applied.
 *
 * Consumers should never create their own Database instance — they use
 * `getConnection()` to obtain the shared singleton.
 */

import Database from 'better-sqlite3';
import fs from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';

import { APP_CONFIG_TABLE_SCHEMA_SQL } from '@/modules/database/schema.js';

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

// ---------------------------------------------------------------------------
// Path resolution
// ---------------------------------------------------------------------------

/**
 * Resolves the database file path from environment or falls back
 * to the legacy location inside the server/database/ folder.
 *
 * Priority:
 * 1. DATABASE_PATH environment variable (set by cli.js or load-env-vars.js)
 * 2. Legacy path: server/database/auth.db
 */
function resolveDatabasePath(): string {
  // process.env.DATABASE_PATH is set by load-env-vars.js to either the .env value
  // or a default (~/.cloudcli/auth.db) in the user's home directory.
  return process.env.DATABASE_PATH || resolveLegacyDatabasePath();
}

/**
 * Resolves the legacy database path (always inside server/database/).
 * Used for the one-time migration to the new external location.
 */
function resolveLegacyDatabasePath(): string {
  const serverDir = path.resolve(__dirname, '..', '..', '..');
  return path.join(serverDir, 'database', 'auth.db');
}

// ---------------------------------------------------------------------------
// Directory & migration helpers
// ---------------------------------------------------------------------------

function ensureDatabaseDirectory(dbPath: string): void {
  const dir = path.dirname(dbPath);
  if (!fs.existsSync(dir)) {
    fs.mkdirSync(dir, { recursive: true });
    console.log('Created database directory:', dir);
  }
}

/**
 * If the database was moved to an external location (e.g. ~/.cloudcli/)
 * but the user still has a legacy auth.db inside the install directory,
 * copy it to the new location as a one-time migration.
 */
function migrateLegacyDatabase(targetPath: string): void {
  const legacyPath = resolveLegacyDatabasePath();

  if (targetPath === legacyPath) return;
  if (fs.existsSync(targetPath)) return;
  if (!fs.existsSync(legacyPath)) return;

  try {
    fs.copyFileSync(legacyPath, targetPath);
    console.log('Migrated legacy database', { from: legacyPath, to: targetPath });

    // Copy the write-ahead log and shared memory files (auth.db-wal, auth.db-shm)
    // if they exist, to preserve any uncommitted transactions.
    for (const suffix of ['-wal', '-shm']) {
      const src = legacyPath + suffix;
      if (fs.existsSync(src)) {
        fs.copyFileSync(src, targetPath + suffix);
      }
    }
  } catch (err: any) {
    console.error('Could not migrate legacy database', { error: err.message });
  }
}

// ---------------------------------------------------------------------------
// Singleton connection
// ---------------------------------------------------------------------------

let instance: Database.Database | null = null;

/**
 * Returns the shared database connection, creating it on first call.
 *
 * The first invocation:
 * 1. Resolves the target database path
 * 2. Ensures the parent directory exists
 * 3. Migrates from the legacy install-directory path if needed
 * 4. Opens the SQLite connection
 * 5. Eagerly creates the app_config table (auth reads JWT secret at import time)
 * 6. Logs the database location
 */
export function getConnection(): Database.Database {
  if (instance) return instance;

  const dbPath = resolveDatabasePath();

  ensureDatabaseDirectory(dbPath);
  migrateLegacyDatabase(dbPath);

  instance = new Database(dbPath);

  // app_config must exist immediately — the auth middleware reads
  // the JWT secret at module-load time, before initializeDatabase() runs.
  instance.exec(APP_CONFIG_TABLE_SCHEMA_SQL);

  return instance;
}

/**
 * Returns the resolved database file path without opening a connection.
 * Useful for diagnostics and CLI status commands.
 */
export function getDatabasePath(): string {
  return resolveDatabasePath();
}

/**
 * Closes the database connection and clears the singleton.
 * Primarily used for graceful shutdown or testing.
 */
export function closeConnection(): void {
  if (instance) {
    instance.close();
    instance = null;
    console.log('Database connection closed');
  }
}
server/modules/database/index.ts (new file, 12 lines)
@@ -0,0 +1,12 @@
export { initializeDatabase } from '@/modules/database/init-db.js';
export { apiKeysDb } from '@/modules/database/repositories/api-keys.js';
export { appConfigDb } from '@/modules/database/repositories/app-config.js';
export { credentialsDb } from '@/modules/database/repositories/credentials.js';
export { githubTokensDb } from '@/modules/database/repositories/github-tokens.js';
export { notificationPreferencesDb } from '@/modules/database/repositories/notification-preferences.js';
export { projectsDb } from '@/modules/database/repositories/projects.db.js';
export { pushSubscriptionsDb } from '@/modules/database/repositories/push-subscriptions.js';
export { scanStateDb } from '@/modules/database/repositories/scan-state.db.js';
export { sessionsDb } from '@/modules/database/repositories/sessions.db.js';
export { userDb } from '@/modules/database/repositories/users.js';
export { vapidKeysDb } from '@/modules/database/repositories/vapid-keys.js';
server/modules/database/init-db.ts (new file, 17 lines)
@@ -0,0 +1,17 @@
import { getConnection } from "@/modules/database/connection.js";
import { runMigrations } from "@/modules/database/migrations.js";
import { INIT_SCHEMA_SQL } from "@/modules/database/schema.js";

// Initialize database with schema
export const initializeDatabase = async () => {
  try {
    const db = getConnection();
    db.exec(INIT_SCHEMA_SQL);
    console.log('Database schema applied');
    runMigrations(db);
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    console.log('Database initialization failed', { error: message });
    throw err;
  }
};
server/modules/database/migrations.ts (new file, 442 lines)
@@ -0,0 +1,442 @@
import { Database } from 'better-sqlite3';

import {
  APP_CONFIG_TABLE_SCHEMA_SQL,
  LAST_SCANNED_AT_SQL,
  PROJECTS_TABLE_SCHEMA_SQL,
  PUSH_SUBSCRIPTIONS_TABLE_SCHEMA_SQL,
  SESSIONS_TABLE_SCHEMA_SQL,
  USER_NOTIFICATION_PREFERENCES_TABLE_SCHEMA_SQL,
  VAPID_KEYS_TABLE_SCHEMA_SQL,
} from '@/modules/database/schema.js';

const SQLITE_UUID_SQL = `
  lower(hex(randomblob(4))) || '-' ||
  lower(hex(randomblob(2))) || '-' ||
  lower(hex(randomblob(2))) || '-' ||
  lower(hex(randomblob(2))) || '-' ||
  lower(hex(randomblob(6)))
`;

type TableInfoRow = {
  name: string;
  pk: number;
};

const addColumnToTableIfNotExists = (
  db: Database,
  tableName: string,
  columnNames: string[],
  columnName: string,
  columnType: string
) => {
  if (!columnNames.includes(columnName)) {
    console.log(`Running migration: Adding ${columnName} column to ${tableName} table`);
    db.exec(`ALTER TABLE ${tableName} ADD COLUMN ${columnName} ${columnType}`);
  }
};

const tableExists = (db: Database, tableName: string): boolean =>
  Boolean(
    db
      .prepare("SELECT name FROM sqlite_master WHERE type = 'table' AND name = ?")
      .get(tableName)
  );

const getTableInfo = (db: Database, tableName: string): TableInfoRow[] =>
  db.prepare(`PRAGMA table_info(${tableName})`).all() as TableInfoRow[];

const migrateLegacySessionNames = (db: Database): void => {
  const hasLegacySessionNamesTable = tableExists(db, 'session_names');
  const hasSessionsTable = tableExists(db, 'sessions');

  if (!hasLegacySessionNamesTable) {
    return;
  }

  if (hasSessionsTable) {
    console.log('Running migration: Merging session_names into sessions');
    db.exec(`
      INSERT INTO sessions (session_id, provider, custom_name, created_at, updated_at)
      SELECT
        session_id,
        COALESCE(provider, 'claude'),
        custom_name,
        COALESCE(created_at, CURRENT_TIMESTAMP),
        COALESCE(updated_at, CURRENT_TIMESTAMP)
      FROM session_names
      ON CONFLICT(session_id) DO UPDATE SET
        provider = excluded.provider,
        custom_name = COALESCE(excluded.custom_name, sessions.custom_name),
        created_at = COALESCE(sessions.created_at, excluded.created_at),
        updated_at = COALESCE(excluded.updated_at, sessions.updated_at)
    `);
    db.exec('DROP TABLE session_names');
    return;
  }

  console.log('Running migration: Renaming session_names table to sessions');
  db.exec('ALTER TABLE session_names RENAME TO sessions');
};

const migrateLegacyWorkspaceTableIntoProjects = (db: Database): void => {
  db.exec(PROJECTS_TABLE_SCHEMA_SQL);

  if (!tableExists(db, 'workspace_original_paths')) {
    return;
  }

  console.log('Running migration: Migrating workspace_original_paths data into projects');
  db.exec(`
    INSERT INTO projects (project_id, project_path, custom_project_name, isStarred, isArchived)
    SELECT
      CASE
        WHEN workspace_id IS NULL OR trim(workspace_id) = ''
          THEN ${SQLITE_UUID_SQL}
        ELSE workspace_id
      END,
      workspace_path,
      custom_workspace_name,
      COALESCE(isStarred, 0),
      0
    FROM workspace_original_paths
    WHERE workspace_path IS NOT NULL AND trim(workspace_path) <> ''
    ON CONFLICT(project_path) DO UPDATE SET
      custom_project_name = COALESCE(projects.custom_project_name, excluded.custom_project_name),
      isStarred = COALESCE(projects.isStarred, excluded.isStarred)
  `);
};

const rebuildProjectsTableWithPrimaryKeySchema = (db: Database): void => {
  const hasProjectsTable = tableExists(db, 'projects');
  if (!hasProjectsTable) {
    db.exec(PROJECTS_TABLE_SCHEMA_SQL);
    return;
  }

  const projectsTableInfo = getTableInfo(db, 'projects');
  const columnNames = projectsTableInfo.map((column) => column.name);
  const hasProjectIdPrimaryKey = projectsTableInfo.some(
    (column) => column.name === 'project_id' && column.pk === 1,
  );

  if (hasProjectIdPrimaryKey) {
    addColumnToTableIfNotExists(db, 'projects', columnNames, 'custom_project_name', 'TEXT DEFAULT NULL');
    addColumnToTableIfNotExists(db, 'projects', columnNames, 'isStarred', 'BOOLEAN DEFAULT 0');
    addColumnToTableIfNotExists(db, 'projects', columnNames, 'isArchived', 'BOOLEAN DEFAULT 0');
    db.exec(`
      UPDATE projects
      SET project_id = ${SQLITE_UUID_SQL}
      WHERE project_id IS NULL OR trim(project_id) = ''
    `);
    return;
  }

  console.log('Running migration: Rebuilding projects table to enforce project_id primary key');

  const projectPathExpression = columnNames.includes('project_path')
    ? 'project_path'
    : columnNames.includes('workspace_path')
      ? 'workspace_path'
      : 'NULL';

  const customProjectNameExpression = columnNames.includes('custom_project_name')
    ? 'custom_project_name'
    : columnNames.includes('custom_workspace_name')
      ? 'custom_workspace_name'
      : 'NULL';

  const isStarredExpression = columnNames.includes('isStarred') ? 'COALESCE(isStarred, 0)' : '0';

  const isArchivedExpression = columnNames.includes('isArchived') ? 'COALESCE(isArchived, 0)' : '0';

  const projectIdExpression = columnNames.includes('project_id')
    ? `CASE
        WHEN project_id IS NULL OR trim(project_id) = ''
          THEN ${SQLITE_UUID_SQL}
        ELSE project_id
      END`
    : SQLITE_UUID_SQL;

  db.exec('PRAGMA foreign_keys = OFF');
  try {
    db.exec('BEGIN TRANSACTION');
    db.exec('DROP TABLE IF EXISTS projects__new');
    db.exec(`
      CREATE TABLE projects__new (
        project_id TEXT PRIMARY KEY NOT NULL,
        project_path TEXT NOT NULL UNIQUE,
        custom_project_name TEXT DEFAULT NULL,
        isStarred BOOLEAN DEFAULT 0,
        isArchived BOOLEAN DEFAULT 0
      )
    `);
    db.exec(`
      WITH source_rows AS (
        SELECT
          ${projectPathExpression} AS project_path,
          ${customProjectNameExpression} AS custom_project_name,
          ${isStarredExpression} AS isStarred,
          ${isArchivedExpression} AS isArchived,
          ${projectIdExpression} AS candidate_project_id,
          rowid AS source_rowid
        FROM projects
        WHERE ${projectPathExpression} IS NOT NULL AND trim(${projectPathExpression}) <> ''
      ),
      deduped_paths AS (
        SELECT
          project_path,
          custom_project_name,
          isStarred,
          isArchived,
          candidate_project_id,
          source_rowid,
          ROW_NUMBER() OVER (PARTITION BY project_path ORDER BY source_rowid) AS project_path_rank
        FROM source_rows
      ),
      prepared_rows AS (
        SELECT
          CASE
            WHEN ROW_NUMBER() OVER (PARTITION BY candidate_project_id ORDER BY source_rowid) = 1
              THEN candidate_project_id
            ELSE ${SQLITE_UUID_SQL}
          END AS project_id,
          project_path,
          custom_project_name,
          isStarred,
          isArchived
        FROM deduped_paths
        WHERE project_path_rank = 1
      )
      INSERT INTO projects__new (
        project_id,
        project_path,
        custom_project_name,
        isStarred,
        isArchived
      )
      SELECT
        project_id,
        project_path,
        custom_project_name,
        isStarred,
        isArchived
      FROM prepared_rows
    `);
    db.exec('DROP TABLE projects');
    db.exec('ALTER TABLE projects__new RENAME TO projects');
    db.exec('COMMIT');
  } catch (migrationError) {
    db.exec('ROLLBACK');
    throw migrationError;
  } finally {
    db.exec('PRAGMA foreign_keys = ON');
  }
};

const rebuildSessionsTableWithProjectSchema = (db: Database): void => {
  const hasSessions = tableExists(db, 'sessions');
  if (!hasSessions) {
    db.exec(SESSIONS_TABLE_SCHEMA_SQL);
    return;
  }

  const sessionsTableInfo = getTableInfo(db, 'sessions');
  const columnNames = sessionsTableInfo.map((column) => column.name);
  const primaryKeyColumns = sessionsTableInfo
    .filter((column) => column.pk > 0)
    .sort((a, b) => a.pk - b.pk)
    .map((column) => column.name);

  const shouldRebuild =
    !columnNames.includes('project_path') ||
    primaryKeyColumns.length !== 1 ||
    primaryKeyColumns[0] !== 'session_id' ||
    !columnNames.includes('provider');

  if (!shouldRebuild) {
    addColumnToTableIfNotExists(db, 'sessions', columnNames, 'jsonl_path', 'TEXT');
    addColumnToTableIfNotExists(db, 'sessions', columnNames, 'created_at', 'DATETIME');
    addColumnToTableIfNotExists(db, 'sessions', columnNames, 'updated_at', 'DATETIME');
    db.exec('UPDATE sessions SET created_at = COALESCE(created_at, CURRENT_TIMESTAMP)');
    db.exec('UPDATE sessions SET updated_at = COALESCE(updated_at, CURRENT_TIMESTAMP)');
    return;
  }

  console.log('Running migration: Rebuilding sessions table to project-based schema');

  const projectPathExpression = columnNames.includes('project_path')
    ? 'project_path'
    : columnNames.includes('workspace_path')
      ? 'workspace_path'
      : 'NULL';

  const providerExpression = columnNames.includes('provider')
    ? "COALESCE(provider, 'claude')"
    : "'claude'";

  const customNameExpression = columnNames.includes('custom_name')
    ? 'custom_name'
    : 'NULL';

  const jsonlPathExpression = columnNames.includes('jsonl_path')
    ? 'jsonl_path'
    : 'NULL';

  const createdAtExpression = columnNames.includes('created_at')
    ? 'COALESCE(created_at, CURRENT_TIMESTAMP)'
    : 'CURRENT_TIMESTAMP';

  const updatedAtExpression = columnNames.includes('updated_at')
    ? 'COALESCE(updated_at, CURRENT_TIMESTAMP)'
    : 'CURRENT_TIMESTAMP';

  db.exec('PRAGMA foreign_keys = OFF');
  try {
    db.exec('BEGIN TRANSACTION');
    db.exec('DROP TABLE IF EXISTS sessions__new');
    db.exec(`
      CREATE TABLE sessions__new (
        session_id TEXT NOT NULL,
        provider TEXT NOT NULL DEFAULT 'claude',
        custom_name TEXT,
        project_path TEXT,
        jsonl_path TEXT,
        created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
        updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,
        PRIMARY KEY (session_id),
        FOREIGN KEY (project_path) REFERENCES projects(project_path)
          ON DELETE SET NULL
          ON UPDATE CASCADE
      )
    `);
    db.exec(`
      WITH source_rows AS (
        SELECT
          session_id,
          ${providerExpression} AS provider,
          ${customNameExpression} AS custom_name,
          ${projectPathExpression} AS project_path,
          ${jsonlPathExpression} AS jsonl_path,
          ${createdAtExpression} AS created_at,
          ${updatedAtExpression} AS updated_at,
          rowid AS source_rowid
        FROM sessions
        WHERE session_id IS NOT NULL AND trim(session_id) <> ''
      ),
      ranked_rows AS (
        SELECT
          session_id,
          provider,
          custom_name,
          project_path,
          jsonl_path,
          created_at,
          updated_at,
          ROW_NUMBER() OVER (
            PARTITION BY session_id
            ORDER BY datetime(COALESCE(updated_at, created_at)) DESC, source_rowid DESC
          ) AS session_rank
        FROM source_rows
      )
      INSERT INTO sessions__new (
        session_id,
        provider,
        custom_name,
        project_path,
        jsonl_path,
        created_at,
        updated_at
      )
      SELECT
        session_id,
        provider,
        custom_name,
        project_path,
        jsonl_path,
        created_at,
        updated_at
      FROM ranked_rows
      WHERE session_rank = 1
    `);
    db.exec('DROP TABLE sessions');
    db.exec('ALTER TABLE sessions__new RENAME TO sessions');
    db.exec('COMMIT');
  } catch (migrationError) {
    db.exec('ROLLBACK');
    throw migrationError;
  } finally {
    db.exec('PRAGMA foreign_keys = ON');
  }
};

const ensureProjectsForSessionPaths = (db: Database): void => {
  if (!tableExists(db, 'sessions')) {
    return;
  }

  db.exec(`
    INSERT INTO projects (project_id, project_path, custom_project_name, isStarred, isArchived)
    SELECT
      ${SQLITE_UUID_SQL},
      project_path,
      NULL,
      0,
      0
    FROM sessions
    WHERE project_path IS NOT NULL AND trim(project_path) <> ''
    ON CONFLICT(project_path) DO NOTHING
  `);
};

export const runMigrations = (db: Database) => {
  try {
    const usersTableInfo = db.prepare('PRAGMA table_info(users)').all() as { name: string }[];
    const userColumnNames = usersTableInfo.map((column) => column.name);

    addColumnToTableIfNotExists(db, 'users', userColumnNames, 'git_name', 'TEXT');
    addColumnToTableIfNotExists(db, 'users', userColumnNames, 'git_email', 'TEXT');
    addColumnToTableIfNotExists(
      db,
      'users',
      userColumnNames,
      'has_completed_onboarding',
      'BOOLEAN DEFAULT 0'
    );

    db.exec(APP_CONFIG_TABLE_SCHEMA_SQL);
    db.exec(USER_NOTIFICATION_PREFERENCES_TABLE_SCHEMA_SQL);
    db.exec(VAPID_KEYS_TABLE_SCHEMA_SQL);
    db.exec(PUSH_SUBSCRIPTIONS_TABLE_SCHEMA_SQL);
    db.exec('CREATE INDEX IF NOT EXISTS idx_push_subscriptions_user_id ON push_subscriptions(user_id)');

    db.exec(PROJECTS_TABLE_SCHEMA_SQL);
    rebuildProjectsTableWithPrimaryKeySchema(db);

    migrateLegacyWorkspaceTableIntoProjects(db);
    rebuildSessionsTableWithProjectSchema(db);
    migrateLegacySessionNames(db);
    ensureProjectsForSessionPaths(db);

    db.exec('CREATE INDEX IF NOT EXISTS idx_session_ids_lookup ON sessions(session_id)');
    db.exec('CREATE INDEX IF NOT EXISTS idx_sessions_project_path ON sessions(project_path)');
    db.exec('CREATE INDEX IF NOT EXISTS idx_projects_is_starred ON projects(isStarred)');
    db.exec('CREATE INDEX IF NOT EXISTS idx_projects_is_archived ON projects(isArchived)');

    db.exec('DROP INDEX IF EXISTS idx_session_names_lookup');
    db.exec('DROP INDEX IF EXISTS idx_sessions_workspace_path');
    db.exec('DROP INDEX IF EXISTS idx_workspace_original_paths_is_starred');
    db.exec('DROP INDEX IF EXISTS idx_workspace_original_paths_workspace_id');

    if (tableExists(db, 'workspace_original_paths')) {
      console.log('Running migration: Dropping legacy workspace_original_paths table');
      db.exec('DROP TABLE workspace_original_paths');
    }

    db.exec(LAST_SCANNED_AT_SQL);
    console.log('Database migrations completed successfully');
  } catch (error: any) {
    console.error('Error running migrations:', error.message);
    throw error;
  }
};
server/modules/database/repositories/api-keys.ts (new file, 119 lines)
@@ -0,0 +1,119 @@
/**
 * API keys repository.
 *
 * Manages API keys used for external/programmatic access to the backend.
 * Keys are prefixed with `ck_` and tied to a user via foreign key.
 */

import crypto from 'crypto';

import { getConnection } from '@/modules/database/connection.js';

type ApiKeyRow = {
  id: number;
  key_name: string;
  api_key: string;
  created_at: string;
  last_used: string | null;
  is_active: number;
};

type CreateApiKeyResult = {
  id: number | bigint;
  keyName: string;
  apiKey: string;
};

type ValidatedApiKeyUser = {
  id: number;
  username: string;
  api_key_id: number;
};

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

/** Generates a cryptographically random API key with the `ck_` prefix. */
function generateApiKey(): string {
  return 'ck_' + crypto.randomBytes(32).toString('hex');
}

// ---------------------------------------------------------------------------
// Queries
// ---------------------------------------------------------------------------

export const apiKeysDb = {
  generateApiKey,

  /** Creates a new API key for the given user and returns it for one-time display. */
  createApiKey(userId: number, keyName: string): CreateApiKeyResult {
    const db = getConnection();
    const apiKey = generateApiKey();
    const result = db
      .prepare(
        'INSERT INTO api_keys (user_id, key_name, api_key) VALUES (?, ?, ?)'
      )
      .run(userId, keyName, apiKey);
    return { id: result.lastInsertRowid, keyName, apiKey };
  },

  /** Lists all API keys for a user, most recent first. */
  getApiKeys(userId: number): ApiKeyRow[] {
    const db = getConnection();
    return db
      .prepare(
        'SELECT id, key_name, api_key, created_at, last_used, is_active FROM api_keys WHERE user_id = ? ORDER BY created_at DESC'
      )
      .all(userId) as ApiKeyRow[];
  },

  /**
   * Validates an API key and resolves the owning user.
   * If the key is valid, its `last_used` timestamp is updated as a side effect.
   * Returns undefined when the key is invalid or the user is inactive.
   */
  validateApiKey(apiKey: string): ValidatedApiKeyUser | undefined {
    const db = getConnection();
    const row = db
      .prepare(
        `SELECT u.id, u.username, ak.id as api_key_id
         FROM api_keys ak
         JOIN users u ON ak.user_id = u.id
         WHERE ak.api_key = ? AND ak.is_active = 1 AND u.is_active = 1`
      )
      .get(apiKey) as ValidatedApiKeyUser | undefined;

    if (row) {
      db.prepare(
        'UPDATE api_keys SET last_used = CURRENT_TIMESTAMP WHERE id = ?'
      ).run(row.api_key_id);
    }

    return row;
  },

  /** Permanently removes an API key. Returns true if a row was deleted. */
  deleteApiKey(userId: number, apiKeyId: number): boolean {
    const db = getConnection();
    const result = db
      .prepare('DELETE FROM api_keys WHERE id = ? AND user_id = ?')
      .run(apiKeyId, userId);
    return result.changes > 0;
  },

  /** Enables or disables an API key without deleting it. */
  toggleApiKey(
    userId: number,
    apiKeyId: number,
    isActive: boolean
  ): boolean {
    const db = getConnection();
    const result = db
      .prepare(
        'UPDATE api_keys SET is_active = ? WHERE id = ? AND user_id = ?'
      )
      .run(isActive ? 1 : 0, apiKeyId, userId);
    return result.changes > 0;
  },
};
server/modules/database/repositories/app-config.ts (new file, 53 lines)
@@ -0,0 +1,53 @@
/**
 * App config repository.
 *
 * Key-value store for application-level configuration that persists
 * across restarts (JWT secret, feature flags, etc.). Values are always
 * stored as strings; callers handle parsing.
 */

import crypto from 'crypto';

import { getConnection } from '@/modules/database/connection.js';

// ---------------------------------------------------------------------------
// Queries
// ---------------------------------------------------------------------------

export const appConfigDb = {
  /** Returns the stored value for a config key, or null if missing. */
  get(key: string): string | null {
    try {
      const db = getConnection();
      const row = db
        .prepare('SELECT value FROM app_config WHERE key = ?')
        .get(key) as { value: string } | undefined;
      return row?.value ?? null;
    } catch {
      // Swallow errors so early-startup reads (e.g. JWT secret) do not crash.
      return null;
    }
  },

  /** Inserts or updates a config key (upsert). */
  set(key: string, value: string): void {
    const db = getConnection();
    db.prepare(
      'INSERT INTO app_config (key, value) VALUES (?, ?) ON CONFLICT(key) DO UPDATE SET value = excluded.value'
    ).run(key, value);
  },

  /**
   * Returns the JWT signing secret, generating and persisting one
   * if it does not already exist. This ensures the secret survives
   * server restarts while being created automatically on first boot.
   */
  getOrCreateJwtSecret(): string {
    let secret = appConfigDb.get('jwt_secret');
    if (!secret) {
      secret = crypto.randomBytes(64).toString('hex');
      appConfigDb.set('jwt_secret', secret);
    }
    return secret;
  },
};
server/modules/database/repositories/credentials.ts (new file, 106 lines)
@@ -0,0 +1,106 @@
/**
 * User credentials repository.
 *
 * Manages external service tokens (GitHub, GitLab, Bitbucket, etc.)
 * stored per-user. Each credential has a type discriminator so multiple
 * credential kinds can coexist in the same table.
 */

import { getConnection } from '@/modules/database/connection.js';
import type {
  CreateCredentialResult,
  CredentialPublicRow,
} from '@/shared/types.js';

// ---------------------------------------------------------------------------
// Queries
// ---------------------------------------------------------------------------

export const credentialsDb = {
  /** Stores a new credential and returns a safe (no raw value) result. */
  createCredential(
    userId: number,
    credentialName: string,
    credentialType: string,
    credentialValue: string,
    description: string | null = null
  ): CreateCredentialResult {
    const db = getConnection();
    const result = db
      .prepare(
        'INSERT INTO user_credentials (user_id, credential_name, credential_type, credential_value, description) VALUES (?, ?, ?, ?, ?)'
      )
      .run(userId, credentialName, credentialType, credentialValue, description);
    return {
      id: result.lastInsertRowid,
      credentialName,
      credentialType,
    };
  },

  /**
   * Lists credentials for a user (excluding raw values).
   * Optionally filters by credential type (e.g. 'github_token').
   */
  getCredentials(
    userId: number,
    credentialType: string | null = null
  ): CredentialPublicRow[] {
    const db = getConnection();

    if (credentialType) {
      return db
        .prepare(
          'SELECT id, credential_name, credential_type, description, created_at, is_active FROM user_credentials WHERE user_id = ? AND credential_type = ? ORDER BY created_at DESC'
        )
        .all(userId, credentialType) as CredentialPublicRow[];
    }

    return db
      .prepare(
        'SELECT id, credential_name, credential_type, description, created_at, is_active FROM user_credentials WHERE user_id = ? ORDER BY created_at DESC'
      )
      .all(userId) as CredentialPublicRow[];
  },

  /**
   * Returns the raw credential value for the most recent active
   * credential of the given type, or null if none exists.
   */
  getActiveCredential(
    userId: number,
    credentialType: string
  ): string | null {
    const db = getConnection();
    const row = db
      .prepare(
        'SELECT credential_value FROM user_credentials WHERE user_id = ? AND credential_type = ? AND is_active = 1 ORDER BY created_at DESC LIMIT 1'
      )
      .get(userId, credentialType) as { credential_value: string } | undefined;
    return row?.credential_value ?? null;
  },

  /** Permanently removes a credential. Returns true if a row was deleted. */
  deleteCredential(userId: number, credentialId: number): boolean {
    const db = getConnection();
    const result = db
      .prepare('DELETE FROM user_credentials WHERE id = ? AND user_id = ?')
      .run(credentialId, userId);
    return result.changes > 0;
  },

  /** Enables or disables a credential without deleting it. */
  toggleCredential(
    userId: number,
    credentialId: number,
    isActive: boolean
  ): boolean {
    const db = getConnection();
    const result = db
      .prepare(
        'UPDATE user_credentials SET is_active = ? WHERE id = ? AND user_id = ?'
      )
      .run(isActive ? 1 : 0, credentialId, userId);
    return result.changes > 0;
  },
};
server/modules/database/repositories/github-tokens.ts (new file, 100 lines)
@@ -0,0 +1,100 @@
/**
 * GitHub tokens repository.
 *
 * Backward-compatible helper layer over generic credentials storage.
 * Tokens are stored in `user_credentials` with `credential_type = 'github_token'`.
 */

import { getConnection } from '@/modules/database/connection.js';
import { credentialsDb } from '@/modules/database/repositories/credentials.js';
import type {
  CredentialPublicRow,
  CreateCredentialResult,
} from '@/shared/types.js';

const GITHUB_TOKEN_TYPE = 'github_token';

type CredentialRow = {
  id: number;
  user_id: number;
  credential_name: string;
  credential_type: string;
  credential_value: string;
  description: string | null;
  created_at: string;
  is_active: number;
};

type GithubTokenLookup = CredentialRow & {
  github_token: string;
};

export const githubTokensDb = {
  /** Creates a GitHub token credential entry. */
  createGithubToken(
    userId: number,
    tokenName: string,
    githubToken: string,
    description: string | null = null
  ): CreateCredentialResult {
    return credentialsDb.createCredential(
      userId,
      tokenName,
      GITHUB_TOKEN_TYPE,
      githubToken,
      description
    );
  },

  /** Returns all GitHub tokens (safe shape: no credential value). */
  getGithubTokens(userId: number): CredentialPublicRow[] {
    return credentialsDb.getCredentials(userId, GITHUB_TOKEN_TYPE);
  },

  /** Returns the most recent active GitHub token value for a user. */
  getActiveGithubToken(userId: number): string | null {
    return credentialsDb.getActiveCredential(userId, GITHUB_TOKEN_TYPE);
  },

  /**
   * Returns a specific active GitHub token row by id/user, including
   * a `github_token` compatibility field.
   */
  getGithubTokenById(userId: number, tokenId: number): GithubTokenLookup | null {
    const db = getConnection();
    const row = db
      .prepare(
        `SELECT *
         FROM user_credentials
         WHERE id = ? AND user_id = ? AND credential_type = ? AND is_active = 1`
      )
      .get(tokenId, userId, GITHUB_TOKEN_TYPE) as CredentialRow | undefined;

    if (!row) return null;

    return {
      ...row,
      github_token: row.credential_value,
    };
  },

  /** Updates active state for a GitHub token. */
  updateGithubToken(
    userId: number,
    tokenId: number,
    isActive: boolean
  ): boolean {
    return credentialsDb.toggleCredential(userId, tokenId, isActive);
  },

  /** Deletes a GitHub token. */
  deleteGithubToken(userId: number, tokenId: number): boolean {
    return credentialsDb.deleteCredential(userId, tokenId);
  },

  // Legacy alias used by existing routes
  toggleGithubToken(userId: number, tokenId: number, isActive: boolean): boolean {
    return githubTokensDb.updateGithubToken(userId, tokenId, isActive);
  },
};
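A short sketch of how a route might drive this compatibility layer; ids and the token value are hypothetical placeholders:

// Sketch only: all literals below are placeholders.
githubTokensDb.createGithubToken(1, 'personal', 'ghp_example', 'token used for clones');
const active = githubTokensDb.getActiveGithubToken(1); // newest active value, or null
const row = githubTokensDb.getGithubTokenById(1, 42);  // null when missing or inactive
if (row) {
  // row.github_token mirrors row.credential_value for legacy callers
}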
server/modules/database/repositories/notification-preferences.ts (new file, 103 lines)
@@ -0,0 +1,103 @@
/**
 * Notification preferences repository.
 *
 * Stores per-user notification channel/event preferences as JSON.
 */

import { getConnection } from '@/modules/database/connection.js';

type NotificationPreferences = {
  channels: {
    inApp: boolean;
    webPush: boolean;
  };
  events: {
    actionRequired: boolean;
    stop: boolean;
    error: boolean;
  };
};

const DEFAULT_NOTIFICATION_PREFERENCES: NotificationPreferences = {
  channels: {
    inApp: false,
    webPush: false,
  },
  events: {
    actionRequired: true,
    stop: true,
    error: true,
  },
};

function normalizeNotificationPreferences(value: unknown): NotificationPreferences {
  const source = value && typeof value === 'object' ? (value as Record<string, any>) : {};

  return {
    channels: {
      inApp: source.channels?.inApp === true,
      webPush: source.channels?.webPush === true,
    },
    events: {
      actionRequired: source.events?.actionRequired !== false,
      stop: source.events?.stop !== false,
      error: source.events?.error !== false,
    },
  };
}

export const notificationPreferencesDb = {
  /** Returns the normalized preferences for a user, creating defaults on first read. */
  getNotificationPreferences(userId: number): NotificationPreferences {
    const db = getConnection();
    const row = db
      .prepare(
        'SELECT preferences_json FROM user_notification_preferences WHERE user_id = ?'
      )
      .get(userId) as { preferences_json: string } | undefined;

    if (!row) {
      const defaults = normalizeNotificationPreferences(DEFAULT_NOTIFICATION_PREFERENCES);
      db.prepare(
        'INSERT INTO user_notification_preferences (user_id, preferences_json, updated_at) VALUES (?, ?, CURRENT_TIMESTAMP)'
      ).run(userId, JSON.stringify(defaults));
      return defaults;
    }

    let parsed: unknown;
    try {
      parsed = JSON.parse(row.preferences_json);
    } catch {
      parsed = DEFAULT_NOTIFICATION_PREFERENCES;
    }
    return normalizeNotificationPreferences(parsed);
  },

  /** Upserts normalized preferences for a user and returns the stored value. */
  updateNotificationPreferences(
    userId: number,
    preferences: unknown
  ): NotificationPreferences {
    const normalized = normalizeNotificationPreferences(preferences);
    const db = getConnection();

    db.prepare(
      `INSERT INTO user_notification_preferences (user_id, preferences_json, updated_at)
       VALUES (?, ?, CURRENT_TIMESTAMP)
       ON CONFLICT(user_id) DO UPDATE SET
         preferences_json = excluded.preferences_json,
         updated_at = CURRENT_TIMESTAMP`
    ).run(userId, JSON.stringify(normalized));

    return normalized;
  },

  // Legacy aliases used by existing services/routes
  getPreferences(userId: number): NotificationPreferences {
    return notificationPreferencesDb.getNotificationPreferences(userId);
  },
  updatePreferences(userId: number, preferences: unknown): NotificationPreferences {
    return notificationPreferencesDb.updateNotificationPreferences(userId, preferences);
  },
};
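Worth noting how the normalizer treats partial input: channels are opt-in (strict `=== true`, so unspecified channels stay off), while events are opt-out (`!== false`, so unspecified events stay on). A sketch:

// Sketch: partial input is filled out deterministically by the normalizer.
const stored = notificationPreferencesDb.updateNotificationPreferences(1, {
  channels: { webPush: true },
  events: { stop: false },
});
// stored => {
//   channels: { inApp: false, webPush: true },                     // unspecified channel stays off
//   events:   { actionRequired: true, stop: false, error: true },  // unspecified events stay on
// }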
@@ -0,0 +1,72 @@
import assert from 'node:assert/strict';
import { mkdtemp, rm } from 'node:fs/promises';
import { tmpdir } from 'node:os';
import path from 'node:path';
import test from 'node:test';

import { closeConnection } from '@/modules/database/connection.js';
import { initializeDatabase } from '@/modules/database/init-db.js';
import { projectsDb } from '@/modules/database/repositories/projects.db.js';

async function withIsolatedDatabase(runTest: () => void | Promise<void>): Promise<void> {
  const previousDatabasePath = process.env.DATABASE_PATH;
  const tempDirectory = await mkdtemp(path.join(tmpdir(), 'projects-db-'));
  const databasePath = path.join(tempDirectory, 'auth.db');

  closeConnection();
  process.env.DATABASE_PATH = databasePath;
  await initializeDatabase();

  try {
    await runTest();
  } finally {
    closeConnection();
    if (previousDatabasePath === undefined) {
      delete process.env.DATABASE_PATH;
    } else {
      process.env.DATABASE_PATH = previousDatabasePath;
    }
    await rm(tempDirectory, { recursive: true, force: true });
  }
}

test('projectsDb.createProjectPath returns created for fresh paths', async () => {
  await withIsolatedDatabase(() => {
    const created = projectsDb.createProjectPath('/workspace/new-project');

    assert.equal(created.outcome, 'created');
    assert.ok(created.project);
    assert.equal(created.project?.project_path, '/workspace/new-project');
    assert.equal(created.project?.isArchived, 0);
  });
});

test('projectsDb.createProjectPath returns reactivated_archived for archived duplicates', async () => {
  await withIsolatedDatabase(() => {
    const initial = projectsDb.createProjectPath('/workspace/archived-project', 'Archived Project');
    assert.equal(initial.outcome, 'created');
    assert.ok(initial.project);

    projectsDb.updateProjectIsArchived('/workspace/archived-project', true);

    const reused = projectsDb.createProjectPath('/workspace/archived-project', 'Renamed Project');
    assert.equal(reused.outcome, 'reactivated_archived');
    assert.ok(reused.project);
    assert.equal(reused.project?.project_id, initial.project?.project_id);
    assert.equal(reused.project?.isArchived, 0);
  });
});

test('projectsDb.createProjectPath returns active_conflict for active duplicates', async () => {
  await withIsolatedDatabase(() => {
    const initial = projectsDb.createProjectPath('/workspace/active-project');
    assert.equal(initial.outcome, 'created');
    assert.ok(initial.project);

    const conflict = projectsDb.createProjectPath('/workspace/active-project');
    assert.equal(conflict.outcome, 'active_conflict');
    assert.ok(conflict.project);
    assert.equal(conflict.project?.project_id, initial.project?.project_id);
    assert.equal(conflict.project?.isArchived, 0);
  });
});
server/modules/database/repositories/projects.db.ts (new file, 183 lines)
@@ -0,0 +1,183 @@
import { randomUUID } from 'node:crypto';
import path from 'node:path';

import { getConnection } from '@/modules/database/connection.js';
import type { CreateProjectPathResult, ProjectRepositoryRow } from '@/shared/types.js';
import { normalizeProjectPath } from '@/shared/utils.js';

function normalizeProjectDisplayName(projectPath: string, customProjectName: string | null): string {
  const trimmedCustomName = typeof customProjectName === 'string' ? customProjectName.trim() : '';
  if (trimmedCustomName.length > 0) {
    return trimmedCustomName;
  }

  const directoryName = path.basename(projectPath);
  return directoryName || projectPath;
}

export const projectsDb = {
  createProjectPath(projectPath: string, customProjectName: string | null = null): CreateProjectPathResult {
    const db = getConnection();
    const normalizedProjectPath = normalizeProjectPath(projectPath);
    const normalizedProjectName = normalizeProjectDisplayName(normalizedProjectPath, customProjectName);
    const attemptedId = randomUUID();
    const row = db.prepare(`
      INSERT INTO projects (project_id, project_path, custom_project_name, isArchived)
      VALUES (?, ?, ?, 0)
      ON CONFLICT(project_path) DO UPDATE SET
        isArchived = 0
      WHERE projects.isArchived = 1
      RETURNING project_id, project_path, custom_project_name, isStarred, isArchived
    `).get(attemptedId, normalizedProjectPath, normalizedProjectName) as ProjectRepositoryRow | undefined;

    if (row) {
      return {
        outcome: row.project_id === attemptedId ? 'created' : 'reactivated_archived',
        project: row,
      };
    }

    const existingProject = projectsDb.getProjectPath(normalizedProjectPath);
    return {
      outcome: 'active_conflict',
      project: existingProject,
    };
  },

  getProjectPath(projectPath: string): ProjectRepositoryRow | null {
    const db = getConnection();
    const normalizedProjectPath = normalizeProjectPath(projectPath);
    const row = db.prepare(`
      SELECT project_id, project_path, custom_project_name, isStarred, isArchived
      FROM projects
      WHERE project_path = ?
    `).get(normalizedProjectPath) as ProjectRepositoryRow | undefined;

    return row ?? null;
  },

  getProjectById(projectId: string): ProjectRepositoryRow | null {
    const db = getConnection();
    const row = db.prepare(`
      SELECT project_id, project_path, custom_project_name, isStarred, isArchived
      FROM projects
      WHERE project_id = ?
    `).get(projectId) as ProjectRepositoryRow | undefined;

    return row ?? null;
  },

  /**
   * Resolve the absolute project directory from a database project_id.
   *
   * This is the canonical lookup used after the projectName → projectId migration:
   * API routes receive the DB-assigned `projectId` and must resolve the real folder
   * path through this helper before touching the filesystem. Returns `null` when the
   * project row does not exist so callers can respond with a 404.
   */
  getProjectPathById(projectId: string): string | null {
    const db = getConnection();
    const row = db.prepare(`
      SELECT project_path
      FROM projects
      WHERE project_id = ?
    `).get(projectId) as Pick<ProjectRepositoryRow, 'project_path'> | undefined;

    return row?.project_path ?? null;
  },

  getProjectPaths(): ProjectRepositoryRow[] {
    const db = getConnection();
    return db.prepare(`
      SELECT project_id, project_path, custom_project_name, isStarred, isArchived
      FROM projects
      WHERE isArchived = 0
    `).all() as ProjectRepositoryRow[];
  },

  getCustomProjectName(projectPath: string): string | null {
    const db = getConnection();
    const normalizedProjectPath = normalizeProjectPath(projectPath);
    const row = db.prepare(`
      SELECT custom_project_name
      FROM projects
      WHERE project_path = ?
    `).get(normalizedProjectPath) as Pick<ProjectRepositoryRow, 'custom_project_name'> | undefined;

    return row?.custom_project_name ?? null;
  },

  updateCustomProjectName(projectPath: string, customProjectName: string | null): void {
    const db = getConnection();
    const normalizedProjectPath = normalizeProjectPath(projectPath);
    db.prepare(`
      INSERT INTO projects (project_id, project_path, custom_project_name)
      VALUES (?, ?, ?)
      ON CONFLICT(project_path) DO UPDATE SET custom_project_name = excluded.custom_project_name
    `).run(randomUUID(), normalizedProjectPath, customProjectName);
  },

  updateCustomProjectNameById(projectId: string, customProjectName: string | null): void {
    const db = getConnection();
    db.prepare(`
      UPDATE projects
      SET custom_project_name = ?
      WHERE project_id = ?
    `).run(customProjectName, projectId);
  },

  updateProjectIsStarred(projectPath: string, isStarred: boolean): void {
    const db = getConnection();
    const normalizedProjectPath = normalizeProjectPath(projectPath);
    db.prepare(`
      UPDATE projects
      SET isStarred = ?
      WHERE project_path = ?
    `).run(isStarred ? 1 : 0, normalizedProjectPath);
  },

  updateProjectIsStarredById(projectId: string, isStarred: boolean): void {
    const db = getConnection();
    db.prepare(`
      UPDATE projects
      SET isStarred = ?
      WHERE project_id = ?
    `).run(isStarred ? 1 : 0, projectId);
  },

  updateProjectIsArchived(projectPath: string, isArchived: boolean): void {
    const db = getConnection();
    const normalizedProjectPath = normalizeProjectPath(projectPath);
    db.prepare(`
      UPDATE projects
      SET isArchived = ?
      WHERE project_path = ?
    `).run(isArchived ? 1 : 0, normalizedProjectPath);
  },

  updateProjectIsArchivedById(projectId: string, isArchived: boolean): void {
    const db = getConnection();
    db.prepare(`
      UPDATE projects
      SET isArchived = ?
      WHERE project_id = ?
    `).run(isArchived ? 1 : 0, projectId);
  },

  deleteProjectPath(projectPath: string): void {
    const db = getConnection();
    const normalizedProjectPath = normalizeProjectPath(projectPath);
    db.prepare(`
      DELETE FROM projects
      WHERE project_path = ?
    `).run(normalizedProjectPath);
  },

  deleteProjectById(projectId: string): void {
    const db = getConnection();
    db.prepare(`
      DELETE FROM projects
      WHERE project_id = ?
    `).run(projectId);
  },
};
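A sketch of how a route might branch on the `CreateProjectPathResult` outcome union, and of the projectId-first path resolution; the path and id literals are hypothetical:

// Sketch only: '/workspace/demo' and the projectId are placeholders.
const result = projectsDb.createProjectPath('/workspace/demo');
if (result.outcome === 'created') {
  // fresh row inserted under a newly generated random project_id
} else if (result.outcome === 'reactivated_archived') {
  // the archived row for this path was flipped back to active; its old project_id is kept
} else {
  // 'active_conflict': an active row already owns this path (result.project carries it)
}

// projectId-first resolution: null means the row is gone, so callers answer 404.
const projectPath = projectsDb.getProjectPathById('0000-hypothetical-id');
if (projectPath === null) {
  // respond with 404 Not Found
}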
server/modules/database/repositories/push-subscriptions.ts (new file, 80 lines)
@@ -0,0 +1,80 @@
/**
 * Push subscriptions repository.
 *
 * Persists browser push subscription endpoints and keys per user.
 */

import { getConnection } from '@/modules/database/connection.js';

type PushSubscriptionLookupRow = {
  endpoint: string;
  keys_p256dh: string;
  keys_auth: string;
};

export const pushSubscriptionsDb = {
  /** Upserts a push subscription endpoint for a user. */
  createPushSubscription(
    userId: number,
    endpoint: string,
    keysP256dh: string,
    keysAuth: string
  ): void {
    const db = getConnection();
    db.prepare(
      `INSERT INTO push_subscriptions (user_id, endpoint, keys_p256dh, keys_auth)
       VALUES (?, ?, ?, ?)
       ON CONFLICT(endpoint) DO UPDATE SET
         user_id = excluded.user_id,
         keys_p256dh = excluded.keys_p256dh,
         keys_auth = excluded.keys_auth`
    ).run(userId, endpoint, keysP256dh, keysAuth);
  },

  /** Returns all subscriptions for a user. */
  getPushSubscriptions(userId: number): PushSubscriptionLookupRow[] {
    const db = getConnection();
    return db
      .prepare(
        'SELECT endpoint, keys_p256dh, keys_auth FROM push_subscriptions WHERE user_id = ?'
      )
      .all(userId) as PushSubscriptionLookupRow[];
  },

  /** Deletes one subscription by endpoint. */
  deletePushSubscription(endpoint: string): void {
    const db = getConnection();
    db.prepare('DELETE FROM push_subscriptions WHERE endpoint = ?').run(endpoint);
  },

  /** Deletes all subscriptions for a user. */
  deletePushSubscriptionsForUser(userId: number): void {
    const db = getConnection();
    db.prepare('DELETE FROM push_subscriptions WHERE user_id = ?').run(userId);
  },

  // Legacy aliases used by existing services/routes
  saveSubscription(
    userId: number,
    endpoint: string,
    keysP256dh: string,
    keysAuth: string
  ): void {
    pushSubscriptionsDb.createPushSubscription(
      userId,
      endpoint,
      keysP256dh,
      keysAuth
    );
  },
  getSubscriptions(userId: number): PushSubscriptionLookupRow[] {
    return pushSubscriptionsDb.getPushSubscriptions(userId);
  },
  removeSubscription(endpoint: string): void {
    pushSubscriptionsDb.deletePushSubscription(endpoint);
  },
  removeAllForUser(userId: number): void {
    pushSubscriptionsDb.deletePushSubscriptionsForUser(userId);
  },
};
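A sketch of a sender built on this repository. It assumes the `web-push` npm package (not part of this diff) and a simple prune-on-failure policy; the payload shape is hypothetical:

// Sketch only: assumes the `web-push` package; error handling is simplified.
import webpush from 'web-push';

async function notifyUser(userId: number, payload: object): Promise<void> {
  for (const sub of pushSubscriptionsDb.getPushSubscriptions(userId)) {
    const subscription = {
      endpoint: sub.endpoint,
      keys: { p256dh: sub.keys_p256dh, auth: sub.keys_auth },
    };
    try {
      await webpush.sendNotification(subscription, JSON.stringify(payload));
    } catch {
      // A failing endpoint has likely expired; drop it so future sends skip it.
      pushSubscriptionsDb.deletePushSubscription(sub.endpoint);
    }
  }
}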
server/modules/database/repositories/scan-state.db.ts (new file, 42 lines)
@@ -0,0 +1,42 @@
import { getConnection } from '@/modules/database/connection.js';

type ScanStateRow = {
  last_scanned_at: string;
};

export const scanStateDb = {
  getLastScannedAt(): Date | null {
    const db = getConnection();

    const row = db
      .prepare(`SELECT last_scanned_at FROM scan_state WHERE id = 1`)
      .get() as ScanStateRow | undefined;

    if (!row) {
      return null; // Before any scan, the row does not exist yet.
    }

    let lastScannedDate: Date | null = null;
    const lastScannedStr = row.last_scanned_at;

    if (lastScannedStr) {
      // SQLite CURRENT_TIMESTAMP returns UTC in "YYYY-MM-DD HH:MM:SS" format.
      // Replace the space with 'T' and append 'Z' so JS parses it reliably on all platforms.
      lastScannedDate = new Date(lastScannedStr.replace(' ', 'T') + 'Z');
    }

    return lastScannedDate;
  },

  updateLastScannedAt(scannedAt: Date = new Date()): void {
    const db = getConnection();
    const sqliteTimestamp = scannedAt.toISOString().slice(0, 19).replace('T', ' ');

    db.prepare(`
      INSERT INTO scan_state (id, last_scanned_at)
      VALUES (1, ?)
      ON CONFLICT (id)
      DO UPDATE SET last_scanned_at = excluded.last_scanned_at
    `).run(sqliteTimestamp);
  },
};
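A sketch of the UTC round-trip this repository guarantees (second precision, always UTC):

// Sketch: write a UTC instant, read it back as an equivalent Date.
scanStateDb.updateLastScannedAt(new Date('2024-01-02T03:04:05Z'));
// Stored in SQLite as '2024-01-02 03:04:05'.
const last = scanStateDb.getLastScannedAt();
// last?.toISOString() === '2024-01-02T03:04:05.000Z'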
server/modules/database/repositories/sessions.db.ts (new file, 174 lines)
@@ -0,0 +1,174 @@
import { getConnection } from '@/modules/database/connection.js';
import { projectsDb } from '@/modules/database/repositories/projects.db.js';
import { normalizeProjectPath } from '@/shared/utils.js';

type SessionRow = {
  session_id: string;
  provider: string;
  project_path: string | null;
  jsonl_path: string | null;
  custom_name: string | null;
  created_at: string;
  updated_at: string;
};

type SessionMetadataLookupRow = Pick<
  SessionRow,
  'session_id' | 'provider' | 'project_path' | 'jsonl_path' | 'custom_name' | 'created_at' | 'updated_at'
>;

function normalizeTimestamp(value?: string): string | null {
  if (!value) return null;

  const parsed = new Date(value);
  if (Number.isNaN(parsed.getTime())) {
    return null;
  }

  return parsed.toISOString();
}

function normalizeProjectPathForProvider(provider: string, projectPath: string): string {
  void provider;
  return normalizeProjectPath(projectPath);
}

export const sessionsDb = {
  createSession(
    sessionId: string,
    provider: string,
    projectPath: string,
    customName?: string,
    createdAt?: string,
    updatedAt?: string,
    jsonlPath?: string | null
  ): string {
    const db = getConnection();
    const createdAtValue = normalizeTimestamp(createdAt);
    const updatedAtValue = normalizeTimestamp(updatedAt);
    const normalizedProjectPath = normalizeProjectPathForProvider(provider, projectPath);

    // First, ensure the project path is recorded in the projects table,
    // since it's a foreign key in the sessions table.
    projectsDb.createProjectPath(normalizedProjectPath);

    db.prepare(
      `INSERT INTO sessions (session_id, provider, custom_name, project_path, jsonl_path, created_at, updated_at)
       VALUES (?, ?, ?, ?, ?, COALESCE(?, CURRENT_TIMESTAMP), COALESCE(?, CURRENT_TIMESTAMP))
       ON CONFLICT(session_id) DO UPDATE SET
         provider = excluded.provider,
         updated_at = excluded.updated_at,
         project_path = excluded.project_path,
         jsonl_path = excluded.jsonl_path,
         custom_name = COALESCE(excluded.custom_name, sessions.custom_name)`
    ).run(
      sessionId,
      provider,
      customName ?? null,
      normalizedProjectPath,
      jsonlPath ?? null,
      createdAtValue,
      updatedAtValue
    );

    return sessionId;
  },

  updateSessionCustomName(sessionId: string, customName: string): void {
    const db = getConnection();
    db.prepare(
      `UPDATE sessions
       SET custom_name = ?
       WHERE session_id = ?`
    ).run(customName, sessionId);
  },

  getSessionById(sessionId: string): SessionMetadataLookupRow | null {
    const db = getConnection();
    const row = db
      .prepare(
        `SELECT session_id, provider, project_path, jsonl_path, custom_name, created_at, updated_at
         FROM sessions
         WHERE session_id = ?
         ORDER BY updated_at DESC
         LIMIT 1`
      )
      .get(sessionId) as SessionMetadataLookupRow | undefined;

    return row ?? null;
  },

  getAllSessions(): SessionRow[] {
    const db = getConnection();
    return db
      .prepare(
        `SELECT session_id, provider, project_path, jsonl_path, custom_name, created_at, updated_at
         FROM sessions`
      )
      .all() as SessionRow[];
  },

  getSessionsByProjectPath(projectPath: string): SessionRow[] {
    const db = getConnection();
    const normalizedProjectPath = normalizeProjectPath(projectPath);
    return db
      .prepare(
        `SELECT session_id, provider, project_path, jsonl_path, custom_name, created_at, updated_at
         FROM sessions
         WHERE project_path = ?`
      )
      .all(normalizedProjectPath) as SessionRow[];
  },

  getSessionsByProjectPathPage(projectPath: string, limit: number, offset: number): SessionRow[] {
    const db = getConnection();
    const normalizedProjectPath = normalizeProjectPath(projectPath);
    return db
      .prepare(
        `SELECT session_id, provider, project_path, jsonl_path, custom_name, created_at, updated_at
         FROM sessions
         WHERE project_path = ?
         ORDER BY datetime(COALESCE(updated_at, created_at)) DESC, session_id DESC
         LIMIT ? OFFSET ?`
      )
      .all(normalizedProjectPath, limit, offset) as SessionRow[];
  },

  countSessionsByProjectPath(projectPath: string): number {
    const db = getConnection();
    const normalizedProjectPath = normalizeProjectPath(projectPath);
    const row = db
      .prepare(
        `SELECT COUNT(*) AS count
         FROM sessions
         WHERE project_path = ?`
      )
      .get(normalizedProjectPath) as { count: number } | undefined;

    return Number(row?.count ?? 0);
  },

  deleteSessionsByProjectPath(projectPath: string): void {
    const db = getConnection();
    const normalizedProjectPath = normalizeProjectPath(projectPath);
    db.prepare(`DELETE FROM sessions WHERE project_path = ?`).run(normalizedProjectPath);
  },

  getSessionName(sessionId: string, provider: string): string | null {
    const db = getConnection();
    const row = db
      .prepare(
        `SELECT custom_name
         FROM sessions
         WHERE session_id = ? AND provider = ?`
      )
      .get(sessionId, provider) as { custom_name: string | null } | undefined;

    return row?.custom_name ?? null;
  },

  deleteSessionById(sessionId: string): boolean {
    const db = getConnection();
    return db.prepare('DELETE FROM sessions WHERE session_id = ?').run(sessionId).changes > 0;
  },
};
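A sketch of the indexer-style upsert flow this repository supports (every literal below is a hypothetical placeholder). Note that `createSession` first ensures the parent project row exists, and that re-running it refreshes metadata without clobbering a user-set `custom_name`:

// Sketch only: all values are placeholders.
sessionsDb.createSession(
  '7f8d2c1e-session-id',
  'claude',
  '/workspace/demo',
  undefined,                           // no custom name: an existing one survives the upsert
  '2024-01-02T03:04:05Z',
  '2024-01-02T04:00:00Z',
  '/path/to/7f8d2c1e-session-id.jsonl'
);

// Paged, newest-first listing for the project view.
const page = sessionsDb.getSessionsByProjectPathPage('/workspace/demo', 20, 0);
const total = sessionsDb.countSessionsByProjectPath('/workspace/demo');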
server/modules/database/repositories/users.ts (new file, 140 lines)
@@ -0,0 +1,140 @@
/**
 * User repository.
 *
 * Provides typed CRUD operations for the `users` table.
 * This is a single-user system, but the schema supports multiple
 * users for forward compatibility.
 */

import { getConnection } from '@/modules/database/connection.js';

type UserRow = {
  id: number;
  username: string;
  password_hash: string;
  created_at: string;
  last_login: string | null;
  is_active: number;
  git_name: string | null;
  git_email: string | null;
  has_completed_onboarding: number;
};

type UserPublicRow = Pick<UserRow, 'id' | 'username' | 'created_at' | 'last_login'>;

type UserGitConfig = {
  git_name: string | null;
  git_email: string | null;
};

type CreateUserResult = {
  id: number | bigint;
  username: string;
};

// ---------------------------------------------------------------------------
// Queries
// ---------------------------------------------------------------------------

export const userDb = {
  /** Returns true if at least one user exists in the database. */
  hasUsers(): boolean {
    const db = getConnection();
    const row = db.prepare('SELECT COUNT(*) as count FROM users').get() as {
      count: number;
    };
    return row.count > 0;
  },

  /** Inserts a new user and returns the created ID + username. */
  createUser(username: string, passwordHash: string): CreateUserResult {
    const db = getConnection();
    const result = db
      .prepare('INSERT INTO users (username, password_hash) VALUES (?, ?)')
      .run(username, passwordHash);
    return { id: result.lastInsertRowid, username };
  },

  /**
   * Looks up an active user by username.
   * Returns the full row (including password hash) for auth verification.
   */
  getUserByUsername(username: string): UserRow | undefined {
    const db = getConnection();
    return db
      .prepare('SELECT * FROM users WHERE username = ? AND is_active = 1')
      .get(username) as UserRow | undefined;
  },

  /** Updates the last_login timestamp. Non-fatal: logs but does not throw. */
  updateLastLogin(userId: number): void {
    try {
      const db = getConnection();
      db.prepare(
        'UPDATE users SET last_login = CURRENT_TIMESTAMP WHERE id = ?'
      ).run(userId);
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      console.error('Failed to update last login', { error: message });
    }
  },

  /** Returns public user fields by ID (no password hash). */
  getUserById(userId: number): UserPublicRow | undefined {
    const db = getConnection();
    return db
      .prepare(
        'SELECT id, username, created_at, last_login FROM users WHERE id = ? AND is_active = 1'
      )
      .get(userId) as UserPublicRow | undefined;
  },

  /** Returns the first active user. Used for single-user mode lookups. */
  getFirstUser(): UserPublicRow | undefined {
    const db = getConnection();
    return db
      .prepare(
        'SELECT id, username, created_at, last_login FROM users WHERE is_active = 1 LIMIT 1'
      )
      .get() as UserPublicRow | undefined;
  },

  /** Stores the user's preferred git name and email. */
  updateGitConfig(
    userId: number,
    gitName: string,
    gitEmail: string
  ): void {
    const db = getConnection();
    db.prepare('UPDATE users SET git_name = ?, git_email = ? WHERE id = ?').run(
      gitName,
      gitEmail,
      userId
    );
  },

  /** Retrieves the user's git identity (name + email). */
  getGitConfig(userId: number): UserGitConfig | undefined {
    const db = getConnection();
    return db
      .prepare('SELECT git_name, git_email FROM users WHERE id = ?')
      .get(userId) as UserGitConfig | undefined;
  },

  /** Marks onboarding as complete for the given user. */
  completeOnboarding(userId: number): void {
    const db = getConnection();
    db.prepare(
      'UPDATE users SET has_completed_onboarding = 1 WHERE id = ?'
    ).run(userId);
  },

  /** Returns true if the user has finished the onboarding flow. */
  hasCompletedOnboarding(userId: number): boolean {
    const db = getConnection();
    const row = db
      .prepare('SELECT has_completed_onboarding FROM users WHERE id = ?')
      .get(userId) as { has_completed_onboarding: number } | undefined;
    return row?.has_completed_onboarding === 1;
  },
};
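A sketch of a login check built on this repository; it assumes a bcrypt-style hash compare, since the actual auth module is outside this diff:

// Sketch only: assumes the `bcrypt` package; the real auth flow may differ.
import bcrypt from 'bcrypt';

function verifyLogin(username: string, password: string): number | null {
  const user = userDb.getUserByUsername(username); // active users only
  if (!user || !bcrypt.compareSync(password, user.password_hash)) {
    return null;
  }
  userDb.updateLastLogin(user.id); // best-effort; logs instead of throwing
  return user.id;
}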
server/modules/database/repositories/vapid-keys.ts (new file, 57 lines)
@@ -0,0 +1,57 @@
/**
 * VAPID keys repository.
 *
 * Stores and retrieves the Web Push VAPID key pair.
 */

import { getConnection } from '@/modules/database/connection.js';

type VapidKeyRow = {
  public_key: string;
  private_key: string;
};

type VapidKeyPair = {
  publicKey: string;
  privateKey: string;
};

export const vapidKeysDb = {
  /** Returns the latest stored VAPID key pair, or null when unset. */
  getVapidKeys(): VapidKeyPair | null {
    const db = getConnection();
    const row = db
      .prepare(
        'SELECT public_key, private_key FROM vapid_keys ORDER BY id DESC LIMIT 1'
      )
      .get() as Pick<VapidKeyRow, 'public_key' | 'private_key'> | undefined;

    if (!row) return null;
    return {
      publicKey: row.public_key,
      privateKey: row.private_key,
    };
  },

  /** Persists a new VAPID key pair. */
  createVapidKeys(publicKey: string, privateKey: string): void {
    const db = getConnection();
    db.prepare(
      'INSERT INTO vapid_keys (public_key, private_key) VALUES (?, ?)'
    ).run(publicKey, privateKey);
  },

  /** Replaces all existing keys with a fresh pair. */
  updateVapidKeys(publicKey: string, privateKey: string): void {
    const db = getConnection();
    db.prepare('DELETE FROM vapid_keys').run();
    vapidKeysDb.createVapidKeys(publicKey, privateKey);
  },

  /** Deletes all VAPID key rows. */
  deleteVapidKeys(): void {
    const db = getConnection();
    db.prepare('DELETE FROM vapid_keys').run();
  },
};
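A sketch of lazy key provisioning at startup, assuming the `web-push` package's key generator. Note that `updateVapidKeys` is a full rotation: pushes to subscriptions created under the old key pair will stop working afterwards.

// Sketch only: assumes the `web-push` package.
import webpush from 'web-push';

function ensureVapidKeys(): { publicKey: string; privateKey: string } {
  const existing = vapidKeysDb.getVapidKeys();
  if (existing) return existing;

  const fresh = webpush.generateVAPIDKeys();
  vapidKeysDb.createVapidKeys(fresh.publicKey, fresh.privateKey);
  return fresh;
}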
server/modules/database/schema.ts (new file, 152 lines)
@@ -0,0 +1,152 @@
const USER_TABLE_SCHEMA_SQL = `
CREATE TABLE IF NOT EXISTS users (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  username TEXT UNIQUE NOT NULL,
  password_hash TEXT NOT NULL,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  last_login DATETIME,
  is_active BOOLEAN DEFAULT 1,
  git_name TEXT,
  git_email TEXT,
  has_completed_onboarding BOOLEAN DEFAULT 0
);
`;

export const API_KEYS_TABLE_SCHEMA_SQL = `
CREATE TABLE IF NOT EXISTS api_keys (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  user_id INTEGER NOT NULL,
  key_name TEXT NOT NULL,
  api_key TEXT UNIQUE NOT NULL,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  last_used DATETIME,
  is_active BOOLEAN DEFAULT 1,
  FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
);
`;

export const USER_CREDENTIALS_TABLE_SCHEMA_SQL = `
CREATE TABLE IF NOT EXISTS user_credentials (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  user_id INTEGER NOT NULL,
  credential_name TEXT NOT NULL,
  credential_type TEXT NOT NULL, -- 'github_token', 'gitlab_token', 'bitbucket_token', etc.
  credential_value TEXT NOT NULL,
  description TEXT,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  is_active BOOLEAN DEFAULT 1,
  FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
);
`;

export const USER_NOTIFICATION_PREFERENCES_TABLE_SCHEMA_SQL = `
CREATE TABLE IF NOT EXISTS user_notification_preferences (
  user_id INTEGER PRIMARY KEY,
  preferences_json TEXT NOT NULL,
  updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
);
`;

export const VAPID_KEYS_TABLE_SCHEMA_SQL = `
CREATE TABLE IF NOT EXISTS vapid_keys (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  public_key TEXT NOT NULL,
  private_key TEXT NOT NULL,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
`;

export const PUSH_SUBSCRIPTIONS_TABLE_SCHEMA_SQL = `
CREATE TABLE IF NOT EXISTS push_subscriptions (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  user_id INTEGER NOT NULL,
  endpoint TEXT NOT NULL UNIQUE,
  keys_p256dh TEXT NOT NULL,
  keys_auth TEXT NOT NULL,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
);
`;

export const PROJECTS_TABLE_SCHEMA_SQL = `
CREATE TABLE IF NOT EXISTS projects (
  project_id TEXT PRIMARY KEY NOT NULL,
  project_path TEXT NOT NULL UNIQUE,
  custom_project_name TEXT DEFAULT NULL,
  isStarred BOOLEAN DEFAULT 0,
  isArchived BOOLEAN DEFAULT 0
);
`;

export const SESSIONS_TABLE_SCHEMA_SQL = `
CREATE TABLE IF NOT EXISTS sessions (
  session_id TEXT NOT NULL,
  provider TEXT NOT NULL DEFAULT 'claude',
  custom_name TEXT,
  project_path TEXT,
  jsonl_path TEXT,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (session_id),
  FOREIGN KEY (project_path) REFERENCES projects(project_path)
    ON DELETE SET NULL
    ON UPDATE CASCADE
);
`;

export const LAST_SCANNED_AT_SQL = `
CREATE TABLE IF NOT EXISTS scan_state (
  id INTEGER PRIMARY KEY CHECK (id = 1),
  last_scanned_at TIMESTAMP NULL
);
`;

export const APP_CONFIG_TABLE_SCHEMA_SQL = `
CREATE TABLE IF NOT EXISTS app_config (
  key TEXT PRIMARY KEY,
  value TEXT NOT NULL,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
`;

export const INIT_SCHEMA_SQL = `
-- Initialize authentication database
PRAGMA foreign_keys = ON;

${USER_TABLE_SCHEMA_SQL}
-- Indexes for user lookups
CREATE INDEX IF NOT EXISTS idx_users_username ON users(username);
CREATE INDEX IF NOT EXISTS idx_users_active ON users(is_active);

${API_KEYS_TABLE_SCHEMA_SQL}
CREATE INDEX IF NOT EXISTS idx_api_keys_key ON api_keys(api_key);
CREATE INDEX IF NOT EXISTS idx_api_keys_user_id ON api_keys(user_id);
CREATE INDEX IF NOT EXISTS idx_api_keys_active ON api_keys(is_active);

${USER_CREDENTIALS_TABLE_SCHEMA_SQL}
CREATE INDEX IF NOT EXISTS idx_user_credentials_user_id ON user_credentials(user_id);
CREATE INDEX IF NOT EXISTS idx_user_credentials_type ON user_credentials(credential_type);
CREATE INDEX IF NOT EXISTS idx_user_credentials_active ON user_credentials(is_active);

${USER_NOTIFICATION_PREFERENCES_TABLE_SCHEMA_SQL}
CREATE INDEX IF NOT EXISTS idx_user_notification_preferences_user_id ON user_notification_preferences(user_id);

${VAPID_KEYS_TABLE_SCHEMA_SQL}

${PUSH_SUBSCRIPTIONS_TABLE_SCHEMA_SQL}
CREATE INDEX IF NOT EXISTS idx_push_subscriptions_user_id ON push_subscriptions(user_id);

${PROJECTS_TABLE_SCHEMA_SQL}
-- NOTE: These indexes are created in migrations after legacy table-shape repairs.
-- Creating them here can fail on upgraded installs where projects lacks those columns.

${SESSIONS_TABLE_SCHEMA_SQL}
CREATE INDEX IF NOT EXISTS idx_session_ids_lookup ON sessions(session_id);
-- NOTE: This index is created in migrations after sessions is rebuilt to include project_path.
-- Creating it here can fail on upgraded installs where the legacy sessions table has no project_path.

${LAST_SCANNED_AT_SQL}

${APP_CONFIG_TABLE_SCHEMA_SQL}
`;
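For context, a sketch of how `INIT_SCHEMA_SQL` would be applied at startup, assuming a better-sqlite3 connection; the real init-db module in this PR may wrap this with migrations:

// Sketch only: assumes better-sqlite3; DATABASE_PATH handling is simplified.
import Database from 'better-sqlite3';
import { INIT_SCHEMA_SQL } from '@/modules/database/schema.js';

const db = new Database(process.env.DATABASE_PATH ?? 'auth.db');
db.exec(INIT_SCHEMA_SQL); // idempotent: every statement is a PRAGMA or CREATE ... IF NOT EXISTS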