Conversation
- Delete orphaned database schema files (audit-schema.sql, migrations.sql, rbac-schema.sql)
- Rename initial migration to 000_initial_schema.sql for clarity
- Update DatabaseService to load only numbered migrations instead of base schema files
- Move all schema definitions to the migrations directory as single source of truth
- Add documentation in copilot-instructions.md explaining migration-first approach
- Create internal task documentation for database schema cleanup process
- Eliminate duplicate table definitions that existed across base schemas and migrations
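The "load only numbered migrations" behavior described above can be sketched roughly as follows; the helper name and filename regex are illustrative assumptions, not code from the PR:

```typescript
// A minimal sketch: keep only files matching NNN_*.sql and apply them in
// numeric order. Legacy base-schema files (e.g. migrations.sql) no longer
// match the pattern and are ignored.
function orderMigrations(files: string[]): string[] {
  // Zero-padded prefixes sort correctly with a plain lexicographic sort.
  return files.filter((f) => /^\d{3}_.*\.sql$/.test(f)).sort();
}

const ordered = orderMigrations([
  "migrations.sql", // legacy file, skipped
  "007_permissions_and_provisioner_role.sql",
  "000_initial_schema.sql",
]);
```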
…a file
- Update COPY instruction to include all database files and migrations
- Change from copying only schema.sql to copying entire src/database/ directory
- Ensures future database-related files are automatically included in builds
- Improves maintainability by avoiding need to update Dockerfile when new database files are added
…tern
- Add new Proxmox integration with ProxmoxClient, ProxmoxService, and ProxmoxIntegration classes
- Implement Proxmox API client with authentication and resource management capabilities
- Add comprehensive test coverage for Proxmox integration and service layer
- Update IntegrationManager to register and manage Proxmox integration
- Add dedicated Proxmox routes handler for API endpoints
- Update integration types to include Proxmox configuration schema
- Refactor ConfigService and schema to support Proxmox settings
- Update server configuration to include Proxmox routes
- Add Kiro specification files for puppet-pabawi refactoring workflow
- Update vitest configuration for improved test execution
- Improves infrastructure flexibility by adding virtualization platform support alongside existing integrations
…ycle management
- Add ProvisionPage with dynamic form generation for VM and LXC creation
- Add ManageTab component for node lifecycle actions (start, stop, reboot, destroy)
- Add ProxmoxProvisionForm and ProxmoxSetupGuide components for integration setup
- Add formGenerator utility for dynamic form rendering based on capability metadata
- Add permissions system for RBAC-aware UI element visibility
- Add comprehensive validation and error handling for provisioning operations
- Add test utilities and generators for provisioning-related tests
- Add documentation for Proxmox setup, provisioning, permissions, and management workflows
- Add Kiro specification files for Proxmox frontend UI and integration features
- Update Navigation component to include new Provision page route
- Update IntegrationSetupPage to support Proxmox configuration
- Update API client with provisioning endpoints and type definitions
- Update package.json with required dependencies
- Move 7 completed task documents from .kiro/todo to .kiro/done directory
- Add comprehensive REMAINING_TODOS_REPORT.md with prioritized task breakdown
- Include test failure analysis, RBAC issues, and environment configuration items
- Add SQLite test database temporary files (test-migration.db-shm, test-migration.db-wal)
- Update frontend logger with minor improvements
- Consolidate task tracking and provide clear roadmap for remaining work
- Add getNodes() method to retrieve PVE cluster nodes with status and resource info
- Add getNextVMID() method to fetch next available VM ID from cluster
- Add getISOImages() method to list ISO images available on node storage
- Add getTemplates() method to list OS templates available on node storage
- Implement caching for node and resource queries (60-120 second TTL)
- Add corresponding integration layer methods to expose Proxmox service capabilities
- Update frontend navigation and routing to support new provisioning workflows
- Enhance ProxmoxProvisionForm with node selection and resource discovery UI
- Update API client and type definitions for provisioning operations
- Improve error handling and logging across Proxmox integration layer
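The 60-120 second caching mentioned above can be sketched as a small TTL map; `TtlCache` is an illustrative name, not the PR's actual implementation:

```typescript
// Minimal TTL cache sketch: entries carry an absolute expiry timestamp and
// are evicted lazily on read.
class TtlCache<T> {
  private entries = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  get(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(key); // stale: evict and report a miss
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}

// e.g. cache cluster node names for 60 seconds
const nodeCache = new TtlCache<string[]>(60_000);
nodeCache.set("nodes", ["pve1", "pve2"]);
```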
- Update foundFilter initial state from 'all' to 'found' for better UX
- Aligns with filterMode default behavior to show relevant data by default
- Reduces noise by filtering out not-found entries on initial load
…ved API handling
- Fix ProxmoxClient to use form-urlencoded encoding for POST/PUT/DELETE requests instead of JSON, matching Proxmox API requirements
- Add detailed error messages in API responses by including response body text for better diagnostics
- Add getStorages() method to ProxmoxService and ProxmoxIntegration to query available storage devices on nodes with optional content type filtering
- Add getNetworkBridges() method to ProxmoxService and ProxmoxIntegration to query network interfaces on nodes with bridge type filtering
- Implement caching for both storage and network queries with 120-second TTL to reduce API calls
- Update ProxmoxProvisionForm frontend component to use new storage and network discovery endpoints
- Extend provisioning types to support storage and network configuration options
- Update API client to expose new storage and network discovery endpoints
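The form-urlencoded fix can be illustrated with the standard `URLSearchParams` API; the field names below are hypothetical examples, not the PR's actual request shape:

```typescript
// Encode a request body as application/x-www-form-urlencoded, which the
// Proxmox API expects for POST/PUT/DELETE, instead of a JSON payload.
function encodeProxmoxBody(
  params: Record<string, string | number | boolean>,
): string {
  const body = new URLSearchParams();
  for (const [key, value] of Object.entries(params)) {
    body.append(key, String(value));
  }
  return body.toString();
}

// The request would be sent with
// "Content-Type: application/x-www-form-urlencoded":
const body = encodeProxmoxBody({ vmid: 101, node: "pve1", start: true });
```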
… and code quality improvements

Co-authored-by: alvagante <283804+alvagante@users.noreply.github.com>
- DatabaseAdapter interface with query, execute, transaction support
- SQLiteAdapter: full implementation with WAL mode, foreign keys
- PostgresAdapter: full implementation with pg pool, transaction client
- AdapterFactory: creates correct adapter based on DB_TYPE env var
- Error types: DatabaseQueryError, DatabaseConnectionError
- Tests for all three components (39 tests passing)
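A plausible shape for the interface described above (the method names `query`, `queryOne`, `execute`, `withTransaction`, and `getPlaceholder` all appear elsewhere in this review); the generics and exact signatures are assumptions:

```typescript
// Hypothetical DatabaseAdapter shape; the real interface in the PR may differ.
interface DatabaseAdapter {
  query<T>(sql: string, params?: unknown[]): Promise<T[]>;
  queryOne<T>(sql: string, params?: unknown[]): Promise<T | undefined>;
  execute(sql: string, params?: unknown[]): Promise<{ changes: number }>;
  withTransaction<T>(fn: () => Promise<T>): Promise<T>;
  getPlaceholder(index: number): string; // "?" for sqlite, "$n" for postgres
}

// The placeholder split is the classic dialect difference: sqlite uses
// positional "?", postgres uses numbered "$1", "$2", ...
function placeholderFor(dialect: "sqlite" | "postgres", index: number): string {
  return dialect === "sqlite" ? "?" : `$${index}`;
}
```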
Pull request overview
This PR introduces a database-adapter abstraction (SQLite + PostgreSQL), migrates services/tests to the new adapter API, and adds Proxmox integration endpoints/capabilities (including lifecycle actions and provisioning discovery), along with version bumps and Docker packaging updates.
Changes:
- Replace direct `sqlite3` usage with a new `DatabaseAdapter` interface, plus `SQLiteAdapter` and `PostgresAdapter`, updating services and tests accordingly.
- Add Proxmox integration wiring (server registration, provisioning discovery endpoint, node lifecycle/destroy endpoints) and enhance inventory node linking to retain per-source identifiers.
- Update build/Docker packaging to ship migration files (migration-first), add the `pg` dependency, and bump version labels.
Reviewed changes
Copilot reviewed 87 out of 199 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| backend/test/integration/batch-execution.test.ts | Switch test DB usage to SQLiteAdapter/DatabaseAdapter helpers. |
| backend/test/integration/auth-flow.test.ts | Update direct SQL execution to new adapter execute(). |
| backend/test/database/rbac-schema.test.ts | Migrate schema verification tests to adapter queryOne/execute. |
| backend/test/database/migration-integration.test.ts | Replace sqlite callback wrappers with adapter query/queryOne. |
| backend/test/database/index-verification.test.ts | Use adapter queries for index/pragma validation checks. |
| backend/test/database/SQLiteAdapter.test.ts | Add test coverage for SQLiteAdapter lifecycle/query/transactions. |
| backend/test/database/PostgresAdapter.test.ts | Add basic instantiation/placeholder/init-failure tests for PostgresAdapter. |
| backend/test/database/MigrationRunner.test.ts | Update migration runner tests for adapter + dialect-specific migration selection. |
| backend/test/database/ExecutionRepository.test.ts | Replace sqlite3 DB with adapter-backed in-memory DB. |
| backend/test/database/AdapterFactory.test.ts | Add tests for adapter factory + new DB error types. |
| backend/test/UserService.test.ts | Migrate to adapter + adjust role assignment expectations (default Viewer role). |
| backend/test/UserService.password.test.ts | Migrate to adapter + seed Viewer role; execute schema via adapter. |
| backend/test/RoleService.test.ts | Migrate to adapter; execute schema statements via adapter. |
| backend/test/GroupService.test.ts | Migrate to adapter; simplify query helpers and schema init. |
| backend/src/services/UserService.ts | Replace sqlite3-specific DB helpers with DatabaseAdapter methods. |
| backend/src/services/SetupService.ts | Replace sqlite callbacks with adapter queryOne/execute + structured logging. |
| backend/src/services/RoleService.ts | Replace sqlite helpers with adapter execute/query/queryOne. |
| backend/src/services/PermissionService.ts | Replace sqlite helpers with adapter execute/query/queryOne. |
| backend/src/services/GroupService.ts | Replace sqlite helpers with adapter execute/query/queryOne. |
| backend/src/services/BatchExecutionService.ts | Replace sqlite callbacks with adapter calls; simplify DB actions. |
| backend/src/services/AuthenticationService.ts | Replace sqlite helpers with adapter calls for auth/lockout tables. |
| backend/src/services/AuditLoggingService.ts | Replace sqlite helpers with adapter calls; return changes via adapter execute. |
| backend/src/server.ts | Register Proxmox plugin when configured; add initialization logging. |
| backend/src/routes/streaming.ts | Allow EventSource auth via token query param -> Authorization header fallback. |
| backend/src/routes/inventory.ts | Add Proxmox node lifecycle and destroy endpoints. |
| backend/src/routes/integrations/status.ts | Add “proxmox not configured” status entry for UI. |
| backend/src/routes/integrations/provisioning.ts | New provisioning discovery endpoint for frontend integration listing. |
| backend/src/routes/integrations.ts | Mount Proxmox + provisioning routers with optional auth/RBAC. |
| backend/src/middleware/rbacMiddleware.ts | Switch middleware DB dependency to DatabaseAdapter. |
| backend/src/middleware/authMiddleware.ts | Switch middleware DB dependency to DatabaseAdapter. |
| backend/src/integrations/types.ts | Add ProvisioningCapability and parameter validation metadata. |
| backend/src/integrations/proxmox/types.ts | New Proxmox integration type definitions and error classes. |
| backend/src/integrations/proxmox/ProxmoxIntegration.ts | New Proxmox plugin implementing inventory + execution tool behaviors. |
| backend/src/integrations/NodeLinkingService.ts | Extend linked nodes to keep per-source IDs/URIs; adjust identifier extraction. |
| backend/src/integrations/IntegrationManager.ts | Return LinkedNode[] in aggregated inventory; add provisioning capability aggregation. |
| backend/src/database/rbac-schema.sql | Remove legacy RBAC schema file (migration-first). |
| backend/src/database/migrations/000_initial_schema.sql | Add header docs; align with migration-first approach. |
| backend/src/database/migrations.sql | Remove legacy monolithic migrations file (migration-first). |
| backend/src/database/errors.ts | New DB error classes for query/connection errors. |
| backend/src/database/audit-schema.sql | Remove legacy audit schema file (migration-first). |
| backend/src/database/SQLiteAdapter.ts | New sqlite-backed DatabaseAdapter implementation (WAL/FKs + query helpers). |
| backend/src/database/PostgresAdapter.ts | New pg-backed DatabaseAdapter implementation with tx support. |
| backend/src/database/MigrationRunner.ts | Switch to adapter; add dialect-specific migration file selection. |
| backend/src/database/ExecutionRepository.ts | Switch to adapter; remove sqlite callback wrappers. |
| backend/src/database/DatabaseService.ts | Use adapter factory; simplify init to “migrations only” approach. |
| backend/src/database/DatabaseAdapter.ts | New unified DB interface shared across services. |
| backend/src/database/AdapterFactory.ts | New factory selecting sqlite vs postgres based on env. |
| backend/src/config/schema.ts | Add Zod schema/types for Proxmox integration config. |
| backend/src/config/ConfigService.ts | Parse Proxmox env vars into integration config structure. |
| backend/package.json | Bump version, add pg deps, update build to copy migrations directory. |
| Dockerfile.ubuntu | Update image label version. |
| Dockerfile.alpine | Update image label version. |
| Dockerfile | Update image label version; copy full database dir into dist in container. |
| CLAUDE.md | Update docs to reflect migration-first DB schema approach. |
| .kiro/todo/proxmox-restart-required.md | Add operational note about restarting server to clear cached code. |
| .kiro/todo/expert-mode-prototype-pollution.md | Add security TODO note about prototype-pollution sanitization. |
| .kiro/todo/REMAINING_TODOS_REPORT.md | Add report summarizing remaining work/test failures. |
| .kiro/steering/security-best-practices.md | Add allowlist-secret guidance note. |
| .kiro/specs/pabawi-release-1-0-0/.config.kiro | Add spec config metadata for release work. |
| .kiro/specs/090/puppet-pabawi-refactoring/tasks.md | Add refactoring plan/tasks document. |
| .kiro/specs/090/puppet-pabawi-refactoring/requirements.md | Add requirements document for Puppet module refactor. |
| .kiro/specs/090/puppet-pabawi-refactoring/.config.kiro | Add spec config metadata for Puppet refactor. |
| .kiro/specs/090/proxmox-integration/.config.kiro | Add spec config metadata for Proxmox integration. |
| .kiro/specs/090/proxmox-frontend-ui/requirements.md | Add frontend requirements spec for provisioning UI. |
| .kiro/specs/090/proxmox-frontend-ui/.config.kiro | Add spec config metadata for frontend UI feature. |
| .kiro/done/proxmox-ssl-fix.md | Document completed Proxmox SSL fix. |
| .kiro/done/provisioning-endpoint-fix.md | Document completed provisioning endpoint addition. |
| .kiro/done/node-linking-redesign.md | Document completed node linking redesign. |
| .kiro/done/docker-missing-schema-files.md | Document completed Docker schema packaging fix. |
| .kiro/done/database-schema-cleanup-task.md | Document DB migration-first cleanup completion. |
| .kiro/database-cleanup-prompt.md | Add DB cleanup prompt/context document. |
| .github/copilot-instructions.md | Update DB schema docs to migration-first guidance. |
```typescript
  return "postgres";
}

getPlaceholder(index: number): string {
```
```typescript
async withTransaction<T>(fn: () => Promise<T>): Promise<T> {
  if (!this._pool) {
    throw new DatabaseQueryError(
      "Database not connected",
      "BEGIN TRANSACTION",
      [],
    );
  }
  const client = await this._pool.connect();
  const previousClient = this._txClient;
  this._txClient = client;
  try {
    await client.query("BEGIN");
    const result = await fn();
    await client.query("COMMIT");
    return result;
  } catch (error) {
    await client.query("ROLLBACK");
    throw error;
  } finally {
    client.release();
    this._txClient = previousClient;
  }
}
```
```typescript
reject(
  new DatabaseConnectionError(
    `Failed to enable WAL mode: ${walErr.message}`,
    this._databasePath,
  ),
);
return;
```
```typescript
// Use the first node's name as the primary identifier
// (all related nodes should have the same name)
const primaryName = node.name;

// Create linked node with common name
const linkedNode: LinkedNode = {
  id: primaryName, // Use name as primary ID for lookups
  name: primaryName,
  uri: node.uri, // Will be overwritten with combined URIs
```
backend/src/routes/inventory.ts (outdated comment)
```typescript
const result = await proxmoxTool.executeAction({
  type: "task",
  target: nodeId,
  action: "destroy_vm",
```
```typescript
 * Note: RBAC middleware should be applied at the route mounting level in server.ts
 * Required permission: lifecycle:* or lifecycle:{action}
 */
router.post(
  "/:id/action",
  asyncHandler(async (req: Request, res: Response): Promise<void> => {
```
```typescript
// Handle token from query parameter (EventSource doesn't support headers)
// Only move to Authorization header when no Authorization header is already present,
// then remove from query to reduce the chance of it being logged downstream.
if (
  typeof req.query.token === "string" &&
  !req.headers.authorization
) {
  req.headers.authorization = `Bearer ${req.query.token}`;
  // eslint-disable-next-line @typescript-eslint/no-dynamic-delete
  delete (req.query as Record<string, unknown>).token;
}
```
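The client-side counterpart of this fallback might look like the sketch below: EventSource cannot set an Authorization header, so the token travels as a query parameter. The endpoint path here is an assumption, not taken from the PR:

```typescript
// Build a streaming URL carrying the auth token as a query parameter,
// which the server-side middleware above promotes to an Authorization header.
function streamUrl(base: string, token: string): string {
  const url = new URL("/api/streaming/events", base);
  url.searchParams.set("token", token);
  return url.toString();
}

const url = streamUrl("http://localhost:3000", "abc123");
// In the browser: new EventSource(url)
```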
```typescript
// Determine integration status based on health check
let status: 'connected' | 'degraded' | 'not_configured' = 'not_configured';
const healthCheck = proxmox.getLastHealthCheck();

if (healthCheck) {
  if (healthCheck.healthy) {
    status = 'connected';
  } else if (healthCheck.message?.includes('not initialized') || healthCheck.message?.includes('disabled')) {
    status = 'not_configured';
  } else {
    status = 'degraded';
  }
```
Backend:
- AWS plugin with EC2 inventory, provisioning, lifecycle, health check
- Journal service with timeline aggregation and note support
- Integration config service with AES-256-GCM encryption and merge
- Proxmox VM/LXC compute type routing and separate create methods
- API routes for journal, integration config, and AWS endpoints
- ConfigService and schema updated for AWS env vars
- Database migrations for journal_entries and integration_configs

Frontend:
- AWS provisioning form and setup guide
- Proxmox VM and LXC provision forms (split from single form)
- Journal timeline component with note form
- Integration config management page
- RBAC UI updated for new permission types
- Navigation and routing updates

Fixes:
- Markdown table formatting in docs/integrations/aws.md
- Allowlisted example AWS keys in AWSSetupGuide.svelte
- Updated .secrets.baseline
Pull request overview
Adds “Version 1” backend foundations for provisioning integrations (Proxmox + AWS), node linking improvements, and a migration-first, adapter-based database layer (SQLite/Postgres), with packaging updates for Docker builds.
Changes:
- Introduces Proxmox and AWS provisioning integrations and related API routes (including provisioning discovery).
- Refactors inventory node deduplication into a linking model with per-source node identity (`sourceData`), and updates aggregation types accordingly.
- Reworks database access around a `DatabaseAdapter` abstraction, adds Postgres support, and moves schema fully into numbered migrations; updates Docker/build scripts to ship migrations.
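One plausible shape of the "dialect-specific migration selection" this review mentions: prefer a dialect-suffixed file when present, fall back to the shared one. The `.postgres.sql` suffix convention here is an assumption for illustration, not taken from the PR:

```typescript
// Given a base migration filename and the active dialect, pick the
// dialect-specific variant if one exists in the migrations directory.
function pickMigration(
  base: string,
  dialect: "sqlite" | "postgres",
  available: Set<string>,
): string {
  const specific = base.replace(/\.sql$/, `.${dialect}.sql`);
  return available.has(specific) ? specific : base;
}

const files = new Set(["001_init.sql", "001_init.postgres.sql"]);
```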
Reviewed changes
Copilot reviewed 70 out of 239 changed files in this pull request and generated 15 comments.
| File | Description |
|---|---|
| backend/src/routes/integrations/provisioning.ts | New endpoint to discover provisioning integrations and capabilities. |
| backend/src/routes/integrations/aws.ts | New AWS integration API routes for inventory/provision/lifecycle/resource discovery. |
| backend/src/routes/integrations.ts | Mounts new Proxmox + provisioning routers; adds RBAC/auth gating. |
| backend/src/routes/integrationConfig.ts | New integration-config CRUD routes, using auth/RBAC. |
| backend/src/middleware/rbacMiddleware.ts | Switches RBAC middleware DB type to DatabaseAdapter. |
| backend/src/middleware/authMiddleware.ts | Switches auth middleware DB type to DatabaseAdapter. |
| backend/src/integrations/types.ts | Adds ProvisioningCapability and capability parameter validation metadata. |
| backend/src/integrations/proxmox/types.ts | Adds Proxmox integration types + error classes. |
| backend/src/integrations/proxmox/ProxmoxIntegration.ts | New Proxmox plugin implementation (inventory/actions/provisioning/journal). |
| backend/src/integrations/proxmox/ProxmoxClient.ts | New Proxmox API client with retries and task polling. |
| backend/src/integrations/aws/types.ts | Adds AWS integration types + AWSAuthenticationError. |
| backend/src/integrations/aws/tests/AWSPlugin.healthCheck.test.ts | Adds health check unit tests for AWS plugin. |
| backend/src/integrations/aws/AWSPlugin.ts | New AWS plugin implementation (inventory/actions/provisioning/journal). |
| backend/src/integrations/NodeLinkingService.ts | Extends linked nodes with per-source identity (sourceData) and alters linking behavior. |
| backend/src/integrations/IntegrationManager.ts | Inventory now returns LinkedNode[]; adds provisioning capability aggregation. |
| backend/src/database/rbac-schema.sql | Removes legacy RBAC schema file (migration-first). |
| backend/src/database/audit-schema.sql | Removes legacy audit schema file (migration-first). |
| backend/src/database/migrations.sql | Removes legacy aggregated migrations file (migration-first). |
| backend/src/database/migrations/000_initial_schema.sql | Documents initial schema migration (header). |
| backend/src/database/migrations/007_permissions_and_provisioner_role.sql | Seeds new permissions/roles for provisioning/journal/config resources. |
| backend/src/database/migrations/008_journal_entries.sql | Adds journal_entries table migration. |
| backend/src/database/migrations/009_integration_configs.sql | Adds per-user integration_configs table migration. |
| backend/src/database/errors.ts | Adds typed DB error classes for adapters. |
| backend/src/database/DatabaseAdapter.ts | Introduces unified DB adapter interface. |
| backend/src/database/AdapterFactory.ts | Adds adapter factory controlled by DB_TYPE / DATABASE_URL. |
| backend/src/database/SQLiteAdapter.ts | Implements DatabaseAdapter using sqlite3. |
| backend/src/database/PostgresAdapter.ts | Implements DatabaseAdapter using pg. |
| backend/src/database/MigrationRunner.ts | Refactors migration runner to use DatabaseAdapter + dialect-specific migration selection. |
| backend/src/database/ExecutionRepository.ts | Refactors repository to use DatabaseAdapter (still sqlite-style SQL). |
| backend/src/database/DatabaseService.ts | Refactors service to create/init adapter and run migrations only. |
| backend/src/config/schema.ts | Adds config schemas for Proxmox + AWS integrations. |
| backend/src/config/ConfigService.ts | Parses Proxmox/AWS config from env; adds getAWSConfig(). |
| backend/package.json | Adds AWS SDK + pg deps; updates build to ship migrations directory. |
| Dockerfile | Copies full DB directory into image; bumps version label. |
| Dockerfile.alpine | Bumps version label. |
| Dockerfile.ubuntu | Bumps version label. |
| CLAUDE.md | Updates docs to reflect migration-first DB approach. |
| .pre-commit-config.yaml | Tightens duplicate/backup filename detection regex. |
| .kiro/todo/proxmox-restart-required.md | Adds operational note about restarting backend to pick up changes. |
| .kiro/todo/expert-mode-prototype-pollution.md | Adds security TODO doc for prototype pollution risk. |
| .kiro/todo/REMAINING_TODOS_REPORT.md | Adds TODO status report. |
| .kiro/steering/security-best-practices.md | Updates security steering doc with allowlist-secret guidance. |
| .kiro/specs/pabawi-release-1-0-0/.config.kiro | Adds spec config metadata. |
| .kiro/specs/090/puppet-pabawi-refactoring/tasks.md | Adds implementation plan for puppet module refactoring. |
| .kiro/specs/090/puppet-pabawi-refactoring/requirements.md | Adds requirements for puppet module refactoring. |
| .kiro/specs/090/puppet-pabawi-refactoring/.config.kiro | Adds spec config metadata. |
| .kiro/specs/090/proxmox-integration/requirements.md | Adds Proxmox integration requirements spec. |
| .kiro/specs/090/proxmox-integration/.config.kiro | Adds spec config metadata. |
| .kiro/specs/090/proxmox-frontend-ui/tasks.md | Adds frontend UI implementation plan spec. |
| .kiro/specs/090/proxmox-frontend-ui/requirements.md | Adds frontend UI requirements spec. |
| .kiro/specs/090/proxmox-frontend-ui/.config.kiro | Adds spec config metadata. |
| .kiro/done/proxmox-ssl-fix.md | Adds “done” note about Proxmox SSL handling approach. |
| .kiro/done/provisioning-endpoint-fix.md | Adds “done” note about provisioning endpoint addition. |
| .kiro/done/node-linking-redesign.md | Adds “done” note about node linking redesign. |
| .kiro/done/docker-missing-schema-files.md | Adds “done” note about Docker copying DB artifacts. |
| .kiro/done/database-schema-cleanup-task.md | Adds “done” note about migration-first DB cleanup. |
| .kiro/database-cleanup-prompt.md | Adds internal prompt doc for DB cleanup work. |
| .github/copilot-instructions.md | Updates repo instructions to reflect migration-first DB approach. |
```diff
 private async recordMigration(migration: MigrationFile): Promise<void> {
   const now = new Date().toISOString();

-  return new Promise((resolve, reject) => {
-    this.db.run(
-      "INSERT INTO migrations (id, name, appliedAt) VALUES (?, ?, ?)",
-      [migration.id, migration.filename, now],
-      (err) => {
-        if (err) {
-          reject(new Error(`Failed to record migration: ${err.message}`));
-        } else {
-          resolve();
-        }
-      }
-    );
-  });
+  await this.db.execute(
+    "INSERT INTO migrations (id, name, appliedAt) VALUES (?, ?, ?)",
+    [migration.id, migration.filename, now]
+  );
```
```typescript
// Ensure database directory exists (for SQLite)
const dbDir = dirname(this.databasePath);
if (!existsSync(dbDir)) {
  mkdirSync(dbDir, { recursive: true });
}
```
```typescript
this.ensureInitialized();

const startedAt = new Date().toISOString();
const target = Array.isArray(action.target) ? action.target[0] : action.target;
```
```typescript
return {
  id: `aws-provision-${Date.now()}`,
  type: "task",
  targetNodes: [target],
```
```typescript
{
  nodeId: instanceId,
```
```typescript
const linkedNode: LinkedNode = {
  id: primaryName, // Use name as primary ID for lookups
  name: primaryName,
```
```typescript
if (db) {
  const authMiddleware = createAuthMiddleware(db, jwtSecret);
  const rbacMiddleware = createRbacMiddleware(db);

  router.use(
    "/provisioning",
    asyncHandler(authMiddleware),
    asyncHandler(rbacMiddleware('provisioning', 'read')),
    createProvisioningRouter(integrationManager)
  );
} else {
  // Fallback for cases where database is not available (e.g., tests)
  router.use("/provisioning", createProvisioningRouter(integrationManager));
}
```
```typescript
const configService = new IntegrationConfigService(
  databaseService.getConnection(),
  process.env.JWT_SECRET ?? "",
);
const authMiddleware = createAuthMiddleware(databaseService.getConnection());
```
```typescript
// for ALL HTTPS traffic, not just Proxmox. Instead, log a warning to guide the operator.
// Per-client TLS bypass via a custom HTTPS agent/dispatcher is the correct solution.
if (config.ssl && config.ssl.rejectUnauthorized === false) {
  this.logger.warn(
    "Proxmox ssl.rejectUnauthorized=false is set, but per-client TLS bypass is not yet implemented. " +
    "Proxmox connections will use the default TLS verification. " +
    "Configure a trusted CA certificate (ssl.ca) to connect to Proxmox with self-signed certs.",
    {
      component: "ProxmoxClient",
      operation: "constructor",
    }
  );
```
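The "per-client TLS bypass via a custom HTTPS agent" that this comment recommends could be sketched as follows; the option-builder name and the `ssl` config shape are assumptions, not code from the PR:

```typescript
import https from "node:https";
import { readFileSync } from "node:fs";

// Build TLS options scoped to one client, instead of disabling verification
// process-wide (e.g. via NODE_TLS_REJECT_UNAUTHORIZED). Here ssl.ca is assumed
// to be a filesystem path to a CA certificate.
function buildProxmoxAgentOptions(ssl: {
  rejectUnauthorized?: boolean;
  ca?: string;
}): https.AgentOptions {
  return {
    // Prefer a trusted CA (ssl.ca) over turning verification off entirely.
    ca: ssl.ca ? readFileSync(ssl.ca) : undefined,
    rejectUnauthorized: ssl.rejectUnauthorized !== false,
  };
}

const options = buildProxmoxAgentOptions({ rejectUnauthorized: false });
const proxmoxAgent = new https.Agent(options); // affects only this client
```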
```typescript
import { z } from "zod";
import { ZodError } from "zod";
```
Pull request overview
Adds initial provisioning/integration support (Proxmox + AWS), expands inventory/facts aggregation behavior, and refactors the database layer toward an adapter-based, migration-first approach (with optional PostgreSQL support).
Changes:
- Introduce provisioning discovery + AWS integration routes, plus Proxmox integration plugin/types/tests.
- Refactor database access behind a `DatabaseAdapter` abstraction, add SQLite/PostgreSQL adapters, and migrate schema management to `migrations/`.
- Update inventory node linking model and add multi-source facts endpoint with per-source timeouts.
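The per-source timeout pattern mentioned above can be sketched as a race between each source's fetch and a timer, so one slow backend cannot stall the aggregate response; `withTimeout` is an illustrative helper, not the PR's code:

```typescript
// Race a source's promise against a timer; if the timer wins, resolve with
// a fallback value (e.g. an empty facts object) instead of hanging.
function withTimeout<T>(promise: Promise<T>, ms: number, fallback: T): Promise<T> {
  const timer = new Promise<T>((resolve) =>
    setTimeout(() => resolve(fallback), ms),
  );
  return Promise.race([promise, timer]);
}

// Usage: gather facts from all sources in parallel, each with its own budget.
// const results = await Promise.all(
//   sources.map((s) => withTimeout(s.getFacts(nodeId), 5_000, {})),
// );
```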
Reviewed changes
Copilot reviewed 72 out of 250 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| backend/src/routes/integrations/provisioning.ts | New endpoint to list provisioning integrations + capabilities based on health state. |
| backend/src/routes/integrations/aws.ts | New AWS integration API routes (inventory, provisioning, lifecycle, region/resources discovery). |
| backend/src/routes/integrations.ts | Mount Proxmox + provisioning routers; update DB type to DatabaseAdapter for auth/RBAC wiring. |
| backend/src/routes/integrationConfig.ts | New routes to CRUD per-user integration configs with auth + RBAC. |
| backend/src/routes/facts.ts | Add /:id/facts endpoint to gather facts from all sources in parallel with timeouts. |
| backend/src/middleware/rbacMiddleware.ts | Switch RBAC middleware DB dependency to DatabaseAdapter. |
| backend/src/middleware/authMiddleware.ts | Switch auth middleware DB dependency to DatabaseAdapter. |
| backend/src/integrations/types.ts | Add ProvisioningCapability and extend capability parameter metadata with validation. |
| backend/src/integrations/proxmox/types.ts | New Proxmox integration domain types + error classes. |
| backend/src/integrations/proxmox/tests/ProxmoxIntegration.test.ts | Unit tests for ProxmoxIntegration behaviors and journaling hooks. |
| backend/src/integrations/proxmox/ProxmoxIntegration.ts | New Proxmox plugin implementing InformationSource + ExecutionTool + journaling. |
| backend/src/integrations/proxmox/ProxmoxClient.ts | New Proxmox API HTTP client with retry + task polling. |
| backend/src/integrations/aws/types.ts | New AWS integration types + AWSAuthenticationError. |
| backend/src/integrations/aws/tests/AWSPlugin.healthCheck.test.ts | Unit tests for AWSPlugin health check behavior. |
| backend/src/integrations/NodeLinkingService.ts | Extend linked nodes to preserve per-source IDs/URIs via sourceData; adjust identifier extraction. |
| backend/src/integrations/IntegrationManager.ts | Inventory now returns LinkedNode[]; add provisioning capability aggregation; add per-source inventory timeout; delegate dedup to NodeLinkingService. |
| backend/src/database/rbac-schema.sql | Remove legacy RBAC schema file (migration-first). |
| backend/src/database/migrations/009_integration_configs.sql | Add migration for integration_configs table and indexes. |
| backend/src/database/migrations/008_journal_entries.sql | Add migration for journal_entries table and indexes. |
| backend/src/database/migrations/007_permissions_and_provisioner_role.sql | Seed new permissions + Provisioner role; expand admin permissions. |
| backend/src/database/migrations/000_initial_schema.sql | Add migration header/comments for initial schema. |
| backend/src/database/migrations.sql | Remove legacy adhoc migrations file (migration-first). |
| backend/src/database/errors.ts | New typed DB error classes for query/connection failures. |
| backend/src/database/audit-schema.sql | Remove legacy audit schema file (migration-first). |
| backend/src/database/SQLiteAdapter.ts | Add SQLite DatabaseAdapter implementation (WAL + FK). |
| backend/src/database/PostgresAdapter.ts | Add Postgres DatabaseAdapter implementation with pool + transactions. |
| backend/src/database/MigrationRunner.ts | Refactor migration runner to use DatabaseAdapter and support dialect-specific migration filenames. |
| backend/src/database/ExecutionRepository.ts | Refactor repository to use DatabaseAdapter methods instead of sqlite3 callbacks. |
| backend/src/database/DatabaseService.ts | Refactor service to initialize via adapter factory, run migrations, and expose adapter. |
| backend/src/database/DatabaseAdapter.ts | Define unified DB adapter interface used by services/middleware. |
| backend/src/database/AdapterFactory.ts | Add env-driven adapter selection (sqlite default, postgres with DATABASE_URL). |
| backend/src/config/schema.ts | Add Proxmox + AWS integration configuration schemas to config model. |
| backend/src/config/ConfigService.ts | Parse Proxmox + AWS configs from environment; add getAWSConfig(). |
| backend/package.json | Bump version; add AWS SDK + pg deps; adjust build script to copy migrations directory. |
| Dockerfile.ubuntu | Update OCI image version label. |
| Dockerfile.alpine | Update OCI image version label. |
| Dockerfile | Update OCI image version label; change openbolt install step; copy full database directory; add PROXMOX/AWS env flags. |
| CLAUDE.md | Update documentation to reflect migration-first DB schema. |
| .pre-commit-config.yaml | Tighten duplicate/backup filename regex. |
| .kiro/todo/proxmox-restart-required.md | New internal note about cached server code / restart. |
| .kiro/todo/expert-mode-prototype-pollution.md | New internal note documenting prototype pollution finding. |
| .kiro/todo/REMAINING_TODOS_REPORT.md | New report summarizing remaining TODOs and test failures. |
| .kiro/steering/security-best-practices.md | Document detect-secrets allowlist pragma usage. |
| .kiro/steering/git-best-practices.md | Document pre-commit hooks and allowlist pragma usage. |
| .kiro/specs/pabawi-release-1-0-0/.config.kiro | Add spec config metadata file. |
| .kiro/specs/090/puppet-pabawi-refactoring/tasks.md | Add refactoring implementation plan doc. |
| .kiro/specs/090/puppet-pabawi-refactoring/requirements.md | Add refactoring requirements doc. |
| .kiro/specs/090/puppet-pabawi-refactoring/.config.kiro | Add spec config metadata file. |
| .kiro/specs/090/proxmox-integration/requirements.md | Add Proxmox integration requirements doc. |
| .kiro/specs/090/proxmox-integration/.config.kiro | Add spec config metadata file. |
| .kiro/specs/090/proxmox-frontend-ui/tasks.md | Add Proxmox frontend UI implementation plan doc. |
| .kiro/specs/090/proxmox-frontend-ui/requirements.md | Add Proxmox frontend UI requirements doc. |
| .kiro/specs/090/proxmox-frontend-ui/.config.kiro | Add spec config metadata file. |
| .kiro/done/proxmox-ssl-fix.md | Add internal note describing prior SSL fix approach. |
| .kiro/done/provisioning-endpoint-fix.md | Add internal note describing provisioning endpoint fix. |
| .kiro/done/node-linking-redesign.md | Add internal note describing node linking redesign. |
| .kiro/done/docker-missing-schema-files.md | Add internal note describing Docker schema copy fix. |
| .kiro/done/database-schema-cleanup-task.md | Add internal note documenting migration-first DB cleanup. |
| .kiro/database-cleanup-prompt.md | Add internal prompt describing DB cleanup task. |
| .github/copilot-instructions.md | Update project guidance to reflect migration-first schema policy. |
```diff
     build-essential \
     ansible \
-    && gem install openbolt -v 5.1.0 --no-document \
+    openbolt \
```
This line appears to execute `openbolt` during the image build rather than installing it (the previous `gem install openbolt -v ...` was removed). This will likely fail the Docker build (command not found / non-zero exit) and also makes the build non-reproducible. Install the gem explicitly (pin the version if needed) or install via the appropriate package manager, but avoid invoking the runtime binary in a RUN install step.
```ts
const eventType = this.mapActionToEventType(action.action);
const entry: CreateJournalEntry = {
  nodeId: target,
  nodeUri: `proxmox:${target}`,
```
`target` is already a Proxmox node identifier like `proxmox:pve1:100`, so ``nodeUri: `proxmox:${target}` `` produces `proxmox:proxmox:pve1:100`. This will break downstream consumers that expect a stable URI format. Use the actual node URI (e.g., from the node data) or avoid double-prefixing by storing `target` directly if it already includes the scheme/prefix.
Suggested change:

```diff
-      nodeUri: `proxmox:${target}`,
+      nodeUri: target,
```
```ts
// NOTE: Setting NODE_TLS_REJECT_UNAUTHORIZED is process-wide and would disable TLS verification
// for ALL HTTPS traffic, not just Proxmox. Instead, log a warning to guide the operator.
// Per-client TLS bypass via a custom HTTPS agent/dispatcher is the correct solution.
if (config.ssl && config.ssl.rejectUnauthorized === false) {
  this.logger.warn(
    "Proxmox ssl.rejectUnauthorized=false is set, but per-client TLS bypass is not yet implemented. " +
      "Proxmox connections will use the default TLS verification. " +
      "Configure a trusted CA certificate (ssl.ca) to connect to Proxmox with self-signed certs.",
    {
      component: "ProxmoxClient",
      operation: "constructor",
    }
  );
```
The integration accepts ssl.rejectUnauthorized=false (and ProxmoxIntegration logs that TLS verification is disabled), but the client currently does not apply any TLS override—so self-signed Proxmox setups will still fail. Either: (1) implement a supported per-request TLS configuration for Node's fetch (e.g., undici dispatcher/Agent) and ensure the dependency is present, or (2) explicitly document and/or implement the process-wide NODE_TLS_REJECT_UNAUTHORIZED=0 approach with clear warnings. Right now the config option is effectively non-functional.
Suggested change:

```diff
-// NOTE: Setting NODE_TLS_REJECT_UNAUTHORIZED is process-wide and would disable TLS verification
-// for ALL HTTPS traffic, not just Proxmox. Instead, log a warning to guide the operator.
-// Per-client TLS bypass via a custom HTTPS agent/dispatcher is the correct solution.
-if (config.ssl && config.ssl.rejectUnauthorized === false) {
-  this.logger.warn(
-    "Proxmox ssl.rejectUnauthorized=false is set, but per-client TLS bypass is not yet implemented. " +
-      "Proxmox connections will use the default TLS verification. " +
-      "Configure a trusted CA certificate (ssl.ca) to connect to Proxmox with self-signed certs.",
-    {
-      component: "ProxmoxClient",
-      operation: "constructor",
-    }
-  );
+// NOTE: Ideally we would configure TLS verification per-client (e.g. via a custom HTTPS
+// agent / undici dispatcher). Given the current architecture and constraints, we fall
+// back to the process-wide NODE_TLS_REJECT_UNAUTHORIZED mechanism when explicitly
+// requested via config.ssl.rejectUnauthorized === false.
+//
+// This is inherently dangerous because it disables TLS verification for ALL HTTPS
+// traffic in this Node.js process, not just Proxmox. We therefore:
+// - Only change NODE_TLS_REJECT_UNAUTHORIZED when it is unset;
+// - Log loud warnings about the security implications;
+// - Encourage configuring a proper CA (config.ssl.ca) instead whenever possible.
+if (config.ssl && config.ssl.rejectUnauthorized === false) {
+  const currentTlsSetting = process.env.NODE_TLS_REJECT_UNAUTHORIZED;
+  if (currentTlsSetting === undefined) {
+    // Only set the env var when it is not already defined, to avoid surprising
+    // behavior if the operator configured it explicitly at process start.
+    process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";
+    this.logger.warn(
+      "Proxmox ssl.rejectUnauthorized=false is set. NODE_TLS_REJECT_UNAUTHORIZED was not set; " +
+        "it has now been set to '0', disabling TLS certificate verification for ALL HTTPS " +
+        "connections in this process. This is insecure and should only be used in trusted " +
+        "environments. Prefer configuring a trusted CA certificate via ssl.ca whenever possible.",
+      {
+        component: "ProxmoxClient",
+        operation: "constructor",
+      }
+    );
+  } else if (currentTlsSetting === "0") {
+    // TLS verification is already globally disabled; align with that and log at info level.
+    this.logger.info(
+      "Proxmox ssl.rejectUnauthorized=false is set and NODE_TLS_REJECT_UNAUTHORIZED is already '0'. " +
+        "TLS certificate verification is disabled process-wide.",
+      {
+        component: "ProxmoxClient",
+        operation: "constructor",
+      }
+    );
+  } else {
+    // The operator explicitly configured a different global setting. Respect it, but make
+    // clear that the Proxmox option cannot override it.
+    this.logger.warn(
+      "Proxmox ssl.rejectUnauthorized=false is set, but NODE_TLS_REJECT_UNAUTHORIZED is already " +
+        `set to '${currentTlsSetting}'. The global setting will take precedence and TLS ` +
+        "certificate verification will NOT be disabled unless NODE_TLS_REJECT_UNAUTHORIZED is '0'.",
+      {
+        component: "ProxmoxClient",
+        operation: "constructor",
+      }
+    );
+  }
```
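As an alternative to the process-wide fallback above, the TLS relaxation can be scoped to this integration alone. A minimal sketch using Node's built-in `https.Agent`; the field names mirror the `config.ssl` shape in this PR, but the helper name is hypothetical:

```typescript
import { Agent } from "node:https";

// Hypothetical helper: build an HTTPS agent scoped to the Proxmox client so
// relaxed verification never leaks to other HTTPS traffic in the process.
function buildProxmoxAgent(ssl?: { rejectUnauthorized?: boolean; ca?: Buffer }): Agent {
  return new Agent({
    // Default to verifying certificates; only relax when explicitly disabled.
    rejectUnauthorized: ssl?.rejectUnauthorized !== false,
    ca: ssl?.ca, // trusted CA bundle, when configured
  });
}
```

The agent would then be passed per request (e.g. `https.request({ agent, ... })`); for Node's global `fetch`, the equivalent is an undici `Agent` passed via the `dispatcher` option.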
```ts
await this.db.execute(
  "INSERT INTO migrations (id, name, appliedAt) VALUES (?, ?, ?)",
  [migration.id, migration.filename, now]
);
```
This hardcodes SQLite-style ? placeholders. When DB_TYPE=postgres, this insert will fail because pg expects $1/$2/$3. Since DatabaseAdapter already exposes getPlaceholder(index), build the VALUES placeholder list using that API (or provide an adapter-level helper) so migration tracking works across dialects.
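One way to keep the existing SQL strings while supporting both dialects is a small rewrite helper driven by the adapter's `getPlaceholder`. A sketch under the assumption that `?` only ever appears as a placeholder (never inside a string literal); `toDialectPlaceholders` is a hypothetical name, not an existing API:

```typescript
// Rewrite SQLite-style "?" placeholders into the active dialect's positional
// form, using the adapter-provided placeholder function.
type PlaceholderFn = (index: number) => string;

function toDialectPlaceholders(sql: string, getPlaceholder: PlaceholderFn): string {
  let index = 0;
  // Naive scan: assumes "?" is always a placeholder in these queries.
  return sql.replace(/\?/g, () => getPlaceholder(++index));
}

// SQLite keeps "?": identity mapping.
const sqlite = toDialectPlaceholders(
  "INSERT INTO migrations (id, name, appliedAt) VALUES (?, ?, ?)",
  () => "?"
);

// Postgres maps the n-th "?" to "$n".
const postgres = toDialectPlaceholders(
  "INSERT INTO migrations (id, name, appliedAt) VALUES (?, ?, ?)",
  (i) => `$${i}`
);
```

This keeps the repositories' SQL untouched while the adapter decides the placeholder syntax.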
```ts
router.get(
  "/:id/facts",
```
Express route matching is order-dependent. If this router already defines a more general /:id (or similar) route earlier in the file, then /:id will also match /<nodeId>/facts and prevent this handler from running. To avoid accidental shadowing, register /:id/facts before any /:id-style routes, or consider a less ambiguous path shape (e.g., /:id/facts/all or a non-overlapping prefix).
```ts
const timeoutPromise = new Promise<never>((_, reject) =>
  setTimeout(
    () => reject(new Error(`Timeout after ${SOURCE_TIMEOUT_MS}ms`)),
    SOURCE_TIMEOUT_MS,
  ),
);
```
The per-source timeout uses setTimeout(...) but the timer is never cleared when source.getNodeFacts() resolves first. This creates unnecessary pending timers under load (they still fire later, even though the race is already decided). Capture the timeout handle and clearTimeout(...) in a finally block around the raced await to keep the event loop/timer queue clean.
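A minimal sketch of the cleared-timer pattern, with a generic `withTimeout` wrapper standing in for the raced `source.getNodeFacts()` call (the wrapper name is illustrative):

```typescript
// Race work against a timeout, always clearing the timer once either side
// settles so no stray timers accumulate under load.
async function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeoutPromise = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timeout after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([work, timeoutPromise]);
  } finally {
    clearTimeout(timer); // runs whether work resolved, rejected, or timed out
  }
}
```

Usage would look like `await withTimeout(source.getNodeFacts(nodeId), SOURCE_TIMEOUT_MS)`.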
```ts
const timeoutPromise = new Promise<never>((_, reject) =>
  setTimeout(
    () => reject(new Error(`Source '${name}' timed out after ${String(SOURCE_TIMEOUT_MS)}ms`)),
    SOURCE_TIMEOUT_MS,
  ),
);
```
Similar to the facts endpoint, this timeout promise leaves a live timer even if the source returns quickly. In inventory polling scenarios across many sources/users, this can create avoidable timer churn and log noise. Consider clearing the timeout in a finally once either branch wins the race (or switch to an AbortController-based approach so the underlying request can be cancelled where supported).
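Where cancelling the underlying work matters, `AbortSignal.timeout` (Node 17.3+) lets the runtime manage the timer. A sketch with an illustrative abortable delay standing in for a signal-aware source request:

```typescript
// Illustrative abortable operation: resolves after ms unless the signal fires
// first, in which case the pending timer is cleaned up immediately.
function abortableDelay(ms: number, signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve("done"), ms);
    signal.addEventListener("abort", () => {
      clearTimeout(timer); // no stray timer left behind
      reject(new Error("aborted"));
    });
  });
}
```

The same `signal` can be forwarded to any API that supports it (e.g. `fetch`), so the request itself is cancelled rather than merely abandoned.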
…sment - Add Azure support design and requirements specifications - Add ManageTab lifecycle flow analysis documenting end-to-end error tracing - Add missing lifecycle actions bugfix documentation and Kiro config - Add workspace analysis summary and product/tech steering documents - Update ProxmoxClient with improved integration handling - Update inventory routes with enhanced node lifecycle actions support - Update ManageTab component with better state management - Update API client with refined lifecycle actions fetching - Update NodeDetailPage with improved node detail rendering - Add security assessment documentation for v0.10.0
```diff
-// Fetch nodes and groups in parallel
+// Fetch nodes and groups in parallel with per-source timeout
+// Prevents a single slow source from blocking the entire inventory
+const SOURCE_TIMEOUT_MS = 15_000;
```
The timeoutPromise created with setTimeout() is never cleared when Promise.all(...) finishes first. This can lead to unhandled rejections firing after the race is already resolved (and also leaks timers). Use a cancellable timeout (store the timer handle + clearTimeout in a finally), or use AbortController/signal-aware APIs for cancellation to avoid late rejections.
```ts
const timeoutPromise = new Promise<never>((_, reject) =>
  setTimeout(
    () => reject(new Error(`Source '${name}' timed out after ${String(SOURCE_TIMEOUT_MS)}ms`)),
    SOURCE_TIMEOUT_MS,
  ),
);

const [nodes, groups] = await Promise.race([
  Promise.all([
    source.getInventory(),
    source.getGroups().catch((error: unknown) => {
```
The timeoutPromise created with setTimeout() is never cleared when Promise.all(...) finishes first. This can lead to unhandled rejections firing after the race is already resolved (and also leaks timers). Use a cancellable timeout (store the timer handle + clearTimeout in a finally), or use AbortController/signal-aware APIs for cancellation to avoid late rejections.
```diff
 private async recordMigration(migration: MigrationFile): Promise<void> {
   const now = new Date().toISOString();

-  return new Promise((resolve, reject) => {
-    this.db.run(
-      "INSERT INTO migrations (id, name, appliedAt) VALUES (?, ?, ?)",
-      [migration.id, migration.filename, now],
-      (err) => {
-        if (err) {
-          reject(new Error(`Failed to record migration: ${err.message}`));
-        } else {
-          resolve();
-        }
-      }
-    );
-  });
+  await this.db.execute(
+    "INSERT INTO migrations (id, name, appliedAt) VALUES (?, ?, ?)",
+    [migration.id, migration.filename, now]
+  );
 }
```
MigrationRunner uses hardcoded SQLite-style ? placeholders, which will fail when the active adapter is Postgres (expects $1, $2, ...). Since DatabaseAdapter already exposes getPlaceholder(index), build this SQL using adapter placeholders (or have the adapter expose a helper to interpolate placeholders) so migrations can be recorded on both dialects.
```diff
@@ -223,7 +223,7 @@ export class ExecutionRepository {
       const sql = `UPDATE executions SET ${updateFields.join(", ")} WHERE id = ?`;
```
ExecutionRepository still embeds SQLite ? placeholders in queries. After switching the repository to DatabaseAdapter, this will break under the Postgres adapter. Consider centralizing placeholder generation (e.g., a small helper that replaces ? with adapter.getPlaceholder(i) sequentially), or rewrite queries to use adapter-generated placeholders directly.
```diff
@@ -247,7 +247,7 @@ export class ExecutionRepository {
       const sql = "SELECT * FROM executions WHERE id = ?";
```
ExecutionRepository still embeds SQLite ? placeholders in queries. After switching the repository to DatabaseAdapter, this will break under the Postgres adapter. Consider centralizing placeholder generation (e.g., a small helper that replaces ? with adapter.getPlaceholder(i) sequentially), or rewrite queries to use adapter-generated placeholders directly.
```ts
const linkedNode: LinkedNode = {
  id: primaryName, // Use name as primary ID for lookups
  name: primaryName,
  uri: node.uri, // Will be overwritten with combined URIs
  transport: node.transport,
  config: node.config,
  sources: [],
  linked: false,
  sourceData: {},
};
```
The previous implementation spread ...node into linkedNode, but the new object literal no longer carries over other Node fields (notably source, which is present in several integration nodes and often used downstream for UI labels/logic). If Node.source is required/relied on elsewhere, this is a behavioral regression. Either preserve the original spread (...node) and then overwrite id/name as needed, or explicitly set a stable top-level source (e.g., highest-priority source / first source) while still keeping sources + sourceData for attribution.
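A minimal sketch of the spread-based variant suggested above; the `Node` shape is abbreviated here, with only `source` shown as the extra field that must survive:

```typescript
// Abbreviated shape for illustration; the real Node type carries more fields
// (transport, config, ...), all of which the spread preserves.
type Node = { id: string; name: string; uri: string; source?: string };

function toLinkedNode(node: Node, primaryName: string) {
  return {
    ...node,          // preserves source and any other Node fields
    id: primaryName,  // use name as primary ID for lookups
    name: primaryName,
    sources: [] as string[],
    linked: false,
    sourceData: {} as Record<string, unknown>,
  };
}
```

Because the spread runs first, the explicit properties below it still win for `id` and `name`, while everything else carries over untouched.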
```ts
// Build a Response-compatible object so the rest of the client code is unchanged
const response = new Response(bodyText, {
  status: res.statusCode ?? 500,
  statusText: res.statusMessage ?? "",
  headers: new Headers(res.headers as Record<string, string>),
```
res.headers values can be string | string[] | undefined in Node, and new Headers(...) expects an iterable/record of string values. Casting to Record<string, string> can hide runtime failures (e.g., Set-Cookie arrays) or produce incorrect header values. Normalize header entries (join arrays, skip undefined) before constructing Headers to avoid runtime exceptions and preserve correct semantics.
Suggested change:

```diff
-// Build a Response-compatible object so the rest of the client code is unchanged
-const response = new Response(bodyText, {
-  status: res.statusCode ?? 500,
-  statusText: res.statusMessage ?? "",
-  headers: new Headers(res.headers as Record<string, string>),
+// Normalize Node.js IncomingHttpHeaders (string | string[] | undefined) to string values
+const headerEntries: [string, string][] = [];
+for (const [key, value] of Object.entries(res.headers)) {
+  if (typeof value === "string") {
+    headerEntries.push([key, value]);
+  } else if (Array.isArray(value)) {
+    headerEntries.push([key, value.join(", ")]);
+  }
+  // Skip undefined values
+}
+// Build a Response-compatible object so the rest of the client code is unchanged
+const response = new Response(bodyText, {
+  status: res.statusCode ?? 500,
+  statusText: res.statusMessage ?? "",
+  headers: new Headers(headerEntries),
```
- Add ALLOW_DESTRUCTIVE_PROVISIONING environment variable to control whether destructive provisioning actions (destroy VM/LXC, terminate EC2) are allowed - Add ProvisioningConfig schema and provisioning safety configuration to ConfigService - Add isDestructiveProvisioningAllowed() method to ConfigService for checking if destructive actions are enabled - Add /api/config/provisioning endpoint to expose provisioning safety settings to frontend - Add allowDestructiveActions guard to AWS EC2 terminate endpoint, returning 403 DESTRUCTIVE_ACTION_DISABLED when disabled - Add allowDestructiveActions guard to Proxmox destroy endpoints, returning 403 when disabled - Pass provisioning safety options through integration routers (AWS, Proxmox) from main integrations router - Update ManageTab.svelte frontend to respect provisioning safety configuration - Update configuration documentation and provisioning guide with safety settings - Update setup.sh to include provisioning safety configuration - Defaults to false (destructive actions disabled) for safer out-of-box experience
- Add AWS_REGIONS configuration to support querying multiple regions via JSON array or comma-separated string - Update ConfigService to parse and validate multi-region configuration - Add regions field to AWSConfigSchema for type safety - Improve AWS credential validation to support default credential chain (env vars, ~/.aws/credentials, instance profile) - Remove requirement for explicit credentials or profile when using default chain - Update AWSPlugin health check to include regions in diagnostic output - Fix NodeLinkingService to skip AWS URIs when extracting hostname identifiers - Add source field to linked nodes for backward compatibility with code reading node.source - Update .env.example with comprehensive AWS and Proxmox configuration documentation - Clean up test database temporary files (test-migration.db-shm, test-migration.db-wal) - Update integration and performance tests to reflect configuration changes
Pull request overview
Copilot reviewed 73 out of 270 changed files in this pull request and generated 5 comments.
Comments suppressed due to low confidence (1)
Dockerfile:1
- This changes OpenBolt installation from a pinned RubyGems install to an `apt-get install openbolt`. On Debian/Ubuntu, `openbolt` is unlikely to exist as an apt package (and even if it does, it drops the previously pinned version). This is likely to break image builds or change behavior unexpectedly. Consider restoring `gem install openbolt -v <pinned>` (or verify the apt package exists and pin its version explicitly).
# Stage 1: Build frontend with Vite
```ts
const eventType = this.mapActionToEventType(action.action);
const entry: CreateJournalEntry = {
  nodeId: target,
  nodeUri: `proxmox:${target}`,
```
Same issue as in recordJournal(): nodeUri will become proxmox:proxmox:... when target is already prefixed. Align nodeUri with the canonical URI format used elsewhere (or use target directly if that’s what consumers expect).
Suggested change:

```diff
-      nodeUri: `proxmox:${target}`,
+      nodeUri: target,
```
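If callers cannot guarantee whether `target` already carries the scheme, an idempotent guard avoids the double prefix either way (the helper name is hypothetical):

```typescript
// Only prepend the scheme when the identifier does not already carry it,
// so repeated calls (or already-prefixed inputs) stay stable.
function toProxmoxUri(target: string): string {
  return target.startsWith("proxmox:") ? target : `proxmox:${target}`;
}
```

This keeps both call sites (`recordJournal()` and this one) safe regardless of which identifier format they receive.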
```ts
await this.db.execute(
  "INSERT INTO migrations (id, name, appliedAt) VALUES (?, ?, ?)",
```
MigrationRunner now supports Postgres via DatabaseAdapter, but it still hardcodes SQLite ? placeholders. This will fail with pg which expects $1, $2, .... Use this.db.getPlaceholder(1..n) to build dialect-correct SQL (or expose an adapter helper for parameterized INSERTs).
Suggested change:

```diff
-await this.db.execute(
-  "INSERT INTO migrations (id, name, appliedAt) VALUES (?, ?, ?)",
+const placeholders = [
+  this.db.getPlaceholder(1),
+  this.db.getPlaceholder(2),
+  this.db.getPlaceholder(3),
+].join(", ");
+await this.db.execute(
+  `INSERT INTO migrations (id, name, appliedAt) VALUES (${placeholders})`,
```
```diff
   try {
-    await this.run(sql, params);
+    await this.db.execute(sql, params);
```
ExecutionRepository has been switched to DatabaseAdapter, but the repository SQL (e.g., WHERE id = ?, VALUES (?, ?, ...)) is still SQLite-style. This will break when DB_TYPE=postgres. To truly support both dialects, build queries using db.getPlaceholder(n) and avoid SQLite-specific syntax (or provide a small query builder that emits the right placeholders).
```ts
getAllProvisioningCapabilities(): Array<{
  source: string;
  capabilities: Array<{
    name: string;
    description: string;
    operation: "create" | "destroy";
    parameters: Array<{
      name: string;
      type: string;
      required: boolean;
      default?: unknown;
    }>;
  }>;
}> {
```
This duplicates the provisioning capability shape inline instead of reusing the existing ProvisioningCapability (and parameter) types. That increases the chance of drift between plugins and the manager. Prefer returning Array<{ source: string; capabilities: ProvisioningCapability[] }> (or a shared DTO type) so changes to capability contracts are type-checked centrally.
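A sketch of the shared-type variant; the interface names below are assumptions based on the inline literal above, not the project's actual exports:

```typescript
// Shared DTO types so plugins and the manager stay in sync; changing a
// capability field here is type-checked everywhere it is used.
interface ProvisioningParameter {
  name: string;
  type: string;
  required: boolean;
  default?: unknown;
}

interface ProvisioningCapability {
  name: string;
  description: string;
  operation: "create" | "destroy";
  parameters: ProvisioningParameter[];
}

interface SourceCapabilities {
  source: string;
  capabilities: ProvisioningCapability[];
}

// Example value conforming to the shared shape.
const example: SourceCapabilities = {
  source: "proxmox",
  capabilities: [
    { name: "create_vm", description: "Create a VM", operation: "create", parameters: [] },
  ],
};
```

The manager method would then be declared as `getAllProvisioningCapabilities(): SourceCapabilities[]`, eliminating the duplicated inline literal.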
```ts
private getMigrationFiles(): MigrationFile[] {
  const dialect = this.db.getDialect();

  try {
    const files = readdirSync(this.migrationsDir);
```
The migration selection logic is now dialect-aware (shared vs *.sqlite.sql / *.postgres.sql precedence) and is critical for correctness, but there are no tests shown covering: (1) precedence when both shared + dialect-specific exist, (2) ignoring the other dialect, and (3) stable ordering. Adding unit tests for getMigrationFiles()/getPendingMigrations() would help prevent subtle migration regressions.
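The precedence rule is easiest to pin down with tests once it is extracted as a pure function. A sketch under the assumptions described in this review (`selectMigrations` is a hypothetical stand-in for the logic inside `getMigrationFiles()`):

```typescript
// Dialect-aware selection: a dialect-specific file (001_x.postgres.sql)
// overrides the shared 001_x.sql, files for the other dialect are ignored,
// and the result is returned in stable sorted order.
function selectMigrations(files: string[], dialect: "sqlite" | "postgres"): string[] {
  const byBase = new Map<string, string>();
  for (const f of [...files].sort()) {
    const m = /^(.*?)(?:\.(sqlite|postgres))?\.sql$/.exec(f);
    if (!m) continue;
    const [, base, fileDialect] = m;
    if (fileDialect && fileDialect !== dialect) continue; // other dialect: skip
    if (fileDialect === dialect || !byBase.has(base)) {
      byBase.set(base, f); // dialect-specific wins over shared
    }
  }
  return [...byBase.values()].sort();
}
```

Unit tests over this function cover all three cases the review calls out: precedence, exclusion of the other dialect, and ordering.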
- Remove source-specific prefixes (ansible:, bolt:) from node identifiers in AnsibleService and BoltPlugin - Replace node.id with node.name in AWS region, VPC, and tag grouping methods for consistency - Update Proxmox service to use node.name instead of node.id across all grouping operations - Add dual-mode AMI input to AWSProvisionForm with search and direct ID entry options - Implement switchAMIMode and onDirectAMIInput functions for flexible AMI selection - Remove redundant describeAllInstances method from AWSService - Ensures consistent node naming convention across all integration services and improves user experience with flexible AMI input methods
…lution - Add comprehensive AWS instance metadata collection including CPU options, network interfaces, and hardware details - Implement node ID resolution in AWSPlugin.getNodeFacts() to support both aws:region:instanceId format and node name/id lookup - Expand command whitelist with system inspection commands and change matching mode from exact to prefix - Add JournalCollectors service for centralized journal event collection - Update IntegrationManager.getLinkedInventory() to support cache control parameter - Remove terminate_instance action from AWS provisioning operations for safety - Fix AWS availability zone regex pattern to correctly extract region identifier - Enhance network interface data collection with DNS names and detailed NIC information - Add instance name tag extraction and platform metadata to AWS facts - Update frontend components to reflect improved inventory and journal data handling
```diff
 const sql = `UPDATE executions SET ${updateFields.join(", ")} WHERE id = ?`;

 try {
-  await this.run(sql, params);
+  await this.db.execute(sql, params);
```
The repository SQL uses SQLite-style ? placeholders, but pg (PostgreSQL) requires $1, $2, ... placeholders. With DB_TYPE=postgres, these queries will fail at runtime. A tangible fix is to either (a) update repositories to build SQL using this.db.getPlaceholder(i) for each parameter position, or (b) implement a translation layer in PostgresAdapter that converts ? placeholders to $n before executing queries (keeping existing SQL strings unchanged).
```ts
if (config.ssl.ca) {
  try {
    agentOptions.ca = readFileSync(config.ssl.ca);
  } catch (err) {
    this.logger.error("Failed to read Proxmox SSL CA file", {
      component: "ProxmoxClient",
      operation: "constructor",
      metadata: { path: config.ssl.ca },
    }, err instanceof Error ? err : undefined);
  }
}
```
If a CA/cert/key path is configured but cannot be read, the client currently logs and continues. That can lead to surprising TLS failures later (or unintentionally falling back to system trust store), and makes misconfiguration harder to detect. Consider failing fast (throwing) when an explicitly configured TLS file cannot be loaded—especially when rejectUnauthorized is true—so the integration surfaces a clear initialization error.
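A sketch of the fail-fast variant: when the operator explicitly configured a CA path, a read failure becomes a construction-time error instead of a silently degraded client (the helper name is illustrative):

```typescript
import { readFileSync } from "node:fs";

// Throw at initialization when an explicitly configured TLS file is
// unreadable, so misconfiguration surfaces immediately rather than as a
// confusing TLS handshake failure later.
function loadCaOrThrow(caPath: string): Buffer {
  try {
    return readFileSync(caPath);
  } catch (err) {
    const reason = err instanceof Error ? err.message : String(err);
    throw new Error(`Failed to read Proxmox SSL CA file '${caPath}': ${reason}`);
  }
}
```

The constructor would call this only when `config.ssl.ca` is set, so the default (no custom CA) path is unaffected.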
```ts
private async requestWithRetry(
  method: string,
  endpoint: string,
  data?: unknown
): Promise<unknown> {
```
This PR adds substantial new behavior (retry/backoff rules, 429 handling, ticket refresh retry, task polling, and custom TLS handling). There are unit tests for ProxmoxIntegration, but no tests here covering ProxmoxClient’s retry/authentication and HTTP error mapping logic. Adding focused tests for requestWithRetry() (including 4xx no-retry, 429 retryAfter sleep, auth refresh) and waitForTask() (OK/fail/timeout) would help prevent regressions.
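The retry rules described here (no retry on 4xx, 429 honoring Retry-After, backoff otherwise) become trivially testable once isolated as a pure policy function; a standalone sketch, not the actual ProxmoxClient code:

```typescript
// Decide the delay before the next attempt, or null to stop retrying.
// Rules assumed from this review: success and non-429 4xx stop; 429 honors
// Retry-After; everything else gets exponential backoff.
type Attempt = { status: number; retryAfterMs?: number };

function nextDelayMs(attempt: Attempt, tryNumber: number, baseMs = 200): number | null {
  if (attempt.status >= 200 && attempt.status < 300) return null;    // success: stop
  if (attempt.status === 429) return attempt.retryAfterMs ?? baseMs; // honor Retry-After
  if (attempt.status >= 400 && attempt.status < 500) return null;    // other 4xx: no retry
  return baseMs * 2 ** (tryNumber - 1);                              // 5xx: backoff
}
```

Driving `requestWithRetry()` from a function like this lets the unit tests assert the policy table directly, with the HTTP layer mocked separately.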