Builder containers are started with the --rm flag, which automatically removes them when stopped. The explicit docker rm -f is redundant and adds unnecessary steps to deployment logs.
This change adds a skipRemove parameter to graceful_shutdown_container() and sets it to true for builder container shutdowns (uuid-based) while keeping the default behavior for application containers.
Fixes #7566
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
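A minimal sketch of the described parameter, with a hypothetical `$run` callable standing in for Coolify's SSH execution layer (the real `graceful_shutdown_container()` signature differs):

```php
<?php

// Sketch only: $run stands in for the real remote-execution helper.
function graceful_shutdown_container(
    string $containerName,
    callable $run,
    int $timeout = 30,
    bool $skipRemove = false,
): void {
    $name = escapeshellarg($containerName);

    // `docker stop -t` sends SIGTERM, then SIGKILL after the timeout.
    $run("docker stop -t {$timeout} {$name}");

    // Builder containers run with --rm and remove themselves on stop,
    // so their (uuid-based) shutdowns pass skipRemove: true.
    if (! $skipRemove) {
        $run("docker rm -f {$name}");
    }
}
```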
Helper containers are started with the --rm flag, which automatically removes the container when it stops. Removed redundant docker rm commands from graceful_shutdown_container in ApplicationDeploymentJob and replaced docker rm with docker stop in DatabaseBackupJob.
🤖 Generated with Claude Code
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
- Remove per-line x-effect directives that re-evaluated for every log line during polling
- Replace with efficient applySearch() function that updates logs once after Livewire morph
- Remove unnecessary caching mechanisms (renderTrigger, decodeCache, matchCountCache)
- Remove double HTML encoding of log lines (e() + Blade escaping)
- Add decodeHtml() helper to properly decode HTML entities from data attributes
- Use morph.updated hook instead of commit hook for efficient DOM updates
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
- Create new Server/Swarm.php Livewire component and view for Swarm configuration
- Create new Server/Sentinel.php Livewire component and view for Sentinel settings
- Add server.swarm and server.sentinel routes
- Move Swarm and Sentinel sections from General page to sidebar menu items
- Improve organization by separating concerns into dedicated pages
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
- Use atomic update pattern in backend: collect logs into temp variable
before replacing outputs (prevents empty state flash)
- Remove per-line x-effect directives that caused 4000+ reactive
evaluations on every update
- Replace inline Alpine.js class bindings with CSS utility classes
- Use single $watch and morph.updated hook instead of renderTrigger
- Remove HTML entity decode cache (no longer needed)
- Fix search highlight padding that caused text shifting
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add missing edge case test for root path (/) and expand test coverage
for preview FQDN generation. Tests verify that ports and paths are
correctly preserved in preview URLs while excluding root paths.
Fixes #2184
Add copy-to-clipboard functionality for both deployment logs and runtime container logs with success notification. Fixed event dispatch to use Livewire.dispatch() for proper toast notification handling. Reorganized copy and download buttons to appear consecutively in runtime logs toolbar.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
The Resources tab threw a "Queueing collections with multiple model types is not supported" error because the Livewire component was storing a mixed-type Eloquent collection (Applications, Databases, Services) as a public property, causing Livewire's serialization to fail.
Fixed by: storing only the unmanaged containers array in the component, and calling definedResources() directly in the Blade view for the managed tab.
Fixes #7666
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
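A sketch of the resulting component shape; the class and method names here are illustrative, not the actual Coolify code:

```php
<?php

use Livewire\Component;

class ResourcesTab extends Component
{
    public $server; // single Eloquent model: Livewire dehydrates it by key

    // A plain array serializes cleanly; the mixed-type Eloquent collection
    // previously stored here is what triggered the queueing error.
    public array $unmanagedContainers = [];

    public function loadUnmanagedContainers(): void
    {
        // Method name illustrative: fetch containers, keep raw arrays only.
        $this->unmanagedContainers = $this->server->unmanagedContainers()->toArray();
    }

    // Called from the Blade view as $this->definedResources(), so the mixed
    // Applications/Databases/Services collection is computed per render and
    // never stored on (or serialized by) the component.
    public function definedResources()
    {
        return $this->server->definedResources();
    }
}
```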
- Consolidate duplicate restart tracking logic in GetContainersStatus
- Add last_restart_type string cast to all 8 standalone database models
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix status flickering: Track databases in active/transient states (restarting, starting, created, paused), not just running
- Add isActiveOrTransient() helper to distinguish between active states and terminal states (exited, dead)
- Add safeguard: Protect updateNotFoundDatabaseStatus() from marking as exited when containers collection is empty
- Add restart_count tracking: New migration adds restart_count, last_restart_at, last_restart_type to all standalone database tables
- Update 8 database models with $casts for new restart tracking fields
- Update GetContainersStatus to extract RestartCount from Docker and update database models
- Reset restart tracking when database exits completely
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
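A sketch of the state helper, assuming the state list given in the bullets above:

```php
<?php

// Transient states keep the stored status untouched so the UI does not
// flicker to "exited" while a container is restarting or starting up.
function isActiveOrTransient(string $containerState): bool
{
    return in_array($containerState, [
        'running',
        'restarting',
        'starting',
        'created',
        'paused',
    ], true);
}

// Terminal states ('exited', 'dead') return false and may trigger the
// normal "database exited" handling, including the restart-tracking reset.
```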
Allow users to select the PostgreSQL version instead of automatically creating postgres:16-alpine when using global search. The fix includes:
- Remove hardcoded database_image parameter from GlobalSearch
- Update Create.php to fall through to Select component when database_image is not provided
- Add type and destination to Select component query string with proper URL mapping
- Jump directly to PostgreSQL version selection step when navigating from global search
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
Implement instance-wide SPA navigation toggle that enables smooth page transitions with prefetching on hover. Excludes terminal links which require full page lifecycle for WebSocket connections. Adds defensive checks to global-search component for SPA navigation compatibility.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
- Set modal width to consistent 48rem for both upgrade states
- Remove max-width constraint from progress stepper
- Add dev mode with Simulate button for local testing
- Simulate cycles through all upgrade steps with 2-second delays
- Removed unnecessary SVG icons from the environment edit view for cleaner UI.
- Deleted the environment select component as it was no longer needed.
- Enhanced the project resource index view with a dropdown for environments and resources, improving navigation.
- Implemented dynamic dropdowns for environments and their associated resources, allowing for better user interaction.
- Added transitions and hover effects for a more responsive design.
- Updated the layout to ensure a consistent user experience across different project resources.
Add a copy button to individual container logs that strips sensitive
data before copying to clipboard. Includes sanitization for emails,
database URLs with passwords, JWT tokens, API keys, private key blocks,
and git access tokens.
- Eager load service applications and databases to eliminate N+1 queries
- Replace individual model updates with batch database updates for applications, previews, and services
- Move connectProxyToNetworks to async ConnectProxyToNetworksJob to avoid blocking status updates
- Optimize Server.databases() and applications() methods with efficient database queries
- Use flatMap for cleaner collection transformations
🤖 Generated with Claude Code
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
- Add model saving event to trim key/secret fields (encrypted casts)
- Add attribute mutators to trim endpoint, bucket, region fields
- Create migration to fix existing S3 storage records with whitespace
- Use chunking in migration to handle large datasets efficiently
- Verify re-encryption validity before committing changes
Fixes #7469, #7594
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
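A sketch of the two trimming mechanisms, with field names taken from the commit text (the model class and cast details are assumed):

```php
<?php

use Illuminate\Database\Eloquent\Model;

class S3Storage extends Model
{
    protected static function booted(): void
    {
        // key/secret use encrypted casts, so trim them in a saving event
        // right before persistence rather than via attribute mutators.
        static::saving(function (self $storage) {
            $storage->key = trim((string) $storage->key);
            $storage->secret = trim((string) $storage->secret);
        });
    }

    // Plain string fields can use ordinary mutators instead.
    public function setEndpointAttribute(?string $value): void
    {
        $this->attributes['endpoint'] = $value === null ? null : trim($value);
    }
}
```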
- Prevent text selection from being cleared when logs are re-rendered during polling
- Preserve fullscreen state when toggling debug logs or other Livewire updates
- Fix log filtering to properly apply when debug mode is toggled
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
- Fix null-safe operator on currentTeam() call in Upgrade.php
- Add --rm flag to docker run in upgrade.sh for cleanup consistency
- Store beforeunload handler as named reference and clean up on success
- Add clarifying comments for upgrade method calls
- Add error state handling with close option in upgrade modal
- Add step mapping documentation comment in upgrade-progress component
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When build server is enabled, $this->server points to the build server.
The log drain configuration check was using $this->server which would
incorrectly check the build server's settings instead of the deployment
server where the container actually runs.
This fix ensures log drain configuration is correctly applied based on
the deployment server's settings by using $this->original_server.
- Remove /api/upgrade-status endpoint and route
- Add getUpgradeStatus() method to Upgrade Livewire component
- Update frontend to call Livewire method via $wire.getUpgradeStatus()
- Simpler approach: no separate API, auth handled by Livewire
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Temporary debug fields added to identify why status returns 'none'
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The status file is on the host filesystem, not inside the container.
Use instant_remote_process() to read the file via SSH to Server::find(0).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add auth:sanctum middleware to /api/upgrade-status route
- Check user belongs to root team (id 0) before returning status
- Return 403 if user is not authorized
- Update frontend to send credentials with fetch request
- Update OpenAPI docs with 401/403 responses
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Delete status file 10 seconds after upgrade completes
- Reduce stale timeout from 30 to 10 minutes
- Remove timestamp from API response (internal detail)
- Treat timestamp parse failures as stale for security
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- upgrade.sh now writes status to /data/coolify/source/.upgrade-status
- New /api/upgrade-status endpoint reads status file for real progress
- Frontend polls status API instead of simulating progress
- Falls back to health check when service goes down during restart
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add step-by-step progress indicator (Preparing → Helper → Image → Restart)
- Display elapsed time during upgrade (MM:SS format)
- Show version transition in header (v4.0.0-beta.454 → v4.0.0-beta.456)
- Add expandable changelog preview before upgrading
- Reduce reload delay from 5s to 3s with countdown timer
- Add "Reload Now" button to skip countdown
- Improve status messages with step-specific descriptions
- Add success state with clear indication when upgrade completes
- Create new upgrade-progress component for visual step tracking
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
- Show clear progress with numbered steps (1/6 through 6/6)
- Display header and footer banners
- Show individual image pull progress
- Show which containers are being stopped
- Display final success message with version and log location
- Keep detailed logging to file for debugging
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
OAuth users don't have passwords set, so they should not be prompted for password confirmation when performing destructive actions. This fix:
- Detects OAuth users via the hasPassword() method
- Skips password confirmation in modal for OAuth users
- Keeps text name confirmation as the final step
- Centralizes logic in helper functions for maintainability
- Changes button text to "Confirm" when password step is skipped
Fixes #4457
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
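A sketch of the branching, where hasPassword() is the method named in the commit and the surrounding variable names are illustrative:

```php
<?php

$user = auth()->user();

// OAuth-only users have no password to confirm.
$requiresPassword = $user->hasPassword();

$steps = $requiresPassword
    ? ['password', 'name-confirmation']  // password first, then type the name
    : ['name-confirmation'];             // OAuth users go straight to the name step

$buttonText = $requiresPassword ? 'Delete' : 'Confirm';
```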
Filter webhook-triggered deployments by source_id to ensure only applications
associated with the GitHub App that sent the webhook are deployed, preventing
duplicate deployments when the same repository is configured in multiple teams.
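A sketch of the added constraint, assuming Eloquent model and column names from the commit text:

```php
<?php

use App\Models\Application;

// Without the source_id filter, a push webhook would match every
// application cloned from the same repository across all teams.
$applications = Application::query()
    ->where('git_repository', $repository)
    ->where('git_branch', $branch)
    // Only deploy applications attached to the GitHub App that sent this webhook.
    ->where('source_id', $githubApp->id)
    ->get();
```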
Test emails should work with any recipient email address for verification purposes, not just team members' addresses. Added an isTestNotification flag to both Test notification classes and modified EmailChannel to skip team membership validation for test notifications.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
Allow manually-added servers to be linked to Hetzner Cloud instances by
matching IP address. Once linked, servers gain power controls and status
monitoring.
Changes:
- Add getServers() and findServerByIp() methods to HetznerService
- Add Hetzner linking UI section to Server General page
- Add unit tests for new HetznerService methods
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add isServiceResource() and shouldBeReadOnlyInUI() to LocalFileVolume
- Update path matching to handle leading slashes in volume comparisons
- Update FileStorage and Show components to use shouldBeReadOnlyInUI()
- Show consolidated warning message for service/compose resources in all.blade.php
- Remove redundant per-volume warnings for service resources
- Clean up configuration.blade.php formatting
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Use relationLoaded() check before accessing the application relationship
to avoid triggering individual queries for each volume when rendering
storage lists. Update Storage.php to eager load the relationship.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
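A sketch of the guard plus the eager load (the component-side query shape is assumed):

```php
<?php

use App\Models\LocalFileVolume;

// Never lazy-load inside a render loop: fall back instead of issuing
// one query per volume for the `application` relation.
$applicationName = $volume->relationLoaded('application')
    ? $volume->application?->name
    : null;

// In Storage.php, load the relation once up front.
$volumes = LocalFileVolume::with('application')->get();
```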
- Fixed isReadOnlyVolume() to detect both short-form (:ro) and long-form (read_only: true) Docker Compose volume syntax
- Fixed path matching to use mount_path only (fs_path is transformed during parsing from ./file to absolute path)
- Added "Load from server" button for read-only volumes to allow users to refresh content
- Changed loadStorageOnServer() authorization from 'update' to 'view' since loading is a read operation
- Added helper text to Content field warning users that content may be outdated
- Applied fixes to both LocalFileVolume and LocalPersistentVolume models
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The current versions of the infrastructure images (coolify-helper, coolify-realtime) are now protected from deletion during Docker cleanup, regardless of which registry they were pulled from (ghcr.io, docker.io, or the implicit Docker Hub default). Old versions continue to be cleaned up as intended.
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
- Add 429 response with Retry-After header for Hetzner server creation
- Create RateLimitException for proper rate limit error handling
- Rename cloud_provider_token_id to cloud_provider_token_uuid with deprecation
- Fix prices array schema in server-types endpoint with proper items definition
- Add explicit default: true to autogenerate_domain properties
- Add timeout and retry options to Docker install curl commands
- Fix race condition in deployment status update using atomic query
Adds Retry-After: 60 header to all deployment queue full responses,
helping webhook clients know when to retry their requests.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
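A sketch of the queue-full response; the settings property matches the deployment_queue_limit setting described in a later commit:

```php
<?php

if ($queuedDeployments >= $server->settings->deployment_queue_limit) {
    return response()->json([
        'message' => 'Deployment queue is full, retry later.',
    ], 429)->header('Retry-After', 60); // hint to webhook clients when to retry
}
```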
Remove unused server filtering logic in Kernel.php that was querying servers
but never using the results. Simplify Sentinel update checks in ServerManagerJob
by reusing the $isSentinelEnabled variable and removing unnecessary timezone
parameter for hourly cron execution.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Prevent deployment status from regressing to FAILED after it's marked as FINISHED by:
1. Calling completeDeployment() first in post_deployment() before any operations that could fail
2. Wrapping all post-deployment side effects in try-catch blocks
3. Adding FINISHED to terminal states that cannot be changed
4. Protecting ExecuteRemoteCommand from overwriting FINISHED status
This fixes the issue where a deployment with a healthy container and successful rolling update was still marked as Failed in the UI.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
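A sketch of the terminal-state guard; the model name and status values are assumed from the commit text:

```php
<?php

use App\Models\ApplicationDeploymentQueue;

function transitionDeploymentStatus(ApplicationDeploymentQueue $deployment, string $newStatus): void
{
    // Once a deployment is terminal (including FINISHED), late cleanup
    // errors or ExecuteRemoteCommand failures must not regress it to FAILED.
    $terminalStates = ['finished', 'failed', 'cancelled-by-user'];

    if (in_array($deployment->status, $terminalStates, true)) {
        return;
    }

    $deployment->update(['status' => $newStatus]);
}
```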
Replace client-side JavaScript URL checking with Laravel's routeIs() for determining when to reduce indicator opacity. This simplifies the code and uses route names as the source of truth.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Move restart counter reset from Livewire to ApplicationDeploymentJob to prevent race conditions with GetContainersStatus
- Remove artificial restart_type=manual tracking (never used in codebase)
- Add Crash Loop Example in seeder for testing restart tracking UI
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The method was defined twice, with the first (outdated) definition using
-Syyy and lacking the proper flags. Keep the improved version, which uses
-Syu with --needed for idempotency and proper systemctl ordering.
Add flag to ensure event is only dispatched once, avoiding wasteful
duplicate dispatches during the race condition window before Livewire
removes wire:poll from the DOM.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Removed auto-disable behaviors that caused follow logs to stop unexpectedly:
- Removed scroll detection that disabled following when user scrolled >50px from bottom
- Removed fullscreen exit handler that disabled following
- Removed ServiceChecked event listener that caused unnecessary flickers
Follow logs now only stops when:
- User explicitly clicks the Follow Logs button
- Deployment finishes (auto-scrolls to end first, then disables after 500ms delay)
Also improved get-logs component with memory optimizations:
- Limited display to last 2000 lines to prevent memory exhaustion
- Added debounced search (300ms) and scroll handling (150ms)
- Optimized DOM rendering
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Return the specific error from validateProviderToken() instead of
generic "Failed to validate token." message
- Update test to expect the actual error message
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add validateProviderToken() helper method to reduce code duplication
- Use request body only ($request->json()->all()) to avoid route parameter conflicts
- Add proper logging for token validation failures
- Add missing DB import to migration file
- Minor test formatting fix
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
- Add uuid column to cloud_provider_tokens table via migration
- Update CloudProviderToken to extend BaseModel for auto UUID generation
- Generate UUIDs for existing records in migration
- Fixes null uuid issue in API responses
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The validate() method conflicted with Controller::validate(). Renamed to
validateToken() to resolve the declaration compatibility issue.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add complete API support for Hetzner server provisioning, matching UI functionality:
Cloud Provider Token Management:
- POST /api/v1/cloud-tokens - Create and validate tokens
- GET /api/v1/cloud-tokens - List all tokens
- GET /api/v1/cloud-tokens/{uuid} - Get specific token
- PATCH /api/v1/cloud-tokens/{uuid} - Update token name
- DELETE /api/v1/cloud-tokens/{uuid} - Delete token
- POST /api/v1/cloud-tokens/{uuid}/validate - Validate token
Hetzner Resource Discovery:
- GET /api/v1/hetzner/locations - List datacenters
- GET /api/v1/hetzner/server-types - List server types
- GET /api/v1/hetzner/images - List OS images
- GET /api/v1/hetzner/ssh-keys - List SSH keys
Server Provisioning:
- POST /api/v1/servers/hetzner - Create server with full options
Features:
- Token validation against provider APIs before storage
- Smart SSH key management with MD5 fingerprint deduplication
- IPv4/IPv6 network configuration with preference logic
- Cloud-init script support with YAML validation
- Team-based isolation and security
- Comprehensive test coverage (40+ test cases)
- Complete documentation with curl examples and Yaak collection
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The Application model stores domain as 'fqdn' not 'domains'. The API response
was incorrectly using data_get($application, 'domains') which always returned
null. Fixed all 5 application creation endpoint responses.
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
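The shape of the fix in one line (response body abbreviated):

```php
<?php

return response()->json([
    'uuid' => data_get($application, 'uuid'),
    'domains' => data_get($application, 'fqdn'), // was 'domains', which is always null
]);
```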
Adds a new Laravel validation rule to prevent path traversal, hidden files, and invalid filenames in the dynamic proxy configuration feature. Validates filenames to ensure they contain only safe characters, don't exceed filesystem limits, and don't use reserved names.
- New Rule: ValidProxyConfigFilename with comprehensive validation
- Updated: NewDynamicConfiguration to use the new rule
- Added: 13 unit tests covering all validation scenarios
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
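A sketch of such a rule using Laravel's ValidationRule contract; the exact character set, length limit, and reserved-name handling are assumptions:

```php
<?php

use Closure;
use Illuminate\Contracts\Validation\ValidationRule;

class ValidProxyConfigFilename implements ValidationRule
{
    public function validate(string $attribute, mixed $value, Closure $fail): void
    {
        if (! is_string($value)
            || str_contains($value, '..')                  // path traversal
            || str_contains($value, '/')                   // directory separators
            || str_starts_with($value, '.')                // hidden files
            || strlen($value) > 255                        // filesystem limit
            || ! preg_match('/^[A-Za-z0-9._-]+$/', $value) // safe characters only
        ) {
            $fail('The :attribute is not a valid proxy configuration filename.');
        }
        // The real rule also rejects reserved names (omitted here).
    }
}
```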
Add support for restoring/importing backups in ServiceDatabase (Docker Compose databases).
Changes:
- Add ServiceDatabase case in buildRestoreCommand() method
- Handle ServiceDatabase container naming in getContainers()
- Support PostgreSQL, MySQL, MariaDB, MongoDB detection via databaseType()
- Mark unsupported ServiceDatabase types (Redis, KeyDB, etc.)
Fixes #7529
Add visibility API handling to pause heartbeat monitoring when the browser tab is hidden, preventing false disconnection timeouts. When the tab becomes visible again, verify the connection is still alive or attempt reconnection.
Also remove the ApplicationStatusChanged event listener that was triggering terminal reloads whenever any application status changed across the team.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add shell escaping with escapeshellarg() for container names in the
docker rm command to prevent command injection. Also add validation
to skip containers with missing names and log a warning.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
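A sketch of the hardened loop described above (the container payload shape is assumed from Docker's JSON output):

```php
<?php

use Illuminate\Support\Facades\Log;

$commands = [];
foreach ($containers as $container) {
    $name = data_get($container, 'Name');
    if (blank($name)) {
        // Skip and warn rather than emitting `docker rm` with no target.
        Log::warning('Skipping container with missing name during preview cleanup.');
        continue;
    }
    // escapeshellarg() prevents a crafted name from injecting extra commands.
    $commands[] = 'docker rm -f '.escapeshellarg($name);
}
```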
Add CleanupOrphanedPreviewContainersJob that runs daily to find and remove any PR preview containers that weren't properly cleaned up when their PR was closed.
The job:
- Scans all functional servers for containers with coolify.pullRequestId label
- Checks if the corresponding ApplicationPreview record exists in the database
- Removes containers where the preview record no longer exists (truly orphaned)
- Acts as a safety net for webhook failures or race conditions
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Create a shared CleanupPreviewDeployment action that unifies PR cleanup logic across all Git providers. Previously, GitHub had comprehensive cleanup (cancels active deployments, kills helper containers, removes all PR containers), while GitLab, Bitbucket, and Gitea only did basic cleanup (delete preview record and remove one container by name).
This fix ensures all providers properly clean up orphaned PR containers when a PR is closed/merged, preventing security issues and resource waste. Also fixes early return bug in GitLab webhook handler.
Fixes #2610
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add explicit validation in UpdatePackage to require package name when
'all' is false, preventing empty package commands being sent to servers
- Add --needed flag to pacman install in InstallDocker for idempotent
Docker installation on Arch Linux
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add Arch Linux (pacman) support to server operations: CheckUpdates, InstallDocker, InstallPrerequisites, UpdatePackage
- Implement parsePacmanOutput() to parse 'pacman -Qu' output format
- Add security improvement: package name sanitization to prevent command injection
- Initialize variables in CheckUpdates to prevent undefined variable errors in catch block
- Use proper Arch pacman flags: -Syu for full system upgrade before operations
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The {{port}} template variable was undocumented and caused a double-port bug
when used in preview URL templates. Since ports are always appended to the final
URL anyway, we remove {{port}} substitution entirely and ensure consistent port
handling across ApplicationPreview, PreviewsCompose, and the applicationParser helper.
Also fix PreviewsCompose.php which wasn't preserving ports at all, and improve
the Blade template formatting in previews-compose.blade.php.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add sortBranchesByPriority() helper to sort branches with priority:
main first, master second, then alphabetically. This improves UX
by pre-selecting the most commonly used default branches.
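A sketch of the helper named above; the ranking values are illustrative:

```php
<?php

function sortBranchesByPriority(array $branches): array
{
    usort($branches, function (string $a, string $b): int {
        $rank = fn (string $name): int => match ($name) {
            'main' => 0,
            'master' => 1,
            default => 2,
        };

        // Compare by rank first, then alphabetically.
        return [$rank($a), $a] <=> [$rank($b), $b];
    });

    return $branches;
}

// sortBranchesByPriority(['dev', 'master', 'main']) => ['main', 'master', 'dev']
```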
Allows API consumers to control domain auto-generation behavior. When autogenerate_domain is true (default) and no custom domains are provided, the system auto-generates a domain using the server's wildcard domain or sslip.io fallback.
- Add autogenerate_domain parameter to all 5 application creation endpoints
- Add validation and allowlist rules
- Implement domain auto-generation logic across all application types
- Add comprehensive unit tests for the feature
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
- Initialize logsLoaded as false to ensure init() triggers log loading
- Set logsLoaded=true after calling getLogs() in init()
- Allow services/PRs to load logs automatically when expandByDefault=true (single container)
- Previously, services would skip the initial load unless refresh=true; now single containers work
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Adds support for deploying Garage (S3-compatible object storage) as a
one-click service in Coolify. Includes service template with TOML config,
automatic URL generation for S3, Web, and Admin endpoints with reverse
proxy configuration, and UI fields for credentials and access tokens.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Replace preg_quote() with proper ERE escaping since grep -E uses
extended regex syntax, not PHP/PCRE. This ensures special characters
in registry URLs (dots, etc.) are properly escaped.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
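A sketch of a POSIX-ERE escaper in place of preg_quote(); the exact character set the real helper escapes is an assumption:

```php
<?php

// preg_quote() targets PCRE and also escapes characters (e.g. '-') whose
// escapes are undefined in POSIX ERE, which can break `grep -E`.
function escapeForGrepExtendedRegex(string $value): string
{
    return addcslashes($value, '.[]()?*+{}|^$\\');
}

// escapeForGrepExtendedRegex('ghcr.io/coollabsio') => 'ghcr\.io/coollabsio'
```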
Adds a new server-level setting that allows administrators to disable
per-application image retention globally for all applications on a server.
When enabled, Docker cleanup will only keep the currently running image
regardless of individual application retention settings.
Changes:
- Add migration for disable_application_image_retention boolean field
- Update ServerSetting model with cast
- Add checkbox in DockerCleanup page (Advanced section)
- Modify CleanupDocker action to check server-level setting
- Update Rollback page to show warning and disable inputs when server
retention is disabled
- Add helper text noting server-level override capability
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
For Docker Compose applications with build directives, inject commit-based
image tags (uuid_servicename:commit) to enable rollback functionality.
Previously these services always used 'latest' tags, making rollback impossible.
- Only injects tags for services with build: but no explicit image:
- Uses pr-{id} tags for pull request deployments
- Respects user-defined image: fields (preserves user intent)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Support for Docker Compose applications with build: directives that create
images with uuid_servicename naming pattern (e.g., app-uuid_web:commit).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Implement a per-application setting (`docker_images_to_keep`) in `application_settings` table to control how many Docker images are preserved during cleanup. The cleanup process now:
- Respects per-application retention settings (default: 2 images)
- Preserves the N most recent images per application for easy rollback
- Always deletes PR images and keeps the currently running image
- Dynamically excludes application images from general Docker image prune
- Cleans up non-Coolify unused images to prevent disk bloat
Fixes issues where cleanup would delete all images needed for rollback.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
When multiple scheduled tasks or database backups run concurrently on
the same server, they compete for the same SSH multiplexed connection
socket, causing race conditions and SSH exit code 255 errors.
This fix adds a `disableMultiplexing` parameter to bypass SSH
multiplexing for jobs that may run concurrently:
- Add `disableMultiplexing` param to `generateSshCommand()`
- Add `disableMultiplexing` param to `instant_remote_process()`
- Update `ScheduledTaskJob` to use `disableMultiplexing: true`
- Update `DatabaseBackupJob` to use `disableMultiplexing: true`
- Add debug logging to track execution without multiplexing
- Add unit tests for the new parameter
Each backup and scheduled task now gets an isolated SSH connection,
preventing contention on the shared multiplexed socket.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
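A sketch of the bypass in terms of standard OpenSSH options; the real generateSshCommand() signature and option set are assumptions:

```php
<?php

function generateSshCommand(string $host, string $command, bool $disableMultiplexing = false): string
{
    $options = $disableMultiplexing
        // Dedicated connection: no contention on the shared control socket.
        ? '-o ControlMaster=no -o ControlPath=none'
        // Default: reuse a multiplexed connection for speed.
        : '-o ControlMaster=auto -o ControlPath=/tmp/ssh-%r@%h-%p -o ControlPersist=60';

    return sprintf('ssh %s %s %s', $options, escapeshellarg($host), escapeshellarg($command));
}
```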
Added a new `collapsible` property to GetLogs component that allows disabling the expandable header, useful for log viewers in dedicated pages and slide-overs. Applied this to Sentinel logs, Proxy logs, and Coolify Proxy log pages. Also improved the toolbar by moving the lines counter to the left side with an inline prefix label and repositioning the match counter next to it for better organization.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Filter out null and empty environment variables when generating Nixpacks build
configuration to prevent JSON parsing errors. Environment variables with null or
empty values were being passed as `--env KEY=` which created invalid JSON with
null values, causing deployment failures.
This fix ensures only valid non-empty environment variables are included in both
user-defined and auto-generated COOLIFY_* environment variables.
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
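A sketch of the filtering step; variable names and the flag format are illustrative:

```php
<?php

// Drop null/empty values so the generated command never contains
// invalid `--env KEY=` entries.
$envFlags = collect($environmentVariables)
    ->filter(fn ($value) => filled($value))
    ->map(fn ($value, $key) => '--env '.escapeshellarg("{$key}={$value}"))
    ->implode(' ');
```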
- Add configurable deployment_queue_limit server setting (default: 25)
- Check queue size before accepting new deployments
- Return 429 status for webhooks/API when queue is full (allows retry)
- Show error toast in UI when queue limit reached
- Add UI control in Server Advanced settings
Fixes #6708
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Moved .log-highlight styles from Livewire component views to resources/css/app.css for better separation of concerns and reusability. This follows Laravel and Livewire best practices by keeping styles in the appropriate location rather than inline in component views.
Changes:
- Added .log-highlight styles to resources/css/app.css
- Removed inline <style> tags from deployment/show.blade.php
- Removed inline <style> tags from get-logs.blade.php
- Added XSS security test for log viewer
- Applied code formatting with Laravel Pint
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Features:
- Add client-side search filtering for runtime and deployment logs
- Add log download functionality (respects search filters)
- Make runtime log sections collapsible by default
- Auto-expand single container and lazy load logs on first expand
- Match deployment and runtime log view heights (40rem)
- Add debug toggle for deployment logs
- Improve scroll behavior with follow logs feature
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
The guard was setting and immediately resetting the flag in the same
synchronous execution, providing no actual protection. Now the flag
stays true until proxy reaches a stable state (running/exited/error)
via WebSocket notification, with additional client-side guard.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
When stopping a service that's currently deploying, mark any IN_PROGRESS or QUEUED activities as CANCELLED. This prevents the status from remaining stuck at "starting" after containers are stopped.
Follows the existing pattern used in forceDeploy().
Extended the maximum allowed timeout for scheduled tasks from 3600 to 36000 seconds (10 hours). Also passes the configured timeout to instant_remote_process() so the SSH command respects the timeout setting.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Allows user-configured backup timeouts > 3600 to be respected. Previously, the SSH process used a hardcoded 3600 second timeout regardless of the job timeout setting. Now the timeout is passed through to instant_remote_process() for all backup operations.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
When restarting the proxy on localhost (server id 0), shows a warning
banner in the logs sidebar explaining that the connection may be
temporarily lost and to refresh the browser if logs stop updating.
Also cleans up notification noise by commenting out intermediate
status notifications (restarting, starting, stopping) that were
redundant with the visual status indicators.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The error "container name already in use" occurred because the container
wasn't fully removed before docker compose up tried to create a new one.
Changes:
- Removed redundant stop/remove logic from START PHASE (was duplicating STOP PHASE)
- Made STOP PHASE more robust:
- Increased wait iterations from 10 to 15
- Added force remove on each iteration in case container got stuck
- Added final verification and force cleanup after the loop
- Added better logging to show removal progress
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Instead of calling StopProxy::run() (synchronous) then StartProxy::run()
(async), now we build a single command sequence that includes both stop
and start phases. This creates one Activity immediately via remote_process(),
so the UI receives the activity ID right away and can show logs in real-time
from the very beginning of the restart operation.
Key changes:
- Removed dependency on StopProxy and StartProxy actions
- Build combined command sequence inline in buildRestartCommands()
- Use remote_process() directly which returns Activity immediately
- Increased timeout from 60s to 120s to accommodate full restart
- Activity ID dispatched to UI within milliseconds of job starting
Flow is now:
1. Job starts → sets "restarting" status
2. Commands built synchronously (fast, no SSH)
3. remote_process() creates Activity and dispatches CoolifyTask job
4. Activity ID sent to UI immediately via WebSocket
5. UI opens activity monitor with real-time streaming logs
6. Logs show "Stopping proxy..." then "Starting proxy..." as they happen
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Set proxy status to 'restarting' and dispatch ProxyStatusChangedUI event
at the very beginning of handle() method, before StopProxy runs. This
notifies the UI immediately so users know a restart is in progress,
rather than waiting until after the stop operation completes.
Also simplified unit tests to focus on testable job configuration
(middleware, tries, timeout) without complex SchemalessAttributes mocking.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add restartInitiated flag to prevent duplicate "Proxy restart initiated" messages
- Restore ProxyStatusChangedUI dispatch with activityId in RestartProxyJob
- This allows the UI to open the activity monitor and show logs during restart
- Simplified restart message (removed redundant "Monitor progress" text)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The Application::loadComposeFile method's finally block always saves
the model, which was persisting invalid base_directory values when
validation failed.
Changes:
- Add restoreBaseDirectory and restoreDockerComposeLocation parameters
to loadComposeFile() in both Application model and General component
- The finally block now restores BOTH base_directory and
docker_compose_location to the provided original values before saving
- When called from submit(), pass the original DB values so they are
restored on failure instead of the new invalid values
This ensures invalid paths are never persisted to the database.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
When restarting the proxy on localhost (where Coolify is running), the UI becomes inaccessible because the connection is lost. This change makes all proxy restarts run as background jobs with WebSocket notifications, allowing the operation to complete even after connection loss.
Changes:
- Enhanced ProxyStatusChangedUI event to carry activityId for log monitoring
- Updated RestartProxyJob to dispatch status events and track activity
- Simplified Navbar restart() to always dispatch job for all servers
- Enhanced showNotification() to handle activity monitoring and new statuses
- Added comprehensive unit and feature tests
Benefits:
- Prevents localhost lockout during proxy restarts
- Consistent behavior across all server types
- Non-blocking UI with real-time progress updates
- Automatic activity log monitoring
- Proper error handling and recovery
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Move compose file validation BEFORE database save to prevent invalid
base directory and docker compose location values from being persisted
when validation fails.
Changes:
- Move compose file validation before $this->application->save()
- Restore original values when validation fails
- Add resetErrorBag() to clear stale validation errors
This fixes two bugs:
1. Invalid paths were saved to DB even when validation failed
2. Error messages persisted after correcting to valid path
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Apply the same frontend path normalization pattern from commit f6398f7cf
to the General Settings page for consistency across all forms.
Changes:
- Add Alpine.js path normalization to Docker Compose section (base directory + compose location)
- Add Alpine.js path normalization to non-Docker Compose section (base directory + dockerfile location)
- Change wire:model to wire:model.defer to prevent backend requests during tab navigation
- Add @blur event handlers for immediate path normalization feedback
- Backend normalization remains as defensive fallback
This ensures consistent validation behavior and fixes potential tab focus
issues on the General Settings page.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Change wire:model.blur to wire:model.defer to prevent backend requests
during form navigation. Add Alpine.js path normalization functions that
run on blur, fixing tab focus issues while keeping path validation
purely on the frontend.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
When Sentinel is enabled and in sync, ServerStorageCheckJob was being
dispatched from two locations causing unnecessary duplication:
1. PushServerUpdateJob (every ~30s with real-time filesystem data)
2. ServerManagerJob (scheduled cron check via SSH)
This commit modifies ServerManagerJob to only dispatch ServerStorageCheckJob
when Sentinel is out of sync or disabled. When Sentinel is active and in sync,
PushServerUpdateJob provides real-time storage data, making the scheduled SSH
check redundant.
Benefits:
- Eliminates duplicate storage checks when Sentinel is working
- Reduces unnecessary SSH overhead
- Storage checks still run as fallback when Sentinel fails
- Maintains scheduled checks for servers without Sentinel
Updated tests to reflect new behavior:
- Storage check NOT dispatched when Sentinel is in sync
- Storage check dispatched when Sentinel is out of sync or disabled
- All timezone and frequency tests updated accordingly
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Dispatch ProxyStatusChangedUI event after version check completes so the UI updates in real-time without requiring page refresh.
Changes:
- Add ProxyStatusChangedUI::dispatch() at all exit points in CheckTraefikVersionForServerJob
- Ensures UI refreshes automatically via WebSocket when version check completes
- Works for all scenarios: version detected, using latest tag, outdated version, up-to-date
User experience:
- User restarts proxy
- Warning clears automatically in real-time (no refresh needed)
- Leverages existing WebSocket infrastructure
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
When users updated Traefik configuration or version and restarted the proxy, the warning triangle icon showing outdated version info persisted until the weekly CheckTraefikVersionJob ran (Sundays at 00:00).
This was caused by the UI warning indicators reading from cached database columns (detected_traefik_version, traefik_outdated_info) that were only updated by the weekly scheduled job, not after proxy restarts.
Solution: Add version check to ProxyStatusChangedNotification listener that triggers automatically after proxy status changes to "running".
Changes:
- Add Traefik version check in ProxyStatusChangedNotification::handle()
- Triggers automatically when ProxyStatusChanged event fires with status="running"
- Removed duplicate version check from Navbar::restart() (now handled by event)
- Event fires after StartProxy/StopProxy actions complete via async jobs
- Gracefully handles missing versions.json data with warning log
Benefits:
- Version check happens AFTER proxy is confirmed running (more accurate)
- Reuses existing event infrastructure (ProxyStatusChanged)
- Works for all proxy restart scenarios (manual restart, config save + restart, etc.)
- No duplicate checks - single source of truth in event listener
- Async job runs in background (5-10 seconds) to update database
- User sees warning cleared after page refresh
Flow:
1. User updates config and restarts proxy (or manually restarts)
2. StartProxy action completes async, dispatches ProxyStatusChanged event
3. ProxyStatusChangedNotification listener receives event
4. Listener checks proxy status = "running", dispatches CheckTraefikVersionForServerJob
5. Job detects version via SSH, updates database columns
6. UI re-renders with cleared warnings
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add preserveRestarting parameter to ContainerStatusAggregator to allow applications
and service sub-resources to display "Restarting" status instead of being marked as
"Degraded". This gives better visibility into container restart behavior.
- Update ContainerStatusAggregator to accept preserveRestarting parameter (defaults to false)
- Update GetContainersStatus to use preserveRestarting: true for applications and service sub-resources
- Update PushServerUpdateJob to use preserveRestarting: true for applications and service sub-resources
- Add comprehensive documentation explaining the parameter behavior and when to use it
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add support for degraded status from sub-resources as highest priority
- Handle mixed running+starting state to show service as not fully ready
- Update state priority hierarchy from 8 to 10 levels
- Add comprehensive test coverage for new status scenarios
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Fixes#7439 where successful deployments were being marked as FAILED due to exceptions during old container cleanup.
Root cause: Commit 97550f406 wrapped stop_running_container() in a try-catch that re-throws ALL exceptions as DeploymentException. When old containers are already removed (a common scenario), the "No such container" error propagates and marks successful deployments as failed.
Solution: Check if deployment has already succeeded (newVersionIsHealthy || force) before re-throwing exceptions from cleanup operations. Cleanup failures are logged but don't fail the deployment.
- Add conditional handling in stop_running_container() catch block
- Log cleanup warnings with hidden: true to avoid UI clutter
- Only re-throw exceptions if deployment hasn't succeeded yet
- Preserves backward compatibility and expected behavior
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Pass the server timezone parameter to shouldRunNow() call at line 127,
ensuring ServerCheckJob dispatch respects the server's local timezone
instead of falling back to the instance default.
This aligns the behavior with other scheduled tasks in the same method:
- ServerStorageCheckJob (line 137)
- ServerPatchCheckJob (line 144)
- Sentinel restart (line 152)
All scheduled tasks in processServerTasks() now consistently use the
server's configured timezone for cron evaluation.
Added unit test to verify timezone-aware cron schedule evaluation.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Fixed a critical bug where $this->executionTime was being mutated during
the server processing loop, causing incorrect scheduling calculations for
subsequent servers.
The issue occurred at line 123 where subSeconds() was called directly on
the shared executionTime instance. This caused the baseline time to shift
by waitTime seconds with each server iteration, resulting in compounding
scheduling errors (e.g., 1680 seconds drift over 5 servers).
Changed:
- app/Jobs/ServerManagerJob.php:123
Added .copy() before .subSeconds() to prevent mutation
Added comprehensive unit tests that verify:
- Immutability when using .copy()
- Demonstration of the bug without .copy()
- Correct behavior across multiple iterations
This follows the existing pattern in shouldRunNow() (line 167) and aligns
with other jobs in the codebase.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
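The bug in miniature, since Carbon instances are mutable by default:

```php
<?php

use Carbon\Carbon;

$executionTime = Carbon::parse('2024-01-01 12:00:00');

$buggy = $executionTime->subSeconds(30);        // mutates $executionTime in place!
// $executionTime is now 11:59:30 — the next server iteration starts from a shifted baseline.

$executionTime = Carbon::parse('2024-01-01 12:00:00');
$safe = $executionTime->copy()->subSeconds(30); // baseline preserved
// $executionTime is still 12:00:00 for every subsequent iteration.
```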
This feature stored incoming webhooks during maintenance mode and replayed them
when maintenance ended. The behavior added unnecessary complexity without clear
value. The standard approach is to let webhooks fail during maintenance and let
senders retry.
Removes:
- Listener classes that handled maintenance mode events and webhook replay
- Maintenance mode checks from all webhook controllers (Github, Gitea, Gitlab, Bitbucket, Stripe)
- webhooks-during-maintenance filesystem disk configuration
- Feature mention from CHANGELOG
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Server disk usage checks now run on their configured schedule regardless of Sentinel status, eliminating monitoring blind spots when Sentinel is offline, out of sync, or disabled. Storage checks now respect server timezone settings, consistent with patch checks.
Changes:
- Moved server timezone calculation to top of processServerTasks()
- Extracted ServerStorageCheckJob dispatch from Sentinel conditional
- Fixed default frequency to '0 23 * * *' (11 PM daily)
- Added timezone parameter to storage check scheduling
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add URL generation to notification class using base_url() helper
- Replace config('app.url') with proper base_url() for accurate instance URL
- Make server names clickable links to proxy configuration page
- Use data_get() with fallback values for safer template data access
- Add comprehensive tests for URL generation and email rendering
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The "Final Build Command (Preview)" field now shows build arguments
that will be injected during deployment, matching the actual command
that runs. This provides transparency and helps users debug build issues.
Changes:
- Modified getDockerComposeBuildCommandPreviewProperty() to inject build args
- Uses same helper functions as deployment (generateDockerBuildArgs, injectDockerComposeBuildArgs)
- Respects use_build_secrets setting (build args only shown when disabled)
- Filters environment variables where is_buildtime = true
Example output:
docker compose -f ./docker-compose.yaml --env-file /artifacts/build-time.env build --build-arg FOO --build-arg BAR backend
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add instantSaveSettings() method to save gzip, stripprefix, and
exclude_from_status checkboxes without triggering port validation modal.
These settings don't require domain/port validation.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The regex pattern in injectDockerComposeBuildArgs() was too restrictive
and failed to match `docker compose build servicename` commands. Changed
the lookahead from `(?=\s+(?:--|-)|\s+(?:&&|\|\||;|\|)|$)` to the
simpler `(?=\s|$)` to allow any content after the build command,
including service names with hyphens/underscores and flags.
Also improved the ApplicationDeploymentJob to use the new helper function
and added comprehensive test coverage for service-specific builds.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
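A demonstration of the lookahead change; the surrounding pattern is simplified to the `build` token, with only the lookaheads taken from the commit:

```php
<?php

$old = '/\bbuild(?=\s+(?:--|-)|\s+(?:&&|\|\||;|\|)|$)/';
$new = '/\bbuild(?=\s|$)/';

$command = 'docker compose -f ./docker-compose.yaml build my-service';

var_dump(preg_match($old, $command)); // int(0) — a service name after `build` never matched
var_dump(preg_match($new, $command)); // int(1) — any whitespace (or end of string) qualifies
```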
Arch Linux was listed in SUPPORTED_OS but InstallDocker.php had no
specific handler for it, causing 'Unsupported OS' errors when trying
to add Arch Linux servers.
This adds:
- Detection of 'arch' OS type in the install flow
- New getArchDockerInstallCommand() method using pacman:
- pacman -Syyy (refresh package databases)
- pacman -S docker docker-compose (install Docker)
- systemctl start/enable docker
Fixes #4523
Fixes two critical issues preventing Traefik proxy startup:
1. TypeError when restarting proxy: Handle null return from get_traefik_versions()
- Add null check before dispatching CheckTraefikVersionForServerJob
- Log warning when version data is unavailable
- Prevents: "Argument #2 must be of type array, null given"
2. Docker network error: Filter out predefined Docker networks
- Add isDockerPredefinedNetwork() helper to centralize network filtering
- Apply filtering in collectDockerNetworksByServer() before operations
- Apply filtering in generateDefaultProxyConfiguration()
- Prevents: "operation is not permitted on predefined default network"
Also: Move $cachedVersionsFile assignment after null check in Proxy.php
Tests: Added 7 new unit tests for network filtering function
All existing tests pass with no regressions
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Fix incorrect Alpine state reference: Changed `this.$wire.showProgress` to `this.showProgress` in upgrade.blade.php:155
- Remove unused `$showProgress` property from Upgrade.php Livewire component
- The backend property was never set or used; all progress tracking is handled by Alpine state
- This fixes potential race conditions where the guard condition was not working as intended
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Changed from `->before('-')` to `->beforeLast('-')` to correctly parse service
names with hyphens. This fixes prerequisite application for roughly 230 services
containing hyphens in their template names (e.g., docker-registry,
elasticsearch-with-kibana).
Added comprehensive test coverage for hyphenated service names and fixed
existing tests to use realistic CUID2 UUID format. All unit tests pass.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
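The parsing difference in miniature, using a hyphenated template name with a CUID2-style suffix (the suffix value is illustrative):

```php
<?php

use Illuminate\Support\Str;

$container = 'docker-registry-x0gkoxbwax4xg8w0c4kw08o4';

echo Str::of($container)->before('-');     // 'docker'          (wrong)
echo Str::of($container)->beforeLast('-'); // 'docker-registry' (correct)
```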
Refactors the Appwrite and Beszel service-specific application settings
to use a centralized constant-based approach, following the same pattern
as NEEDS_TO_CONNECT_TO_PREDEFINED_NETWORK.
Changes:
- Added NEEDS_TO_DISABLE_GZIP constant for services requiring gzip disabled
- Added NEEDS_TO_DISABLE_STRIPPREFIX constant for services requiring stripprefix disabled
- Created applyServiceApplicationPrerequisites() helper function in bootstrap/helpers/services.php
- Updated all service creation flows to use the centralized helper:
* app/Livewire/Project/Resource/Create.php (web handler)
* app/Http/Controllers/Api/ServicesController.php (API handler - BUG FIX)
* app/Livewire/Project/New/DockerCompose.php (custom compose handler)
* app/Http/Controllers/Api/ApplicationsController.php (API custom compose handler)
- Added comprehensive unit tests for the new helper function
Benefits:
- Single source of truth for service prerequisites
- DRY - eliminates code duplication between web and API handlers
- Fixes bug where API-created services didn't get prerequisites applied
- Easy to extend for future services (just edit the constant)
- More maintainable and testable
Related commits: 3a94f1ea1 (Beszel), 02b18c86e (Appwrite)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Updated color classes in NotifyDemo.php to use warning colors.
- Added new warning color variables in app.css.
- Changed warning icon colors in callout.blade.php.
- Updated loading spinner and hover states in global-search.blade.php.
- Refactored warning messages and styles in project application views.
- Adjusted log display colors in get-logs.blade.php.
- Updated private key status indicators in index.blade.php.
- Changed hover and text colors for documentation links in cloudflare-tunnel.blade.php.
- Refactored server creation messages in by-hetzner.blade.php.
- Updated proxy warning button colors in proxy.blade.php.
- Changed loading spinner colors in show.blade.php.
- Updated deployment status colors in deployments.blade.php and show.blade.php.
## Changes
- **CheckForUpdatesJob**: Add triple version comparison (CDN vs cache vs running)
- Never allows version downgrade from currently running version
- Uses data_set() for safer nested array mutation
- Prevents incorrect new_version_available flag setting
- **UpdateCoolify**: Add cache validation before fallback
- Validates cache against running version on CDN failure
- Throws exception if cache is corrupted/older than running
- Applies to both manual and automated updates
- **Tests**: Add comprehensive test coverage
- tests/Unit/CheckForUpdatesJobTest.php (5 tests)
- tests/Unit/UpdateCoolifyTest.php (3 tests)
## Impact
- Prevents all downgrade scenarios (CDN rollback, corrupted cache, etc.)
- Maintains backward compatibility
- Provides clear logging for debugging
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
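A rough sketch of the no-downgrade rule (the function and variable names are hypothetical; the real logic is split across CheckForUpdatesJob and UpdateCoolify):

```php
<?php

// Never report or install a version older than the one currently running.
function resolveUpgradeTarget(?string $cdnVersion, ?string $cachedVersion, string $runningVersion): string
{
    // Prefer the CDN value; fall back to the local cache on CDN failure.
    $candidate = $cdnVersion ?? $cachedVersion;

    if ($candidate === null || version_compare($candidate, $runningVersion, '<')) {
        // CDN rollback or corrupted/stale cache: refuse to downgrade.
        throw new RuntimeException('Refusing to downgrade below the running version.');
    }

    return $candidate;
}
```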
- Updated the Clickhouse service template to use the official `clickhouse/clickhouse-server` image.
- Removed the usage of the deprecated `bitnamilegacy/clickhouse` image.
- Fixes #7110
Migrates 8 database start action files from deprecated --time=10 to compatible -t 10 flag for Docker v28+ compatibility. Also updates test expectations in StopProxyTest.php.
Docker deprecated the --time flag in v28.0. The -t shorthand works on all Docker versions (pre-28 and 28+), ensuring backward and forward compatibility.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Refactors generate_buildtime_environment_variables() to use an associative
array (dictionary) approach instead of sequential push() calls. This prevents
duplicate variable declarations in the buildtime.env file.
**Problem:**
After adding nixpacks plan variables to buildtime.env, the same variable
could appear twice in the file:
- Once from nixpacks plan (e.g., NIXPACKS_NODE_VERSION='22')
- Once from user-defined variables (e.g., NIXPACKS_NODE_VERSION="22")
This caused shell errors and undefined behavior during Docker builds.
**Root Cause:**
The push() method adds items sequentially without checking for duplicate
keys. When a variable existed in both nixpacks plan AND user-defined vars,
both would be written to the file.
**Solution:**
- Use associative array ($envs_dict) for automatic deduplication
- Establish clear override precedence:
1. Nixpacks plan variables (lowest priority)
2. COOLIFY_* variables (medium priority)
3. SERVICE_* variables (medium priority)
4. User-defined variables (highest priority - can override everything)
- Convert to collection format at the end
- Add debug logging when user variables override plan variables
**Benefits:**
- Automatic deduplication (array keys are unique by nature)
- User variables properly override nixpacks plan values
- Clear, explicit precedence order
- No breaking changes to existing functionality
Fixes #7114
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
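A minimal sketch of the dictionary approach under the precedence described above (function and parameter names are illustrative, not the actual Coolify signature):

```php
<?php

// Later writes to the same array key overwrite earlier ones, which gives
// both deduplication and the precedence order for free.
function buildBuildtimeEnvs(array $nixpacksPlan, array $coolifyVars, array $serviceVars, array $userVars): array
{
    $envsDict = [];

    // 1. Nixpacks plan variables (lowest priority)
    foreach ($nixpacksPlan as $key => $value) {
        $envsDict[$key] = $value;
    }

    // 2./3. COOLIFY_* and SERVICE_* variables (medium priority)
    foreach (array_merge($coolifyVars, $serviceVars) as $key => $value) {
        $envsDict[$key] = $value;
    }

    // 4. User-defined variables (highest priority - override everything)
    foreach ($userVars as $key => $value) {
        $envsDict[$key] = $value;
    }

    // One line per variable, each key guaranteed unique.
    return array_map(
        fn ($key) => "{$key}={$envsDict[$key]}",
        array_keys($envsDict)
    );
}
```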
Include 'Inject Build Args to Dockerfile' and 'Include Source Commit in Build' settings in the configuration hash calculation. These settings affect Docker build behavior, so changes to them should trigger the restart required notification. Add unit tests to verify hash changes when these settings are modified.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Fix sudo prefix bug: Use word boundary matching to prevent 'do' keyword from matching 'docker' commands
- Add ensureProxyNetworksExist() helper to create networks before docker compose up
- Ensure networks exist synchronously before dispatching async proxy startup to prevent race conditions
- Update comprehensive unit tests for sudo parsing (50 tests passing)
This resolves issues where Docker commands failed to execute with sudo on non-root servers and where proxy networks were not created before the proxy container started.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
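The word-boundary idea in isolation (a sketch; the real parser handles full command strings):

```php
<?php

// '\bdo\b' matches the shell keyword 'do' as a whole word only, so
// 'docker' no longer triggers the keyword path and keeps its sudo prefix.
function isDoKeyword(string $token): bool
{
    return preg_match('/\bdo\b/', $token) === 1;
}

var_dump(isDoKeyword('do'));     // bool(true)  - shell keyword
var_dump(isDoKeyword('docker')); // bool(false) - regular command
```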
Without this fix, users have to manually uncheck the strip prefix option on the appwrite, appwrite-console, and appwrite-realtime services for them to work
Reduce excessive logging in the CleanupRedis and CleanupNames commands to output only a single summary line. Remove per-item logs and detailed status messages while keeping the final count of items cleaned up. Detailed logs remain available in dry-run mode for preview.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add two new application settings to control Docker build cache invalidation:
- inject_build_args_to_dockerfile (default: true) - Skip Dockerfile ARG injection
- include_source_commit_in_build (default: false) - Exclude SOURCE_COMMIT from build context
These toggles let users preserve Docker cache when SOURCE_COMMIT or custom ARGs change frequently. Development-only logging shows which ARGs are being injected for debugging.
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
Fixes the "Snapshot missing on Livewire component" error that occurs when
toggling the "Backup includes all databases" checkbox during MariaDB database
import operations.
Root Cause:
- ActivityMonitor component was initialized without proper lifecycle hooks
- When parent Import component re-rendered (via checkbox toggle), the
ActivityMonitor's Livewire snapshot became stale
- Missing null checks caused errors when querying with undefined activityId
- No state cleanup when slide-over closed, causing issues on subsequent opens
Changes:
- Add updatedActivityId() lifecycle hook to ActivityMonitor for proper hydration
- Add defensive null check in hydrateActivity() to prevent query errors
- Track activityId in Import component for state management
- Add slideOverClosed event dispatch in slide-over component
- Add event listener in Import component to reset activityId on close
Testing:
- Manually verify checkbox toggle doesn't trigger popup
- Verify actual restore operations work correctly
- Test both file-based and S3-based restore methods
- Ensure X button properly closes the modal
- Verify no console errors or Livewire warnings
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Move the success dispatch outside the DB transaction closure to ensure
it only fires after the transaction has successfully committed. Use a
reference variable to track changes across the closure boundary.
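The pattern looks roughly like this (names are illustrative, and the sketch assumes a Livewire component):

```php
<?php

use Illuminate\Support\Facades\DB;
use Livewire\Component;

class ExampleSettings extends Component
{
    public function save(): void
    {
        $hasChanges = false;

        // The reference (&) lets the closure report back across its boundary.
        DB::transaction(function () use (&$hasChanges) {
            // ... persist changes ...
            $hasChanges = true;
        });

        // Reached only after a successful commit; an exception inside the
        // closure rolls back and skips the dispatch entirely.
        if ($hasChanges) {
            $this->dispatch('success', 'Saved.');
        }
    }
}
```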
- Remove COOLIFY_CONTAINER_NAME from build-time ARGs (timestamp-based, breaks cache)
- Use APP_KEY instead of random_bytes for COOLIFY_BUILD_SECRETS_HASH (deterministic)
- Add forBuildTime parameter to generate_coolify_env_variables() to control injection
- Keep COOLIFY_CONTAINER_NAME available at runtime for container identification
- Fix misleading log message about .env file purpose
Fixes #7040
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add missing traefik_outdated_webhook_notifications field to migration schema and population logic
- Remove incorrect docker_cleanup_webhook_notifications from model (split into success/failure variants)
- Consolidate webhook notification migrations from 2025_10_10 to 2025_11_25 for proper execution order
- Ensure all 15 notification fields are properly defined and consistent across migration, model, and Livewire component
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
- Add path attribute mutator to S3Storage model ensuring paths start with /
- Add updatedS3Path hook to normalize path and reset validation state on blur
- Add updatedS3StorageId hook to reset validation state when storage changes
- Add Enter key support to trigger file check from path input
- Use wire:model.live for S3 storage select, wire:model.blur for path input
- Improve shell escaping in restore job cleanup commands
- Fix isSafeTmpPath helper logic for directory validation
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
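The mutator idea as a sketch, using Laravel's Attribute cast (the actual model code may differ):

```php
<?php

use Illuminate\Database\Eloquent\Casts\Attribute;
use Illuminate\Database\Eloquent\Model;

class S3Storage extends Model
{
    // Normalize on write so every stored path begins with exactly one "/".
    protected function path(): Attribute
    {
        return Attribute::make(
            set: fn (?string $value) => $value === null
                ? null
                : '/'.ltrim(trim($value), '/'),
        );
    }
}
```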
- Replace hardcoded URL paths in getScopeUrl() with Laravel's route() helper
- Add scopeUrls property to EnvVarInput component with named routes
- Pass projectUuid and environmentUuid to enable context-specific environment links
- Environment scope link now navigates to the specific project/environment shared variables page
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Exited containers don't run health checks, so showing "(unhealthy)" is
misleading. This fix ensures exited status displays without health
suffixes across all monitoring systems (SSH, Sentinel, services, etc.)
and at the UI layer for backward compatibility with existing data.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
This commit enhances the boarding flow to handle prerequisite installation asynchronously with proper retry logic and user feedback:
- Add retry mechanism with max 3 attempts for prerequisite installation
- Display live installation logs via ActivityMonitor during boarding
- Reset ActivityMonitor state when starting new activity to prevent stale event dispatching
- Support dynamic header updates in ActivityMonitor
- Add prerequisitesInstalled event handler to revalidate after installation completes
- Extract validation logic into continueValidation() method for cleaner flow
- Add unit tests for prerequisite installation logic
This improves UX by showing users real-time progress during prerequisite installation and handles installation failures gracefully with automatic retries.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Parse template variables directly instead of generating from container names. Always create both SERVICE_URL and SERVICE_FQDN pairs together. Properly separate scheme handling (URL has scheme, FQDN doesn't). Add comprehensive test coverage.
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
This commit adds comprehensive validation improvements and DRY principles for handling Coolify's custom Docker Compose extensions.
## Changes
### 1. Created Reusable stripCoolifyCustomFields() Function
- Added shared helper in bootstrap/helpers/docker.php
- Removes all Coolify custom fields (exclude_from_hc, content, isDirectory, is_directory)
- Handles both long syntax (arrays) and short syntax (strings) for volumes
- Well-documented with comprehensive docblock
- Follows DRY principle for consistent field stripping
### 2. Fixed Docker Compose Modal Validation
- Updated validateComposeFile() to use stripCoolifyCustomFields()
- Now removes ALL custom fields before Docker validation (previously only removed content)
- Fixes validation errors when using templates with custom fields (e.g., traccar.yaml)
- Users can now validate compose files with Coolify extensions in UI
### 3. Enhanced YAML Validation in CalculatesExcludedStatus
- Added proper exception handling with ParseException vs generic Exception
- Added structure validation (checks if parsed result and services are arrays)
- Comprehensive logging with context (error message, line number, snippet)
- Maintains safe fallback behavior (returns empty collection on error)
### 4. Added Integer Validation to ContainerStatusAggregator
- Validates maxRestartCount parameter in both aggregateFromStrings() and aggregateFromContainers()
- Corrects negative values to 0 with warning log
- Logs warnings for suspiciously high values (> 1000)
- Prevents logic errors in crash loop detection
### 5. Comprehensive Unit Tests
- tests/Unit/StripCoolifyCustomFieldsTest.php (NEW) - 9 tests, 43 assertions
- tests/Unit/ContainerStatusAggregatorTest.php - Added 6 tests for integer validation
- tests/Unit/ExcludeFromHealthCheckTest.php - Added 4 tests for YAML validation
- All tests passing with proper Log facade mocking
### 6. Documentation
- Added comprehensive Docker Compose extensions documentation to .ai/core/deployment-architecture.md
- Documents all custom fields: exclude_from_hc, content, isDirectory/is_directory
- Includes examples, use cases, implementation details, and test references
- Updated .ai/README.md with navigation links to new documentation
## Benefits
- Better UX: Users can validate compose files with custom fields
- Better Debugging: Comprehensive logging for errors
- Better Code Quality: DRY principle with reusable validation
- Better Reliability: Prevents logic errors from invalid parameters
- Better Maintainability: Easy to add new custom fields in future
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
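A condensed sketch of what stripCoolifyCustomFields() does (the real helper in bootstrap/helpers/docker.php may differ in detail):

```php
<?php

// Remove Coolify-only fields so the file passes standard Docker validation.
function stripCoolifyCustomFields(array $compose): array
{
    $customFields = ['exclude_from_hc', 'content', 'isDirectory', 'is_directory'];

    foreach ($compose['services'] ?? [] as $name => $service) {
        // Top-level custom fields on the service definition
        foreach ($customFields as $field) {
            unset($service[$field]);
        }

        // Long-syntax volumes are arrays and may carry custom fields too;
        // short-syntax volumes are plain strings and need no stripping.
        if (isset($service['volumes']) && is_array($service['volumes'])) {
            $service['volumes'] = array_map(
                fn ($volume) => is_array($volume)
                    ? array_diff_key($volume, array_flip($customFields))
                    : $volume,
                $service['volumes']
            );
        }

        $compose['services'][$name] = $service;
    }

    return $compose;
}
```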
- Introduced tests for ContainerStatusAggregator to validate status aggregation logic across various container states.
- Implemented tests to ensure serverStatus accessor correctly checks server infrastructure health without being affected by container status.
- Updated ExcludeFromHealthCheckTest to verify excluded status handling in various components.
- Removed obsolete PushServerUpdateJobStatusAggregationTest as its functionality is covered elsewhere.
- Updated version number for sentinel to 0.0.17 in versions.json.
Prevents removal and re-download of database images on every restart. Docker cleanup was removing Docker Hub images (postgres, mysql, redis, etc.) that lack the coolify.managed=true label, causing them to be immediately re-pulled. Restart now preserves images while stopping/starting containers.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Fixes inconsistency where Service model used manual state machine logic while
all other components (Application, ComplexStatusCheck, GetContainersStatus)
use the centralized ContainerStatusAggregator service.
Changes:
- Refactored Service::aggregateResourceStatuses() to use ContainerStatusAggregator
- Removed ~60 lines of duplicated state machine logic
- Added comprehensive ServiceExcludedStatusTest with 24 test cases
- Fixed bugs in old logic where paused/starting containers were incorrectly
marked as unhealthy (should be unknown)
Benefits:
- Single source of truth for status aggregation across all models
- Leverages 42 existing ContainerStatusAggregator tests
- Consistent behavior between Service and Application/Database models
- Easier maintenance (state machine changes only in one place)
All tests pass (37 total):
- ServiceExcludedStatusTest: 24/24 passed
- AllExcludedContainersConsistencyTest: 13/13 passed
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
This commit addresses container status reporting issues and removes debug logging:
**Primary Fix:**
- Changed PushServerUpdateJob to default to 'unknown' instead of 'unhealthy' when health_status field is missing from Sentinel data
- This ensures containers WITHOUT healthcheck defined are correctly reported as "unknown" not "unhealthy"
- Matches SSH path behavior (GetContainersStatus) which already defaulted to 'unknown'
**Service Multi-Container Aggregation:**
- Implemented service container status aggregation (same pattern as applications)
- Added serviceContainerStatuses collection to both Sentinel and SSH paths
- Services now aggregate status using priority: unhealthy > unknown > healthy
- Prevents race conditions where last-processed container would win
**Debug Logging Cleanup:**
- Removed all [STATUS-DEBUG] logging statements (25 total)
- Removed all ray() debugging calls (3 total)
- Removed proof_unknown_preserved and health_status_was_null debug fields
- Code is now production-ready
**Test Coverage:**
- Added 2 new tests for Sentinel default health status behavior
- Added 5 new tests for service aggregation in SSH path
- All 16 tests pass (66 assertions)
**Note:** The root cause was identified as Sentinel (Go binary) also defaulting to "unhealthy". That will need a separate fix in the Sentinel codebase.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Added comprehensive logging to track why applicationContainerStatuses
collection is empty in PushServerUpdateJob.
## Logging Added
### 1. Raw Sentinel Data (lines 113-118)
**Logs**: Complete container data received from Sentinel
**Purpose**: See exactly what Sentinel is sending
**Data**: Container count and full container array with all labels
### 2. Container Processing Loop (lines 157-163)
**Logs**: Every container as it's being processed
**Purpose**: Track which containers enter the processing loop
**Data**: Container name, status, all labels, coolify.managed flag
### 3. Skipped Containers - Not Managed (lines 165-171)
**Logs**: Containers without coolify.managed label
**Purpose**: Identify containers being filtered out early
**Data**: Container name
### 4. Successful Container Addition (lines 193-198)
**Logs**: When container is successfully added to applicationContainerStatuses
**Purpose**: Confirm containers ARE being processed
**Data**: Application ID, container name, container status
### 5. Missing com.docker.compose.service Label (lines 200-206)
**Logs**: Containers skipped due to missing com.docker.compose.service
**Purpose**: Identify the most likely root cause
**Data**: Container name, application ID, all labels
## Why This Matters
User reported applicationContainerStatuses is empty (`[]`) even though
Sentinel is pushing updates. This logging will reveal:
1. Is Sentinel sending containers at all?
2. Are containers filtered by coolify.managed check?
3. Is com.docker.compose.service label missing? (most likely)
4. What labels IS Sentinel actually sending?
## Expected Findings
Based on investigation, the issue is likely:
- Sentinel is NOT sending com.docker.compose.service in labels
- Or Sentinel uses a different label format/name
- Containers pass all other checks but fail on lines 190-206
## Next Steps
After logs appear, we'll see exactly which filter is blocking containers
and can fix the root cause (likely need to extract com.docker.compose.service
from container name or use a different label source).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Added detailed debug logging to all status update paths to help
diagnose why "unhealthy" status appears in the UI.
## Logging Added
### 1. PushServerUpdateJob (Sentinel updates)
**Location**: Lines 303-315
**Logs**: Status changes from Sentinel push updates
**Data tracked**:
- Old vs new status
- Container statuses that led to aggregation
- Status flags (hasRunning, hasUnhealthy, hasUnknown)
### 2. GetContainersStatus (SSH updates)
**Location**: Lines 441-449, 346-354, 358-365
**Logs**: Status changes from SSH-based checks
**Scenarios**:
- Normal status aggregation
- Recently restarted containers (kept as degraded)
- Applications not running (set to exited)
**Data tracked**:
- Old vs new status
- Container statuses
- Restart count and timing
- Whether containers exist
### 3. Application Model Status Accessor
**Location**: Lines 706-712, 726-732
**Logs**: When status is set without explicit health information
**Issue**: Highlights cases where health defaults to "unhealthy"
**Data tracked**:
- Raw value passed to setter
- Final result after default applied
## How to Use
### Enable Debug Logging
Edit `.env` or `config/logging.php` to set log level to debug:
```
LOG_LEVEL=debug
```
### Monitor Logs
```bash
tail -f storage/logs/laravel.log | grep STATUS-DEBUG
```
### Log Format
All logs use `[STATUS-DEBUG]` prefix for easy filtering:
```
[2025-11-19 13:00:00] local.DEBUG: [STATUS-DEBUG] Sentinel status change
{
"source": "PushServerUpdateJob",
"app_id": 123,
"app_name": "my-app",
"old_status": "running:unknown",
"new_status": "running:healthy",
"container_statuses": [...],
"flags": {...}
}
```
## What to Look For
1. **Default to unhealthy**: Check Application model accessor logs
2. **Status flipping**: Compare timestamps between Sentinel and SSH updates
3. **Incorrect aggregation**: Check flags and container_statuses
4. **Stale database values**: Check if old_status persists across multiple logs
## Next Steps
After gathering logs, we can:
1. Identify the exact source of "unhealthy" status
2. Determine if it's a default issue, aggregation bug, or timing problem
3. Apply targeted fix based on evidence
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
## Problem
Services with "running (unknown)" status were periodically changing
to "running (healthy)" every ~30 seconds when Sentinel pushed updates.
This was confusing for users and inconsistent with SSH-based status checks.
## Root Cause
`PushServerUpdateJob::aggregateMultiContainerStatuses()` was missing
logic to track "unknown" health state. It only tracked "unhealthy" and
defaulted everything else to "healthy".
When Sentinel pushed updates with "running (unknown)" containers:
- The job saw `hasRunning = true` and `hasUnhealthy = false`
- It incorrectly returned "running (healthy)" instead of "running (unknown)"
## Solution
Updated `PushServerUpdateJob` to match the logic in `GetContainersStatus`:
1. Added `$hasUnknown` tracking variable
2. Check for "unknown" in status strings (alongside "unhealthy")
3. Implement 3-way priority: unhealthy > unknown > healthy
This ensures consistency between:
- SSH-based updates (`GetContainersStatus`)
- Sentinel-based updates (`PushServerUpdateJob`)
- UI display logic
## Changes
- **app/Jobs/PushServerUpdateJob.php**: Added unknown status tracking
- **tests/Unit/PushServerUpdateJobStatusAggregationTest.php**: New comprehensive tests
- **tests/Unit/ExcludeFromHealthCheckTest.php**: Updated to match current implementation
## Testing
All 31 status-related unit tests passing:
- 18 tests in ContainerHealthStatusTest
- 8 tests in ExcludeFromHealthCheckTest (updated)
- 6 tests in PushServerUpdateJobStatusAggregationTest (new)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
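The three-way priority reduces to something like this (a sketch; the production code also handles restart counts and excluded containers):

```php
<?php

// Aggregate per-container status strings such as "running (healthy)" or
// "running (unknown)" with priority: unhealthy > unknown > healthy.
function aggregateHealth(array $containerStatuses): string
{
    $hasUnhealthy = false;
    $hasUnknown = false;

    foreach ($containerStatuses as $status) {
        if (str_contains($status, 'unhealthy')) {
            $hasUnhealthy = true;
        } elseif (str_contains($status, 'unknown')) {
            $hasUnknown = true;
        }
    }

    if ($hasUnhealthy) {
        return 'unhealthy';
    }

    return $hasUnknown ? 'unknown' : 'healthy';
}

echo aggregateHealth(['running (healthy)', 'running (unknown)']); // unknown
```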
This commit fixes container health status aggregation to correctly handle
unknown health states and edge case container states across all resource types.
Changes:
1. **Preserve Unknown Health State**
- Add three-way priority: unhealthy > unknown > healthy
- Detect containers without healthchecks (null health) as unknown
- Apply across GetContainersStatus, ComplexStatusCheck, and Service models
2. **Handle Edge Case Container States**
- Add support for: created, starting, paused, dead, removing
- Map to appropriate statuses: starting (unknown), paused (unknown), degraded (unhealthy)
- Prevent containers in transitional states from showing incorrect status
3. **Add :excluded Suffix for Excluded Containers**
- Parse exclude_from_hc flag from docker-compose YAML
- Append :excluded suffix to individual container statuses
- Skip :excluded containers in non-excluded aggregation sections
- Strip :excluded suffix in excluded aggregation sections
- Makes it clear in UI which containers are excluded from monitoring
Files Modified:
- app/Actions/Docker/GetContainersStatus.php
- app/Actions/Shared/ComplexStatusCheck.php
- app/Models/Service.php
- tests/Unit/ContainerHealthStatusTest.php
Tests: 18 passed (82 assertions)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
When all containers are excluded from health checks, display their actual status
with :excluded suffix instead of misleading hardcoded statuses. This prevents
broken UI state with incorrect action buttons and provides clarity that monitoring
is disabled.
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
Adds a new EnvVarInput component that provides autocomplete suggestions for shared environment variables from team, project, and environment scopes. Users can reference variables using {{ syntax.
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
When all services in a Docker Compose file have `exclude_from_hc: true`,
the status aggregation logic was returning invalid states causing broken UI.
**Problems fixed:**
- ComplexStatusCheck returned 'running:healthy' for apps with no monitored containers
- Service model returned ':' (null status) when all services excluded
- UI showed active start/stop buttons for non-running services
**Changes:**
- ComplexStatusCheck: Return 'exited:healthy' when relevantContainerCount is 0
- Service model: Return 'exited:healthy' when both status and health are null
- Added comprehensive unit tests to verify the fixes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Fix double-slash issue in Docker Compose preview paths when baseDirectory is "/"
- Normalize baseDirectory using rtrim() to prevent path concatenation issues
- Replace hardcoded '/artifacts/build-time.env' with ApplicationDeploymentJob::BUILD_TIME_ENV_PATH
- Make BUILD_TIME_ENV_PATH constant public for reusability
- Add comprehensive unit tests (11 test cases, 25 assertions)
Fixes preview path generation in:
- getDockerComposeBuildCommandPreviewProperty()
- getDockerComposeStartCommandPreviewProperty()
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
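The normalization in miniature (paths shown are examples):

```php
<?php

// baseDirectory "/" used to produce "//docker-compose.yaml"; trimming the
// trailing slash before concatenation fixes "/" and "/sub/" alike.
function composePath(string $baseDirectory, string $file): string
{
    return rtrim($baseDirectory, '/').'/'.$file;
}

echo composePath('/', 'docker-compose.yaml');        // /docker-compose.yaml
echo composePath('/backend', 'docker-compose.yaml'); // /backend/docker-compose.yaml
```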
When using a custom Docker Compose build command, environment variables
were being lost because the --env-file flag was not included. This fix
automatically injects the --env-file flag to ensure build-time environment
variables are available during custom builds.
Changes:
- Auto-inject --env-file /artifacts/build-time.env after docker compose
- Respect user-provided --env-file flags (no duplication)
- Append build arguments when not using build secrets
- Update UI helper text to inform users about automatic env injection
- Add comprehensive unit tests (7 test cases, all passing)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
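The injection rule, sketched (the real deployment job builds the command differently):

```php
<?php

// Add --env-file right after "docker compose" unless the user already
// supplied one; replace only the first occurrence.
function injectEnvFile(string $command, string $envFilePath): string
{
    if (str_contains($command, '--env-file')) {
        return $command; // respect the user-provided flag, no duplication
    }

    return preg_replace(
        '/\bdocker compose\b/',
        "docker compose --env-file {$envFilePath}",
        $command,
        1
    );
}

echo injectEnvFile('docker compose build', '/artifacts/build-time.env');
// docker compose --env-file /artifacts/build-time.env build
```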
Move notification logic from NotifyOutdatedTraefikServersJob into CheckTraefikVersionForServerJob to send immediate notifications when outdated Traefik is detected. This is more suitable for cloud environments with thousands of servers.
Changes:
- CheckTraefikVersionForServerJob now sends notifications immediately after detecting outdated Traefik
- Remove NotifyOutdatedTraefikServersJob (no longer needed)
- Remove delay calculation logic from CheckTraefikVersionJob
- Update tests to reflect new immediate notification pattern
Trade-offs:
- Pro: Faster notifications (immediate alerts)
- Pro: Simpler codebase (removed complex delay calculation)
- Pro: Better scalability for thousands of servers
- Con: Teams may receive multiple notifications if they have many outdated servers
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
When a user restarts the proxy from the Navbar UI component, the system now automatically dispatches a version check job immediately after the restart completes. This provides immediate feedback about available Traefik updates without waiting for the weekly scheduled check.
Changes:
- Import CheckTraefikVersionForServerJob in Navbar component
- After successful proxy restart, dispatch version check for Traefik servers
- Version check only runs for servers using Traefik proxy
This ensures users get up-to-date version information right after restarting their proxy infrastructure.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
This commit introduces several improvements to the Traefik version tracking
feature and proxy configuration UI:
## Caching Improvements
1. **New centralized helper functions** (bootstrap/helpers/versions.php):
- `get_versions_data()`: Redis-cached access to versions.json (1 hour TTL)
- `get_traefik_versions()`: Extract Traefik versions from cached data
- `invalidate_versions_cache()`: Clear cache when file is updated
2. **Performance optimization**:
- Single Redis cache key: `coolify:versions:all`
- Eliminates 2-4 file reads per page load
- 95-97.5% reduction in disk I/O time
- Shared cache across all servers in distributed setup
3. **Updated all consumers to use cached helpers**:
- CheckTraefikVersionJob: Use get_traefik_versions()
- Server/Proxy: Two-level caching (Redis + in-memory per-request)
- CheckForUpdatesJob: Auto-invalidate cache after updating file
- bootstrap/helpers/shared.php: Use cached data for Coolify version
## UI/UX Improvements
1. **Navbar warning indicator**:
- Added yellow warning triangle icon next to "Proxy" menu item
- Appears when server has outdated Traefik version
- Uses existing traefik_outdated_info data for instant checks
- Provides at-a-glance visibility of version issues
2. **Proxy sidebar persistence**:
- Fixed sidebar disappearing when clicking "Switch Proxy"
- Configuration link now always visible (needed for proxy selection)
- Dynamic Configurations and Logs only show when proxy is configured
- Better navigation context during proxy switching workflow
## Code Quality
- Added comprehensive PHPDoc for Server::$traefik_outdated_info property
- Improved code organization with centralized helper approach
- All changes formatted with Laravel Pint
- Maintains backward compatibility
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
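The cached accessor pattern, roughly (helper names come from the commit above; the bodies are sketches):

```php
<?php

use Illuminate\Support\Facades\Cache;

// Cache versions.json for 1 hour under a single shared key, instead of
// re-reading the file 2-4 times per page load.
function get_versions_data(): array
{
    return Cache::remember('coolify:versions:all', 3600, function () {
        return json_decode(file_get_contents(base_path('versions.json')), true);
    });
}

// Called after CheckForUpdatesJob rewrites the file.
function invalidate_versions_cache(): void
{
    Cache::forget('coolify:versions:all');
}
```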
- Store both patch update and newer minor version information simultaneously
- Display patch update availability alongside minor version upgrades in notifications
- Add newer_branch_target and newer_branch_latest fields to traefik_outdated_info
- Update all notification channels (Discord, Telegram, Slack, Pushover, Email, Webhook)
- Show minor version in format (e.g., v3.6) for upgrade targets instead of patch version
- Enhance UI callouts with clearer messaging about available upgrades
- Remove verbose logging in favor of cleaner code structure
- Handle edge case where SSH command returns empty response
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
Addresses critical performance issues identified in code review by refactoring the monolithic CheckTraefikVersionJob into a distributed architecture with parallel processing.
Changes:
- Split version checking into CheckTraefikVersionForServerJob for parallel execution
- Extract notification logic into NotifyOutdatedTraefikServersJob
- Dispatch individual server checks concurrently to handle thousands of servers
- Add comprehensive unit tests for the new job architecture
- Update feature tests to cover the refactored workflow
Performance improvements:
- Sequential SSH calls replaced with parallel queue jobs
- Scales efficiently for large installations with thousands of servers
- Reduces job execution time from hours to minutes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add wait loops to ensure containers are fully removed before restarting.
This fixes race conditions where docker compose would fail because an
existing container was still being cleaned up.
Changes:
- StartProxy: Add explicit stop, wait loop before docker compose up
- StopProxy: Add wait loop after container removal
- Both actions now poll up to 10 seconds for complete removal
- Add error suppression to handle non-existent containers gracefully
Tests:
- Add StartProxyTest.php with 3 tests for cleanup logic
- Add StopProxyTest.php with 4 tests for stop behavior
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
This commit fixes a critical N+1 query issue in CheckTraefikVersionJob
that was loading ALL proxy servers into memory then filtering in PHP,
causing potential OOM errors with thousands of servers.
Changes:
- Added scopeWhereProxyType() query scope to Server model for
database-level filtering using JSON column arrow notation
- Updated CheckTraefikVersionJob to use new scope instead of
collection filter, moving proxy type filtering into the SQL query
- Added comprehensive unit tests for the new query scope
Performance impact:
- Before: SELECT * FROM servers WHERE proxy IS NOT NULL (all servers)
- After: SELECT * FROM servers WHERE proxy->>'type' = 'TRAEFIK' (filtered)
- Eliminates memory overhead of loading non-Traefik servers
- Critical for cloud instances with thousands of connected servers
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
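The scope itself is small; a sketch (Eloquent compiles 'proxy->type' to JSON arrow notation on PostgreSQL):

```php
<?php

use Illuminate\Database\Eloquent\Builder;
use Illuminate\Database\Eloquent\Model;

class Server extends Model
{
    // Filter proxy type in SQL instead of loading every server into PHP.
    public function scopeWhereProxyType(Builder $query, string $type): Builder
    {
        return $query->where('proxy->type', $type);
    }
}

// Usage: only Traefik servers are hydrated.
$servers = Server::whereProxyType('TRAEFIK')->get();
```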
- Add automated Traefik version checking job running weekly on Sundays
- Implement version detection from running containers and comparison with versions.json
- Add notifications across all channels (Email, Discord, Slack, Telegram, Pushover, Webhook) for outdated versions
- Create dismissible callout component with localStorage persistence
- Display cross-branch upgrade warnings (e.g., v3.5 -> v3.6) with changelog links
- Show patch update notifications within same branch
- Add warning icon that appears when callouts are dismissed
- Prevent duplicate notifications during proxy restart by adding restarting parameter
- Fix notification spam with transition-based logic for status changes
- Enable system email settings by default in development mode
- Track last saved/applied proxy settings to detect configuration drift
Move buildpack switching cleanup from Livewire component to Application model's boot lifecycle. This improves separation of concerns and ensures cleanup happens consistently regardless of how the buildpack change is triggered. Also clears Dockerfile-specific data when switching away from dockerfile buildpack.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Merged latest changes from the next branch to keep the feature branch
up to date. No conflicts were encountered during the merge.
Changes from next branch:
- Updated application deployment job error logging
- Updated server manager job and instance settings
- Removed PullHelperImageJob in favor of updated approach
- Database migration refinements
- Updated versions.json with latest component versions
All automatic merges were successful and no manual conflict resolution
was required.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add exception class names to error messages for better debugging
- Mark technical details (error type, code, location, stack trace) as hidden in logs
- Preserve original exception types when wrapping in DeploymentException
- Update ServerManagerJob to include exception class in log messages
- Enhance unit tests to verify hidden log entry behavior
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Optimizations:
- Add immediate cleanup of helper container and server temp files after copying to database
- Add pre-cleanup to handle interrupted restores
- Combine restore + cleanup commands to remove DB temp files immediately after restore
- Reduce temp file lifetime from minutes to seconds (70-80% reduction)
- Add progress tracking via MinIO client (shows by default)
- Update user message to mention progress visibility
Benefits:
- Temp files exist only as long as needed (not until end of process)
- Real-time S3 download progress shown in activity monitor
- Better disk space management through aggressive cleanup
- Improved error recovery with pre-cleanup
Compatibility:
- Works with all database types (PostgreSQL, MySQL, MariaDB, MongoDB)
- All existing tests passing
- Event-based cleanup acts as safety net for edge cases
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Wraps rolling_update(), health_check(), stop_running_container(), and
start_by_compose_file() with try-catch to ensure comprehensive error logging
happens in one place. Removes duplicate logging from intermediate catch blocks
since the failed() method already provides full error details including stack trace
and chained exception information.
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
Major architectural improvements:
- Merged download and restore into single atomic operation
- Eliminated separate S3DownloadFinished event (redundant)
- Files now transfer directly: S3 → helper container → server → database container
- Removed download progress tracking in favor of unified restore progress
UI/UX improvements:
- Unified restore method selection with visual cards
- Consistent "File Information" display between local and S3 restore
- Single slide-over for all restore operations (removed separate S3 download monitor)
- Better visual feedback with loading states
Security enhancements:
- Added isSafeTmpPath() helper for path traversal protection
- URL decode validation to catch encoded attacks
- Canonical path resolution to prevent symlink attacks
- Comprehensive path validation in all cleanup events
Cleanup improvements:
- S3RestoreJobFinished now handles all cleanup (helper container + all temp files)
- RestoreJobFinished uses new isSafeTmpPath() validation
- CoolifyTask dispatches cleanup events even on job failure
- All cleanup uses non-throwing commands (2>/dev/null || true)
Other improvements:
- S3 storage policy authorization on Show component
- Storage Form properly syncs is_usable state after test
- Removed debug code and improved error handling
- Better command organization and documentation
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
Stop dispatching PullHelperImageJob to thousands of servers when the helper image version changes. Instead, rely on Docker's automatic image pulling during actual deployments and backups. Inline the helper image pull in UpdateCoolify for the single use case.
This eliminates queue flooding on cloud instances while maintaining all functionality through Docker's built-in image management.
The first click did nothing because instant_remote_process() blocked the
Livewire response, preventing UI state updates. The button also remained
visible during download, allowing multiple clicks.
- Replace blocking instant_remote_process() with async command in queue
- Add container cleanup to command queue with error suppression
- Hide "Download & Prepare" button when s3DownloadInProgress is true
- Button now properly disappears when clicked, preventing double-clicks
- No more blocking operations in downloadFromS3() method
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The s3DownloadedFile was being set immediately when download started,
causing the "Restore" button to appear while still downloading and
the download message to not hide properly.
- Remove immediate setting of s3DownloadedFile in downloadFromS3()
- Set s3DownloadedFile only in handleS3DownloadFinished() event handler
- Add broadcastWith() to S3DownloadFinished to send downloadPath
- Store downloadPath as public property for broadcasting
- Now download message hides and restore button shows only when complete
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The event was broadcasting to the first user in the team instead of
the actual user who triggered the download. This caused the download
message to never hide for other team members.
- Pass userId in S3DownloadFinished event data
- Use the specific userId from event data for broadcasting
- Remove unused User model import
- Ensures broadcast reaches the correct user's private channel
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Make S3DownloadFinished implement ShouldBroadcast
- Add listener in Import component to handle S3DownloadFinished event
- Set s3DownloadInProgress to false when download completes
- This hides "Downloading from S3..." message after download finishes
- Follows the same pattern as DatabaseStatusChanged event
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Remove App\\Events\\ prefix from event class names
- RunRemoteProcess already prepends App\\Events\\ to the class name
- Use 'S3DownloadFinished' instead of 'App\\Events\\S3DownloadFinished'
- Use 'S3RestoreJobFinished' instead of 'App\\Events\\S3RestoreJobFinished'
- Fixes "Class 'App\Events\App\Events\S3DownloadFinished' not found" error
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Create S3DownloadFinished event to cleanup MinIO containers
- Create S3RestoreJobFinished event to cleanup temp files and S3 downloads
- Add formatBytes() helper function for human-readable file sizes
- Update Import component to use full Event class names in callEventOnFinish
- Fix activity monitor visibility issues with proper event dispatching
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The formatBytes function was used in the view but never defined, causing
a runtime error. This function was needed to display S3 file sizes in
human-readable format (e.g., "1.5 MB" instead of "1572864").
Added formatBytes() helper to bootstrap/helpers/shared.php:
- Converts bytes to human-readable format (B, KB, MB, GB, TB, PB)
- Uses base 1024 for proper binary conversion
- Configurable precision (defaults to 2 decimal places)
- Handles zero bytes case
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
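A sketch matching the described behavior (the real helper lives in bootstrap/helpers/shared.php):

```php
<?php

// Convert a byte count to a human-readable string using base 1024.
function formatBytes(int $bytes, int $precision = 2): string
{
    if ($bytes <= 0) {
        return '0 B';
    }

    $units = ['B', 'KB', 'MB', 'GB', 'TB', 'PB'];
    $power = min((int) floor(log($bytes, 1024)), count($units) - 1);

    return round($bytes / (1024 ** $power), $precision).' '.$units[$power];
}

echo formatBytes(1572864); // "1.5 MB"
```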
Add debugging to understand why the download message stays visible after completion.
This will help us see if:
1. The event is being dispatched by ActivityMonitor
2. The event is being received by Import component
3. The property is being set to false
4. The entangle is syncing to Alpine properly
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Root cause analysis:
- Changed from dispatch to property binding broke the activity monitor completely
- ActivityMonitor component expects activityMonitor event, not property binding
- Original approach was correct: use dispatch + event listeners
Solution:
- Revert to original dispatch('activityMonitor', $activity->id) calls
- Use @if conditionals to render only one monitor at a time (removes from DOM)
- Add unique wire:key to each monitor instance to prevent conflicts
- S3 download monitor: wire:key="s3-download-{{ $resource->uuid }}"
- Database restore monitor: wire:key="database-restore-{{ $resource->uuid }}"
This ensures:
- Activity monitors display correctly when processes start
- Only one monitor is rendered at a time (S3 download OR database restore)
- Each monitor has unique identity via wire:key
- Event listeners work as designed
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add updatedActivityId method to watch for changes to activityId property
- When activityId is set/updated, automatically hydrate the activity and enable polling
- This allows the activity monitor to display content when activityId is bound from parent component
- Fixes issue where activity monitor was empty because activity wasn't loaded
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add S3DownloadFinished event listener to Import component
- Add handleS3DownloadFinished method to set s3DownloadInProgress to false
- This ensures the 'Downloading from S3...' message is hidden when download completes
- The success message now properly displays after download finishes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add currentActivityId property to track the active process
- Replace event dispatching with property assignment for cleaner state management
- S3 download monitor only renders during download and is removed when complete
- Database restore monitor only renders during restore operation
- Both monitors now share the same activity-monitor component instance with proper lifecycle management
- When user starts restore after S3 download, S3 monitor is removed from DOM
- Fixes issue where S3 download and database restore showed identical output
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add unique wire keys to activity-monitor components (s3-download-monitor and database-restore-monitor)
- Update dispatch calls to target specific components using ->to() method
- This prevents both activity monitors from listening to the same activityMonitor event and displaying identical output
- S3 download now shows in s3-download-monitor component
- Database restore now shows in database-restore-monitor component
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add Alpine.js entangle bindings for s3StorageId and s3Path to enable
reactive button state without server requests
- Change button disabled binding from PHP :disabled to Alpine x-bind:disabled
for client-side reactivity using deferred wire:model inputs
- Replace S3Storage::findOrFail with ownedByCurrentTeam()->findOrFail in
checkS3File() and downloadFromS3() methods
- Remove redundant manual team verification since ownedByCurrentTeam scope
automatically filters to current team
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
This commit introduces functionality for integrating S3 storage into the import process. It allows users to select S3 storage, check for file existence, and download files directly from S3. This enhancement improves the flexibility of the import feature by enabling users to work with files stored in S3, addressing a common use case for teams that utilize cloud storage solutions.
Removed a duplicate queue_application_deployment() call in the Heading.php deploy method that caused the "Deployment already queued for this commit" error to display even though the deployment was successfully queued.
Also changed the notification type from 'success' to 'error' when a deployment is actually skipped, for proper user feedback.
Add detection system for PORT environment variable to help users configure applications correctly:
- Add detectPortFromEnvironment() method to Application model to detect PORT env var
- Add getDetectedPortInfoProperty() computed property in General Livewire component
- Display contextual info banners in UI when PORT is detected:
- Warning when PORT exists but ports_exposes is empty
- Warning when PORT doesn't match ports_exposes configuration
- Info message when PORT matches ports_exposes
- Add deployment logging to warn about PORT/ports_exposes mismatches
- Include comprehensive unit tests for port detection logic
The ports_exposes field remains authoritative for proxy configuration, while
PORT detection provides helpful suggestions to users.
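The detection reduces to a comparison like this (names echo the commit message; the real methods live on the Application model and its Livewire component):

```php
<?php

// Returns a suggestion string when PORT and ports_exposes disagree,
// or null when there is nothing to warn about.
function detectPortMismatch(?string $portEnv, ?string $portsExposes): ?string
{
    if ($portEnv === null || $portEnv === '') {
        return null;
    }

    if ($portsExposes === null || trim($portsExposes) === '') {
        return "PORT={$portEnv} is set but Ports Exposes is empty.";
    }

    // ports_exposes can hold a comma-separated list, e.g. "3000,8080".
    $exposed = array_map('trim', explode(',', $portsExposes));

    return in_array($portEnv, $exposed, true)
        ? null
        : "PORT={$portEnv} does not match Ports Exposes ({$portsExposes}).";
}
```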
Track container restart counts from Docker and detect crash loops to provide better visibility into application health issues.
- Add restart_count, last_restart_at, and last_restart_type columns to applications table
- Detect restart count increases from Docker inspect data and send notifications
- Show restart count badge in UI with warning icon on Logs navigation
- Distinguish between crash restarts and manual restarts
- Implement 30-second grace period to prevent false "exited" status during crash loops
- Reset restart count on manual stop, restart, and redeploy actions
- Add unit tests for restart count tracking logic
This helps users quickly identify when containers are in crash loops and need attention, even when the container status flickers between states during Docker's restart backoff period.
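A rough sketch of the tracking logic (the column names match the migration above, and the inspect fields are Docker's; the crash/manual classification and notification hook are assumptions):

```php
// Compare Docker's cumulative RestartCount against the last value we
// stored; an increase means the container restarted since we last looked.
public function trackRestartCount(array $inspect): void
{
    $current = (int) data_get($inspect, 'RestartCount', 0);

    if ($current > $this->restart_count) {
        $this->update([
            'restart_count' => $current,
            'last_restart_at' => now(),
            // A non-zero exit code suggests a crash rather than a manual restart.
            'last_restart_type' => data_get($inspect, 'State.ExitCode', 0) !== 0
                ? 'crash'
                : 'manual',
        ]);
        // Placeholder for the crash-loop notification described above.
    }
}
```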
- Add retry configuration to CoolifyTask (3 tries, 600s timeout)
- Add retry configuration to ScheduledTaskJob (3 tries, configurable timeout)
- Add retry configuration to DatabaseBackupJob (2 tries)
- Implement exponential backoff for all jobs (30s, 60s, 120s intervals)
- Add failed() handlers with comprehensive error logging to scheduled-errors channel
- Add execution tracking: started_at, retry_count, duration (decimal), error_details
- Add configurable timeout field to scheduled tasks (60-3600s, default 300s)
- Update UI to include timeout configuration in task creation/editing forms
- Increase ScheduledJobManager lock expiration from 60s to 90s for high-load environments
- Implement safe queue cleanup with restart vs runtime modes
  - Restart mode: aggressive cleanup (marks all processing jobs as failed)
  - Runtime mode: conservative cleanup (only marks jobs >12h as failed, skips deployments)
- Add cleanup:redis --restart flag for system startup
- Integrate cleanup into Dev.php init() for development environment
- Increase scheduled-errors log retention from 7 to 14 days
- Create comprehensive test suite (unit and feature tests)
- Add TESTING_GUIDE.md with manual testing instructions
Fixes issues with jobs failing after a single attempt and "attempted too many times" errors.
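A minimal sketch of the retry settings on one of the jobs, using Laravel's standard job hooks (the class body is a stand-in, not Coolify's actual implementation):

```php
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Support\Facades\Log;

class ScheduledTaskJob implements ShouldQueue
{
    use Queueable;

    public $tries = 3;      // three attempts before the job is marked failed
    public $timeout = 300;  // default; configurable per task (60-3600s) in the UI

    // Exponential backoff between attempts, matching the intervals above.
    public function backoff(): array
    {
        return [30, 60, 120];
    }

    public function handle(): void
    {
        // ... execute the scheduled task ...
    }

    // Runs once all attempts are exhausted.
    public function failed(\Throwable $exception): void
    {
        Log::channel('scheduled-errors')->error('Scheduled task failed', [
            'error' => $exception->getMessage(),
        ]);
    }
}
```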
- Fix container filtering to properly distinguish base deployments (pullRequestId=0) from PR deployments
- Add deployment cancellation when PR closes via webhook to prevent race conditions
- Prevent CleanupHelperContainersJob from killing active deployment containers
- Enhance error messages with exit codes and actual errors instead of vague "Oops" messages
- Protect status transitions in finally blocks to ensure proper job failure handling
- Added `requiredPort` property to `ServiceApplicationView` to track the required port for services.
- Introduced modal confirmation for removing required ports, including methods to confirm or cancel the action.
- Enhanced `Service` model with `getRequiredPort` and `requiresPort` methods to retrieve port information from service templates.
- Implemented `extractPortFromUrl` method in `ServiceApplication` to extract port from FQDN URLs.
- Updated frontend views to display warnings when required ports are missing from domains.
- Created unit tests for service port validation and extraction logic, ensuring correct behavior for various scenarios.
- Added feature tests for Livewire component handling of domain submissions with required ports.
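A plausible sketch of `extractPortFromUrl()` (the method name comes from the list above; the body is an assumption built on PHP's `parse_url()`):

```php
// Extracts an explicit port from an FQDN-style URL such as
// https://minio.example.com:9000; returns null when no port is present.
public function extractPortFromUrl(string $url): ?int
{
    // parse_url() is unreliable without a scheme, so prepend one if missing.
    if (! str_contains($url, '://')) {
        $url = 'http://'.$url;
    }

    $port = parse_url($url, PHP_URL_PORT);

    return is_int($port) ? $port : null;
}
```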
The function previously named syncGitHubReleases has been renamed to syncReleasesToGitHubRepo for clarity, as it now syncs releases directly to the GitHub repository instead of the CDN. Error handling has also been enhanced to give more informative messages during the cloning, branching, and committing steps. This refactor improves maintainability and provides better feedback on failure.
This change organizes the command within the appropriate Cloud namespace, improving code structure and maintainability. Grouping related commands together makes them easier for future developers to locate.
- Fix SPA toggle not triggering nginx configuration regeneration by capturing old value before syncData
- Fix similar issue with is_http_basic_auth_enabled using value comparison instead of isDirty
- Remove redundant application settings save() call
- Add confirmation modal to nginx generation button to prevent accidental overwrites
- Pass correct type parameter (spa/static) to generateNginxConfiguration method
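The core of the fix, sketched under the assumption that `syncData(true)` persists component state to the model (which is why `isDirty()` was always false afterwards):

```php
// Capture the old value *before* syncData() writes the toggle to the
// model; comparing plain values replaces the broken isDirty() check.
$wasSpa = $this->application->settings->is_spa;

$this->syncData(true);

if ($wasSpa !== $this->application->settings->is_spa) {
    // Regenerate only when the flag actually changed, passing the
    // spa/static type parameter mentioned above.
    $type = $this->application->settings->is_spa ? 'spa' : 'static';
    $this->generateNginxConfiguration($type);
}
```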
- Changed `$cast` to `$casts` in ApplicationSetting model to enable proper boolean casting for new fields.
- Added boolean fields: `is_spa`, `is_build_server_enabled`, `is_preserve_repository_enabled`, `is_container_label_escape_enabled`, `is_container_label_readonly_enabled`, and `use_build_secrets`.
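The one-character bug matters because Eloquent only reads the property named `$casts`; a misspelled `$cast` is silently ignored. After the fix, the model looks roughly like this (field names from the list above):

```php
// Eloquent honors $casts (plural) only; with the misspelled $cast,
// these fields came back as raw integers/strings instead of booleans.
protected $casts = [
    'is_spa' => 'boolean',
    'is_build_server_enabled' => 'boolean',
    'is_preserve_repository_enabled' => 'boolean',
    'is_container_label_escape_enabled' => 'boolean',
    'is_container_label_readonly_enabled' => 'boolean',
    'use_build_secrets' => 'boolean',
];
```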
fix: Update Livewire component to reflect new property names
- Updated references in the Livewire component for the new camelCase property names.
- Adjusted bindings and IDs for consistency with the updated model.
test: Add unit tests for ApplicationSetting boolean casting
- Created tests to verify boolean casting for `is_static` and other boolean fields in ApplicationSetting.
- Ensured all boolean fields are correctly defined in the casts array.
test: Implement tests for SynchronizesModelData trait
- Added tests to verify the functionality of the SynchronizesModelData trait, ensuring it correctly syncs properties between the component and the model.
- Included tests for handling non-existent properties gracefully.
The `custom_labels` attribute was being concatenated twice into the configuration hash calculation within the `isConfigurationChanged` method. This commit removes the redundant inclusion to ensure accurate configuration change detection.
The `is_array` check for `custom_network_aliases_array` was too strict and could lead to issues when the value was an empty string or null. This commit changes the check to `!empty()` for more robust handling.
Additionally, the unit tests for `custom_network_aliases` have been refactored to directly use the `Application::isConfigurationChanged()` method. This provides a more accurate and integrated test of the configuration change detection logic, rather than relying on a manual hash calculation.
The custom_network_aliases attribute in the Application model was being cast to an array directly. This commit refactors the attribute to provide both a string representation (for compatibility with older configurations and hashing) and an array representation for internal use. This ensures that network aliases are correctly parsed and utilized, preventing potential issues during deployment and configuration updates.
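One way to express the dual representation with a Laravel accessor (the attribute names come from the commits above; the parsing logic is an assumption):

```php
use Illuminate\Database\Eloquent\Casts\Attribute;

// custom_network_aliases stays a plain comma-separated string in the
// database (kept for hashing and older configurations), while this
// accessor exposes the parsed array for internal use during deployment.
protected function customNetworkAliasesArray(): Attribute
{
    return Attribute::make(
        get: fn () => ! empty($this->custom_network_aliases)
            ? array_map('trim', explode(',', $this->custom_network_aliases))
            : [],
    );
}
```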
This command queries Docker registries (Docker Hub, GHCR, Quay, Codeberg) to find and update Docker image versions in service template files.
Features:
- Automatically updates 'latest' tags to semantic versions using digest matching
- Supports multiple version formats: semantic (1.2.3), date-based (2025.10.20), RELEASE timestamps
- Prefers shorter version tags (1.8 over 1.8.1) when both are available
- In-memory caching to avoid duplicate API queries for same images
- Detects and reports services with available major version updates
- Preserves YAML formatting and comments
- Supports dry-run mode for preview
Usage:
`php artisan services:update-versions [--dry-run] [--service=name]`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The login and forgot-password rate limiters were vulnerable to bypass
by manipulating the X-Forwarded-For header. Attackers could rotate
this header value to circumvent the 5 attempts per minute limit.
Changed both rate limiters to use server('REMOTE_ADDR') instead of
ip() to prevent header spoofing. REMOTE_ADDR gives the actual
connecting IP before proxy headers are processed.
Also added comprehensive unit tests to verify the fix.
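A minimal sketch of the hardened limiter, using Laravel's standard RateLimiter API (the limiter name and 5-per-minute limit come from the description above):

```php
use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\RateLimiter;

RateLimiter::for('login', function (Request $request) {
    // REMOTE_ADDR is the TCP-level peer address, so it cannot be forged
    // by rotating X-Forwarded-For the way $request->ip() can behind a
    // trusted-proxy configuration.
    return Limit::perMinute(5)->by($request->server('REMOTE_ADDR'));
});
```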
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add null check before updating OAuth settings to prevent calling methods on null
- Apply couldBeEnabled() validation for all settings in bulk update (not just instant save)
- Disable OAuth providers that fail validation and collect error messages
- Surface all validation errors to the user instead of silently failing
- Update oauth_settings_map with fresh data after saving each setting
This ensures bulk updates follow the same validation logic as instant-save paths
and prevents bypassing model validation by directly calling update.
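An illustrative sketch of the bulk-update loop (`couldBeEnabled()` and `oauth_settings_map` are named above; everything else is an assumption about the surrounding component):

```php
$errors = [];

foreach ($this->oauth_settings_map as $provider => $setting) {
    if ($setting === null) {
        continue; // guard against calling methods on a missing setting
    }

    if ($setting->enabled && ! $setting->couldBeEnabled()) {
        // Disable providers that fail validation and remember why.
        $setting->enabled = false;
        $errors[] = "{$provider}: configuration is incomplete";
    }

    $setting->save();

    // Refresh the map entry so later logic sees the persisted state.
    $this->oauth_settings_map[$provider] = $setting->fresh();
}

if ($errors !== []) {
    // Surface every failure instead of silently dropping them.
    $this->dispatch('error', implode('; ', $errors));
}
```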
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
This commit fixes two related issues preventing the Monaco editor from displaying Docker Compose file content:
1. Data Sync Issue:
- After loadComposeFile() fetches the compose content from Git and updates the database model, the Livewire component properties were never synced
- Monaco editor binds to component properties via wire:model, so it remained empty
- Fixed by calling syncFromModel() after refresh() in loadComposeFile() method
2. Script Duplication Issue:
- Multiple Monaco editors on the same page (compose files, dockerfile, labels) caused a race condition
- Each instance tried to inject the Monaco loader script simultaneously
- Resulted in "SyntaxError: Identifier '_amdLoaderGlobal' has already been declared"
- Fixed by adding a global flag to prevent duplicate script injection
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add defensive null/empty checks in externalDbUrl() for all standalone database models to prevent "invalid proto:" errors when the server IP is not available.
**Problem:**
When `$this->destination->server->getIp` returns null or empty string, database URLs become malformed (e.g., `mongodb://user:pass@:27017` with empty host), causing "invalid proto:" validation errors.
**Solution:**
Added early return with null check in externalDbUrl() method for all 8 database types:
- Check if server IP is empty before building URL
- Return null instead of generating malformed URL
- Maintains graceful degradation: the UI handles null URLs appropriately
**Defense in Depth:**
While the mount() guard (from commit 74c70b431) prevents most cases, this adds an additional safety layer for edge cases:
- Race conditions during server updates
- State changes between mount and URL access
- Direct model access bypassing Livewire lifecycle
**Affected Models:**
- StandaloneMongodb
- StandalonePostgresql
- StandaloneMysql
- StandaloneMariadb
- StandaloneClickhouse
- StandaloneRedis
- StandaloneKeydb
- StandaloneDragonfly
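The guard, sketched for StandalonePostgresql (the other seven models follow the same pattern; the URL-building line and column names are illustrative):

```php
public function externalDbUrl(): ?string
{
    $ip = $this->destination->server->getIp;

    // Without a host the URL is malformed (e.g. "postgres://user:pass@:5432"),
    // which is what triggered the "invalid proto:" errors. Returning null
    // lets the UI degrade gracefully instead.
    if (empty($ip)) {
        return null;
    }

    return "postgres://{$this->postgres_user}:{$this->postgres_password}@{$ip}:{$this->public_port}/{$this->postgres_db}";
}
```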
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
This commit adds official Docker repository installation methods as fallbacks
when Rancher and get.docker.com convenience scripts fail, providing more
reliable Docker installation across all supported operating systems.
Changes:
- Add apt repository fallback for Debian-based systems (Ubuntu, Debian, Raspbian)
  - Fixes installation on Debian 13 (Trixie) where get.docker.com fails
  - Uses VERSION_CODENAME for automatic OS version detection
- Add dnf repository fallback for RHEL-based systems (CentOS, Fedora, Rocky, AlmaLinux)
- Add zypper repository fallback for SUSE-based systems (SLES, OpenSUSE)
- Refactor installation methods into dedicated private methods for better maintainability
Installation fallback chain:
1. Rancher install-docker script (preserves version pinning)
2. Docker get.docker.com convenience script
3. Official repository method (new, most reliable)
Benefits:
- Future-proof: Works with new OS releases automatically
- Production-ready: Uses Docker's recommended installation method
- Comprehensive: Covers 95%+ of Linux servers in production
- Maintainable: Clean code structure with single-responsibility methods
Fixes an issue where Debian 13 (Trixie) servers fail validation because the get.docker.com script incorrectly uses the numeric version "13" instead of the codename "trixie" in repository URLs.
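A hypothetical shape for the Debian-family fallback (the shell steps mirror Docker's documented repository installation; the method name and how commands are executed on the server are assumptions):

```php
// Builds the apt-repository installation command chain for Debian-based
// systems. Using VERSION_CODENAME (e.g. "trixie") sidesteps the numeric
// "13" bug that breaks get.docker.com on Debian 13.
private function installDockerViaAptRepository(): string
{
    return collect([
        'apt-get update',
        'apt-get install -y ca-certificates curl',
        'install -m 0755 -d /etc/apt/keyrings',
        'curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc',
        'echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] '.
            'https://download.docker.com/linux/debian $(. /etc/os-release && echo $VERSION_CODENAME) stable" '.
            '> /etc/apt/sources.list.d/docker.list',
        'apt-get update',
        'apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin',
    ])->implode(' && ');
}
```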