Add RustFS service definition with Docker Compose configuration and SVG logo for Coolify's service marketplace. Includes S3-compatible object storage setup with health checks and configurable environment variables.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Features:
- Add client-side search filtering for runtime and deployment logs
- Add log download functionality (respects search filters)
- Make runtime log sections collapsible by default
- Auto-expand when there is a single container and lazy-load logs on first expand
- Match deployment and runtime log view heights (40rem)
- Add debug toggle for deployment logs
- Improve scroll behavior with follow logs feature
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The guard was setting and immediately resetting the flag in the same
synchronous execution, providing no actual protection. Now the flag
stays true until the proxy reaches a stable state (running/exited/error)
via WebSocket notification, with an additional client-side guard.
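A minimal sketch of the intended guard behavior, assuming a Livewire 3 component; the property, event, and class names here are illustrative rather than the actual Coolify code:

```php
<?php

use Livewire\Attributes\On;
use Livewire\Component;

class Navbar extends Component
{
    public $server;                         // the server whose proxy is being restarted
    public bool $restartInitiated = false;

    public function restart(): void
    {
        if ($this->restartInitiated) {
            return; // client-side guard: ignore repeated clicks while a restart is pending
        }

        $this->restartInitiated = true;     // stays true until a stable status arrives
        RestartProxyJob::dispatch($this->server);
    }

    #[On('proxy-status-changed')]           // assumed to be fed by the WebSocket notification
    public function onProxyStatusChanged(string $status): void
    {
        // Only clear the guard once the proxy reaches a terminal state.
        if (in_array($status, ['running', 'exited', 'error'], true)) {
            $this->restartInitiated = false;
        }
    }
}
```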
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
When stopping a service that's currently deploying, mark any IN_PROGRESS or QUEUED activities as CANCELLED. This prevents the status from remaining stuck at "starting" after containers are stopped.
Follows the existing pattern used in forceDeploy().
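An illustrative sketch only, assuming activities live in spatie/laravel-activitylog with a status kept in the properties JSON column; the property keys and status strings are assumptions, not Coolify's actual schema:

```php
<?php

use Spatie\Activitylog\Models\Activity;

// Cancel any queued or in-progress activities for the resource being stopped,
// so its status cannot stay stuck at "starting".
function cancelInFlightActivities(string $resourceUuid): void
{
    Activity::query()
        ->where('properties->type_uuid', $resourceUuid)             // assumed property key
        ->whereIn('properties->status', ['in_progress', 'queued'])  // assumed status values
        ->each(function (Activity $activity) {
            $properties = $activity->properties->toArray();
            $properties['status'] = 'cancelled';
            $activity->properties = $properties;
            $activity->save();
        });
}
```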
Extended the maximum allowed timeout for scheduled tasks from 3600 to 36000 seconds (10 hours). Also passes the configured timeout to instant_remote_process() so the SSH command respects the timeout setting.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Allows user-configured backup timeouts greater than 3600 seconds to be respected. Previously, the SSH process used a hardcoded 3600-second timeout regardless of the job timeout setting. Now the timeout is passed through to instant_remote_process() for all backup operations.
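A hedged sketch of the call; the timeout parameter name and the surrounding variables are assumptions, and the real instant_remote_process() signature may differ:

```php
<?php

// Use the job's configured timeout instead of the previous hardcoded 3600 seconds.
$timeout = $scheduledBackup->timeout ?? 3600; // assumed property; falls back to the old default

instant_remote_process(
    $backupCommands,
    $server,
    timeout: $timeout, // assumed named argument passed through to the SSH process
);
```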
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
When restarting the proxy on localhost (server id 0), show a warning
banner in the logs sidebar explaining that the connection may be
temporarily lost and that the browser should be refreshed if logs
stop updating.
Also clean up notification noise by commenting out intermediate
status notifications (restarting, starting, stopping) that were
redundant with the visual status indicators.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The error "container name already in use" occurred because the container
wasn't fully removed before docker compose up tried to create a new one.
Changes:
- Removed redundant stop/remove logic from START PHASE (was duplicating STOP PHASE)
- Made STOP PHASE more robust:
  - Increased wait iterations from 10 to 15
  - Added force remove on each iteration in case the container got stuck
  - Added final verification and force cleanup after the loop
- Added better logging to show removal progress
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Instead of calling StopProxy::run() (synchronous) and then StartProxy::run()
(async), we now build a single command sequence that includes both the stop
and start phases. This creates one Activity immediately via remote_process(),
so the UI receives the activity ID right away and can show logs in real time
from the very beginning of the restart operation.
Key changes:
- Removed dependency on StopProxy and StartProxy actions
- Build combined command sequence inline in buildRestartCommands()
- Use remote_process() directly which returns Activity immediately
- Increased timeout from 60s to 120s to accommodate full restart
- Activity ID dispatched to UI within milliseconds of job starting
Flow is now:
1. Job starts → sets "restarting" status
2. Commands built synchronously (fast, no SSH)
3. remote_process() creates Activity and dispatches CoolifyTask job
4. Activity ID sent to UI immediately via WebSocket
5. UI opens activity monitor with real-time streaming logs
6. Logs show "Stopping proxy..." then "Starting proxy..." as they happen
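A rough sketch of this flow; the command list, paths, and event parameters below are simplified assumptions rather than the exact Coolify implementation:

```php
<?php

use Illuminate\Contracts\Queue\ShouldQueue;

class RestartProxyJob implements ShouldQueue
{
    // ...

    private function buildRestartCommands(): array
    {
        return [
            "echo 'Stopping proxy...'",
            // stop phase: stop and remove the existing proxy container
            'docker stop coolify-proxy || true',
            'docker rm -f coolify-proxy || true',
            "echo 'Starting proxy...'",
            // start phase: bring the proxy back up from its compose file
            'docker compose -f /data/coolify/proxy/docker-compose.yml up -d',
        ];
    }

    public function handle(): void
    {
        // One Activity covers both phases, so the UI can stream logs immediately.
        $activity = remote_process($this->buildRestartCommands(), $this->server);

        // Hand the activity ID to the UI right away via WebSocket.
        ProxyStatusChangedUI::dispatch($this->server->team_id, 'restarting', $activity->id);
    }
}
```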
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Set the proxy status to 'restarting' and dispatch the ProxyStatusChangedUI
event at the very beginning of the handle() method, before StopProxy runs.
This notifies the UI immediately so users know a restart is in progress,
rather than waiting until after the stop operation completes.
Also simplified unit tests to focus on testable job configuration
(middleware, tries, timeout) without complex SchemalessAttributes mocking.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add restartInitiated flag to prevent duplicate "Proxy restart initiated" messages
- Restore ProxyStatusChangedUI dispatch with activityId in RestartProxyJob
- This allows the UI to open the activity monitor and show logs during restart
- Simplified restart message (removed redundant "Monitor progress" text)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The Application::loadComposeFile method's finally block always saves
the model, which was persisting invalid base_directory values when
validation failed.
Changes:
- Add restoreBaseDirectory and restoreDockerComposeLocation parameters
to loadComposeFile() in both Application model and General component
- The finally block now restores BOTH base_directory and
docker_compose_location to the provided original values before saving
- When called from submit(), pass the original DB values so they are
restored on failure instead of the new invalid values
This ensures invalid paths are never persisted to the database.
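A simplified sketch of the restore-on-failure behavior; the parameter names come from the description above, while the helper and surrounding structure are assumptions:

```php
<?php

// In App\Models\Application (other parameters omitted).
public function loadComposeFile(
    ?string $restoreBaseDirectory = null,
    ?string $restoreDockerComposeLocation = null,
): void {
    $failed = false;

    try {
        $this->validateAndParseComposeFile(); // assumed helper; throws on invalid paths
    } catch (\Throwable $e) {
        $failed = true;
        throw $e;
    } finally {
        if ($failed && $restoreBaseDirectory !== null && $restoreDockerComposeLocation !== null) {
            // Put the caller-provided originals back so the unconditional
            // save below never persists the invalid input.
            $this->base_directory = $restoreBaseDirectory;
            $this->docker_compose_location = $restoreDockerComposeLocation;
        }

        $this->save(); // the finally block always saves the model
    }
}
```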
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
When restarting the proxy on localhost (where Coolify is running), the UI becomes inaccessible because the connection is lost. This change makes all proxy restarts run as background jobs with WebSocket notifications, allowing the operation to complete even after connection loss.
Changes:
- Enhanced ProxyStatusChangedUI event to carry activityId for log monitoring (see the sketch below)
- Updated RestartProxyJob to dispatch status events and track activity
- Simplified Navbar restart() to always dispatch job for all servers
- Enhanced showNotification() to handle activity monitoring and new statuses
- Added comprehensive unit and feature tests
Benefits:
- Prevents localhost lockout during proxy restarts
- Consistent behavior across all server types
- Non-blocking UI with real-time progress updates
- Automatic activity log monitoring
- Proper error handling and recovery
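A hypothetical shape for an event that carries the activity ID; the constructor parameters and broadcast channel are assumptions for illustration, not the actual event class:

```php
<?php

use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Broadcasting\PrivateChannel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
use Illuminate\Foundation\Events\Dispatchable;
use Illuminate\Queue\SerializesModels;

class ProxyStatusChangedUI implements ShouldBroadcast
{
    use Dispatchable, InteractsWithSockets, SerializesModels;

    public function __construct(
        public int $teamId,
        public string $status,              // e.g. restarting, running, exited
        public ?string $activityId = null,  // lets the UI open the activity monitor
    ) {}

    public function broadcastOn(): array
    {
        return [new PrivateChannel('team.'.$this->teamId)];
    }
}
```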
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Move compose file validation BEFORE database save to prevent invalid
base directory and docker compose location values from being persisted
when validation fails.
Changes:
- Move compose file validation before $this->application->save()
- Restore original values when validation fails
- Add resetErrorBag() to clear stale validation errors
This fixes two bugs:
1. Invalid paths were saved to DB even when validation failed
2. Error messages persisted after correcting to valid path
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Apply the same frontend path normalization pattern from commit f6398f7cf
to the General Settings page for consistency across all forms.
Changes:
- Add Alpine.js path normalization to Docker Compose section (base directory + compose location)
- Add Alpine.js path normalization to non-Docker Compose section (base directory + dockerfile location)
- Change wire:model to wire:model.defer to prevent backend requests during tab navigation
- Add @blur event handlers for immediate path normalization feedback
- Backend normalization remains as defensive fallback
This ensures consistent validation behavior and fixes potential tab focus
issues on the General Settings page.
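A hedged Blade/Alpine sketch of normalization on blur; the bound property name and normalization rules are assumptions, not the exact markup used on the General Settings page:

```blade
<input
    type="text"
    wire:model.defer="application.base_directory"
    x-data
    @blur="
        let v = $el.value.trim();
        if (v && !v.startsWith('/')) v = '/' + v;   // ensure a leading slash
        v = v.replace(/\/+$/, '') || '/';           // drop trailing slashes
        $el.value = v;
        $el.dispatchEvent(new Event('input'));       // sync the deferred wire:model binding
    "
/>
```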
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Change wire:model.blur to wire:model.defer to prevent backend requests
during form navigation. Add Alpine.js path normalization functions that
run on blur, fixing tab focus issues while keeping path validation
purely on the frontend.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Move Carbon::setTestNow() to the beginning of each test before creating
test data. Previously, tests created servers using now() (real current
time) and only afterwards called Carbon::setTestNow(), making
sentinel_updated_at inconsistent with the test clock.
This caused staleness calculations to use different timelines:
- sentinel_updated_at was based on real time (e.g., Dec 2024)
- Test execution time was frozen at 2025-01-15
Now all timestamps use the same frozen test time, making staleness
checks predictable and tests reliable regardless of when they run.
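A minimal Pest-style sketch of the corrected ordering; the factory attributes and assertions are simplified stand-ins:

```php
<?php

use Illuminate\Support\Carbon;

it('dispatches storage check when sentinel is out of sync', function () {
    // Freeze time FIRST so every timestamp below uses the same clock.
    Carbon::setTestNow(Carbon::parse('2025-01-15 00:00:00'));

    $server = Server::factory()->create([
        // now() already returns the frozen test time, so sentinel_updated_at
        // is consistent with the staleness calculation under test.
        'sentinel_updated_at' => now()->subMinutes(10),
    ]);

    // ... run ServerManagerJob and assert ServerStorageCheckJob was dispatched
});
```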
Affected tests (all 7 test cases in the file):
- does not dispatch storage check when sentinel is in sync
- dispatches storage check when sentinel is out of sync
- dispatches storage check when sentinel is disabled
- respects custom hourly storage check frequency when sentinel is out of sync
- handles VALID_CRON_STRINGS mapping correctly when sentinel is out of sync
- respects server timezone for storage checks when sentinel is out of sync
- does not dispatch storage check outside schedule
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
When Sentinel is enabled and in sync, ServerStorageCheckJob was being
dispatched from two locations, causing unnecessary duplication:
1. PushServerUpdateJob (every ~30s with real-time filesystem data)
2. ServerManagerJob (scheduled cron check via SSH)
This commit modifies ServerManagerJob to only dispatch ServerStorageCheckJob
when Sentinel is out of sync or disabled. When Sentinel is active and in sync,
PushServerUpdateJob provides real-time storage data, making the scheduled SSH
check redundant.
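A short sketch of the new condition; the helper methods on the server model are assumptions standing in for the real checks:

```php
<?php

// Inside ServerManagerJob: only fall back to the SSH-based check when
// Sentinel cannot provide real-time data.
if (! $server->isSentinelEnabled() || $server->isSentinelOutOfSync()) {
    ServerStorageCheckJob::dispatch($server);
}
// When Sentinel is enabled and in sync, PushServerUpdateJob already reports
// filesystem usage (roughly every 30 seconds), so the scheduled check is skipped.
```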
Benefits:
- Eliminates duplicate storage checks when Sentinel is working
- Reduces unnecessary SSH overhead
- Storage checks still run as fallback when Sentinel fails
- Maintains scheduled checks for servers without Sentinel
Updated tests to reflect new behavior:
- Storage check NOT dispatched when Sentinel is in sync
- Storage check dispatched when Sentinel is out of sync or disabled
- All timezone and frequency tests updated accordingly
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Dispatch ProxyStatusChangedUI event after version check completes so the UI updates in real time without requiring a page refresh.
Changes:
- Add ProxyStatusChangedUI::dispatch() at all exit points in CheckTraefikVersionForServerJob
- Ensures UI refreshes automatically via WebSocket when version check completes
- Works for all scenarios: version detected, using latest tag, outdated version, up-to-date
User experience:
- User restarts proxy
- Warning clears automatically in real time (no refresh needed)
- Leverages existing WebSocket infrastructure
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
When users updated Traefik configuration or version and restarted the proxy, the warning triangle icon showing outdated version info persisted until the weekly CheckTraefikVersionJob ran (Sundays at 00:00).
This was caused by the UI warning indicators reading from cached database columns (detected_traefik_version, traefik_outdated_info) that were only updated by the weekly scheduled job, not after proxy restarts.
Solution: Add version check to ProxyStatusChangedNotification listener that triggers automatically after proxy status changes to "running".
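A hedged sketch of the listener hook; only the class and job names come from this description, the body is illustrative:

```php
<?php

class ProxyStatusChangedNotification
{
    public function handle(ProxyStatusChanged $event): void
    {
        $server = $event->server; // assumed event payload

        // ... existing status handling ...

        if (data_get($server, 'proxy.status') === 'running') {
            // Refresh detected_traefik_version / traefik_outdated_info in the
            // background now that the proxy is confirmed running.
            CheckTraefikVersionForServerJob::dispatch($server);
        }
    }
}
```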
Changes:
- Add Traefik version check in ProxyStatusChangedNotification::handle()
- Triggers automatically when ProxyStatusChanged event fires with status="running"
- Removed duplicate version check from Navbar::restart() (now handled by event)
- Event fires after StartProxy/StopProxy actions complete via async jobs
- Gracefully handles missing versions.json data with warning log
Benefits:
- Version check happens AFTER proxy is confirmed running (more accurate)
- Reuses existing event infrastructure (ProxyStatusChanged)
- Works for all proxy restart scenarios (manual restart, config save + restart, etc.)
- No duplicate checks - single source of truth in event listener
- Async job runs in background (5-10 seconds) to update database
- User sees warning cleared after page refresh
Flow:
1. User updates config and restarts proxy (or manually restarts)
2. StartProxy action completes async, dispatches ProxyStatusChanged event
3. ProxyStatusChangedNotification listener receives event
4. Listener checks proxy status = "running", dispatches CheckTraefikVersionForServerJob
5. Job detects version via SSH, updates database columns
6. UI re-renders with cleared warnings
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Delete the custom SVG icon now that the official PNG icon has been
added and referenced in the service template.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Replace custom SVG with the official app-icon.png (512x512) from
the Fizzy repository. This ensures brand consistency and uses the
authentic Fizzy branding.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add preserveRestarting parameter to ContainerStatusAggregator to allow applications
and service sub-resources to display "Restarting" status instead of being marked as
"Degraded". This gives better visibility into container restart behavior.
- Update ContainerStatusAggregator to accept preserveRestarting parameter (defaults to false)
- Update GetContainersStatus to use preserveRestarting: true for applications and service sub-resources
- Update PushServerUpdateJob to use preserveRestarting: true for applications and service sub-resources
- Add comprehensive documentation explaining the parameter behavior and when to use it
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add support for degraded status from sub-resources as highest priority
- Handle mixed running+starting state to show service as not fully ready
- Update state priority hierarchy from 8 to 10 levels
- Add comprehensive test coverage for new status scenarios
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add SOLID_QUEUE_CONNECTS_TO=false environment variable to tell
Solid Queue to use the primary SQLite database instead of looking
for a separate queue database. This fixes the ActiveRecord adapter
error about missing queue database configuration.
With this setting, Fizzy will use a single SQLite database for both
the application data and Solid Queue job processing, which is the
recommended approach for single-instance deployments.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Remove MariaDB dependency and use Fizzy's default SQLite database.
This simplifies deployment by:
- Removing external database container
- Using DATABASE_ADAPTER=sqlite3 environment variable
- Mounting /rails/db volume for SQLite database persistence
- Reducing resource requirements and startup time
Fizzy supports SQLite by default and it's the recommended setup
for single-instance deployments.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Changed MARIADB_ROOT_PASSWORD from $SERVICE_PASSWORD_MARIADB_ROOT
to $SERVICE_PASSWORD_ROOT to match Coolify's standard naming convention
for root passwords. This fixes the MariaDB initialization error.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Fizzy uses MySQL/MariaDB, not PostgreSQL. Update the service template to use:
- MariaDB 11 instead of PostgreSQL
- mysql2:// protocol in DATABASE_URL
- Proper MariaDB environment variables and health checks
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Replace custom logo with official Fizzy equalizer/bar chart design from
the basecamp/fizzy repository.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add Basecamp's Fizzy Kanban tracking tool as a one-click deployable service.
- Uses official Docker image: ghcr.io/basecamp/fizzy:main
- PostgreSQL database with auto-generated credentials
- Rails environment with SECRET_KEY_BASE and RAILS_MASTER_KEY
- VAPID keys for web push notifications
- Health checks on /up endpoint
- Persistent storage for Rails storage directory
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>