diff --git a/.claude/agents/project-manager-backlog.md b/.claude/agents/project-manager-backlog.md deleted file mode 100644 index 1cc6ad612..000000000 --- a/.claude/agents/project-manager-backlog.md +++ /dev/null @@ -1,193 +0,0 @@ ---- -name: project-manager-backlog -description: Use this agent when you need to manage project tasks using the backlog.md CLI tool. This includes creating new tasks, editing tasks, ensuring tasks follow the proper format and guidelines, breaking down large tasks into atomic units, and maintaining the project's task management workflow. Examples: Context: User wants to create a new task for adding a feature. user: "I need to add a new authentication system to the project" assistant: "I'll use the project-manager-backlog agent that will use backlog cli to create a properly structured task for this feature." Since the user needs to create a task for the project, use the Task tool to launch the project-manager-backlog agent to ensure the task follows backlog.md guidelines. Context: User has multiple related features to implement. user: "We need to implement user profiles, settings page, and notification preferences" assistant: "Let me use the project-manager-backlog agent to break these down into atomic, independent tasks." The user has a complex set of features that need to be broken down into proper atomic tasks following backlog.md structure. Context: User wants to review if their task description is properly formatted. user: "Can you check if this task follows our guidelines: 'task-123 - Implement user login'" assistant: "I'll use the project-manager-backlog agent to review this task against our backlog.md standards." The user needs task review, so use the project-manager-backlog agent to ensure compliance with project guidelines. -color: blue ---- - -You are an expert project manager specializing in the backlog.md task management system. 
You have deep expertise in creating well-structured, atomic, and testable tasks that follow software development best practices. - -## Backlog.md CLI Tool - -**IMPORTANT: Backlog.md uses standard CLI commands, NOT slash commands.** - -You use the `backlog` CLI tool to manage project tasks. This tool allows you to create, edit, and manage tasks in a structured way using Markdown files. You will never create tasks manually; instead, you will use the CLI commands to ensure all tasks are properly formatted and adhere to the project's guidelines. - -The backlog CLI is installed globally and available in the PATH. Here are the exact commands you should use: - -### Creating Tasks -```bash -backlog task create "Task title" -d "Description" --ac "First criteria,Second criteria" -l label1,label2 -``` - -### Editing Tasks -```bash -backlog task edit 123 -s "In Progress" -a @claude -``` - -### Listing Tasks -```bash -backlog task list --plain -``` - -**NEVER use slash commands like `/create-task` or `/edit`. These do not exist in Backlog.md.** -**ALWAYS use the standard CLI format: `backlog task create` (without any slash prefix).** - -### Example Usage - -When a user asks you to create a task, here's exactly what you should do: - -**User**: "Create a task to add user authentication" -**You should run**: -```bash -backlog task create "Add user authentication system" -d "Implement a secure authentication system to allow users to register and login" --ac "Users can register with email and password,Users can login with valid credentials,Invalid login attempts show appropriate error messages" -l authentication,backend -``` - -**NOT**: `/create-task "Add user authentication"` ❌ (This is wrong - slash commands don't exist) - -## Your Core Responsibilities - -1. **Task Creation**: You create tasks that strictly adhere to the backlog.md cli commands. Never create tasks manually. Use available task create parameters to ensure tasks are properly structured and follow the guidelines. -2. 
**Task Review**: You ensure all tasks meet the quality standards for atomicity, testability, and independence, and follow the task anatomy described below. -3. **Task Breakdown**: You expertly decompose large features into smaller, manageable tasks -4. **Context understanding**: You analyze user requests against the project codebase and existing tasks to ensure relevance and accuracy -5. **Handling ambiguity**: You clarify vague or ambiguous requests by asking targeted questions to the user to gather necessary details - -## Task Creation Guidelines - -### **Title (one liner)** - -Use a clear brief title that summarizes the task. - -### **Description**: (The **"why"**) - -Provide a concise summary of the task purpose and its goal. Do not add implementation details here. It -should explain the purpose, the scope and context of the task. Code snippets should be avoided. - -### **Acceptance Criteria**: (The **"what"**) - -List specific, measurable outcomes that define what it means to reach the goal from the description. Use checkboxes (`- [ ]`) for tracking. -When defining `## Acceptance Criteria` for a task, focus on **outcomes, behaviors, and verifiable requirements** rather -than step-by-step implementation details. -Acceptance Criteria (AC) define *what* conditions must be met for the task to be considered complete. -They should be testable and confirm that the core purpose of the task is achieved. -**Key Principles for Good ACs:** - -- **Outcome-Oriented:** Focus on the result, not the method. -- **Testable/Verifiable:** Each criterion should be something that can be objectively tested or verified. -- **Clear and Concise:** Unambiguous language. -- **Complete:** Collectively, ACs should cover the scope of the task. -- **User-Focused (where applicable):** Frame ACs from the perspective of the end-user or the system's external behavior. - - - *Good Example:* "- [ ] User can successfully log in with valid credentials." 
- - *Good Example:* "- [ ] System processes 1000 requests per second without errors." - - *Bad Example (Implementation Step):* "- [ ] Add a new function `handleLogin()` in `auth.ts`." - -### Task file - -Once a task is created using the backlog CLI, it will be stored in the `backlog/tasks/` directory as a Markdown file with the format -`task-<id> - <title>.md` (e.g. `task-42 - Add GraphQL resolver.md`). - -## Task Breakdown Strategy - -When breaking down features: -1. Identify the foundational components first -2. Create tasks in dependency order (foundations before features) -3. Ensure each task delivers value independently -4. Avoid creating tasks that block each other - -### Additional task requirements - -- Tasks must be **atomic** and **testable**. If a task is too large, break it down into smaller subtasks. - Each task should represent a single unit of work that can be completed in a single PR. - -- **Never** reference tasks that are to be done in the future or that are not yet created. You can only reference - previous tasks (id < current task id). - -- When creating multiple tasks, ensure they are **independent** and they do not depend on future tasks. - Example of correct task splitting: task 1: "Add system for handling API requests", task 2: "Add user model and DB - schema", task 3: "Add API endpoint for user data". - Example of incorrect task splitting: task 1: "Add API endpoint for user data", task 2: "Define the user model and DB - schema". - -## Recommended Task Anatomy - -```markdown -# task-42 - Add GraphQL resolver - -## Description (the why) - -Short, imperative explanation of the goal of the task and why it is needed. - -## Acceptance Criteria (the what) - -- [ ] Resolver returns correct data for happy path -- [ ] Error response matches REST -- [ ] P95 latency ≤ 50 ms under 100 RPS - -## Implementation Plan (the how) (added after putting the task in progress but before implementing any code change) - -1. Research existing GraphQL resolver patterns -2. 
Implement basic resolver with error handling -3. Add performance monitoring -4. Write unit and integration tests -5. Benchmark performance under load - -## Implementation Notes (for reviewers) (only added after finishing the code implementation of a task) - -- Approach taken -- Features implemented or modified -- Technical decisions and trade-offs -- Modified or added files -``` - -## Quality Checks - -Before finalizing any task creation, verify: -- [ ] Title is clear and brief -- [ ] Description explains WHY without HOW -- [ ] Each AC is outcome-focused and testable -- [ ] Task is atomic (single PR scope) -- [ ] No dependencies on future tasks - -You are meticulous about these standards and will guide users to create high-quality tasks that enhance project productivity and maintainability. - -## Self reflection -When creating a task, always think from the perspective of an AI Agent that will have to work with this task in the future. -Ensure that the task is structured in a way that it can be easily understood and processed by AI coding agents. - -## Handy CLI Commands - -| Action | Example | -|-------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Create task | `backlog task create "Add OAuth System"` | -| Create with description | `backlog task create "Feature" -d "Add authentication system"` | -| Create with assignee | `backlog task create "Feature" -a @sara` | -| Create with status | `backlog task create "Feature" -s "In Progress"` | -| Create with labels | `backlog task create "Feature" -l auth,backend` | -| Create with priority | `backlog task create "Feature" --priority high` | -| Create with plan | `backlog task create "Feature" --plan "1. Research\n2. 
Implement"` | -| Create with AC | `backlog task create "Feature" --ac "Must work,Must be tested"` | -| Create with notes | `backlog task create "Feature" --notes "Started initial research"` | -| Create with deps | `backlog task create "Feature" --dep task-1,task-2` | -| Create sub task | `backlog task create -p 14 "Add Login with Google"` | -| Create (all options) | `backlog task create "Feature" -d "Description" -a @sara -s "To Do" -l auth --priority high --ac "Must work" --notes "Initial setup done" --dep task-1 -p 14` | -| List tasks | `backlog task list [-s <status>] [-a <assignee>] [-p <parent>]` | -| List by parent | `backlog task list --parent 42` or `backlog task list -p task-42` | -| View detail | `backlog task 7` (interactive UI, press 'E' to edit in editor) | -| View (AI mode) | `backlog task 7 --plain` | -| Edit | `backlog task edit 7 -a @sara -l auth,backend` | -| Add plan | `backlog task edit 7 --plan "Implementation approach"` | -| Add AC | `backlog task edit 7 --ac "New criterion,Another one"` | -| Add notes | `backlog task edit 7 --notes "Completed X, working on Y"` | -| Add deps | `backlog task edit 7 --dep task-1 --dep task-2` | -| Archive | `backlog task archive 7` | -| Create draft | `backlog task create "Feature" --draft` | -| Draft flow | `backlog draft create "Spike GraphQL"` → `backlog draft promote 3.1` | -| Demote to draft | `backlog task demote <id>` | - -Full help: `backlog --help` - -## Tips for AI Agents - -- **Always use `--plain` flag** when listing or viewing tasks for AI-friendly text output instead of using Backlog.md - interactive UI. diff --git a/.cursor/rules/backlog-guildlines.md b/.cursor/rules/backlog-guildlines.md deleted file mode 100644 index ea95eb0b5..000000000 --- a/.cursor/rules/backlog-guildlines.md +++ /dev/null @@ -1,398 +0,0 @@ - -# === BACKLOG.MD GUIDELINES START === -# Instructions for the usage of Backlog.md CLI Tool - -## What is Backlog.md? 
- -**Backlog.md is the complete project management system for this codebase.** It provides everything needed to manage tasks, track progress, and collaborate on development - all through a powerful CLI that operates on markdown files. - -### Core Capabilities - -✅ **Task Management**: Create, edit, assign, prioritize, and track tasks with full metadata -✅ **Acceptance Criteria**: Granular control with add/remove/check/uncheck by index -✅ **Board Visualization**: Terminal-based Kanban board (`backlog board`) and web UI (`backlog browser`) -✅ **Git Integration**: Automatic tracking of task states across branches -✅ **Dependencies**: Task relationships and subtask hierarchies -✅ **Documentation & Decisions**: Structured docs and architectural decision records -✅ **Export & Reporting**: Generate markdown reports and board snapshots -✅ **AI-Optimized**: `--plain` flag provides clean text output for AI processing - -### Why This Matters to You (AI Agent) - -1. **Comprehensive system** - Full project management capabilities through CLI -2. **The CLI is the interface** - All operations go through `backlog` commands -3. **Unified interaction model** - You can use CLI for both reading (`backlog task 1 --plain`) and writing (`backlog task edit 1`) -4. **Metadata stays synchronized** - The CLI handles all the complex relationships - -### Key Understanding - -- **Tasks** live in `backlog/tasks/` as `task-<id> - <title>.md` files -- **You interact via CLI only**: `backlog task create`, `backlog task edit`, etc. 
-- **Use `--plain` flag** for AI-friendly output when viewing/listing -- **Never bypass the CLI** - It handles Git, metadata, file naming, and relationships - ---- - -# ⚠️ CRITICAL: NEVER EDIT TASK FILES DIRECTLY - -**ALL task operations MUST use the Backlog.md CLI commands** -- ✅ **DO**: Use `backlog task edit` and other CLI commands -- ✅ **DO**: Use `backlog task create` to create new tasks -- ✅ **DO**: Use `backlog task edit <id> --check-ac <index>` to mark acceptance criteria -- ❌ **DON'T**: Edit markdown files directly -- ❌ **DON'T**: Manually change checkboxes in files -- ❌ **DON'T**: Add or modify text in task files without using CLI - -**Why?** Direct file editing breaks metadata synchronization, Git tracking, and task relationships. - ---- - -## 1. Source of Truth & File Structure - -### 📖 **UNDERSTANDING** (What you'll see when reading) -- Markdown task files live under **`backlog/tasks/`** (drafts under **`backlog/drafts/`**) -- Files are named: `task-<id> - <title>.md` (e.g., `task-42 - Add GraphQL resolver.md`) -- Project documentation is in **`backlog/docs/`** -- Project decisions are in **`backlog/decisions/`** - -### 🔧 **ACTING** (How to change things) -- **All task operations MUST use the Backlog.md CLI tool** -- This ensures metadata is correctly updated and the project stays in sync -- **Always use `--plain` flag** when listing or viewing tasks for AI-friendly text output - ---- - -## 2. Common Mistakes to Avoid - -### ❌ **WRONG: Direct File Editing** -```markdown -# DON'T DO THIS: -1. Open backlog/tasks/task-7 - Feature.md in editor -2. Change "- [ ]" to "- [x]" manually -3. Add notes directly to the file -4. Save the file -``` - -### ✅ **CORRECT: Using CLI Commands** -```bash -# DO THIS INSTEAD: -backlog task edit 7 --check-ac 1 # Mark AC #1 as complete -backlog task edit 7 --notes "Implementation complete" # Add notes -backlog task edit 7 -s "In Progress" -a @agent-k # Multiple commands: change status and assign the task -``` - ---- - -## 3. 
Understanding Task Format (Read-Only Reference) - -⚠️ **FORMAT REFERENCE ONLY** - The following sections show what you'll SEE in task files. -**Never edit these directly! Use CLI commands to make changes.** - -### Task Structure You'll See - -```markdown ---- -id: task-42 -title: Add GraphQL resolver -status: To Do -assignee: [@sara] -labels: [backend, api] ---- - -## Description -Brief explanation of the task purpose. - -## Acceptance Criteria -<!-- AC:BEGIN --> -- [ ] #1 First criterion -- [x] #2 Second criterion (completed) -- [ ] #3 Third criterion -<!-- AC:END --> - -## Implementation Plan -1. Research approach -2. Implement solution - -## Implementation Notes -Summary of what was done. -``` - -### How to Modify Each Section - -| What You Want to Change | CLI Command to Use | -|------------------------|-------------------| -| Title | `backlog task edit 42 -t "New Title"` | -| Status | `backlog task edit 42 -s "In Progress"` | -| Assignee | `backlog task edit 42 -a @sara` | -| Labels | `backlog task edit 42 -l backend,api` | -| Description | `backlog task edit 42 -d "New description"` | -| Add AC | `backlog task edit 42 --ac "New criterion"` | -| Check AC #1 | `backlog task edit 42 --check-ac 1` | -| Uncheck AC #2 | `backlog task edit 42 --uncheck-ac 2` | -| Remove AC #3 | `backlog task edit 42 --remove-ac 3` | -| Add Plan | `backlog task edit 42 --plan "1. Step one\n2. Step two"` | -| Add Notes | `backlog task edit 42 --notes "What I did"` | - ---- - -## 4. Defining Tasks - -### Creating New Tasks - -**Always use CLI to create tasks:** -```bash -backlog task create "Task title" -d "Description" --ac "First criterion" --ac "Second criterion" -``` - -### Title (one liner) -Use a clear brief title that summarizes the task. - -### Description (The "why") -Provide a concise summary of the task purpose and its goal. Explains the context without implementation details. 
- -### Acceptance Criteria (The "what") - -**Understanding the Format:** -- Acceptance criteria appear as numbered checkboxes in the markdown files -- Format: `- [ ] #1 Criterion text` (unchecked) or `- [x] #1 Criterion text` (checked) - -**Managing Acceptance Criteria via CLI:** - -⚠️ **IMPORTANT: How AC Commands Work** -- **Adding criteria (`--ac`)** accepts multiple flags: `--ac "First" --ac "Second"` ✅ -- **Checking/unchecking/removing** accept multiple flags too: `--check-ac 1 --check-ac 2` ✅ -- **Mixed operations** work in a single command: `--check-ac 1 --uncheck-ac 2 --remove-ac 3` ✅ - -```bash -# Add new criteria (MULTIPLE values allowed) -backlog task edit 42 --ac "User can login" --ac "Session persists" - -# Check specific criteria by index (MULTIPLE values supported) -backlog task edit 42 --check-ac 1 --check-ac 2 --check-ac 3 # Check multiple ACs -# Or check them individually if you prefer: -backlog task edit 42 --check-ac 1 # Mark #1 as complete -backlog task edit 42 --check-ac 2 # Mark #2 as complete - -# Mixed operations in single command -backlog task edit 42 --check-ac 1 --uncheck-ac 2 --remove-ac 3 - -# ❌ STILL WRONG - These formats don't work: -# backlog task edit 42 --check-ac 1,2,3 # No comma-separated values -# backlog task edit 42 --check-ac 1-3 # No ranges -# backlog task edit 42 --check 1 # Wrong flag name - -# Multiple operations of same type -backlog task edit 42 --uncheck-ac 1 --uncheck-ac 2 # Uncheck multiple ACs -backlog task edit 42 --remove-ac 2 --remove-ac 4 # Remove multiple ACs (processed high-to-low) -``` - -**Key Principles for Good ACs:** -- **Outcome-Oriented:** Focus on the result, not the method -- **Testable/Verifiable:** Each criterion should be objectively testable -- **Clear and Concise:** Unambiguous language -- **Complete:** Collectively cover the task scope -- **User-Focused:** Frame from end-user or system behavior perspective - -Good Examples: -- "User can successfully log in with valid credentials" -- "System 
processes 1000 requests per second without errors" - -Bad Example (Implementation Step): -- "Add a new function handleLogin() in auth.ts" - -### Task Breakdown Strategy - -1. Identify foundational components first -2. Create tasks in dependency order (foundations before features) -3. Ensure each task delivers value independently -4. Avoid creating tasks that block each other - -### Task Requirements - -- Tasks must be **atomic** and **testable** or **verifiable** -- Each task should represent a single unit of work for one PR -- **Never** reference future tasks (only tasks with id < current task id) -- Ensure tasks are **independent** and don't depend on future work - ---- - -## 5. Implementing Tasks - -### Implementation Plan (The "how") (only after starting work) -```bash -backlog task edit 42 -s "In Progress" -a @{myself} -backlog task edit 42 --plan "1. Research patterns\n2. Implement\n3. Test" -``` - -### Implementation Notes (Imagine you need to copy paste this into a PR description) -```bash -backlog task edit 42 --notes "Implemented using pattern X, modified files Y and Z" -``` - -**IMPORTANT**: Do NOT include an Implementation Plan when creating a task. The plan is added only after you start implementation. -- Creation phase: provide Title, Description, Acceptance Criteria, and optionally labels/priority/assignee. -- When you begin work, switch to edit and add the plan: `backlog task edit <id> --plan "..."`. -- Add Implementation Notes only after completing the work: `backlog task edit <id> --notes "..."`. - -Phase discipline: What goes where -- Creation: Title, Description, Acceptance Criteria, labels/priority/assignee. -- Implementation: Implementation Plan (after moving to In Progress). -- Wrap-up: Implementation Notes, AC and Definition of Done checks. - -**IMPORTANT**: Only implement what's in the Acceptance Criteria. If you need to do more, either: -1. Update the AC first: `backlog task edit 42 --ac "New requirement"` -2. 
Or create a new task: `backlog task create "Additional feature"` - ---- - -## 6. Typical Workflow - -```bash -# 1. Identify work -backlog task list -s "To Do" --plain - -# 2. Read task details -backlog task 42 --plain - -# 3. Start work: assign yourself & change status -backlog task edit 42 -a @myself -s "In Progress" - -# 4. Add implementation plan -backlog task edit 42 --plan "1. Analyze\n2. Refactor\n3. Test" - -# 5. Work on the task (write code, test, etc.) - -# 6. Mark acceptance criteria as complete (supports multiple in one command) -backlog task edit 42 --check-ac 1 --check-ac 2 --check-ac 3 # Check all at once -# Or check them individually if preferred: -# backlog task edit 42 --check-ac 1 -# backlog task edit 42 --check-ac 2 -# backlog task edit 42 --check-ac 3 - -# 7. Add implementation notes -backlog task edit 42 --notes "Refactored using strategy pattern, updated tests" - -# 8. Mark task as done -backlog task edit 42 -s Done -``` - ---- - -## 7. Definition of Done (DoD) - -A task is **Done** only when **ALL** of the following are complete: - -### ✅ Via CLI Commands: -1. **All acceptance criteria checked**: Use `backlog task edit <id> --check-ac <index>` for each -2. **Implementation notes added**: Use `backlog task edit <id> --notes "..."` -3. **Status set to Done**: Use `backlog task edit <id> -s Done` - -### ✅ Via Code/Testing: -4. **Tests pass**: Run test suite and linting -5. **Documentation updated**: Update relevant docs if needed -6. **Code reviewed**: Self-review your changes -7. **No regressions**: Performance, security checks pass - -⚠️ **NEVER mark a task as Done without completing ALL items above** - ---- - -## 8. 
Quick Reference: DO vs DON'T - -### Viewing Tasks -| Task | ✅ DO | ❌ DON'T | -|------|-------|----------| -| View task | `backlog task 42 --plain` | Open and read .md file directly | -| List tasks | `backlog task list --plain` | Browse backlog/tasks folder | -| Check status | `backlog task 42 --plain` | Look at file content | - -### Modifying Tasks -| Task | ✅ DO | ❌ DON'T | -|------|-------|----------| -| Check AC | `backlog task edit 42 --check-ac 1` | Change `- [ ]` to `- [x]` in file | -| Add notes | `backlog task edit 42 --notes "..."` | Type notes into .md file | -| Change status | `backlog task edit 42 -s Done` | Edit status in frontmatter | -| Add AC | `backlog task edit 42 --ac "New"` | Add `- [ ] New` to file | - ---- - -## 9. Complete CLI Command Reference - -### Task Creation -| Action | Command | -|--------|---------| -| Create task | `backlog task create "Title"` | -| With description | `backlog task create "Title" -d "Description"` | -| With AC | `backlog task create "Title" --ac "Criterion 1" --ac "Criterion 2"` | -| With all options | `backlog task create "Title" -d "Desc" -a @sara -s "To Do" -l auth --priority high` | -| Create draft | `backlog task create "Title" --draft` | -| Create subtask | `backlog task create "Title" -p 42` | - -### Task Modification -| Action | Command | -|--------|---------| -| Edit title | `backlog task edit 42 -t "New Title"` | -| Edit description | `backlog task edit 42 -d "New description"` | -| Change status | `backlog task edit 42 -s "In Progress"` | -| Assign | `backlog task edit 42 -a @sara` | -| Add labels | `backlog task edit 42 -l backend,api` | -| Set priority | `backlog task edit 42 --priority high` | - -### Acceptance Criteria Management -| Action | Command | -|--------|---------| -| Add AC | `backlog task edit 42 --ac "New criterion" --ac "Another"` | -| Remove AC #2 | `backlog task edit 42 --remove-ac 2` | -| Remove multiple ACs | `backlog task edit 42 --remove-ac 2 --remove-ac 4` | -| Check AC #1 | 
`backlog task edit 42 --check-ac 1` | -| Check multiple ACs | `backlog task edit 42 --check-ac 1 --check-ac 3` | -| Uncheck AC #3 | `backlog task edit 42 --uncheck-ac 3` | -| Mixed operations | `backlog task edit 42 --check-ac 1 --uncheck-ac 2 --remove-ac 3 --ac "New"` | - -### Task Content -| Action | Command | -|--------|---------| -| Add plan | `backlog task edit 42 --plan "1. Step one\n2. Step two"` | -| Add notes | `backlog task edit 42 --notes "Implementation details"` | -| Add dependencies | `backlog task edit 42 --dep task-1 --dep task-2` | - -### Task Operations -| Action | Command | -|--------|---------| -| View task | `backlog task 42 --plain` | -| List tasks | `backlog task list --plain` | -| Filter by status | `backlog task list -s "In Progress" --plain` | -| Filter by assignee | `backlog task list -a @sara --plain` | -| Archive task | `backlog task archive 42` | -| Demote to draft | `backlog task demote 42` | - ---- - -## 10. Troubleshooting - -### If You Accidentally Edited a File Directly - -1. **DON'T PANIC** - But don't save or commit -2. Revert the changes -3. Make changes properly via CLI -4. 
If already saved, the metadata might be out of sync - use `backlog task edit` to fix - -### Common Issues - -| Problem | Solution | -|---------|----------| -| "Task not found" | Check task ID with `backlog task list --plain` | -| AC won't check | Use correct index: `backlog task 42 --plain` to see AC numbers | -| Changes not saving | Ensure you're using CLI, not editing files | -| Metadata out of sync | Re-edit via CLI to fix: `backlog task edit 42 -s <current-status>` | - ---- - -## Remember: The Golden Rule - -**🎯 If you want to change ANYTHING in a task, use the `backlog task edit` command.** -**📖 Only READ task files directly, never WRITE to them.** - -Full help available: `backlog --help` - -# === BACKLOG.MD GUIDELINES END === diff --git a/.cursor/rules/testing-patterns.mdc b/.cursor/rules/testing-patterns.mdc index 010b76544..a0e64dbae 100644 --- a/.cursor/rules/testing-patterns.mdc +++ b/.cursor/rules/testing-patterns.mdc @@ -9,6 +9,8 @@ alwaysApply: false Coolify employs **comprehensive testing strategies** using modern PHP testing frameworks to ensure reliability of deployment operations, infrastructure management, and user interactions. +**Important:** Always run tests inside the `coolify` container. 
+ ## Testing Framework Stack ### Core Testing Tools diff --git a/.github/workflows/coolify-production-build.yml b/.github/workflows/coolify-production-build.yml index 9286fdbb0..cd1f002b8 100644 --- a/.github/workflows/coolify-production-build.yml +++ b/.github/workflows/coolify-production-build.yml @@ -13,7 +13,6 @@ on: - docker/testing-host/Dockerfile - templates/** - CHANGELOG.md - - backlog/** env: GITHUB_REGISTRY: ghcr.io diff --git a/.github/workflows/coolify-staging-build.yml b/.github/workflows/coolify-staging-build.yml index 390eab000..09b1e9421 100644 --- a/.github/workflows/coolify-staging-build.yml +++ b/.github/workflows/coolify-staging-build.yml @@ -16,7 +16,6 @@ on: - docker/testing-host/Dockerfile - templates/** - CHANGELOG.md - - backlog/** env: GITHUB_REGISTRY: ghcr.io diff --git a/CLAUDE.md b/CLAUDE.md index 87409c260..96f8eec78 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -247,403 +247,3 @@ ### Project Information - [Project Overview](.cursor/rules/project-overview.mdc) - High-level project structure - [Technology Stack](.cursor/rules/technology-stack.mdc) - Detailed tech stack information - [Cursor Rules Guide](.cursor/rules/cursor_rules.mdc) - How to maintain cursor rules - - -# === BACKLOG.MD GUIDELINES START === -# Instructions for the usage of Backlog.md CLI Tool - -## What is Backlog.md? - -**Backlog.md is the complete project management system for this codebase.** It provides everything needed to manage tasks, track progress, and collaborate on development - all through a powerful CLI that operates on markdown files. 
- -### Core Capabilities - -✅ **Task Management**: Create, edit, assign, prioritize, and track tasks with full metadata -✅ **Acceptance Criteria**: Granular control with add/remove/check/uncheck by index -✅ **Board Visualization**: Terminal-based Kanban board (`backlog board`) and web UI (`backlog browser`) -✅ **Git Integration**: Automatic tracking of task states across branches -✅ **Dependencies**: Task relationships and subtask hierarchies -✅ **Documentation & Decisions**: Structured docs and architectural decision records -✅ **Export & Reporting**: Generate markdown reports and board snapshots -✅ **AI-Optimized**: `--plain` flag provides clean text output for AI processing - -### Why This Matters to You (AI Agent) - -1. **Comprehensive system** - Full project management capabilities through CLI -2. **The CLI is the interface** - All operations go through `backlog` commands -3. **Unified interaction model** - You can use CLI for both reading (`backlog task 1 --plain`) and writing (`backlog task edit 1`) -4. **Metadata stays synchronized** - The CLI handles all the complex relationships - -### Key Understanding - -- **Tasks** live in `backlog/tasks/` as `task-<id> - <title>.md` files -- **You interact via CLI only**: `backlog task create`, `backlog task edit`, etc. 
-- **Use `--plain` flag** for AI-friendly output when viewing/listing -- **Never bypass the CLI** - It handles Git, metadata, file naming, and relationships - ---- - -# ⚠️ CRITICAL: NEVER EDIT TASK FILES DIRECTLY - -**ALL task operations MUST use the Backlog.md CLI commands** -- ✅ **DO**: Use `backlog task edit` and other CLI commands -- ✅ **DO**: Use `backlog task create` to create new tasks -- ✅ **DO**: Use `backlog task edit <id> --check-ac <index>` to mark acceptance criteria -- ❌ **DON'T**: Edit markdown files directly -- ❌ **DON'T**: Manually change checkboxes in files -- ❌ **DON'T**: Add or modify text in task files without using CLI - -**Why?** Direct file editing breaks metadata synchronization, Git tracking, and task relationships. - ---- - -## 1. Source of Truth & File Structure - -### 📖 **UNDERSTANDING** (What you'll see when reading) -- Markdown task files live under **`backlog/tasks/`** (drafts under **`backlog/drafts/`**) -- Files are named: `task-<id> - <title>.md` (e.g., `task-42 - Add GraphQL resolver.md`) -- Project documentation is in **`backlog/docs/`** -- Project decisions are in **`backlog/decisions/`** - -### 🔧 **ACTING** (How to change things) -- **All task operations MUST use the Backlog.md CLI tool** -- This ensures metadata is correctly updated and the project stays in sync -- **Always use `--plain` flag** when listing or viewing tasks for AI-friendly text output - ---- - -## 2. Common Mistakes to Avoid - -### ❌ **WRONG: Direct File Editing** -```markdown -# DON'T DO THIS: -1. Open backlog/tasks/task-7 - Feature.md in editor -2. Change "- [ ]" to "- [x]" manually -3. Add notes directly to the file -4. Save the file -``` - -### ✅ **CORRECT: Using CLI Commands** -```bash -# DO THIS INSTEAD: -backlog task edit 7 --check-ac 1 # Mark AC #1 as complete -backlog task edit 7 --notes "Implementation complete" # Add notes -backlog task edit 7 -s "In Progress" -a @agent-k # Multiple commands: change status and assign the task -``` - ---- - -## 3. 
Understanding Task Format (Read-Only Reference) - -⚠️ **FORMAT REFERENCE ONLY** - The following sections show what you'll SEE in task files. -**Never edit these directly! Use CLI commands to make changes.** - -### Task Structure You'll See - -```markdown ---- -id: task-42 -title: Add GraphQL resolver -status: To Do -assignee: [@sara] -labels: [backend, api] ---- - -## Description -Brief explanation of the task purpose. - -## Acceptance Criteria -<!-- AC:BEGIN --> -- [ ] #1 First criterion -- [x] #2 Second criterion (completed) -- [ ] #3 Third criterion -<!-- AC:END --> - -## Implementation Plan -1. Research approach -2. Implement solution - -## Implementation Notes -Summary of what was done. -``` - -### How to Modify Each Section - -| What You Want to Change | CLI Command to Use | -|------------------------|-------------------| -| Title | `backlog task edit 42 -t "New Title"` | -| Status | `backlog task edit 42 -s "In Progress"` | -| Assignee | `backlog task edit 42 -a @sara` | -| Labels | `backlog task edit 42 -l backend,api` | -| Description | `backlog task edit 42 -d "New description"` | -| Add AC | `backlog task edit 42 --ac "New criterion"` | -| Check AC #1 | `backlog task edit 42 --check-ac 1` | -| Uncheck AC #2 | `backlog task edit 42 --uncheck-ac 2` | -| Remove AC #3 | `backlog task edit 42 --remove-ac 3` | -| Add Plan | `backlog task edit 42 --plan "1. Step one\n2. Step two"` | -| Add Notes | `backlog task edit 42 --notes "What I did"` | - ---- - -## 4. Defining Tasks - -### Creating New Tasks - -**Always use CLI to create tasks:** -```bash -backlog task create "Task title" -d "Description" --ac "First criterion" --ac "Second criterion" -``` - -### Title (one liner) -Use a clear brief title that summarizes the task. - -### Description (The "why") -Provide a concise summary of the task purpose and its goal. Explains the context without implementation details. 
- -### Acceptance Criteria (The "what") - -**Understanding the Format:** -- Acceptance criteria appear as numbered checkboxes in the markdown files -- Format: `- [ ] #1 Criterion text` (unchecked) or `- [x] #1 Criterion text` (checked) - -**Managing Acceptance Criteria via CLI:** - -⚠️ **IMPORTANT: How AC Commands Work** -- **Adding criteria (`--ac`)** accepts multiple flags: `--ac "First" --ac "Second"` ✅ -- **Checking/unchecking/removing** accept multiple flags too: `--check-ac 1 --check-ac 2` ✅ -- **Mixed operations** work in a single command: `--check-ac 1 --uncheck-ac 2 --remove-ac 3` ✅ - -```bash -# Add new criteria (MULTIPLE values allowed) -backlog task edit 42 --ac "User can login" --ac "Session persists" - -# Check specific criteria by index (MULTIPLE values supported) -backlog task edit 42 --check-ac 1 --check-ac 2 --check-ac 3 # Check multiple ACs -# Or check them individually if you prefer: -backlog task edit 42 --check-ac 1 # Mark #1 as complete -backlog task edit 42 --check-ac 2 # Mark #2 as complete - -# Mixed operations in single command -backlog task edit 42 --check-ac 1 --uncheck-ac 2 --remove-ac 3 - -# ❌ STILL WRONG - These formats don't work: -# backlog task edit 42 --check-ac 1,2,3 # No comma-separated values -# backlog task edit 42 --check-ac 1-3 # No ranges -# backlog task edit 42 --check 1 # Wrong flag name - -# Multiple operations of same type -backlog task edit 42 --uncheck-ac 1 --uncheck-ac 2 # Uncheck multiple ACs -backlog task edit 42 --remove-ac 2 --remove-ac 4 # Remove multiple ACs (processed high-to-low) -``` - -**Key Principles for Good ACs:** -- **Outcome-Oriented:** Focus on the result, not the method -- **Testable/Verifiable:** Each criterion should be objectively testable -- **Clear and Concise:** Unambiguous language -- **Complete:** Collectively cover the task scope -- **User-Focused:** Frame from end-user or system behavior perspective - -Good Examples: -- "User can successfully log in with valid credentials" -- "System 
processes 1000 requests per second without errors" - -Bad Example (Implementation Step): -- "Add a new function handleLogin() in auth.ts" - -### Task Breakdown Strategy - -1. Identify foundational components first -2. Create tasks in dependency order (foundations before features) -3. Ensure each task delivers value independently -4. Avoid creating tasks that block each other - -### Task Requirements - -- Tasks must be **atomic** and **testable** or **verifiable** -- Each task should represent a single unit of work for one PR -- **Never** reference future tasks (only tasks with id < current task id) -- Ensure tasks are **independent** and don't depend on future work - ---- - -## 5. Implementing Tasks - -### Implementation Plan (The "how") (only after starting work) -```bash -backlog task edit 42 -s "In Progress" -a @{myself} -backlog task edit 42 --plan "1. Research patterns\n2. Implement\n3. Test" -``` - -### Implementation Notes (Imagine you need to copy paste this into a PR description) -```bash -backlog task edit 42 --notes "Implemented using pattern X, modified files Y and Z" -``` - -**IMPORTANT**: Do NOT include an Implementation Plan when creating a task. The plan is added only after you start implementation. -- Creation phase: provide Title, Description, Acceptance Criteria, and optionally labels/priority/assignee. -- When you begin work, switch to edit and add the plan: `backlog task edit <id> --plan "..."`. -- Add Implementation Notes only after completing the work: `backlog task edit <id> --notes "..."`. - -Phase discipline: What goes where -- Creation: Title, Description, Acceptance Criteria, labels/priority/assignee. -- Implementation: Implementation Plan (after moving to In Progress). -- Wrap-up: Implementation Notes, AC and Definition of Done checks. - -**IMPORTANT**: Only implement what's in the Acceptance Criteria. If you need to do more, either: -1. Update the AC first: `backlog task edit 42 --ac "New requirement"` -2. 
Or create a new task: `backlog task create "Additional feature"` - ---- - -## 6. Typical Workflow - -```bash -# 1. Identify work -backlog task list -s "To Do" --plain - -# 2. Read task details -backlog task 42 --plain - -# 3. Start work: assign yourself & change status -backlog task edit 42 -a @myself -s "In Progress" - -# 4. Add implementation plan -backlog task edit 42 --plan "1. Analyze\n2. Refactor\n3. Test" - -# 5. Work on the task (write code, test, etc.) - -# 6. Mark acceptance criteria as complete (supports multiple in one command) -backlog task edit 42 --check-ac 1 --check-ac 2 --check-ac 3 # Check all at once -# Or check them individually if preferred: -# backlog task edit 42 --check-ac 1 -# backlog task edit 42 --check-ac 2 -# backlog task edit 42 --check-ac 3 - -# 7. Add implementation notes -backlog task edit 42 --notes "Refactored using strategy pattern, updated tests" - -# 8. Mark task as done -backlog task edit 42 -s Done -``` - ---- - -## 7. Definition of Done (DoD) - -A task is **Done** only when **ALL** of the following are complete: - -### ✅ Via CLI Commands: -1. **All acceptance criteria checked**: Use `backlog task edit <id> --check-ac <index>` for each -2. **Implementation notes added**: Use `backlog task edit <id> --notes "..."` -3. **Status set to Done**: Use `backlog task edit <id> -s Done` - -### ✅ Via Code/Testing: -4. **Tests pass**: Run test suite and linting -5. **Documentation updated**: Update relevant docs if needed -6. **Code reviewed**: Self-review your changes -7. **No regressions**: Performance, security checks pass - -⚠️ **NEVER mark a task as Done without completing ALL items above** - ---- - -## 8. 
Quick Reference: DO vs DON'T - -### Viewing Tasks -| Task | ✅ DO | ❌ DON'T | -|------|-------|----------| -| View task | `backlog task 42 --plain` | Open and read .md file directly | -| List tasks | `backlog task list --plain` | Browse backlog/tasks folder | -| Check status | `backlog task 42 --plain` | Look at file content | - -### Modifying Tasks -| Task | ✅ DO | ❌ DON'T | -|------|-------|----------| -| Check AC | `backlog task edit 42 --check-ac 1` | Change `- [ ]` to `- [x]` in file | -| Add notes | `backlog task edit 42 --notes "..."` | Type notes into .md file | -| Change status | `backlog task edit 42 -s Done` | Edit status in frontmatter | -| Add AC | `backlog task edit 42 --ac "New"` | Add `- [ ] New` to file | - ---- - -## 9. Complete CLI Command Reference - -### Task Creation -| Action | Command | -|--------|---------| -| Create task | `backlog task create "Title"` | -| With description | `backlog task create "Title" -d "Description"` | -| With AC | `backlog task create "Title" --ac "Criterion 1" --ac "Criterion 2"` | -| With all options | `backlog task create "Title" -d "Desc" -a @sara -s "To Do" -l auth --priority high` | -| Create draft | `backlog task create "Title" --draft` | -| Create subtask | `backlog task create "Title" -p 42` | - -### Task Modification -| Action | Command | -|--------|---------| -| Edit title | `backlog task edit 42 -t "New Title"` | -| Edit description | `backlog task edit 42 -d "New description"` | -| Change status | `backlog task edit 42 -s "In Progress"` | -| Assign | `backlog task edit 42 -a @sara` | -| Add labels | `backlog task edit 42 -l backend,api` | -| Set priority | `backlog task edit 42 --priority high` | - -### Acceptance Criteria Management -| Action | Command | -|--------|---------| -| Add AC | `backlog task edit 42 --ac "New criterion" --ac "Another"` | -| Remove AC #2 | `backlog task edit 42 --remove-ac 2` | -| Remove multiple ACs | `backlog task edit 42 --remove-ac 2 --remove-ac 4` | -| Check AC #1 | 
`backlog task edit 42 --check-ac 1` | -| Check multiple ACs | `backlog task edit 42 --check-ac 1 --check-ac 3` | -| Uncheck AC #3 | `backlog task edit 42 --uncheck-ac 3` | -| Mixed operations | `backlog task edit 42 --check-ac 1 --uncheck-ac 2 --remove-ac 3 --ac "New"` | - -### Task Content -| Action | Command | -|--------|---------| -| Add plan | `backlog task edit 42 --plan "1. Step one\n2. Step two"` | -| Add notes | `backlog task edit 42 --notes "Implementation details"` | -| Add dependencies | `backlog task edit 42 --dep task-1 --dep task-2` | - -### Task Operations -| Action | Command | -|--------|---------| -| View task | `backlog task 42 --plain` | -| List tasks | `backlog task list --plain` | -| Filter by status | `backlog task list -s "In Progress" --plain` | -| Filter by assignee | `backlog task list -a @sara --plain` | -| Archive task | `backlog task archive 42` | -| Demote to draft | `backlog task demote 42` | - ---- - -## 10. Troubleshooting - -### If You Accidentally Edited a File Directly - -1. **DON'T PANIC** - But don't save or commit -2. Revert the changes -3. Make changes properly via CLI -4. 
If already saved, the metadata might be out of sync - use `backlog task edit` to fix - -### Common Issues - -| Problem | Solution | -|---------|----------| -| "Task not found" | Check task ID with `backlog task list --plain` | -| AC won't check | Use correct index: `backlog task 42 --plain` to see AC numbers | -| Changes not saving | Ensure you're using CLI, not editing files | -| Metadata out of sync | Re-edit via CLI to fix: `backlog task edit 42 -s <current-status>` | - ---- - -## Remember: The Golden Rule - -**🎯 If you want to change ANYTHING in a task, use the `backlog task edit` command.** -**📖 Only READ task files directly, never WRITE to them.** - -Full help available: `backlog --help` - -# === BACKLOG.MD GUIDELINES END === - diff --git a/app/Actions/Database/StartClickhouse.php b/app/Actions/Database/StartClickhouse.php index f218fcabb..7be727f55 100644 --- a/app/Actions/Database/StartClickhouse.php +++ b/app/Actions/Database/StartClickhouse.php @@ -99,8 +99,12 @@ public function handle(StandaloneClickhouse $database) $docker_compose = generateCustomDockerRunOptionsForDatabases($docker_run_options, $docker_compose, $container_name, $this->database->destination->network); $docker_compose = Yaml::dump($docker_compose, 10); - $docker_compose_base64 = base64_encode($docker_compose); - $this->commands[] = "echo '{$docker_compose_base64}' | base64 -d | tee $this->configuration_dir/docker-compose.yml > /dev/null"; + $this->commands[] = [ + 'transfer_file' => [ + 'content' => $docker_compose, + 'destination' => "$this->configuration_dir/docker-compose.yml", + ], + ]; $readme = generate_readme_file($this->database->name, now()); $this->commands[] = "echo '{$readme}' > $this->configuration_dir/README.md"; $this->commands[] = "echo 'Pulling {$database->image} image.'"; diff --git a/app/Actions/Database/StartDatabaseProxy.php b/app/Actions/Database/StartDatabaseProxy.php index 12fd92792..d90eebc17 100644 --- a/app/Actions/Database/StartDatabaseProxy.php +++ 
b/app/Actions/Database/StartDatabaseProxy.php @@ -52,8 +52,9 @@ public function handle(StandaloneRedis|StandalonePostgresql|StandaloneMongodb|St } $configuration_dir = database_proxy_dir($database->uuid); + $volume_configuration_dir = $configuration_dir; if (isDev()) { - $configuration_dir = '/var/lib/docker/volumes/coolify_dev_coolify_data/_data/databases/'.$database->uuid.'/proxy'; + $volume_configuration_dir = '/var/lib/docker/volumes/coolify_dev_coolify_data/_data/databases/'.$database->uuid.'/proxy'; } $nginxconf = <<<EOF user nginx; @@ -86,7 +87,7 @@ public function handle(StandaloneRedis|StandalonePostgresql|StandaloneMongodb|St 'volumes' => [ [ 'type' => 'bind', - 'source' => "$configuration_dir/nginx.conf", + 'source' => "$volume_configuration_dir/nginx.conf", 'target' => '/etc/nginx/nginx.conf', ], ], @@ -115,8 +116,18 @@ public function handle(StandaloneRedis|StandalonePostgresql|StandaloneMongodb|St instant_remote_process(["docker rm -f $proxyContainerName"], $server, false); instant_remote_process([ "mkdir -p $configuration_dir", - "echo '{$nginxconf_base64}' | base64 -d | tee $configuration_dir/nginx.conf > /dev/null", - "echo '{$dockercompose_base64}' | base64 -d | tee $configuration_dir/docker-compose.yaml > /dev/null", + [ + 'transfer_file' => [ + 'content' => base64_decode($nginxconf_base64), + 'destination' => "$configuration_dir/nginx.conf", + ], + ], + [ + 'transfer_file' => [ + 'content' => base64_decode($dockercompose_base64), + 'destination' => "$configuration_dir/docker-compose.yaml", + ], + ], "docker compose --project-directory {$configuration_dir} pull", "docker compose --project-directory {$configuration_dir} up -d", ], $server); diff --git a/app/Actions/Database/StartDragonfly.php b/app/Actions/Database/StartDragonfly.php index 38ad99d2e..579c6841d 100644 --- a/app/Actions/Database/StartDragonfly.php +++ b/app/Actions/Database/StartDragonfly.php @@ -183,8 +183,12 @@ public function handle(StandaloneDragonfly $database) $docker_compose 
= generateCustomDockerRunOptionsForDatabases($docker_run_options, $docker_compose, $container_name, $this->database->destination->network); $docker_compose = Yaml::dump($docker_compose, 10); - $docker_compose_base64 = base64_encode($docker_compose); - $this->commands[] = "echo '{$docker_compose_base64}' | base64 -d | tee $this->configuration_dir/docker-compose.yml > /dev/null"; + $this->commands[] = [ + 'transfer_file' => [ + 'content' => $docker_compose, + 'destination' => "$this->configuration_dir/docker-compose.yml", + ], + ]; $readme = generate_readme_file($this->database->name, now()); $this->commands[] = "echo '{$readme}' > $this->configuration_dir/README.md"; $this->commands[] = "echo 'Pulling {$database->image} image.'"; diff --git a/app/Actions/Database/StartKeydb.php b/app/Actions/Database/StartKeydb.php index 59bcd4123..e1d4e43c1 100644 --- a/app/Actions/Database/StartKeydb.php +++ b/app/Actions/Database/StartKeydb.php @@ -199,8 +199,12 @@ public function handle(StandaloneKeydb $database) $docker_run_options = convertDockerRunToCompose($this->database->custom_docker_run_options); $docker_compose = generateCustomDockerRunOptionsForDatabases($docker_run_options, $docker_compose, $container_name, $this->database->destination->network); $docker_compose = Yaml::dump($docker_compose, 10); - $docker_compose_base64 = base64_encode($docker_compose); - $this->commands[] = "echo '{$docker_compose_base64}' | base64 -d | tee $this->configuration_dir/docker-compose.yml > /dev/null"; + $this->commands[] = [ + 'transfer_file' => [ + 'content' => $docker_compose, + 'destination' => "$this->configuration_dir/docker-compose.yml", + ], + ]; $readme = generate_readme_file($this->database->name, now()); $this->commands[] = "echo '{$readme}' > $this->configuration_dir/README.md"; $this->commands[] = "echo 'Pulling {$database->image} image.'"; diff --git a/app/Actions/Database/StartMariadb.php b/app/Actions/Database/StartMariadb.php index 13dba4b43..3f7d22245 100644 --- 
a/app/Actions/Database/StartMariadb.php +++ b/app/Actions/Database/StartMariadb.php @@ -203,8 +203,12 @@ public function handle(StandaloneMariadb $database) } $docker_compose = Yaml::dump($docker_compose, 10); - $docker_compose_base64 = base64_encode($docker_compose); - $this->commands[] = "echo '{$docker_compose_base64}' | base64 -d | tee $this->configuration_dir/docker-compose.yml > /dev/null"; + $this->commands[] = [ + 'transfer_file' => [ + 'content' => $docker_compose, + 'destination' => "$this->configuration_dir/docker-compose.yml", + ], + ]; $readme = generate_readme_file($this->database->name, now()); $this->commands[] = "echo '{$readme}' > $this->configuration_dir/README.md"; $this->commands[] = "echo 'Pulling {$database->image} image.'"; @@ -284,7 +288,11 @@ private function add_custom_mysql() } $filename = 'custom-config.cnf'; $content = $this->database->mariadb_conf; - $content_base64 = base64_encode($content); - $this->commands[] = "echo '{$content_base64}' | base64 -d | tee $this->configuration_dir/{$filename} > /dev/null"; + $this->commands[] = [ + 'transfer_file' => [ + 'content' => $content, + 'destination' => "$this->configuration_dir/{$filename}", + ], + ]; } } diff --git a/app/Actions/Database/StartMongodb.php b/app/Actions/Database/StartMongodb.php index 870b5b7e5..7135f1c70 100644 --- a/app/Actions/Database/StartMongodb.php +++ b/app/Actions/Database/StartMongodb.php @@ -28,9 +28,6 @@ public function handle(StandaloneMongodb $database) $container_name = $this->database->uuid; $this->configuration_dir = database_configuration_dir().'/'.$container_name; - if (isDev()) { - $this->configuration_dir = '/var/lib/docker/volumes/coolify_dev_coolify_data/_data/databases/'.$container_name; - } $this->commands = [ "echo 'Starting database.'", @@ -254,8 +251,12 @@ public function handle(StandaloneMongodb $database) } $docker_compose = Yaml::dump($docker_compose, 10); - $docker_compose_base64 = base64_encode($docker_compose); - $this->commands[] = "echo 
'{$docker_compose_base64}' | base64 -d | tee $this->configuration_dir/docker-compose.yml > /dev/null"; + $this->commands[] = [ + 'transfer_file' => [ + 'content' => $docker_compose, + 'destination' => "$this->configuration_dir/docker-compose.yml", + ], + ]; $readme = generate_readme_file($this->database->name, now()); $this->commands[] = "echo '{$readme}' > $this->configuration_dir/README.md"; $this->commands[] = "echo 'Pulling {$database->image} image.'"; @@ -332,15 +333,22 @@ private function add_custom_mongo_conf() } $filename = 'mongod.conf'; $content = $this->database->mongo_conf; - $content_base64 = base64_encode($content); - $this->commands[] = "echo '{$content_base64}' | base64 -d | tee $this->configuration_dir/{$filename} > /dev/null"; + $this->commands[] = [ + 'transfer_file' => [ + 'content' => $content, + 'destination' => "$this->configuration_dir/{$filename}", + ], + ]; } private function add_default_database() { $content = "db = db.getSiblingDB(\"{$this->database->mongo_initdb_database}\");db.createCollection('init_collection');db.createUser({user: \"{$this->database->mongo_initdb_root_username}\", pwd: \"{$this->database->mongo_initdb_root_password}\",roles: [{role:\"readWrite\",db:\"{$this->database->mongo_initdb_database}\"}]});"; - $content_base64 = base64_encode($content); - $this->commands[] = "mkdir -p $this->configuration_dir/docker-entrypoint-initdb.d"; - $this->commands[] = "echo '{$content_base64}' | base64 -d | tee $this->configuration_dir/docker-entrypoint-initdb.d/01-default-database.js > /dev/null"; + $this->commands[] = [ + 'transfer_file' => [ + 'content' => $content, + 'destination' => "$this->configuration_dir/docker-entrypoint-initdb.d/01-default-database.js", + ], + ]; } } diff --git a/app/Actions/Database/StartMysql.php b/app/Actions/Database/StartMysql.php index 5d5611e07..5f453f80a 100644 --- a/app/Actions/Database/StartMysql.php +++ b/app/Actions/Database/StartMysql.php @@ -204,8 +204,12 @@ public function 
handle(StandaloneMysql $database) } $docker_compose = Yaml::dump($docker_compose, 10); - $docker_compose_base64 = base64_encode($docker_compose); - $this->commands[] = "echo '{$docker_compose_base64}' | base64 -d | tee $this->configuration_dir/docker-compose.yml > /dev/null"; + $this->commands[] = [ + 'transfer_file' => [ + 'content' => $docker_compose, + 'destination' => "$this->configuration_dir/docker-compose.yml", + ], + ]; $readme = generate_readme_file($this->database->name, now()); $this->commands[] = "echo '{$readme}' > $this->configuration_dir/README.md"; $this->commands[] = "echo 'Pulling {$database->image} image.'"; @@ -287,7 +291,11 @@ private function add_custom_mysql() } $filename = 'custom-config.cnf'; $content = $this->database->mysql_conf; - $content_base64 = base64_encode($content); - $this->commands[] = "echo '{$content_base64}' | base64 -d | tee $this->configuration_dir/{$filename} > /dev/null"; + $this->commands[] = [ + 'transfer_file' => [ + 'content' => $content, + 'destination' => "$this->configuration_dir/{$filename}", + ], + ]; } } diff --git a/app/Actions/Database/StartPostgresql.php b/app/Actions/Database/StartPostgresql.php index 4314ccd2f..75ca8ef10 100644 --- a/app/Actions/Database/StartPostgresql.php +++ b/app/Actions/Database/StartPostgresql.php @@ -27,9 +27,6 @@ public function handle(StandalonePostgresql $database) $this->database = $database; $container_name = $this->database->uuid; $this->configuration_dir = database_configuration_dir().'/'.$container_name; - if (isDev()) { - $this->configuration_dir = '/var/lib/docker/volumes/coolify_dev_coolify_data/_data/databases/'.$container_name; - } $this->commands = [ "echo 'Starting database.'", @@ -217,8 +214,12 @@ public function handle(StandalonePostgresql $database) } $docker_compose = Yaml::dump($docker_compose, 10); - $docker_compose_base64 = base64_encode($docker_compose); - $this->commands[] = "echo '{$docker_compose_base64}' | base64 -d | tee 
$this->configuration_dir/docker-compose.yml > /dev/null"; + $this->commands[] = [ + 'transfer_file' => [ + 'content' => $docker_compose, + 'destination' => "$this->configuration_dir/docker-compose.yml", + ], + ]; $readme = generate_readme_file($this->database->name, now()); $this->commands[] = "echo '{$readme}' > $this->configuration_dir/README.md"; $this->commands[] = "echo 'Pulling {$database->image} image.'"; @@ -229,6 +230,8 @@ public function handle(StandalonePostgresql $database) } $this->commands[] = "echo 'Database started.'"; + ray($this->commands); + return remote_process($this->commands, $database->destination->server, callEventOnFinish: 'DatabaseStatusChanged'); } @@ -302,8 +305,12 @@ private function generate_init_scripts() foreach ($this->database->init_scripts as $init_script) { $filename = data_get($init_script, 'filename'); $content = data_get($init_script, 'content'); - $content_base64 = base64_encode($content); - $this->commands[] = "echo '{$content_base64}' | base64 -d | tee $this->configuration_dir/docker-entrypoint-initdb.d/{$filename} > /dev/null"; + $this->commands[] = [ + 'transfer_file' => [ + 'content' => $content, + 'destination' => "$this->configuration_dir/docker-entrypoint-initdb.d/{$filename}", + ], + ]; $this->init_scripts[] = "$this->configuration_dir/docker-entrypoint-initdb.d/{$filename}"; } } @@ -325,7 +332,11 @@ private function add_custom_conf() $this->database->postgres_conf = $content; $this->database->save(); } - $content_base64 = base64_encode($content); - $this->commands[] = "echo '{$content_base64}' | base64 -d | tee $config_file_path > /dev/null"; + $this->commands[] = [ + 'transfer_file' => [ + 'content' => $content, + 'destination' => $config_file_path, + ], + ]; } } diff --git a/app/Actions/Database/StartRedis.php b/app/Actions/Database/StartRedis.php index 68a1f3fe3..b5962b165 100644 --- a/app/Actions/Database/StartRedis.php +++ b/app/Actions/Database/StartRedis.php @@ -196,8 +196,12 @@ public function 
handle(StandaloneRedis $database) $docker_compose = generateCustomDockerRunOptionsForDatabases($docker_run_options, $docker_compose, $container_name, $this->database->destination->network); $docker_compose = Yaml::dump($docker_compose, 10); - $docker_compose_base64 = base64_encode($docker_compose); - $this->commands[] = "echo '{$docker_compose_base64}' | base64 -d | tee $this->configuration_dir/docker-compose.yml > /dev/null"; + $this->commands[] = [ + 'transfer_file' => [ + 'content' => $docker_compose, + 'destination' => "$this->configuration_dir/docker-compose.yml", + ], + ]; $readme = generate_readme_file($this->database->name, now()); $this->commands[] = "echo '{$readme}' > $this->configuration_dir/README.md"; $this->commands[] = "echo 'Pulling {$database->image} image.'"; diff --git a/app/Actions/Proxy/CheckConfiguration.php b/app/Actions/Proxy/CheckConfiguration.php deleted file mode 100644 index b2d1eb787..000000000 --- a/app/Actions/Proxy/CheckConfiguration.php +++ /dev/null @@ -1,36 +0,0 @@ -<?php - -namespace App\Actions\Proxy; - -use App\Models\Server; -use App\Services\ProxyDashboardCacheService; -use Lorisleiva\Actions\Concerns\AsAction; - -class CheckConfiguration -{ - use AsAction; - - public function handle(Server $server, bool $reset = false) - { - $proxyType = $server->proxyType(); - if ($proxyType === 'NONE') { - return 'OK'; - } - $proxy_path = $server->proxyPath(); - $payload = [ - "mkdir -p $proxy_path", - "cat $proxy_path/docker-compose.yml", - ]; - $proxy_configuration = instant_remote_process($payload, $server, false); - if ($reset || ! $proxy_configuration || is_null($proxy_configuration)) { - $proxy_configuration = str(generate_default_proxy_configuration($server))->trim()->value(); - } - if (! 
$proxy_configuration || is_null($proxy_configuration)) { - throw new \Exception('Could not generate proxy configuration'); - } - - ProxyDashboardCacheService::isTraefikDashboardAvailableFromConfiguration($server, $proxy_configuration); - - return $proxy_configuration; - } -} diff --git a/app/Actions/Proxy/CheckProxy.php b/app/Actions/Proxy/CheckProxy.php index a06e547c5..99537e606 100644 --- a/app/Actions/Proxy/CheckProxy.php +++ b/app/Actions/Proxy/CheckProxy.php @@ -70,7 +70,7 @@ public function handle(Server $server, $fromUI = false): bool try { if ($server->proxyType() !== ProxyTypes::NONE->value) { - $proxyCompose = CheckConfiguration::run($server); + $proxyCompose = GetProxyConfiguration::run($server); if (isset($proxyCompose)) { $yaml = Yaml::parse($proxyCompose); $configPorts = []; diff --git a/app/Actions/Proxy/GetProxyConfiguration.php b/app/Actions/Proxy/GetProxyConfiguration.php new file mode 100644 index 000000000..3bf91c281 --- /dev/null +++ b/app/Actions/Proxy/GetProxyConfiguration.php @@ -0,0 +1,47 @@ +<?php + +namespace App\Actions\Proxy; + +use App\Models\Server; +use App\Services\ProxyDashboardCacheService; +use Lorisleiva\Actions\Concerns\AsAction; + +class GetProxyConfiguration +{ + use AsAction; + + public function handle(Server $server, bool $forceRegenerate = false): string + { + $proxyType = $server->proxyType(); + if ($proxyType === 'NONE') { + return 'OK'; + } + + $proxy_path = $server->proxyPath(); + $proxy_configuration = null; + + // If not forcing regeneration, try to read existing configuration + if (! $forceRegenerate) { + $payload = [ + "mkdir -p $proxy_path", + "cat $proxy_path/docker-compose.yml 2>/dev/null", + ]; + $proxy_configuration = instant_remote_process($payload, $server, false); + } + + // Generate default configuration if: + // 1. Force regenerate is requested + // 2. Configuration file doesn't exist or is empty + if ($forceRegenerate || empty(trim($proxy_configuration ?? 
''))) { + $proxy_configuration = str(generate_default_proxy_configuration($server))->trim()->value(); + } + + if (empty($proxy_configuration)) { + throw new \Exception('Could not get or generate proxy configuration'); + } + + ProxyDashboardCacheService::isTraefikDashboardAvailableFromConfiguration($server, $proxy_configuration); + + return $proxy_configuration; + } +} diff --git a/app/Actions/Proxy/SaveConfiguration.php b/app/Actions/Proxy/SaveConfiguration.php deleted file mode 100644 index f2de2b3f5..000000000 --- a/app/Actions/Proxy/SaveConfiguration.php +++ /dev/null @@ -1,28 +0,0 @@ -<?php - -namespace App\Actions\Proxy; - -use App\Models\Server; -use Lorisleiva\Actions\Concerns\AsAction; - -class SaveConfiguration -{ - use AsAction; - - public function handle(Server $server, ?string $proxy_settings = null) - { - if (is_null($proxy_settings)) { - $proxy_settings = CheckConfiguration::run($server, true); - } - $proxy_path = $server->proxyPath(); - $docker_compose_yml_base64 = base64_encode($proxy_settings); - - $server->proxy->last_saved_settings = str($docker_compose_yml_base64)->pipe('md5')->value; - $server->save(); - - return instant_remote_process([ - "mkdir -p $proxy_path", - "echo '$docker_compose_yml_base64' | base64 -d | tee $proxy_path/docker-compose.yml > /dev/null", - ], $server); - } -} diff --git a/app/Actions/Proxy/SaveProxyConfiguration.php b/app/Actions/Proxy/SaveProxyConfiguration.php new file mode 100644 index 000000000..38c9c8def --- /dev/null +++ b/app/Actions/Proxy/SaveProxyConfiguration.php @@ -0,0 +1,32 @@ +<?php + +namespace App\Actions\Proxy; + +use App\Models\Server; +use Lorisleiva\Actions\Concerns\AsAction; + +class SaveProxyConfiguration +{ + use AsAction; + + public function handle(Server $server, string $configuration): void + { + $proxy_path = $server->proxyPath(); + $docker_compose_yml_base64 = base64_encode($configuration); + + // Update the saved settings hash + $server->proxy->last_saved_settings = 
str($docker_compose_yml_base64)->pipe('md5')->value; + $server->save(); + + // Transfer the configuration file to the server + instant_remote_process([ + "mkdir -p $proxy_path", + [ + 'transfer_file' => [ + 'content' => base64_decode($docker_compose_yml_base64), + 'destination' => "$proxy_path/docker-compose.yml", + ], + ], + ], $server); + } +} diff --git a/app/Actions/Proxy/StartProxy.php b/app/Actions/Proxy/StartProxy.php index e7c020ff6..ecfb13d0b 100644 --- a/app/Actions/Proxy/StartProxy.php +++ b/app/Actions/Proxy/StartProxy.php @@ -21,11 +21,11 @@ public function handle(Server $server, bool $async = true, bool $force = false): } $commands = collect([]); $proxy_path = $server->proxyPath(); - $configuration = CheckConfiguration::run($server); + $configuration = GetProxyConfiguration::run($server); if (! $configuration) { throw new \Exception('Configuration is not synced'); } - SaveConfiguration::run($server, $configuration); + SaveProxyConfiguration::run($server, $configuration); $docker_compose_yml_base64 = base64_encode($configuration); $server->proxy->last_applied_settings = str($docker_compose_yml_base64)->pipe('md5')->value(); $server->save(); diff --git a/app/Actions/Server/CheckUpdates.php b/app/Actions/Server/CheckUpdates.php index a8b1be11d..6823dfb92 100644 --- a/app/Actions/Server/CheckUpdates.php +++ b/app/Actions/Server/CheckUpdates.php @@ -102,7 +102,6 @@ public function handle(Server $server) ]; } } catch (\Throwable $e) { - ray('Error:', $e->getMessage()); return [ 'osId' => $osId, diff --git a/app/Actions/Server/ConfigureCloudflared.php b/app/Actions/Server/ConfigureCloudflared.php index d21622bc5..e66e7eecb 100644 --- a/app/Actions/Server/ConfigureCloudflared.php +++ b/app/Actions/Server/ConfigureCloudflared.php @@ -40,7 +40,12 @@ public function handle(Server $server, string $cloudflare_token, string $ssh_dom $commands = collect([ 'mkdir -p /tmp/cloudflared', 'cd /tmp/cloudflared', - "echo '$docker_compose_yml_base64' | base64 -d | tee 
docker-compose.yml > /dev/null", + [ + 'transfer_file' => [ + 'content' => base64_decode($docker_compose_yml_base64), + 'destination' => '/tmp/cloudflared/docker-compose.yml', + ], + ], 'echo Pulling latest Cloudflare Tunnel image.', 'docker compose pull', 'echo Stopping existing Cloudflare Tunnel container.', diff --git a/app/Actions/Server/InstallDocker.php b/app/Actions/Server/InstallDocker.php index 5410b1cbd..33c22b484 100644 --- a/app/Actions/Server/InstallDocker.php +++ b/app/Actions/Server/InstallDocker.php @@ -14,6 +14,7 @@ class InstallDocker public function handle(Server $server) { + ray('install docker'); $dockerVersion = config('constants.docker.minimum_required_version'); $supported_os_type = $server->validateOS(); if (! $supported_os_type) { @@ -103,8 +104,15 @@ public function handle(Server $server) "curl https://releases.rancher.com/install-docker/{$dockerVersion}.sh | sh || curl https://get.docker.com | sh -s -- --version {$dockerVersion}", "echo 'Configuring Docker Engine (merging existing configuration with the required)...'", 'test -s /etc/docker/daemon.json && cp /etc/docker/daemon.json "/etc/docker/daemon.json.original-$(date +"%Y%m%d-%H%M%S")"', - "test ! -s /etc/docker/daemon.json && echo '{$config}' | base64 -d | tee /etc/docker/daemon.json > /dev/null", - "echo '{$config}' | base64 -d | tee /etc/docker/daemon.json.coolify > /dev/null", + [ + 'transfer_file' => [ + 'content' => base64_decode($config), + 'destination' => '/tmp/daemon.json.new', + ], + ], + 'test ! -s /etc/docker/daemon.json && cp /tmp/daemon.json.new /etc/docker/daemon.json', + 'cp /tmp/daemon.json.new /etc/docker/daemon.json.coolify', + 'rm -f /tmp/daemon.json.new', 'jq . 
/etc/docker/daemon.json.coolify | tee /etc/docker/daemon.json.coolify.pretty > /dev/null', 'mv /etc/docker/daemon.json.coolify.pretty /etc/docker/daemon.json.coolify', "jq -s '.[0] * .[1]' /etc/docker/daemon.json.coolify /etc/docker/daemon.json | tee /etc/docker/daemon.json.appended > /dev/null", diff --git a/app/Actions/Server/StartLogDrain.php b/app/Actions/Server/StartLogDrain.php index f72f23696..3e1dad1c2 100644 --- a/app/Actions/Server/StartLogDrain.php +++ b/app/Actions/Server/StartLogDrain.php @@ -180,10 +180,30 @@ public function handle(Server $server) $command = [ "echo 'Saving configuration'", "mkdir -p $config_path", - "echo '{$parsers}' | base64 -d | tee $parsers_config > /dev/null", - "echo '{$config}' | base64 -d | tee $fluent_bit_config > /dev/null", - "echo '{$compose}' | base64 -d | tee $compose_path > /dev/null", - "echo '{$readme}' | base64 -d | tee $readme_path > /dev/null", + [ + 'transfer_file' => [ + 'content' => base64_decode($parsers), + 'destination' => $parsers_config, + ], + ], + [ + 'transfer_file' => [ + 'content' => base64_decode($config), + 'destination' => $fluent_bit_config, + ], + ], + [ + 'transfer_file' => [ + 'content' => base64_decode($compose), + 'destination' => $compose_path, + ], + ], + [ + 'transfer_file' => [ + 'content' => base64_decode($readme), + 'destination' => $readme_path, + ], + ], "test -f $config_path/.env && rm $config_path/.env", ]; if ($type === 'newrelic') { diff --git a/app/Console/Commands/CleanupDatabase.php b/app/Console/Commands/CleanupDatabase.php index 2ccb76529..347ea9419 100644 --- a/app/Console/Commands/CleanupDatabase.php +++ b/app/Console/Commands/CleanupDatabase.php @@ -64,13 +64,5 @@ public function handle() if ($this->option('yes')) { $scheduled_task_executions->delete(); } - - // Cleanup webhooks table - $webhooks = DB::table('webhooks')->where('created_at', '<', now()->subDays($keep_days)); - $count = $webhooks->count(); - echo "Delete $count entries from webhooks.\n"; - if 
($this->option('yes')) { - $webhooks->delete(); - } } } diff --git a/app/Console/Commands/Dev.php b/app/Console/Commands/Dev.php index a4cfde6f8..8f26d78ff 100644 --- a/app/Console/Commands/Dev.php +++ b/app/Console/Commands/Dev.php @@ -2,6 +2,7 @@ namespace App\Console\Commands; +use App\Jobs\CheckHelperImageJob; use App\Models\InstanceSettings; use Illuminate\Console\Command; use Illuminate\Support\Facades\Artisan; @@ -44,5 +45,6 @@ public function init() } else { echo "Instance already initialized.\n"; } + CheckHelperImageJob::dispatch(); } } diff --git a/app/Exceptions/Handler.php b/app/Exceptions/Handler.php index 275de57c0..3d731223d 100644 --- a/app/Exceptions/Handler.php +++ b/app/Exceptions/Handler.php @@ -29,6 +29,7 @@ class Handler extends ExceptionHandler */ protected $dontReport = [ ProcessException::class, + NonReportableException::class, ]; /** @@ -110,9 +111,14 @@ function (Scope $scope) { ); } ); + // Check for errors that should not be reported to Sentry if (str($e->getMessage())->contains('No space left on device')) { + // Log locally but don't send to Sentry + logger()->warning('Disk space error: '.$e->getMessage()); + return; } + Integration::captureUnhandledException($e); }); } diff --git a/app/Exceptions/NonReportableException.php b/app/Exceptions/NonReportableException.php new file mode 100644 index 000000000..4c4672127 --- /dev/null +++ b/app/Exceptions/NonReportableException.php @@ -0,0 +1,31 @@ +<?php + +namespace App\Exceptions; + +use Exception; + +/** + * Exception that should not be reported to Sentry or other error tracking services. + * Use this for known, expected errors that don't require external tracking. + */ +class NonReportableException extends Exception +{ + /** + * Create a new non-reportable exception instance. 
+ * + * @param string $message + * @param int $code + */ + public function __construct($message = '', $code = 0, ?\Throwable $previous = null) + { + parent::__construct($message, $code, $previous); + } + + /** + * Create from another exception, preserving its message and stack trace. + */ + public static function fromException(\Throwable $exception): static + { + return new static($exception->getMessage(), $exception->getCode(), $exception); + } +} diff --git a/app/Helpers/SshMultiplexingHelper.php b/app/Helpers/SshMultiplexingHelper.php index 8caa2880a..f847f33cc 100644 --- a/app/Helpers/SshMultiplexingHelper.php +++ b/app/Helpers/SshMultiplexingHelper.php @@ -4,7 +4,9 @@ use App\Models\PrivateKey; use App\Models\Server; +use Illuminate\Support\Facades\Cache; use Illuminate\Support\Facades\Hash; +use Illuminate\Support\Facades\Log; use Illuminate\Support\Facades\Process; class SshMultiplexingHelper @@ -30,6 +32,7 @@ public static function ensureMultiplexedConnection(Server $server): bool $sshConfig = self::serverSshConfiguration($server); $muxSocket = $sshConfig['muxFilename']; + // Check if connection exists $checkCommand = "ssh -O check -o ControlPath=$muxSocket "; if (data_get($server, 'settings.is_cloudflare_tunnel')) { $checkCommand .= '-o ProxyCommand="cloudflared access ssh --hostname %h" '; @@ -41,6 +44,24 @@ public static function ensureMultiplexedConnection(Server $server): bool return self::establishNewMultiplexedConnection($server); } + // Connection exists, ensure we have metadata for age tracking + if (self::getConnectionAge($server) === null) { + // Existing connection but no metadata, store current time as fallback + self::storeConnectionMetadata($server); + } + + // Connection exists, check if it needs refresh due to age + if (self::isConnectionExpired($server)) { + return self::refreshMultiplexedConnection($server); + } + + // Perform health check if enabled + if (config('constants.ssh.mux_health_check_enabled')) { + if (! 
self::isConnectionHealthy($server)) { + return self::refreshMultiplexedConnection($server); + } + } + return true; } @@ -65,6 +86,9 @@ public static function establishNewMultiplexedConnection(Server $server): bool return false; } + // Store connection metadata for tracking + self::storeConnectionMetadata($server); + return true; } @@ -79,6 +103,9 @@ public static function removeMuxFile(Server $server) } $closeCommand .= "{$server->user}@{$server->ip}"; Process::run($closeCommand); + + // Clear connection metadata from cache + self::clearConnectionMetadata($server); } public static function generateScpCommand(Server $server, string $source, string $dest) @@ -94,8 +121,18 @@ public static function generateScpCommand(Server $server, string $source, string if ($server->isIpv6()) { $scp_command .= '-6 '; } - if (self::isMultiplexingEnabled() && self::ensureMultiplexedConnection($server)) { - $scp_command .= "-o ControlMaster=auto -o ControlPath=$muxSocket -o ControlPersist={$muxPersistTime} "; + if (self::isMultiplexingEnabled()) { + try { + if (self::ensureMultiplexedConnection($server)) { + $scp_command .= "-o ControlMaster=auto -o ControlPath=$muxSocket -o ControlPersist={$muxPersistTime} "; + } + } catch (\Exception $e) { + Log::warning('SSH multiplexing failed for SCP, falling back to non-multiplexed connection', [ + 'server' => $server->name ?? 
$server->ip, + 'error' => $e->getMessage(), + ]); + // Continue without multiplexing + } } if (data_get($server, 'settings.is_cloudflare_tunnel')) { @@ -130,8 +167,16 @@ public static function generateSshCommand(Server $server, string $command) $ssh_command = "timeout $timeout ssh "; - if (self::isMultiplexingEnabled() && self::ensureMultiplexedConnection($server)) { - $ssh_command .= "-o ControlMaster=auto -o ControlPath=$muxSocket -o ControlPersist={$muxPersistTime} "; + $multiplexingSuccessful = false; + if (self::isMultiplexingEnabled()) { + try { + $multiplexingSuccessful = self::ensureMultiplexedConnection($server); + if ($multiplexingSuccessful) { + $ssh_command .= "-o ControlMaster=auto -o ControlPath=$muxSocket -o ControlPersist={$muxPersistTime} "; + } + } catch (\Exception $e) { + // Continue without multiplexing + } } if (data_get($server, 'settings.is_cloudflare_tunnel')) { @@ -186,4 +231,81 @@ private static function getCommonSshOptions(Server $server, string $sshKeyLocati return $options; } + + /** + * Check if the multiplexed connection is healthy by running a test command + */ + public static function isConnectionHealthy(Server $server): bool + { + $sshConfig = self::serverSshConfiguration($server); + $muxSocket = $sshConfig['muxFilename']; + $healthCheckTimeout = config('constants.ssh.mux_health_check_timeout'); + + $healthCommand = "timeout $healthCheckTimeout ssh -o ControlMaster=auto -o ControlPath=$muxSocket "; + if (data_get($server, 'settings.is_cloudflare_tunnel')) { + $healthCommand .= '-o ProxyCommand="cloudflared access ssh --hostname %h" '; + } + $healthCommand .= "{$server->user}@{$server->ip} 'echo \"health_check_ok\"'"; + + $process = Process::run($healthCommand); + $isHealthy = $process->exitCode() === 0 && str_contains($process->output(), 'health_check_ok'); + + return $isHealthy; + } + + /** + * Check if the connection has exceeded its maximum age + */ + public static function isConnectionExpired(Server $server): bool + { + 
$connectionAge = self::getConnectionAge($server); + $maxAge = config('constants.ssh.mux_max_age'); + + return $connectionAge !== null && $connectionAge > $maxAge; + } + + /** + * Get the age of the current connection in seconds + */ + public static function getConnectionAge(Server $server): ?int + { + $cacheKey = "ssh_mux_connection_time_{$server->uuid}"; + $connectionTime = Cache::get($cacheKey); + + if ($connectionTime === null) { + return null; + } + + return time() - $connectionTime; + } + + /** + * Refresh a multiplexed connection by closing and re-establishing it + */ + public static function refreshMultiplexedConnection(Server $server): bool + { + // Close existing connection + self::removeMuxFile($server); + + // Establish new connection + return self::establishNewMultiplexedConnection($server); + } + + /** + * Store connection metadata when a new connection is established + */ + private static function storeConnectionMetadata(Server $server): void + { + $cacheKey = "ssh_mux_connection_time_{$server->uuid}"; + Cache::put($cacheKey, time(), config('constants.ssh.mux_persist_time') + 300); // Cache slightly longer than persist time + } + + /** + * Clear connection metadata from cache + */ + private static function clearConnectionMetadata(Server $server): void + { + $cacheKey = "ssh_mux_connection_time_{$server->uuid}"; + Cache::forget($cacheKey); + } } diff --git a/app/Helpers/SshRetryHandler.php b/app/Helpers/SshRetryHandler.php new file mode 100644 index 000000000..aaaf4252a --- /dev/null +++ b/app/Helpers/SshRetryHandler.php @@ -0,0 +1,34 @@ +<?php + +namespace App\Helpers; + +use App\Traits\SshRetryable; + +/** + * Helper class to use SshRetryable trait in non-class contexts + */ +class SshRetryHandler +{ + use SshRetryable; + + /** + * Static method to get a singleton instance + */ + public static function instance(): self + { + static $instance = null; + if ($instance === null) { + $instance = new self; + } + + return $instance; + } + + /** + * 
Convenience static method for retry execution + */ + public static function retry(callable $callback, array $context = [], bool $throwError = true) + { + return self::instance()->executeWithSshRetry($callback, $context, $throwError); + } +} diff --git a/app/Http/Controllers/Api/DeployController.php b/app/Http/Controllers/Api/DeployController.php index b87420f72..c4d603392 100644 --- a/app/Http/Controllers/Api/DeployController.php +++ b/app/Http/Controllers/Api/DeployController.php @@ -225,6 +225,14 @@ private function by_uuids(string $uuid, int $teamId, bool $force = false, int $p foreach ($uuids as $uuid) { $resource = getResourceByUuid($uuid, $teamId); if ($resource) { + if ($pr !== 0) { + $preview = $resource->previews()->where('pull_request_id', $pr)->first(); + if (! $preview) { + $deployments->push(['message' => "Pull request {$pr} not found for this resource.", 'resource_uuid' => $uuid]); + + continue; + } + } ['message' => $return_message, 'deployment_uuid' => $deployment_uuid] = $this->deploy_resource($resource, $force, $pr); if ($deployment_uuid) { $deployments->push(['message' => $return_message, 'resource_uuid' => $uuid, 'deployment_uuid' => $deployment_uuid->toString()]); diff --git a/app/Http/Controllers/Webhook/Github.php b/app/Http/Controllers/Webhook/Github.php index b940bf394..5ba9c08e7 100644 --- a/app/Http/Controllers/Webhook/Github.php +++ b/app/Http/Controllers/Webhook/Github.php @@ -97,162 +97,168 @@ public function manual(Request $request) return response("Nothing to do. No applications found with branch '$base_branch'."); } } - foreach ($applications as $application) { - $webhook_secret = data_get($application, 'manual_webhook_secret_github'); - $hmac = hash_hmac('sha256', $request->getContent(), $webhook_secret); - if (! hash_equals($x_hub_signature_256, $hmac) && ! 
isDev()) { - $return_payloads->push([ - 'application' => $application->name, - 'status' => 'failed', - 'message' => 'Invalid signature.', - ]); + $applicationsByServer = $applications->groupBy(function ($app) { + return $app->destination->server_id; + }); - continue; - } - $isFunctional = $application->destination->server->isFunctional(); - if (! $isFunctional) { - $return_payloads->push([ - 'application' => $application->name, - 'status' => 'failed', - 'message' => 'Server is not functional.', - ]); - - continue; - } - if ($x_github_event === 'push') { - if ($application->isDeployable()) { - $is_watch_path_triggered = $application->isWatchPathsTriggered($changed_files); - if ($is_watch_path_triggered || is_null($application->watch_paths)) { - $deployment_uuid = new Cuid2; - $result = queue_application_deployment( - application: $application, - deployment_uuid: $deployment_uuid, - force_rebuild: false, - commit: data_get($payload, 'after', 'HEAD'), - is_webhook: true, - ); - if ($result['status'] === 'skipped') { - $return_payloads->push([ - 'application' => $application->name, - 'status' => 'skipped', - 'message' => $result['message'], - ]); - } else { - $return_payloads->push([ - 'application' => $application->name, - 'status' => 'success', - 'message' => 'Deployment queued.', - 'application_uuid' => $application->uuid, - 'application_name' => $application->name, - 'deployment_uuid' => $result['deployment_uuid'], - ]); - } - } else { - $paths = str($application->watch_paths)->explode("\n"); - $return_payloads->push([ - 'status' => 'failed', - 'message' => 'Changed files do not match watch paths. 
Ignoring deployment.', - 'application_uuid' => $application->uuid, - 'application_name' => $application->name, - 'details' => [ - 'changed_files' => $changed_files, - 'watch_paths' => $paths, - ], - ]); - } - } else { + foreach ($applicationsByServer as $serverId => $serverApplications) { + foreach ($serverApplications as $application) { + $webhook_secret = data_get($application, 'manual_webhook_secret_github'); + $hmac = hash_hmac('sha256', $request->getContent(), $webhook_secret); + if (! hash_equals($x_hub_signature_256, $hmac) && ! isDev()) { $return_payloads->push([ + 'application' => $application->name, 'status' => 'failed', - 'message' => 'Deployments disabled.', - 'application_uuid' => $application->uuid, - 'application_name' => $application->name, + 'message' => 'Invalid signature.', ]); + + continue; } - } - if ($x_github_event === 'pull_request') { - if ($action === 'opened' || $action === 'synchronize' || $action === 'reopened') { - if ($application->isPRDeployable()) { - // Check if PR deployments from public contributors are restricted - if (! $application->settings->is_pr_deployments_public_enabled) { - $trustedAssociations = ['OWNER', 'MEMBER', 'COLLABORATOR', 'CONTRIBUTOR']; - if (! in_array($author_association, $trustedAssociations)) { + $isFunctional = $application->destination->server->isFunctional(); + if (! 
$isFunctional) { + $return_payloads->push([ + 'application' => $application->name, + 'status' => 'failed', + 'message' => 'Server is not functional.', + ]); + + continue; + } + if ($x_github_event === 'push') { + if ($application->isDeployable()) { + $is_watch_path_triggered = $application->isWatchPathsTriggered($changed_files); + if ($is_watch_path_triggered || is_null($application->watch_paths)) { + $deployment_uuid = new Cuid2; + $result = queue_application_deployment( + application: $application, + deployment_uuid: $deployment_uuid, + force_rebuild: false, + commit: data_get($payload, 'after', 'HEAD'), + is_webhook: true, + ); + if ($result['status'] === 'skipped') { $return_payloads->push([ 'application' => $application->name, - 'status' => 'failed', - 'message' => 'PR deployments are restricted to repository members and contributors. Author association: '.$author_association, + 'status' => 'skipped', + 'message' => $result['message'], ]); - - continue; - } - } - $deployment_uuid = new Cuid2; - $found = ApplicationPreview::where('application_id', $application->id)->where('pull_request_id', $pull_request_id)->first(); - if (! 
$found) { - if ($application->build_pack === 'dockercompose') { - $pr_app = ApplicationPreview::create([ - 'git_type' => 'github', - 'application_id' => $application->id, - 'pull_request_id' => $pull_request_id, - 'pull_request_html_url' => $pull_request_html_url, - 'docker_compose_domains' => $application->docker_compose_domains, - ]); - $pr_app->generate_preview_fqdn_compose(); } else { - $pr_app = ApplicationPreview::create([ - 'git_type' => 'github', - 'application_id' => $application->id, - 'pull_request_id' => $pull_request_id, - 'pull_request_html_url' => $pull_request_html_url, + $return_payloads->push([ + 'application' => $application->name, + 'status' => 'success', + 'message' => 'Deployment queued.', + 'application_uuid' => $application->uuid, + 'application_name' => $application->name, + 'deployment_uuid' => $result['deployment_uuid'], ]); - $pr_app->generate_preview_fqdn(); } + } else { + $paths = str($application->watch_paths)->explode("\n"); + $return_payloads->push([ + 'status' => 'failed', + 'message' => 'Changed files do not match watch paths. Ignoring deployment.', + 'application_uuid' => $application->uuid, + 'application_name' => $application->name, + 'details' => [ + 'changed_files' => $changed_files, + 'watch_paths' => $paths, + ], + ]); } + } else { + $return_payloads->push([ + 'status' => 'failed', + 'message' => 'Deployments disabled.', + 'application_uuid' => $application->uuid, + 'application_name' => $application->name, + ]); + } + } + if ($x_github_event === 'pull_request') { + if ($action === 'opened' || $action === 'synchronize' || $action === 'reopened') { + if ($application->isPRDeployable()) { + // Check if PR deployments from public contributors are restricted + if (! $application->settings->is_pr_deployments_public_enabled) { + $trustedAssociations = ['OWNER', 'MEMBER', 'COLLABORATOR', 'CONTRIBUTOR']; + if (! 
in_array($author_association, $trustedAssociations)) { + $return_payloads->push([ + 'application' => $application->name, + 'status' => 'failed', + 'message' => 'PR deployments are restricted to repository members and contributors. Author association: '.$author_association, + ]); - $result = queue_application_deployment( - application: $application, - pull_request_id: $pull_request_id, - deployment_uuid: $deployment_uuid, - force_rebuild: false, - commit: data_get($payload, 'head.sha', 'HEAD'), - is_webhook: true, - git_type: 'github' - ); - if ($result['status'] === 'skipped') { + continue; + } + } + $deployment_uuid = new Cuid2; + $found = ApplicationPreview::where('application_id', $application->id)->where('pull_request_id', $pull_request_id)->first(); + if (! $found) { + if ($application->build_pack === 'dockercompose') { + $pr_app = ApplicationPreview::create([ + 'git_type' => 'github', + 'application_id' => $application->id, + 'pull_request_id' => $pull_request_id, + 'pull_request_html_url' => $pull_request_html_url, + 'docker_compose_domains' => $application->docker_compose_domains, + ]); + $pr_app->generate_preview_fqdn_compose(); + } else { + $pr_app = ApplicationPreview::create([ + 'git_type' => 'github', + 'application_id' => $application->id, + 'pull_request_id' => $pull_request_id, + 'pull_request_html_url' => $pull_request_html_url, + ]); + $pr_app->generate_preview_fqdn(); + } + } + + $result = queue_application_deployment( + application: $application, + pull_request_id: $pull_request_id, + deployment_uuid: $deployment_uuid, + force_rebuild: false, + commit: data_get($payload, 'head.sha', 'HEAD'), + is_webhook: true, + git_type: 'github' + ); + if ($result['status'] === 'skipped') { + $return_payloads->push([ + 'application' => $application->name, + 'status' => 'skipped', + 'message' => $result['message'], + ]); + } else { + $return_payloads->push([ + 'application' => $application->name, + 'status' => 'success', + 'message' => 'Preview deployment 
queued.', + ]); + } + } else { $return_payloads->push([ 'application' => $application->name, - 'status' => 'skipped', - 'message' => $result['message'], + 'status' => 'failed', + 'message' => 'Preview deployments disabled.', + ]); + } + } + if ($action === 'closed') { + $found = ApplicationPreview::where('application_id', $application->id)->where('pull_request_id', $pull_request_id)->first(); + if ($found) { + DeleteResourceJob::dispatch($found); + $return_payloads->push([ + 'application' => $application->name, + 'status' => 'success', + 'message' => 'Preview deployment closed.', ]); } else { $return_payloads->push([ 'application' => $application->name, - 'status' => 'success', - 'message' => 'Preview deployment queued.', + 'status' => 'failed', + 'message' => 'No preview deployment found.', ]); } - } else { - $return_payloads->push([ - 'application' => $application->name, - 'status' => 'failed', - 'message' => 'Preview deployments disabled.', - ]); - } - } - if ($action === 'closed') { - $found = ApplicationPreview::where('application_id', $application->id)->where('pull_request_id', $pull_request_id)->first(); - if ($found) { - DeleteResourceJob::dispatch($found); - $return_payloads->push([ - 'application' => $application->name, - 'status' => 'success', - 'message' => 'Preview deployment closed.', - ]); - } else { - $return_payloads->push([ - 'application' => $application->name, - 'status' => 'failed', - 'message' => 'No preview deployment found.', - ]); } } } @@ -358,141 +364,147 @@ public function normal(Request $request) return response("Nothing to do. No applications found with branch '$base_branch'."); } } - foreach ($applications as $application) { - $isFunctional = $application->destination->server->isFunctional(); - if (! 
$isFunctional) { - $return_payloads->push([ - 'status' => 'failed', - 'message' => 'Server is not functional.', - 'application_uuid' => $application->uuid, - 'application_name' => $application->name, - ]); + $applicationsByServer = $applications->groupBy(function ($app) { + return $app->destination->server_id; + }); - continue; - } - if ($x_github_event === 'push') { - if ($application->isDeployable()) { - $is_watch_path_triggered = $application->isWatchPathsTriggered($changed_files); - if ($is_watch_path_triggered || is_null($application->watch_paths)) { - $deployment_uuid = new Cuid2; - $result = queue_application_deployment( - application: $application, - deployment_uuid: $deployment_uuid, - commit: data_get($payload, 'after', 'HEAD'), - force_rebuild: false, - is_webhook: true, - ); - $return_payloads->push([ - 'status' => $result['status'], - 'message' => $result['message'], - 'application_uuid' => $application->uuid, - 'application_name' => $application->name, - 'deployment_uuid' => $result['deployment_uuid'], - ]); - } else { - $paths = str($application->watch_paths)->explode("\n"); - $return_payloads->push([ - 'status' => 'failed', - 'message' => 'Changed files do not match watch paths. Ignoring deployment.', - 'application_uuid' => $application->uuid, - 'application_name' => $application->name, - 'details' => [ - 'changed_files' => $changed_files, - 'watch_paths' => $paths, - ], - ]); - } - } else { + foreach ($applicationsByServer as $serverId => $serverApplications) { + foreach ($serverApplications as $application) { + $isFunctional = $application->destination->server->isFunctional(); + if (! 
$isFunctional) { $return_payloads->push([ 'status' => 'failed', - 'message' => 'Deployments disabled.', + 'message' => 'Server is not functional.', 'application_uuid' => $application->uuid, 'application_name' => $application->name, ]); - } - } - if ($x_github_event === 'pull_request') { - if ($action === 'opened' || $action === 'synchronize' || $action === 'reopened') { - if ($application->isPRDeployable()) { - // Check if PR deployments from public contributors are restricted - if (! $application->settings->is_pr_deployments_public_enabled) { - $trustedAssociations = ['OWNER', 'MEMBER', 'COLLABORATOR', 'CONTRIBUTOR']; - if (! in_array($author_association, $trustedAssociations)) { - $return_payloads->push([ - 'application' => $application->name, - 'status' => 'failed', - 'message' => 'PR deployments are restricted to repository members and contributors. Author association: '.$author_association, - ]); - continue; - } - } - $deployment_uuid = new Cuid2; - $found = ApplicationPreview::where('application_id', $application->id)->where('pull_request_id', $pull_request_id)->first(); - if (! 
$found) { - ApplicationPreview::create([ - 'git_type' => 'github', - 'application_id' => $application->id, - 'pull_request_id' => $pull_request_id, - 'pull_request_html_url' => $pull_request_html_url, + continue; + } + if ($x_github_event === 'push') { + if ($application->isDeployable()) { + $is_watch_path_triggered = $application->isWatchPathsTriggered($changed_files); + if ($is_watch_path_triggered || is_null($application->watch_paths)) { + $deployment_uuid = new Cuid2; + $result = queue_application_deployment( + application: $application, + deployment_uuid: $deployment_uuid, + commit: data_get($payload, 'after', 'HEAD'), + force_rebuild: false, + is_webhook: true, + ); + $return_payloads->push([ + 'status' => $result['status'], + 'message' => $result['message'], + 'application_uuid' => $application->uuid, + 'application_name' => $application->name, + 'deployment_uuid' => $result['deployment_uuid'], + ]); + } else { + $paths = str($application->watch_paths)->explode("\n"); + $return_payloads->push([ + 'status' => 'failed', + 'message' => 'Changed files do not match watch paths. 
Ignoring deployment.', + 'application_uuid' => $application->uuid, + 'application_name' => $application->name, + 'details' => [ + 'changed_files' => $changed_files, + 'watch_paths' => $paths, + ], ]); } - $result = queue_application_deployment( - application: $application, - pull_request_id: $pull_request_id, - deployment_uuid: $deployment_uuid, - force_rebuild: false, - commit: data_get($payload, 'head.sha', 'HEAD'), - is_webhook: true, - git_type: 'github' - ); - if ($result['status'] === 'skipped') { + } else { + $return_payloads->push([ + 'status' => 'failed', + 'message' => 'Deployments disabled.', + 'application_uuid' => $application->uuid, + 'application_name' => $application->name, + ]); + } + } + if ($x_github_event === 'pull_request') { + if ($action === 'opened' || $action === 'synchronize' || $action === 'reopened') { + if ($application->isPRDeployable()) { + // Check if PR deployments from public contributors are restricted + if (! $application->settings->is_pr_deployments_public_enabled) { + $trustedAssociations = ['OWNER', 'MEMBER', 'COLLABORATOR', 'CONTRIBUTOR']; + if (! in_array($author_association, $trustedAssociations)) { + $return_payloads->push([ + 'application' => $application->name, + 'status' => 'failed', + 'message' => 'PR deployments are restricted to repository members and contributors. Author association: '.$author_association, + ]); + + continue; + } + } + $deployment_uuid = new Cuid2; + $found = ApplicationPreview::where('application_id', $application->id)->where('pull_request_id', $pull_request_id)->first(); + if (! 
$found) {
+                        ApplicationPreview::create([
+                            'git_type' => 'github',
+                            'application_id' => $application->id,
+                            'pull_request_id' => $pull_request_id,
+                            'pull_request_html_url' => $pull_request_html_url,
+                        ]);
+                    }
+                    $result = queue_application_deployment(
+                        application: $application,
+                        pull_request_id: $pull_request_id,
+                        deployment_uuid: $deployment_uuid,
+                        force_rebuild: false,
+                        commit: data_get($payload, 'head.sha', 'HEAD'),
+                        is_webhook: true,
+                        git_type: 'github'
+                    );
+                    if ($result['status'] === 'skipped') {
+                        $return_payloads->push([
+                            'application' => $application->name,
+                            'status' => 'skipped',
+                            'message' => $result['message'],
+                        ]);
+                    } else {
+                        $return_payloads->push([
+                            'application' => $application->name,
+                            'status' => 'success',
+                            'message' => 'Preview deployment queued.',
+                        ]);
+                    }
+                } else {
                     $return_payloads->push([
                         'application' => $application->name,
-                        'status' => 'skipped',
-                        'message' => $result['message'],
+                        'status' => 'failed',
+                        'message' => 'Preview deployments disabled.',
+                    ]);
+                }
+            }
+            if ($action === 'closed' || $action === 'close') {
+                $found = ApplicationPreview::where('application_id', $application->id)->where('pull_request_id', $pull_request_id)->first();
+                if ($found) {
+                    $containers = getCurrentApplicationContainerStatus($application->destination->server, $application->id, $pull_request_id);
+                    if ($containers->isNotEmpty()) {
+                        $containers->each(function ($container) use ($application) {
+                            $container_name = data_get($container, 'Names');
+                            instant_remote_process(["docker rm -f $container_name"], $application->destination->server);
+                        });
+                    }
+
+                    ApplicationPullRequestUpdateJob::dispatchSync(application: $application, preview: $found, status: ProcessStatus::CLOSED);
+
+                    DeleteResourceJob::dispatch($found);
+
+                    $return_payloads->push([
+                        'application' => $application->name,
+                        'status' => 'success',
+                        'message' => 'Preview deployment closed.',
                     ]);
                 } else {
                     $return_payloads->push([
                         'application' => $application->name,
-                        'status' => 'success',
-                        'message' => 'Preview deployment queued.',
+                        'status' => 'failed',
+                        'message' => 'No preview deployment found.',
                     ]);
                 }
-            } else {
-                $return_payloads->push([
-                    'application' => $application->name,
-                    'status' => 'failed',
-                    'message' => 'Preview deployments disabled.',
-                ]);
-            }
-        }
-        if ($action === 'closed' || $action === 'close') {
-            $found = ApplicationPreview::where('application_id', $application->id)->where('pull_request_id', $pull_request_id)->first();
-            if ($found) {
-                $containers = getCurrentApplicationContainerStatus($application->destination->server, $application->id, $pull_request_id);
-                if ($containers->isNotEmpty()) {
-                    $containers->each(function ($container) use ($application) {
-                        $container_name = data_get($container, 'Names');
-                        instant_remote_process(["docker rm -f $container_name"], $application->destination->server);
-                    });
-                }
-
-                ApplicationPullRequestUpdateJob::dispatchSync(application: $application, preview: $found, status: ProcessStatus::CLOSED);
-
-                DeleteResourceJob::dispatch($found);
-
-                $return_payloads->push([
-                    'application' => $application->name,
-                    'status' => 'success',
-                    'message' => 'Preview deployment closed.',
-                ]);
-            } else {
-                $return_payloads->push([
-                    'application' => $application->name,
-                    'status' => 'failed',
-                    'message' => 'No preview deployment found.',
-                ]);
             }
         }
     }
diff --git a/app/Http/Controllers/Webhook/Stripe.php b/app/Http/Controllers/Webhook/Stripe.php
index 83ba16699..ae50aac42 100644
--- a/app/Http/Controllers/Webhook/Stripe.php
+++ b/app/Http/Controllers/Webhook/Stripe.php
@@ -4,15 +4,12 @@
 use App\Http\Controllers\Controller;
 use App\Jobs\StripeProcessJob;
-use App\Models\Webhook;
 use Exception;
 use Illuminate\Http\Request;
 use Illuminate\Support\Facades\Storage;
 
 class Stripe extends Controller
 {
-    protected $webhook;
-
     public function events(Request $request)
     {
         try {
@@ -40,19 +37,10 @@ public function events(Request $request)
                 return response('Webhook received. Cool cool cool cool cool.', 200);
             }
-            $this->webhook = Webhook::create([
-                'type' => 'stripe',
-                'payload' => $request->getContent(),
-            ]);
             StripeProcessJob::dispatch($event);
 
             return response('Webhook received. Cool cool cool cool cool.', 200);
         } catch (Exception $e) {
-            $this->webhook->update([
-                'status' => 'failed',
-                'failure_reason' => $e->getMessage(),
-            ]);
-
             return response($e->getMessage(), 400);
         }
     }
diff --git a/app/Http/Middleware/ApiAllowed.php b/app/Http/Middleware/ApiAllowed.php
index dd85c3521..21441a117 100644
--- a/app/Http/Middleware/ApiAllowed.php
+++ b/app/Http/Middleware/ApiAllowed.php
@@ -28,7 +28,7 @@ public function handle(Request $request, Closure $next): Response
             $allowedIps = array_map('trim', $allowedIps);
             $allowedIps = array_filter($allowedIps); // Remove empty entries
 
-            if (! empty($allowedIps) && ! check_ip_against_allowlist($request->ip(), $allowedIps)) {
+            if (! empty($allowedIps) && ! checkIPAgainstAllowlist($request->ip(), $allowedIps)) {
                 return response()->json(['success' => true, 'message' => 'You are not allowed to access the API.'], 403);
             }
         }
diff --git a/app/Jobs/ApplicationDeploymentJob.php b/app/Jobs/ApplicationDeploymentJob.php
index 9037fa3e5..54201053c 100644
--- a/app/Jobs/ApplicationDeploymentJob.php
+++ b/app/Jobs/ApplicationDeploymentJob.php
@@ -221,7 +221,7 @@ public function __construct(public int $application_deployment_queue_id)
             if ($this->pull_request_id === 0) {
                 $this->container_name = $this->application->settings->custom_internal_name;
             } else {
-                $this->container_name = "{$this->application->settings->custom_internal_name}-pr-{$this->pull_request_id}";
+                $this->container_name = addPreviewDeploymentSuffix($this->application->settings->custom_internal_name, $this->pull_request_id);
             }
         }
@@ -388,11 +388,8 @@ private function deploy_simple_dockerfile()
         $dockerfile_base64 = base64_encode($this->application->dockerfile);
         $this->application_deployment_queue->addLogEntry("Starting deployment of {$this->application->name} to {$this->server->name}.");
         $this->prepare_builder_image();
-        $this->execute_remote_command(
-            [
-                executeInDocker($this->deployment_uuid, "echo '$dockerfile_base64' | base64 -d | tee {$this->workdir}{$this->dockerfile_location} > /dev/null"),
-            ],
-        );
+        $dockerfile_content = base64_decode($dockerfile_base64);
+        transfer_file_to_container($dockerfile_content, "{$this->workdir}{$this->dockerfile_location}", $this->deployment_uuid, $this->server);
         $this->generate_image_names();
         $this->generate_compose_file();
         $this->generate_build_env_variables();
@@ -497,10 +494,7 @@ private function deploy_docker_compose_buildpack()
             $yaml = Yaml::dump(convertToArray($composeFile), 10);
         }
         $this->docker_compose_base64 = base64_encode($yaml);
-        $this->execute_remote_command([
-            executeInDocker($this->deployment_uuid, "echo '{$this->docker_compose_base64}' | base64 -d | tee {$this->workdir}{$this->docker_compose_location} > /dev/null"),
-            'hidden' => true,
-        ]);
+        transfer_file_to_container($yaml, "{$this->workdir}{$this->docker_compose_location}", $this->deployment_uuid, $this->server);
         // Build new container to limit downtime.
         $this->application_deployment_queue->addLogEntry('Pulling & building required images.');
@@ -712,16 +706,15 @@ private function write_deployment_configurations()
             if ($this->pull_request_id === 0) {
                 $composeFileName = "$mainDir/docker-compose.yaml";
             } else {
-                $composeFileName = "$mainDir/docker-compose-pr-{$this->pull_request_id}.yaml";
-                $this->docker_compose_location = "/docker-compose-pr-{$this->pull_request_id}.yaml";
+                $composeFileName = "$mainDir/".addPreviewDeploymentSuffix('docker-compose', $this->pull_request_id).'.yaml';
+                $this->docker_compose_location = '/'.addPreviewDeploymentSuffix('docker-compose', $this->pull_request_id).'.yaml';
             }
+            $this->execute_remote_command([
+                "mkdir -p $mainDir",
+            ]);
+            $docker_compose_content = base64_decode($this->docker_compose_base64);
+            transfer_file_to_server($docker_compose_content, $composeFileName, $this->server);
             $this->execute_remote_command(
-                [
-                    "mkdir -p $mainDir",
-                ],
-                [
-                    "echo '{$this->docker_compose_base64}' | base64 -d | tee $composeFileName > /dev/null",
-                ],
                 [
                     "echo '{$readme}' > $mainDir/README.md",
                 ]
@@ -905,10 +898,10 @@ private function save_environment_variables()
         }
         if ($this->build_pack === 'dockercompose') {
             $sorted_environment_variables = $sorted_environment_variables->filter(function ($env) {
-                return ! str($env->key)->startsWith('SERVICE_FQDN_') && ! str($env->key)->startsWith('SERVICE_URL_');
+                return ! str($env->key)->startsWith('SERVICE_FQDN_') && ! str($env->key)->startsWith('SERVICE_URL_') && ! str($env->key)->startsWith('SERVICE_NAME_');
             });
             $sorted_environment_variables_preview = $sorted_environment_variables_preview->filter(function ($env) {
-                return ! str($env->key)->startsWith('SERVICE_FQDN_') && ! str($env->key)->startsWith('SERVICE_URL_');
+                return ! str($env->key)->startsWith('SERVICE_FQDN_') && ! str($env->key)->startsWith('SERVICE_URL_') && ! str($env->key)->startsWith('SERVICE_NAME_');
             });
         }
         $ports = $this->application->main_port();
@@ -949,9 +942,20 @@ private function save_environment_variables()
                     $envs->push('SERVICE_FQDN_'.str($forServiceName)->upper().'='.$coolifyFqdn);
                 }
             }
+
+            // Generate SERVICE_NAME for dockercompose services from processed compose
+            if ($this->application->settings->is_raw_compose_deployment_enabled) {
+                $dockerCompose = Yaml::parse($this->application->docker_compose_raw);
+            } else {
+                $dockerCompose = Yaml::parse($this->application->docker_compose);
+            }
+            $services = data_get($dockerCompose, 'services', []);
+            foreach ($services as $serviceName => $_) {
+                $envs->push('SERVICE_NAME_'.str($serviceName)->upper().'='.$serviceName);
+            }
             }
         } else {
-            $this->env_filename = ".env-pr-$this->pull_request_id";
+            $this->env_filename = addPreviewDeploymentSuffix('.env', $this->pull_request_id);
             foreach ($sorted_environment_variables_preview as $env) {
                 $envs->push($env->key.'='.$env->real_value);
             }
@@ -982,6 +986,13 @@ private function save_environment_variables()
                     $envs->push('SERVICE_FQDN_'.str($forServiceName)->upper().'='.$coolifyFqdn);
                 }
             }
+
+            // Generate SERVICE_NAME for dockercompose services
+            $rawDockerCompose = Yaml::parse($this->application->docker_compose_raw);
+            $rawServices = data_get($rawDockerCompose, 'services', []);
+            foreach ($rawServices as $rawServiceName => $_) {
+                $envs->push('SERVICE_NAME_'.str($rawServiceName)->upper().'='.addPreviewDeploymentSuffix($rawServiceName, $this->pull_request_id));
+            }
         }
     }
     if ($envs->isEmpty()) {
@@ -1013,27 +1024,15 @@ private function save_environment_variables()
             );
         }
     } else {
-            $envs_base64 = base64_encode($envs->implode("\n"));
-            $this->execute_remote_command(
-                [
-                    executeInDocker($this->deployment_uuid, "echo '$envs_base64' | base64 -d | tee $this->workdir/{$this->env_filename} > /dev/null"),
-                ],
+            $envs_content = $envs->implode("\n");
+            transfer_file_to_container($envs_content, "$this->workdir/{$this->env_filename}", $this->deployment_uuid, $this->server);
-            );
             if ($this->use_build_server) {
                 $this->server = $this->original_server;
-                $this->execute_remote_command(
-                    [
-                        "echo '$envs_base64' | base64 -d | tee $this->configuration_dir/{$this->env_filename} > /dev/null",
-                    ]
-                );
+                transfer_file_to_server($envs_content, "$this->configuration_dir/{$this->env_filename}", $this->server);
                 $this->server = $this->build_server;
             } else {
-                $this->execute_remote_command(
-                    [
-                        "echo '$envs_base64' | base64 -d | tee $this->configuration_dir/{$this->env_filename} > /dev/null",
-                    ]
-                );
+                transfer_file_to_server($envs_content, "$this->configuration_dir/{$this->env_filename}", $this->server);
             }
         }
         $this->environment_variables = $envs;
@@ -1443,14 +1442,11 @@ private function check_git_if_build_needed()
         }
         $private_key = data_get($this->application, 'private_key.private_key');
         if ($private_key) {
-            $private_key = base64_encode($private_key);
+            $this->execute_remote_command([
+                executeInDocker($this->deployment_uuid, 'mkdir -p /root/.ssh'),
+            ]);
+            transfer_file_to_container($private_key, '/root/.ssh/id_rsa', $this->deployment_uuid, $this->server);
             $this->execute_remote_command(
-                [
-                    executeInDocker($this->deployment_uuid, 'mkdir -p /root/.ssh'),
-                ],
-                [
-                    executeInDocker($this->deployment_uuid, "echo '{$private_key}' | base64 -d | tee /root/.ssh/id_rsa > /dev/null"),
-                ],
                 [
                     executeInDocker($this->deployment_uuid, 'chmod 600 /root/.ssh/id_rsa'),
                 ],
@@ -1622,6 +1618,12 @@ private function generate_nixpacks_env_variables()
             }
         }
 
+        // Add COOLIFY_* environment variables to Nixpacks build context
+        $coolify_envs = $this->generate_coolify_env_variables();
+        $coolify_envs->each(function ($value, $key) {
+            $this->env_nixpacks_args->push("--env {$key}={$value}");
+        });
+
         $this->env_nixpacks_args = $this->env_nixpacks_args->implode(' ');
     }
@@ -1993,7 +1995,7 @@ private function generate_compose_file()
         $this->docker_compose = Yaml::dump($docker_compose, 10);
         $this->docker_compose_base64 = base64_encode($this->docker_compose);
-        $this->execute_remote_command([executeInDocker($this->deployment_uuid, "echo '{$this->docker_compose_base64}' | base64 -d | tee {$this->workdir}/docker-compose.yaml > /dev/null"), 'hidden' => true]);
+        transfer_file_to_container(base64_decode($this->docker_compose_base64), "{$this->workdir}/docker-compose.yaml", $this->deployment_uuid, $this->server);
     }
 
     private function generate_local_persistent_volumes()
@@ -2006,7 +2008,7 @@ private function generate_local_persistent_volumes()
                 $volume_name = $persistentStorage->name;
             }
             if ($this->pull_request_id !== 0) {
-                $volume_name = $volume_name.'-pr-'.$this->pull_request_id;
+                $volume_name = addPreviewDeploymentSuffix($volume_name, $this->pull_request_id);
             }
             $local_persistent_volumes[] = $volume_name.':'.$persistentStorage->mount_path;
         }
@@ -2024,7 +2026,7 @@ private function generate_local_persistent_volumes_only_volume_names()
             $name = $persistentStorage->name;
             if ($this->pull_request_id !== 0) {
-                $name = $name.'-pr-'.$this->pull_request_id;
+                $name = addPreviewDeploymentSuffix($name, $this->pull_request_id);
             }
 
             $local_persistent_volumes_names[$name] = [
@@ -2121,25 +2123,34 @@ private function build_image()
         } else {
             if ($this->application->build_pack === 'nixpacks') {
                 $this->nixpacks_plan = base64_encode($this->nixpacks_plan);
-                $this->execute_remote_command([executeInDocker($this->deployment_uuid, "echo '{$this->nixpacks_plan}' | base64 -d | tee /artifacts/thegameplan.json > /dev/null"), 'hidden' => true]);
+                $nixpacks_content = base64_decode($this->nixpacks_plan);
+                transfer_file_to_container($nixpacks_content, '/artifacts/thegameplan.json', $this->deployment_uuid, $this->server);
                 if ($this->force_rebuild) {
                     $this->execute_remote_command([
                         executeInDocker($this->deployment_uuid, "nixpacks build -c /artifacts/thegameplan.json --no-cache --no-error-without-start -n {$this->build_image_name} {$this->workdir} -o {$this->workdir}"),
                         'hidden' => true,
                     ]);
-                    $build_command = "docker build --no-cache {$this->addHosts} --network host -f {$this->workdir}/.nixpacks/Dockerfile {$this->build_args} --progress plain -t {$this->build_image_name} {$this->workdir}";
+                    $env_copy_command = '';
+                    if ($this->pull_request_id !== 0 && $this->env_filename) {
+                        $env_copy_command = "if [ -f {$this->workdir}/{$this->env_filename} ]; then cp {$this->workdir}/{$this->env_filename} {$this->workdir}/.env; fi && ";
+                    }
+                    $build_command = "{$env_copy_command}docker build --no-cache {$this->addHosts} --network host -f {$this->workdir}/.nixpacks/Dockerfile {$this->build_args} --progress plain -t {$this->build_image_name} {$this->workdir}";
                 } else {
                     $this->execute_remote_command([
                         executeInDocker($this->deployment_uuid, "nixpacks build -c /artifacts/thegameplan.json --cache-key '{$this->application->uuid}' --no-error-without-start -n {$this->build_image_name} {$this->workdir} -o {$this->workdir}"),
                         'hidden' => true,
                     ]);
-                    $build_command = "docker build {$this->addHosts} --network host -f {$this->workdir}/.nixpacks/Dockerfile {$this->build_args} --progress plain -t {$this->build_image_name} {$this->workdir}";
+                    $env_copy_command = '';
+                    if ($this->pull_request_id !== 0 && $this->env_filename) {
+                        $env_copy_command = "if [ -f {$this->workdir}/{$this->env_filename} ]; then cp {$this->workdir}/{$this->env_filename} {$this->workdir}/.env; fi && ";
+                    }
+                    $build_command = "{$env_copy_command}docker build {$this->addHosts} --network host -f {$this->workdir}/.nixpacks/Dockerfile {$this->build_args} --progress plain -t {$this->build_image_name} {$this->workdir}";
                 }
                 $base64_build_command = base64_encode($build_command);
                 $this->execute_remote_command(
                     [
-                        executeInDocker($this->deployment_uuid, "echo '{$base64_build_command}' | base64 -d | tee /artifacts/build.sh > /dev/null"),
+                        transfer_file_to_container(base64_decode($base64_build_command), '/artifacts/build.sh', $this->deployment_uuid, $this->server),
                         'hidden' => true,
                     ],
                     [
@@ -2162,7 +2173,7 @@ private function build_image()
                 }
                 $this->execute_remote_command(
                     [
-                        executeInDocker($this->deployment_uuid, "echo '{$base64_build_command}' | base64 -d | tee /artifacts/build.sh > /dev/null"),
+                        transfer_file_to_container(base64_decode($base64_build_command), '/artifacts/build.sh', $this->deployment_uuid, $this->server),
                         'hidden' => true,
                     ],
                    [
@@ -2194,13 +2205,13 @@ private function build_image()
                 $base64_build_command = base64_encode($build_command);
                 $this->execute_remote_command(
                     [
-                        executeInDocker($this->deployment_uuid, "echo '{$dockerfile}' | base64 -d | tee {$this->workdir}/Dockerfile > /dev/null"),
+                        transfer_file_to_container(base64_decode($dockerfile), "{$this->workdir}/Dockerfile", $this->deployment_uuid, $this->server),
                     ],
                     [
-                        executeInDocker($this->deployment_uuid, "echo '{$nginx_config}' | base64 -d | tee {$this->workdir}/nginx.conf > /dev/null"),
+                        transfer_file_to_container(base64_decode($nginx_config), "{$this->workdir}/nginx.conf", $this->deployment_uuid, $this->server),
                     ],
                     [
-                        executeInDocker($this->deployment_uuid, "echo '{$base64_build_command}' | base64 -d | tee /artifacts/build.sh > /dev/null"),
+                        transfer_file_to_container(base64_decode($base64_build_command), '/artifacts/build.sh', $this->deployment_uuid, $this->server),
                         'hidden' => true,
                     ],
                     [
@@ -2223,7 +2234,7 @@ private function build_image()
                 $base64_build_command = base64_encode($build_command);
                 $this->execute_remote_command(
                     [
-                        executeInDocker($this->deployment_uuid, "echo '{$base64_build_command}' | base64 -d | tee /artifacts/build.sh > /dev/null"),
+                        transfer_file_to_container(base64_decode($base64_build_command), '/artifacts/build.sh', $this->deployment_uuid, $this->server),
                         'hidden' => true,
                     ],
                     [
@@ -2238,24 +2249,33 @@ private function build_image()
         } else {
             if ($this->application->build_pack === 'nixpacks') {
                 $this->nixpacks_plan = base64_encode($this->nixpacks_plan);
-                $this->execute_remote_command([executeInDocker($this->deployment_uuid, "echo '{$this->nixpacks_plan}' | base64 -d | tee /artifacts/thegameplan.json > /dev/null"), 'hidden' => true]);
+                $nixpacks_content = base64_decode($this->nixpacks_plan);
+                transfer_file_to_container($nixpacks_content, '/artifacts/thegameplan.json', $this->deployment_uuid, $this->server);
                 if ($this->force_rebuild) {
                     $this->execute_remote_command([
                         executeInDocker($this->deployment_uuid, "nixpacks build -c /artifacts/thegameplan.json --no-cache --no-error-without-start -n {$this->production_image_name} {$this->workdir} -o {$this->workdir}"),
                         'hidden' => true,
                     ]);
-                    $build_command = "docker build --no-cache {$this->addHosts} --network host -f {$this->workdir}/.nixpacks/Dockerfile {$this->build_args} --progress plain -t {$this->production_image_name} {$this->workdir}";
+                    $env_copy_command = '';
+                    if ($this->pull_request_id !== 0 && $this->env_filename) {
+                        $env_copy_command = "if [ -f {$this->workdir}/{$this->env_filename} ]; then cp {$this->workdir}/{$this->env_filename} {$this->workdir}/.env; fi && ";
+                    }
+                    $build_command = "{$env_copy_command}docker build --no-cache {$this->addHosts} --network host -f {$this->workdir}/.nixpacks/Dockerfile {$this->build_args} --progress plain -t {$this->production_image_name} {$this->workdir}";
                 } else {
                     $this->execute_remote_command([
                         executeInDocker($this->deployment_uuid, "nixpacks build -c /artifacts/thegameplan.json --cache-key '{$this->application->uuid}' --no-error-without-start -n {$this->production_image_name} {$this->workdir} -o {$this->workdir}"),
                         'hidden' => true,
                     ]);
-                    $build_command = "docker build {$this->addHosts} --network host -f {$this->workdir}/.nixpacks/Dockerfile {$this->build_args} --progress plain -t {$this->production_image_name} {$this->workdir}";
+                    $env_copy_command = '';
+                    if ($this->pull_request_id !== 0 && $this->env_filename) {
+                        $env_copy_command = "if [ -f {$this->workdir}/{$this->env_filename} ]; then cp {$this->workdir}/{$this->env_filename} {$this->workdir}/.env; fi && ";
+                    }
+                    $build_command = "{$env_copy_command}docker build {$this->addHosts} --network host -f {$this->workdir}/.nixpacks/Dockerfile {$this->build_args} --progress plain -t {$this->production_image_name} {$this->workdir}";
                 }
                 $base64_build_command = base64_encode($build_command);
                 $this->execute_remote_command(
                     [
-                        executeInDocker($this->deployment_uuid, "echo '{$base64_build_command}' | base64 -d | tee /artifacts/build.sh > /dev/null"),
+                        transfer_file_to_container(base64_decode($base64_build_command), '/artifacts/build.sh', $this->deployment_uuid, $this->server),
                         'hidden' => true,
                     ],
                     [
@@ -2278,7 +2298,7 @@ private function build_image()
                 }
                 $this->execute_remote_command(
                     [
-                        executeInDocker($this->deployment_uuid, "echo '{$base64_build_command}' | base64 -d | tee /artifacts/build.sh > /dev/null"),
+                        transfer_file_to_container(base64_decode($base64_build_command), '/artifacts/build.sh', $this->deployment_uuid, $this->server),
                         'hidden' => true,
                     ],
                     [
@@ -2319,7 +2339,7 @@ private function stop_running_container(bool $force = false)
             $containers = getCurrentApplicationContainerStatus($this->server, $this->application->id, $this->pull_request_id);
             if ($this->pull_request_id === 0) {
                 $containers = $containers->filter(function ($container) {
-                    return data_get($container, 'Names') !== $this->container_name && data_get($container, 'Names') !== $this->container_name.'-pr-'.$this->pull_request_id;
+                    return data_get($container, 'Names') !== $this->container_name && data_get($container, 'Names') !== addPreviewDeploymentSuffix($this->container_name, $this->pull_request_id);
                 });
             }
             $containers->each(function ($container) {
@@ -2405,7 +2425,7 @@ private function add_build_env_variables_to_dockerfile()
         }
         $dockerfile_base64 = base64_encode($dockerfile->implode("\n"));
         $this->execute_remote_command([
-            executeInDocker($this->deployment_uuid, "echo '{$dockerfile_base64}' | base64 -d | tee {$this->workdir}{$this->dockerfile_location} > /dev/null"),
+            transfer_file_to_container(base64_decode($dockerfile_base64), "{$this->workdir}{$this->dockerfile_location}", $this->deployment_uuid, $this->server),
             'hidden' => true,
         ]);
     }
@@ -2477,8 +2497,6 @@ private function run_post_deployment_command()
 
     private function next(string $status)
     {
-        queue_next_deployment($this->application);
-
         // Never allow changing status from FAILED or CANCELLED_BY_USER to anything else
         if ($this->application_deployment_queue->status === ApplicationDeploymentStatus::FAILED->value) {
             $this->application->environment->project->team?->notify(new DeploymentFailed($this->application, $this->deployment_uuid, $this->preview));
@@ -2493,6 +2511,8 @@ private function next(string $status)
             'status' => $status,
         ]);
 
+        queue_next_deployment($this->application);
+
         if ($status === ApplicationDeploymentStatus::FINISHED->value) {
             if (! $this->only_this_server) {
                 $this->deploy_to_additional_destinations();
diff --git a/app/Jobs/DatabaseBackupJob.php b/app/Jobs/DatabaseBackupJob.php
index 752d1f1ca..6ac9ae1e6 100644
--- a/app/Jobs/DatabaseBackupJob.php
+++ b/app/Jobs/DatabaseBackupJob.php
@@ -54,6 +54,10 @@ class DatabaseBackupJob implements ShouldBeEncrypted, ShouldQueue
     public ?string $backup_output = null;
 
+    public ?string $error_output = null;
+
+    public bool $s3_uploaded = false;
+
     public ?string $postgres_password = null;
 
     public ?string $mongo_root_username = null;
@@ -355,7 +359,6 @@ public function handle(): void
                     // If local backup is disabled, delete the local file immediately after S3 upload
                     if ($this->backup->disable_local_backup) {
                         deleteBackupsLocally($this->backup_location, $this->server);
-                        $this->add_to_backup_output('Local backup file deleted after S3 upload (disable_local_backup enabled).');
                     }
                 }
 
@@ -367,15 +370,34 @@ public function handle(): void
                     'size' => $size,
                 ]);
             } catch (\Throwable $e) {
-                if ($this->backup_log) {
-                    $this->backup_log->update([
-                        'status' => 'failed',
-                        'message' => $this->backup_output,
-                        'size' => $size,
-                        'filename' => null,
-                    ]);
+                // Check if backup actually failed or if it's just a post-backup issue
+                $actualBackupFailed = ! $this->s3_uploaded && $this->backup->save_s3;
+
+                if ($actualBackupFailed || $size === 0) {
+                    // Real backup failure
+                    if ($this->backup_log) {
+                        $this->backup_log->update([
+                            'status' => 'failed',
+                            'message' => $this->error_output ?? $this->backup_output ?? $e->getMessage(),
+                            'size' => $size,
+                            'filename' => null,
+                        ]);
+                    }
+                    $this->team?->notify(new BackupFailed($this->backup, $this->database, $this->error_output ?? $this->backup_output ?? $e->getMessage(), $database));
+                } else {
+                    // Backup succeeded but post-processing failed (cleanup, notification, etc.)
+                    if ($this->backup_log) {
+                        $this->backup_log->update([
+                            'status' => 'success',
+                            'message' => $this->backup_output ? $this->backup_output."\nWarning: Post-backup cleanup encountered an issue: ".$e->getMessage() : 'Warning: '.$e->getMessage(),
+                            'size' => $size,
+                        ]);
+                    }
+                    // Send success notification since the backup itself succeeded
+                    $this->team->notify(new BackupSuccess($this->backup, $this->database, $database));
+                    // Log the post-backup issue
+                    ray('Post-backup operation failed but backup was successful: '.$e->getMessage());
                 }
-                $this->team?->notify(new BackupFailed($this->backup, $this->database, $this->backup_output, $database));
             }
         }
         if ($this->backup_log && $this->backup_log->status === 'success') {
@@ -446,7 +468,7 @@ private function backup_standalone_mongodb(string $databaseWithCollections): void
                 $this->backup_output = null;
             }
         } catch (\Throwable $e) {
-            $this->add_to_backup_output($e->getMessage());
+            $this->add_to_error_output($e->getMessage());
             throw $e;
         }
     }
@@ -472,7 +494,7 @@ private function backup_standalone_postgresql(string $database): void
                 $this->backup_output = null;
             }
         } catch (\Throwable $e) {
-            $this->add_to_backup_output($e->getMessage());
+            $this->add_to_error_output($e->getMessage());
             throw $e;
         }
     }
@@ -492,7 +514,7 @@ private function backup_standalone_mysql(string $database): void
                 $this->backup_output = null;
             }
         } catch (\Throwable $e) {
-            $this->add_to_backup_output($e->getMessage());
+            $this->add_to_error_output($e->getMessage());
             throw $e;
         }
     }
@@ -512,7 +534,7 @@ private function backup_standalone_mariadb(string $database): void
                 $this->backup_output = null;
             }
         } catch (\Throwable $e) {
-            $this->add_to_backup_output($e->getMessage());
+            $this->add_to_error_output($e->getMessage());
             throw $e;
         }
     }
@@ -526,6 +548,15 @@ private function add_to_backup_output($output): void
         }
     }
 
+    private function add_to_error_output($output): void
+    {
+        if ($this->error_output) {
+            $this->error_output = $this->error_output."\n".$output;
+        } else {
+            $this->error_output = $output;
+        }
+    }
+
     private function calculate_size()
     {
         return instant_remote_process(["du -b $this->backup_location | cut -f1"], $this->server, false);
@@ -571,9 +602,10 @@ private function upload_to_s3(): void
             $commands[] = "docker exec backup-of-{$this->backup->uuid} mc cp $this->backup_location temporary/$bucket{$this->backup_dir}/";
             instant_remote_process($commands, $this->server);
-            $this->add_to_backup_output('Uploaded to S3.');
+            $this->s3_uploaded = true;
         } catch (\Throwable $e) {
-            $this->add_to_backup_output($e->getMessage());
+            $this->s3_uploaded = false;
+            $this->add_to_error_output($e->getMessage());
             throw $e;
         } finally {
             $command = "docker rm -f backup-of-{$this->backup->uuid}";
diff --git a/app/Jobs/ScheduledTaskJob.php b/app/Jobs/ScheduledTaskJob.php
index 6c0c017e7..609595356 100644
--- a/app/Jobs/ScheduledTaskJob.php
+++ b/app/Jobs/ScheduledTaskJob.php
@@ -3,6 +3,7 @@
 namespace App\Jobs;
 
 use App\Events\ScheduledTaskDone;
+use App\Exceptions\NonReportableException;
 use App\Models\Application;
 use App\Models\ScheduledTask;
 use App\Models\ScheduledTaskExecution;
@@ -120,7 +121,7 @@ public function handle(): void
             }
 
             // No valid container was found.
-            throw new \Exception('ScheduledTaskJob failed: No valid container was found. Is the container name correct?');
+            throw new NonReportableException('ScheduledTaskJob failed: No valid container was found. Is the container name correct?');
         } catch (\Throwable $e) {
             if ($this->task_log) {
                 $this->task_log->update([
diff --git a/app/Livewire/Project/Application/General.php b/app/Livewire/Project/Application/General.php
index aa72b7c5f..9f15011c2 100644
--- a/app/Livewire/Project/Application/General.php
+++ b/app/Livewire/Project/Application/General.php
@@ -487,7 +487,7 @@ public function checkFqdns($showToaster = true)
         $domains = str($this->application->fqdn)->trim()->explode(',');
         if ($this->application->additional_servers->count() === 0) {
             foreach ($domains as $domain) {
-                if (! validate_dns_entry($domain, $this->application->destination->server)) {
+                if (! validateDNSEntry($domain, $this->application->destination->server)) {
                     $showToaster && $this->dispatch('error', 'Validating DNS failed.', "Make sure you have added the DNS records correctly.<br><br>$domain->{$this->application->destination->server->ip}<br><br>Check this <a target='_blank' class='underline dark:text-white' href='https://coolify.io/docs/knowledge-base/dns-configuration'>documentation</a> for further help.");
                 }
             }
@@ -615,7 +615,7 @@ public function submit($showToaster = true)
                 foreach ($this->parsedServiceDomains as $service) {
                     $domain = data_get($service, 'domain');
                     if ($domain) {
-                        if (! validate_dns_entry($domain, $this->application->destination->server)) {
+                        if (! validateDNSEntry($domain, $this->application->destination->server)) {
                             $showToaster && $this->dispatch('error', 'Validating DNS failed.', "Make sure you have added the DNS records correctly.<br><br>$domain->{$this->application->destination->server->ip}<br><br>Check this <a target='_blank' class='underline dark:text-white' href='https://coolify.io/docs/knowledge-base/dns-configuration'>documentation</a> for further help.");
                         }
                     }
@@ -671,7 +671,7 @@ private function updateServiceEnvironmentVariables()
         $domains = collect(json_decode($this->application->docker_compose_domains, true)) ?? collect([]);
 
         foreach ($domains as $serviceName => $service) {
-            $serviceNameFormatted = str($serviceName)->upper()->replace('-', '_');
+            $serviceNameFormatted = str($serviceName)->upper()->replace('-', '_')->replace('.', '_');
             $domain = data_get($service, 'domain');
             // Delete SERVICE_FQDN_ and SERVICE_URL_ variables if domain is removed
             $this->application->environment_variables()->where('resourceable_type', Application::class)
diff --git a/app/Livewire/Project/Application/Previews.php b/app/Livewire/Project/Application/Previews.php
index ebfd84489..1cb2ef2c5 100644
--- a/app/Livewire/Project/Application/Previews.php
+++ b/app/Livewire/Project/Application/Previews.php
@@ -77,7 +77,7 @@ public function save_preview($preview_id)
             $preview->fqdn = str($preview->fqdn)->replaceEnd(',', '')->trim();
             $preview->fqdn = str($preview->fqdn)->replaceStart(',', '')->trim();
             $preview->fqdn = str($preview->fqdn)->trim()->lower();
-            if (! validate_dns_entry($preview->fqdn, $this->application->destination->server)) {
+            if (! validateDNSEntry($preview->fqdn, $this->application->destination->server)) {
                 $this->dispatch('error', 'Validating DNS failed.', "Make sure you have added the DNS records correctly.<br><br>$preview->fqdn->{$this->application->destination->server->ip}<br><br>Check this <a target='_blank' class='underline dark:text-white' href='https://coolify.io/docs/knowledge-base/dns-configuration'>documentation</a> for further help.");
                 $success = false;
             }
@@ -231,6 +231,18 @@ protected function setDeploymentUuid()
         $this->parameters['deployment_uuid'] = $this->deployment_uuid;
     }
 
+    private function stopContainers(array $containers, $server)
+    {
+        $containersToStop = collect($containers)->pluck('Names')->toArray();
+
+        foreach ($containersToStop as $containerName) {
+            instant_remote_process(command: [
+                "docker stop --time=30 $containerName",
+                "docker rm -f $containerName",
+            ], server: $server, throwError: false);
+        }
+    }
+
     public function stop(int $pull_request_id)
     {
         $this->authorize('deploy', $this->application);
diff --git a/app/Livewire/Project/CloneMe.php b/app/Livewire/Project/CloneMe.php
index be9de139f..a4f50ee06 100644
--- a/app/Livewire/Project/CloneMe.php
+++ b/app/Livewire/Project/CloneMe.php
@@ -2,7 +2,6 @@
 
 namespace App\Livewire\Project;
 
-use App\Actions\Application\StopApplication;
 use App\Actions\Database\StartDatabase;
 use App\Actions\Database\StopDatabase;
 use App\Actions\Service\StartService;
@@ -128,144 +127,10 @@ public function clone(string $type)
         $databases = $this->environment->databases();
         $services = $this->environment->services;
         foreach ($applications as $application) {
-            $applicationSettings = $application->settings;
-
-            $uuid = (string) new Cuid2;
-            $url = $application->fqdn;
-            if ($this->server->proxyType() !== 'NONE' && $applicationSettings->is_container_label_readonly_enabled === true) {
-                $url = generateUrl(server: $this->server, random: $uuid);
-            }
-
-            $newApplication = $application->replicate([
-                'id',
-                'created_at',
-                'updated_at',
-                'additional_servers_count',
-                'additional_networks_count',
-            ])->fill([
-                'uuid' => $uuid,
-                'fqdn' => $url,
-                'status' => 'exited',
+            $selectedDestination = $this->servers->flatMap(fn ($server) => $server->destinations)->where('id', $this->selectedDestination)->first();
+            clone_application($application, $selectedDestination, [
                 'environment_id' => $environment->id,
-                'destination_id' => $this->selectedDestination,
-            ]);
-            $newApplication->save();
-
-            if ($newApplication->destination->server->proxyType() !== 'NONE' && $applicationSettings->is_container_label_readonly_enabled === true) {
-                $customLabels = str(implode('|coolify|', generateLabelsApplication($newApplication)))->replace('|coolify|', "\n");
-                $newApplication->custom_labels = base64_encode($customLabels);
-                $newApplication->save();
-            }
-
-            $newApplication->settings()->delete();
-            if ($applicationSettings) {
-                $newApplicationSettings = $applicationSettings->replicate([
-                    'id',
-                    'created_at',
-                    'updated_at',
-                ])->fill([
-                    'application_id' => $newApplication->id,
-                ]);
-                $newApplicationSettings->save();
-            }
-
-            $tags = $application->tags;
-            foreach ($tags as $tag) {
-                $newApplication->tags()->attach($tag->id);
-            }
-
-            $scheduledTasks = $application->scheduled_tasks()->get();
-            foreach ($scheduledTasks as $task) {
-                $newTask = $task->replicate([
-                    'id',
-                    'created_at',
-                    'updated_at',
-                ])->fill([
-                    'uuid' => (string) new Cuid2,
-                    'application_id' => $newApplication->id,
-                    'team_id' => currentTeam()->id,
-                ]);
-                $newTask->save();
-            }
-
-            $applicationPreviews = $application->previews()->get();
-            foreach ($applicationPreviews as $preview) {
-                $newPreview = $preview->replicate([
-                    'id',
-                    'created_at',
-                    'updated_at',
-                ])->fill([
-                    'application_id' => $newApplication->id,
-                    'status' => 'exited',
-                ]);
-                $newPreview->save();
-            }
-
-            $persistentVolumes = $application->persistentStorages()->get();
-            foreach ($persistentVolumes as $volume) {
-                $newName = '';
-                if (str_starts_with($volume->name, $application->uuid)) {
-                    $newName = str($volume->name)->replace($application->uuid, $newApplication->uuid);
-                } else {
-                    $newName = $newApplication->uuid.'-'.$volume->name;
-                }
-
-                $newPersistentVolume = $volume->replicate([
-                    'id',
-                    'created_at',
-                    'updated_at',
-                ])->fill([
-                    'name' => $newName,
-                    'resource_id' => $newApplication->id,
-                ]);
-                $newPersistentVolume->save();
-
-                if ($this->cloneVolumeData) {
-                    try {
-                        StopApplication::dispatch($application, false, false);
-                        $sourceVolume = $volume->name;
-                        $targetVolume = $newPersistentVolume->name;
-                        $sourceServer = $application->destination->server;
-                        $targetServer = $newApplication->destination->server;
-
-                        VolumeCloneJob::dispatch($sourceVolume, $targetVolume, $sourceServer, $targetServer, $newPersistentVolume);
-
-                        queue_application_deployment(
-                            deployment_uuid: (string) new Cuid2,
-                            application: $application,
-                            server: $sourceServer,
-                            destination: $application->destination,
-                            no_questions_asked: true
-                        );
-                    } catch (\Exception $e) {
-                        \Log::error('Failed to copy volume data for '.$volume->name.': '.$e->getMessage());
-                    }
-                }
-            }
-
-            $fileStorages = $application->fileStorages()->get();
-            foreach ($fileStorages as $storage) {
-                $newStorage = $storage->replicate([
-                    'id',
-                    'created_at',
-                    'updated_at',
-                ])->fill([
-                    'resource_id' => $newApplication->id,
-                ]);
-                $newStorage->save();
-            }
-
-            $environmentVaribles = $application->environment_variables()->get();
-            foreach ($environmentVaribles as $environmentVarible) {
-                $newEnvironmentVariable = $environmentVarible->replicate([
-                    'id',
-                    'created_at',
-                    'updated_at',
-                ])->fill([
-                    'resourceable_id' => $newApplication->id,
-                ]);
-                $newEnvironmentVariable->save();
-            }
+            ], $this->cloneVolumeData);
         }
 
         foreach ($databases as $database) {
diff --git a/app/Livewire/Project/Database/Import.php b/app/Livewire/Project/Database/Import.php
index 3f974f63d..706c6c0cd 100644
--- a/app/Livewire/Project/Database/Import.php
+++ b/app/Livewire/Project/Database/Import.php
@@ -232,8 +232,12 @@ public function runImport()
                 break;
         }
 
-        $restoreCommandBase64 = base64_encode($restoreCommand);
-        $this->importCommands[] = "echo \"{$restoreCommandBase64}\" | base64 -d > {$scriptPath}";
+        $this->importCommands[] = [
+            'transfer_file' => [
+                'content' => $restoreCommand,
+                'destination' => $scriptPath,
+            ],
+        ];
         $this->importCommands[] = "chmod +x {$scriptPath}";
         $this->importCommands[] = "docker cp {$scriptPath} {$this->container}:{$scriptPath}";
diff --git a/app/Livewire/Project/Shared/EnvironmentVariable/All.php b/app/Livewire/Project/Shared/EnvironmentVariable/All.php
index 3631a43c8..141263ba2 100644
--- a/app/Livewire/Project/Shared/EnvironmentVariable/All.php
+++ b/app/Livewire/Project/Shared/EnvironmentVariable/All.php
@@ -257,7 +257,7 @@ private function updateOrCreateVariables($isPreview, $variables)
     {
         $count = 0;
         foreach ($variables as $key => $value) {
-            if (str($key)->startsWith('SERVICE_FQDN') || str($key)->startsWith('SERVICE_URL')) {
+            if (str($key)->startsWith('SERVICE_FQDN') || str($key)->startsWith('SERVICE_URL') || str($key)->startsWith('SERVICE_NAME')) {
                 continue;
             }
             $method = $isPreview ? 'environment_variables_preview' : 'environment_variables';
diff --git a/app/Livewire/Project/Shared/EnvironmentVariable/Show.php b/app/Livewire/Project/Shared/EnvironmentVariable/Show.php
index 1a9daf77b..f8b06bff8 100644
--- a/app/Livewire/Project/Shared/EnvironmentVariable/Show.php
+++ b/app/Livewire/Project/Shared/EnvironmentVariable/Show.php
@@ -128,7 +128,7 @@ public function syncData(bool $toModel = false)
 
     public function checkEnvs()
     {
         $this->isDisabled = false;
-        if (str($this->env->key)->startsWith('SERVICE_FQDN') || str($this->env->key)->startsWith('SERVICE_URL')) {
+        if (str($this->env->key)->startsWith('SERVICE_FQDN') || str($this->env->key)->startsWith('SERVICE_URL') || str($this->env->key)->startsWith('SERVICE_NAME')) {
             $this->isDisabled = true;
         }
         if ($this->env->is_shown_once) {
diff --git a/app/Livewire/Project/Shared/ResourceOperations.php b/app/Livewire/Project/Shared/ResourceOperations.php
index 28a6380d5..47b3534a2 100644
--- a/app/Livewire/Project/Shared/ResourceOperations.php
+++ b/app/Livewire/Project/Shared/ResourceOperations.php
@@ -2,7 +2,6 @@
 
 namespace App\Livewire\Project\Shared;
 
-use App\Actions\Application\StopApplication;
 use App\Actions\Database\StartDatabase;
 use App\Actions\Database\StopDatabase;
 use App\Actions\Service\StartService;
@@ -61,145 +60,7 @@ public function cloneTo($destination_id)
         $server = $new_destination->server;
 
         if ($this->resource->getMorphClass() === \App\Models\Application::class) {
-            $name = 'clone-of-'.str($this->resource->name)->limit(20).'-'.$uuid;
-            $applicationSettings = $this->resource->settings;
-            $url = $this->resource->fqdn;
-
-            if ($server->proxyType() !== 'NONE' && $applicationSettings->is_container_label_readonly_enabled === true) {
-                $url = generateUrl(server: $server, random: $uuid);
-            }
-
-            $new_resource = $this->resource->replicate([
-                'id',
-                'created_at',
-                'updated_at',
-                'additional_servers_count',
-                'additional_networks_count',
-            ])->fill([
-                'uuid' => $uuid,
-                'name' => $name,
-                'fqdn' =>
$url, - 'status' => 'exited', - 'destination_id' => $new_destination->id, - ]); - $new_resource->save(); - - if ($new_resource->destination->server->proxyType() !== 'NONE' && $applicationSettings->is_container_label_readonly_enabled === true) { - $customLabels = str(implode('|coolify|', generateLabelsApplication($new_resource)))->replace('|coolify|', "\n"); - $new_resource->custom_labels = base64_encode($customLabels); - $new_resource->save(); - } - - $new_resource->settings()->delete(); - if ($applicationSettings) { - $newApplicationSettings = $applicationSettings->replicate([ - 'id', - 'created_at', - 'updated_at', - ])->fill([ - 'application_id' => $new_resource->id, - ]); - $newApplicationSettings->save(); - } - - $tags = $this->resource->tags; - foreach ($tags as $tag) { - $new_resource->tags()->attach($tag->id); - } - - $scheduledTasks = $this->resource->scheduled_tasks()->get(); - foreach ($scheduledTasks as $task) { - $newTask = $task->replicate([ - 'id', - 'created_at', - 'updated_at', - ])->fill([ - 'uuid' => (string) new Cuid2, - 'application_id' => $new_resource->id, - 'team_id' => currentTeam()->id, - ]); - $newTask->save(); - } - - $applicationPreviews = $this->resource->previews()->get(); - foreach ($applicationPreviews as $preview) { - $newPreview = $preview->replicate([ - 'id', - 'created_at', - 'updated_at', - ])->fill([ - 'application_id' => $new_resource->id, - 'status' => 'exited', - ]); - $newPreview->save(); - } - - $persistentVolumes = $this->resource->persistentStorages()->get(); - foreach ($persistentVolumes as $volume) { - $newName = ''; - if (str_starts_with($volume->name, $this->resource->uuid)) { - $newName = str($volume->name)->replace($this->resource->uuid, $new_resource->uuid); - } else { - $newName = $new_resource->uuid.'-'.str($volume->name)->afterLast('-'); - } - - $newPersistentVolume = $volume->replicate([ - 'id', - 'created_at', - 'updated_at', - ])->fill([ - 'name' => $newName, - 'resource_id' => $new_resource->id, - ]); - 
$newPersistentVolume->save(); - - if ($this->cloneVolumeData) { - try { - StopApplication::dispatch($this->resource, false, false); - $sourceVolume = $volume->name; - $targetVolume = $newPersistentVolume->name; - $sourceServer = $this->resource->destination->server; - $targetServer = $new_resource->destination->server; - - VolumeCloneJob::dispatch($sourceVolume, $targetVolume, $sourceServer, $targetServer, $newPersistentVolume); - - queue_application_deployment( - deployment_uuid: (string) new Cuid2, - application: $this->resource, - server: $sourceServer, - destination: $this->resource->destination, - no_questions_asked: true - ); - } catch (\Exception $e) { - \Log::error('Failed to copy volume data for '.$volume->name.': '.$e->getMessage()); - } - } - } - - $fileStorages = $this->resource->fileStorages()->get(); - foreach ($fileStorages as $storage) { - $newStorage = $storage->replicate([ - 'id', - 'created_at', - 'updated_at', - ])->fill([ - 'resource_id' => $new_resource->id, - ]); - $newStorage->save(); - } - - $environmentVaribles = $this->resource->environment_variables()->get(); - foreach ($environmentVaribles as $environmentVarible) { - $newEnvironmentVariable = $environmentVarible->replicate([ - 'id', - 'created_at', - 'updated_at', - ])->fill([ - 'resourceable_id' => $new_resource->id, - 'resourceable_type' => $new_resource->getMorphClass(), - ]); - $newEnvironmentVariable->save(); - } + $new_resource = clone_application($this->resource, $new_destination, ['uuid' => $uuid], $this->cloneVolumeData); $route = route('project.application.configuration', [ 'project_uuid' => $this->projectUuid, diff --git a/app/Livewire/Project/Shared/Storages/All.php b/app/Livewire/Project/Shared/Storages/All.php index c26315d3b..63fc06a36 100644 --- a/app/Livewire/Project/Shared/Storages/All.php +++ b/app/Livewire/Project/Shared/Storages/All.php @@ -9,4 +9,15 @@ class All extends Component public $resource; protected $listeners = ['refreshStorages' => '$refresh']; + + public 
function getFirstStorageIdProperty() + { + if ($this->resource->persistentStorages->isEmpty()) { + return null; + } + + // Use the storage with the smallest ID as the "first" one + // This ensures stability even when storages are deleted + return $this->resource->persistentStorages->sortBy('id')->first()->id; + } } diff --git a/app/Livewire/Server/Proxy.php b/app/Livewire/Server/Proxy.php index 49adf7fe6..6ccca644a 100644 --- a/app/Livewire/Server/Proxy.php +++ b/app/Livewire/Server/Proxy.php @@ -2,8 +2,8 @@ namespace App\Livewire\Server; -use App\Actions\Proxy\CheckConfiguration; -use App\Actions\Proxy\SaveConfiguration; +use App\Actions\Proxy\GetProxyConfiguration; +use App\Actions\Proxy\SaveProxyConfiguration; use App\Models\Server; use Illuminate\Foundation\Auth\Access\AuthorizesRequests; use Livewire\Component; @@ -16,11 +16,11 @@ class Proxy extends Component public ?string $selectedProxy = null; - public $proxy_settings = null; + public $proxySettings = null; - public bool $redirect_enabled = true; + public bool $redirectEnabled = true; - public ?string $redirect_url = null; + public ?string $redirectUrl = null; public function getListeners() { @@ -39,14 +39,14 @@ public function getListeners() public function mount() { $this->selectedProxy = $this->server->proxyType(); - $this->redirect_enabled = data_get($this->server, 'proxy.redirect_enabled', true); - $this->redirect_url = data_get($this->server, 'proxy.redirect_url'); + $this->redirectEnabled = data_get($this->server, 'proxy.redirect_enabled', true); + $this->redirectUrl = data_get($this->server, 'proxy.redirect_url'); } - // public function proxyStatusUpdated() - // { - // $this->dispatch('refresh')->self(); - // } + public function getConfigurationFilePathProperty() + { + return $this->server->proxyPath().'/docker-compose.yml'; + } public function changeProxy() { @@ -86,7 +86,7 @@ public function instantSaveRedirect() { try { $this->authorize('update', $this->server); - 
$this->server->proxy->redirect_enabled = $this->redirect_enabled; + $this->server->proxy->redirect_enabled = $this->redirectEnabled; $this->server->save(); $this->server->setupDefaultRedirect(); $this->dispatch('success', 'Proxy configuration saved.'); @@ -99,8 +99,8 @@ public function submit() { try { $this->authorize('update', $this->server); - SaveConfiguration::run($this->server, $this->proxy_settings); - $this->server->proxy->redirect_url = $this->redirect_url; + SaveProxyConfiguration::run($this->server, $this->proxySettings); + $this->server->proxy->redirect_url = $this->redirectUrl; $this->server->save(); $this->server->setupDefaultRedirect(); $this->dispatch('success', 'Proxy configuration saved.'); @@ -109,14 +109,15 @@ public function submit() } } - public function reset_proxy_configuration() + public function resetProxyConfiguration() { try { $this->authorize('update', $this->server); - $this->proxy_settings = CheckConfiguration::run($this->server, true); - SaveConfiguration::run($this->server, $this->proxy_settings); + // Explicitly regenerate default configuration + $this->proxySettings = GetProxyConfiguration::run($this->server, forceRegenerate: true); + SaveProxyConfiguration::run($this->server, $this->proxySettings); $this->server->save(); - $this->dispatch('success', 'Proxy configuration saved.'); + $this->dispatch('success', 'Proxy configuration reset to default.'); } catch (\Throwable $e) { return handleError($e, $this); } @@ -125,7 +126,7 @@ public function reset_proxy_configuration() public function loadProxyConfiguration() { try { - $this->proxy_settings = CheckConfiguration::run($this->server); + $this->proxySettings = GetProxyConfiguration::run($this->server); } catch (\Throwable $e) { return handleError($e, $this); } diff --git a/app/Livewire/Server/Proxy/NewDynamicConfiguration.php b/app/Livewire/Server/Proxy/NewDynamicConfiguration.php index eb2db1cbb..b564e208b 100644 --- a/app/Livewire/Server/Proxy/NewDynamicConfiguration.php +++ 
b/app/Livewire/Server/Proxy/NewDynamicConfiguration.php @@ -78,10 +78,7 @@ public function addDynamicConfiguration() $yaml = Yaml::dump($yaml, 10, 2); $this->value = $yaml; } - $base64_value = base64_encode($this->value); - instant_remote_process([ - "echo '{$base64_value}' | base64 -d | tee {$file} > /dev/null", - ], $this->server); + transfer_file_to_server($this->value, $file, $this->server); if ($proxy_type === 'CADDY') { $this->server->reloadCaddy(); } diff --git a/app/Livewire/Settings/Index.php b/app/Livewire/Settings/Index.php index d05433082..13d690352 100644 --- a/app/Livewire/Settings/Index.php +++ b/app/Livewire/Settings/Index.php @@ -115,7 +115,7 @@ public function submit() $this->validate(); if ($this->settings->is_dns_validation_enabled && $this->fqdn) { - if (! validate_dns_entry($this->fqdn, $this->server)) { + if (! validateDNSEntry($this->fqdn, $this->server)) { $this->dispatch('error', "Validating DNS failed.<br><br>Make sure you have added the DNS records correctly.<br><br>{$this->fqdn}->{$this->server->ip}<br><br>Check this <a target='_blank' class='underline dark:text-white' href='https://coolify.io/docs/knowledge-base/dns-configuration'>documentation</a> for further help."); $error_show = true; } diff --git a/app/Models/Application.php b/app/Models/Application.php index 378161602..4a22a1953 100644 --- a/app/Models/Application.php +++ b/app/Models/Application.php @@ -936,9 +936,9 @@ public function isConfigurationChanged(bool $save = false) { $newConfigHash = base64_encode($this->fqdn.$this->git_repository.$this->git_branch.$this->git_commit_sha.$this->build_pack.$this->static_image.$this->install_command.$this->build_command.$this->start_command.$this->ports_exposes.$this->ports_mappings.$this->base_directory.$this->publish_directory.$this->dockerfile.$this->dockerfile_location.$this->custom_labels.$this->custom_docker_run_options.$this->dockerfile_target_build.$this->redirect.$this->custom_nginx_configuration.$this->custom_labels); if 
($this->pull_request_id === 0 || $this->pull_request_id === null) { - $newConfigHash .= json_encode($this->environment_variables()->get('value')->sort()); + $newConfigHash .= json_encode($this->environment_variables()->get(['value', 'is_build_time', 'is_multiline', 'is_literal'])->sort()); } else { - $newConfigHash .= json_encode($this->environment_variables_preview->get('value')->sort()); + $newConfigHash .= json_encode($this->environment_variables_preview->get(['value', 'is_build_time', 'is_multiline', 'is_literal'])->sort()); } $newConfigHash = md5($newConfigHash); $oldConfigHash = data_get($this, 'config_hash'); @@ -1075,26 +1075,20 @@ public function generateGitLsRemoteCommands(string $deployment_uuid, bool $exec_ if (is_null($private_key)) { throw new RuntimeException('Private key not found. Please add a private key to the application and try again.'); } - $private_key = base64_encode($private_key); $base_comamnd = "GIT_SSH_COMMAND=\"ssh -o ConnectTimeout=30 -p {$customPort} -o Port={$customPort} -o LogLevel=ERROR -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i /root/.ssh/id_rsa\" {$base_command} {$customRepository}"; - if ($exec_in_docker) { - $commands = collect([ - executeInDocker($deployment_uuid, 'mkdir -p /root/.ssh'), - executeInDocker($deployment_uuid, "echo '{$private_key}' | base64 -d | tee /root/.ssh/id_rsa > /dev/null"), - executeInDocker($deployment_uuid, 'chmod 600 /root/.ssh/id_rsa'), - ]); - } else { - $commands = collect([ - 'mkdir -p /root/.ssh', - "echo '{$private_key}' | base64 -d | tee /root/.ssh/id_rsa > /dev/null", - 'chmod 600 /root/.ssh/id_rsa', - ]); - } + $commands = collect([]); if ($exec_in_docker) { + $commands->push(executeInDocker($deployment_uuid, 'mkdir -p /root/.ssh')); + // SSH key transfer handled by ApplicationDeploymentJob, assume key is already in container + $commands->push(executeInDocker($deployment_uuid, 'chmod 600 /root/.ssh/id_rsa')); $commands->push(executeInDocker($deployment_uuid, 
$base_comamnd)); } else { + $server = $this->destination->server; + $commands->push('mkdir -p /root/.ssh'); + transfer_file_to_server($private_key, '/root/.ssh/id_rsa', $server); + $commands->push('chmod 600 /root/.ssh/id_rsa'); $commands->push($base_comamnd); } @@ -1220,7 +1214,6 @@ public function generateGitImportCommands(string $deployment_uuid, int $pull_req if (is_null($private_key)) { throw new RuntimeException('Private key not found. Please add a private key to the application and try again.'); } - $private_key = base64_encode($private_key); $escapedCustomRepository = escapeshellarg($customRepository); $git_clone_command_base = "GIT_SSH_COMMAND=\"ssh -o ConnectTimeout=30 -p {$customPort} -o Port={$customPort} -o LogLevel=ERROR -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i /root/.ssh/id_rsa\" {$git_clone_command} {$escapedCustomRepository} {$escapedBaseDir}"; if ($only_checkout) { @@ -1228,18 +1221,18 @@ public function generateGitImportCommands(string $deployment_uuid, int $pull_req } else { $git_clone_command = $this->setGitImportSettings($deployment_uuid, $git_clone_command_base); } + + $commands = collect([]); + if ($exec_in_docker) { - $commands = collect([ - executeInDocker($deployment_uuid, 'mkdir -p /root/.ssh'), - executeInDocker($deployment_uuid, "echo '{$private_key}' | base64 -d | tee /root/.ssh/id_rsa > /dev/null"), - executeInDocker($deployment_uuid, 'chmod 600 /root/.ssh/id_rsa'), - ]); + $commands->push(executeInDocker($deployment_uuid, 'mkdir -p /root/.ssh')); + // SSH key transfer handled by ApplicationDeploymentJob, assume key is already in container + $commands->push(executeInDocker($deployment_uuid, 'chmod 600 /root/.ssh/id_rsa')); } else { - $commands = collect([ - 'mkdir -p /root/.ssh', - "echo '{$private_key}' | base64 -d | tee /root/.ssh/id_rsa > /dev/null", - 'chmod 600 /root/.ssh/id_rsa', - ]); + $server = $this->destination->server; + $commands->push('mkdir -p /root/.ssh'); + transfer_file_to_server($private_key, 
'/root/.ssh/id_rsa', $server); + $commands->push('chmod 600 /root/.ssh/id_rsa'); } if ($pull_request_id !== 0) { if ($git_type === 'gitlab') { @@ -1481,14 +1474,14 @@ public function loadComposeFile($isInit = false) $json = collect(json_decode($this->docker_compose_domains)); foreach ($json as $key => $value) { if (str($key)->contains('-')) { - $key = str($key)->replace('-', '_'); + $key = str($key)->replace('-', '_')->replace('.', '_'); } $json->put((string) $key, $value); } $services = collect(data_get($parsedServices, 'services', [])); foreach ($services as $name => $service) { if (str($name)->contains('-')) { - $replacedName = str($name)->replace('-', '_'); + $replacedName = str($name)->replace('-', '_')->replace('.', '_'); $services->put((string) $replacedName, $service); $services->forget((string) $name); } diff --git a/app/Models/Kubernetes.php b/app/Models/Kubernetes.php deleted file mode 100644 index 174cb5bc8..000000000 --- a/app/Models/Kubernetes.php +++ /dev/null @@ -1,5 +0,0 @@ -<?php - -namespace App\Models; - -class Kubernetes extends BaseModel {} diff --git a/app/Models/LocalFileVolume.php b/app/Models/LocalFileVolume.php index c56cd7694..b19b6aa42 100644 --- a/app/Models/LocalFileVolume.php +++ b/app/Models/LocalFileVolume.php @@ -119,6 +119,7 @@ public function saveStorageOnServer() $commands = collect([]); if ($this->is_directory) { $commands->push("mkdir -p $this->fs_path > /dev/null 2>&1 || true"); + $commands->push("mkdir -p $workdir > /dev/null 2>&1 || true"); $commands->push("cd $workdir"); } if (str($this->fs_path)->startsWith('.') || str($this->fs_path)->startsWith('/') || str($this->fs_path)->startsWith('~')) { @@ -158,8 +159,7 @@ public function saveStorageOnServer() $chmod = data_get($this, 'chmod'); $chown = data_get($this, 'chown'); if ($content) { - $content = base64_encode($content); - $commands->push("echo '$content' | base64 -d | tee $path > /dev/null"); + transfer_file_to_server($content, $path, $server); } else { 
$commands->push("touch $path"); } @@ -174,7 +174,9 @@ public function saveStorageOnServer() $commands->push("mkdir -p $path > /dev/null 2>&1 || true"); } - return instant_remote_process($commands, $server); + if ($commands->count() > 0) { + return instant_remote_process($commands, $server); + } } // Accessor for convenient access diff --git a/app/Models/PrivateKey.php b/app/Models/PrivateKey.php index f70f32bc4..c210f3c5b 100644 --- a/app/Models/PrivateKey.php +++ b/app/Models/PrivateKey.php @@ -4,6 +4,7 @@ use App\Traits\HasSafeStringAttribute; use DanHarrin\LivewireRateLimiting\WithRateLimiting; +use Illuminate\Support\Facades\DB; use Illuminate\Support\Facades\Storage; use Illuminate\Validation\ValidationException; use OpenApi\Attributes as OA; @@ -99,11 +100,18 @@ public static function validatePrivateKey($privateKey) public static function createAndStore(array $data) { - $privateKey = new self($data); - $privateKey->save(); - $privateKey->storeInFileSystem(); + return DB::transaction(function () use ($data) { + $privateKey = new self($data); + $privateKey->save(); - return $privateKey; + try { + $privateKey->storeInFileSystem(); + } catch (\Exception $e) { + throw new \Exception('Failed to store SSH key: '.$e->getMessage()); + } + + return $privateKey; + }); } public static function generateNewKeyPair($type = 'rsa') @@ -151,15 +159,64 @@ public static function validateAndExtractPublicKey($privateKey) public function storeInFileSystem() { $filename = "ssh_key@{$this->uuid}"; - Storage::disk('ssh-keys')->put($filename, $this->private_key); + $disk = Storage::disk('ssh-keys'); - return "/var/www/html/storage/app/ssh/keys/{$filename}"; + // Ensure the storage directory exists and is writable + $this->ensureStorageDirectoryExists(); + + // Attempt to store the private key + $success = $disk->put($filename, $this->private_key); + + if (! $success) { + throw new \Exception("Failed to write SSH key to filesystem. 
Check disk space and permissions for: {$this->getKeyLocation()}"); + } + + // Verify the file was actually created and has content + if (! $disk->exists($filename)) { + throw new \Exception("SSH key file was not created: {$this->getKeyLocation()}"); + } + + $storedContent = $disk->get($filename); + if (empty($storedContent) || $storedContent !== $this->private_key) { + $disk->delete($filename); // Clean up the bad file + throw new \Exception("SSH key file content verification failed: {$this->getKeyLocation()}"); + } + + return $this->getKeyLocation(); } public static function deleteFromStorage(self $privateKey) { $filename = "ssh_key@{$privateKey->uuid}"; - Storage::disk('ssh-keys')->delete($filename); + $disk = Storage::disk('ssh-keys'); + + if ($disk->exists($filename)) { + $disk->delete($filename); + } + } + + protected function ensureStorageDirectoryExists() + { + $disk = Storage::disk('ssh-keys'); + $directoryPath = ''; + + if (! $disk->exists($directoryPath)) { + $success = $disk->makeDirectory($directoryPath); + if (! $success) { + throw new \Exception('Failed to create SSH keys storage directory'); + } + } + + // Check if directory is writable by attempting a test file + $testFilename = '.test_write_'.uniqid(); + $testSuccess = $disk->put($testFilename, 'test'); + + if (! 
$testSuccess) { + throw new \Exception('SSH keys storage directory is not writable'); + } + + // Clean up test file + $disk->delete($testFilename); } public function getKeyLocation() @@ -169,10 +226,17 @@ public function getKeyLocation() public function updatePrivateKey(array $data) { - $this->update($data); - $this->storeInFileSystem(); + return DB::transaction(function () use ($data) { + $this->update($data); - return $this; + try { + $this->storeInFileSystem(); + } catch (\Exception $e) { + throw new \Exception('Failed to update SSH key: '.$e->getMessage()); + } + + return $this; + }); } public function servers() diff --git a/app/Models/Server.php b/app/Models/Server.php index 0f92bd390..b417cea49 100644 --- a/app/Models/Server.php +++ b/app/Models/Server.php @@ -309,10 +309,7 @@ public function setupDefaultRedirect() $conf = Yaml::dump($dynamic_conf, 12, 2); } $conf = $banner.$conf; - $base64 = base64_encode($conf); - instant_remote_process([ - "echo '$base64' | base64 -d | tee $default_redirect_file > /dev/null", - ], $this); + transfer_file_to_server($conf, $default_redirect_file, $this); } if ($proxy_type === 'CADDY') { @@ -446,11 +443,10 @@ public function setupDynamicProxyConfiguration() "# Do not edit it manually (only if you know what are you doing).\n\n". 
$yaml; - $base64 = base64_encode($yaml); instant_remote_process([ "mkdir -p $dynamic_config_path", - "echo '$base64' | base64 -d | tee $file > /dev/null", ], $this); + transfer_file_to_server($yaml, $file, $this); } } elseif ($this->proxyType() === 'CADDY') { $file = "$dynamic_config_path/coolify.caddy"; @@ -473,10 +469,7 @@ public function setupDynamicProxyConfiguration() } reverse_proxy coolify:8080 }"; - $base64 = base64_encode($caddy_file); - instant_remote_process([ - "echo '$base64' | base64 -d | tee $file > /dev/null", - ], $this); + transfer_file_to_server($caddy_file, $file, $this); $this->reloadCaddy(); } } @@ -1319,7 +1312,6 @@ private function disableSshMux(): void public function generateCaCertificate() { try { - ray('Generating CA certificate for server', $this->id); SslHelper::generateSslCertificate( commonName: 'Coolify CA Certificate', serverId: $this->id, @@ -1327,7 +1319,6 @@ public function generateCaCertificate() validityDays: 10 * 365 ); $caCertificate = SslCertificate::where('server_id', $this->id)->where('is_ca_certificate', true)->first(); - ray('CA certificate generated', $caCertificate); if ($caCertificate) { $certificateContent = $caCertificate->ssl_certificate; $caCertPath = config('constants.coolify.base_config_path').'/ssl/'; diff --git a/app/Models/Service.php b/app/Models/Service.php index 43cb32d85..bd185b355 100644 --- a/app/Models/Service.php +++ b/app/Models/Service.php @@ -1281,8 +1281,10 @@ public function saveComposeConfigs() if ($envs->count() === 0) { $commands[] = 'touch .env'; } else { - $envs_base64 = base64_encode($envs->implode("\n")); - $commands[] = "echo '$envs_base64' | base64 -d | tee .env > /dev/null"; + $envs_content = $envs->implode("\n"); + transfer_file_to_server($envs_content, $this->workdir().'/.env', $this->server); + + return; } instant_remote_process($commands, $this->server); diff --git a/app/Models/StandaloneClickhouse.php b/app/Models/StandaloneClickhouse.php index 60a750a99..88142066f 100644 --- 
a/app/Models/StandaloneClickhouse.php +++ b/app/Models/StandaloneClickhouse.php @@ -28,7 +28,6 @@ protected static function booted() 'host_path' => null, 'resource_id' => $database->id, 'resource_type' => $database->getMorphClass(), - 'is_readonly' => true, ]); }); static::forceDeleting(function ($database) { diff --git a/app/Models/StandaloneDragonfly.php b/app/Models/StandaloneDragonfly.php index 673851713..b7d22a2ce 100644 --- a/app/Models/StandaloneDragonfly.php +++ b/app/Models/StandaloneDragonfly.php @@ -28,7 +28,6 @@ protected static function booted() 'host_path' => null, 'resource_id' => $database->id, 'resource_type' => $database->getMorphClass(), - 'is_readonly' => true, ]); }); static::forceDeleting(function ($database) { diff --git a/app/Models/StandaloneKeydb.php b/app/Models/StandaloneKeydb.php index e6562193b..807728a36 100644 --- a/app/Models/StandaloneKeydb.php +++ b/app/Models/StandaloneKeydb.php @@ -28,7 +28,6 @@ protected static function booted() 'host_path' => null, 'resource_id' => $database->id, 'resource_type' => $database->getMorphClass(), - 'is_readonly' => true, ]); }); static::forceDeleting(function ($database) { diff --git a/app/Models/StandaloneMariadb.php b/app/Models/StandaloneMariadb.php index 1aa9d63c1..8d602c27d 100644 --- a/app/Models/StandaloneMariadb.php +++ b/app/Models/StandaloneMariadb.php @@ -29,7 +29,6 @@ protected static function booted() 'host_path' => null, 'resource_id' => $database->id, 'resource_type' => $database->getMorphClass(), - 'is_readonly' => true, ]); }); static::forceDeleting(function ($database) { diff --git a/app/Models/StandaloneMongodb.php b/app/Models/StandaloneMongodb.php index 299ea75b2..f222b0e5c 100644 --- a/app/Models/StandaloneMongodb.php +++ b/app/Models/StandaloneMongodb.php @@ -24,7 +24,6 @@ protected static function booted() 'host_path' => null, 'resource_id' => $database->id, 'resource_type' => $database->getMorphClass(), - 'is_readonly' => true, ]); LocalPersistentVolume::create([ 'name' => 
'mongodb-db-'.$database->uuid, @@ -32,7 +31,6 @@ protected static function booted() 'host_path' => null, 'resource_id' => $database->id, 'resource_type' => $database->getMorphClass(), - 'is_readonly' => true, ]); }); static::forceDeleting(function ($database) { diff --git a/app/Models/StandaloneMysql.php b/app/Models/StandaloneMysql.php index f376c7644..e4693c76a 100644 --- a/app/Models/StandaloneMysql.php +++ b/app/Models/StandaloneMysql.php @@ -29,7 +29,6 @@ protected static function booted() 'host_path' => null, 'resource_id' => $database->id, 'resource_type' => $database->getMorphClass(), - 'is_readonly' => true, ]); }); static::forceDeleting(function ($database) { diff --git a/app/Models/StandalonePostgresql.php b/app/Models/StandalonePostgresql.php index 0bca2f4a7..47c984ff7 100644 --- a/app/Models/StandalonePostgresql.php +++ b/app/Models/StandalonePostgresql.php @@ -29,7 +29,6 @@ protected static function booted() 'host_path' => null, 'resource_id' => $database->id, 'resource_type' => $database->getMorphClass(), - 'is_readonly' => true, ]); }); static::forceDeleting(function ($database) { diff --git a/app/Models/StandaloneRedis.php b/app/Models/StandaloneRedis.php index 6a44ee714..79c6572ab 100644 --- a/app/Models/StandaloneRedis.php +++ b/app/Models/StandaloneRedis.php @@ -24,7 +24,6 @@ protected static function booted() 'host_path' => null, 'resource_id' => $database->id, 'resource_type' => $database->getMorphClass(), - 'is_readonly' => true, ]); }); static::forceDeleting(function ($database) { diff --git a/app/Models/Webhook.php b/app/Models/Webhook.php deleted file mode 100644 index 8e2b62955..000000000 --- a/app/Models/Webhook.php +++ /dev/null @@ -1,15 +0,0 @@ -<?php - -namespace App\Models; - -use Illuminate\Database\Eloquent\Model; - -class Webhook extends Model -{ - protected $guarded = []; - - protected $casts = [ - 'type' => 'string', - 'payload' => 'encrypted', - ]; -} diff --git a/app/Notifications/Channels/EmailChannel.php 
b/app/Notifications/Channels/EmailChannel.php index 47994c690..245bd85f0 100644 --- a/app/Notifications/Channels/EmailChannel.php +++ b/app/Notifications/Channels/EmailChannel.php @@ -2,6 +2,7 @@ namespace App\Notifications\Channels; +use App\Exceptions\NonReportableException; use App\Models\Team; use Exception; use Illuminate\Notifications\Notification; @@ -101,13 +102,11 @@ public function send(SendsEmail $notifiable, Notification $notification): void $mailer->send($email); } } catch (\Throwable $e) { - \Illuminate\Support\Facades\Log::error('EmailChannel failed: '.$e->getMessage(), [ - 'notification' => get_class($notification), - 'notifiable' => get_class($notifiable), - 'team_id' => data_get($notifiable, 'id'), - 'error' => $e->getMessage(), - 'trace' => $e->getTraceAsString(), - ]); + // Check if this is a Resend domain verification error on cloud instances + if (isCloud() && str_contains($e->getMessage(), 'domain is not verified')) { + // Throw as NonReportableException so it won't go to Sentry + throw NonReportableException::fromException($e); + } throw $e; } } diff --git a/app/Traits/EnvironmentVariableProtection.php b/app/Traits/EnvironmentVariableProtection.php index b6b8d2687..ecc484966 100644 --- a/app/Traits/EnvironmentVariableProtection.php +++ b/app/Traits/EnvironmentVariableProtection.php @@ -14,7 +14,7 @@ trait EnvironmentVariableProtection */ protected function isProtectedEnvironmentVariable(string $key): bool { - return str($key)->startsWith('SERVICE_FQDN') || str($key)->startsWith('SERVICE_URL'); + return str($key)->startsWith('SERVICE_FQDN_') || str($key)->startsWith('SERVICE_URL_') || str($key)->startsWith('SERVICE_NAME_'); } /** diff --git a/app/Traits/ExecuteRemoteCommand.php b/app/Traits/ExecuteRemoteCommand.php index a228a5d10..0e7961368 100644 --- a/app/Traits/ExecuteRemoteCommand.php +++ b/app/Traits/ExecuteRemoteCommand.php @@ -11,6 +11,8 @@ trait ExecuteRemoteCommand { + use SshRetryable; + public ?string $save = null; public static 
int $batch_counter = 0; @@ -43,76 +45,169 @@ public function execute_remote_command(...$commands) $command = parseLineForSudo($command, $this->server); } } - $remote_command = SshMultiplexingHelper::generateSshCommand($this->server, $command); - $process = Process::timeout(3600)->idleTimeout(3600)->start($remote_command, function (string $type, string $output) use ($command, $hidden, $customType, $append) { - $output = str($output)->trim(); - if ($output->startsWith('╔')) { - $output = "\n".$output; - } - // Sanitize output to ensure valid UTF-8 encoding before JSON encoding - $sanitized_output = sanitize_utf8_text($output); - - $new_log_entry = [ - 'command' => remove_iip($command), - 'output' => remove_iip($sanitized_output), - 'type' => $customType ?? $type === 'err' ? 'stderr' : 'stdout', - 'timestamp' => Carbon::now('UTC'), - 'hidden' => $hidden, - 'batch' => static::$batch_counter, - ]; - if (! $this->application_deployment_queue->logs) { - $new_log_entry['order'] = 1; - } else { - try { - $previous_logs = json_decode($this->application_deployment_queue->logs, associative: true, flags: JSON_THROW_ON_ERROR); - } catch (\JsonException $e) { - // If existing logs are corrupted, start fresh - $previous_logs = []; - $new_log_entry['order'] = 1; - } - if (is_array($previous_logs)) { - $new_log_entry['order'] = count($previous_logs) + 1; - } else { - $previous_logs = []; - $new_log_entry['order'] = 1; - } - } - $previous_logs[] = $new_log_entry; + $maxRetries = config('constants.ssh.max_retries'); + $attempt = 0; + $lastError = null; + $commandExecuted = false; + while ($attempt < $maxRetries && ! 
$commandExecuted) { try { - $this->application_deployment_queue->logs = json_encode($previous_logs, flags: JSON_THROW_ON_ERROR); - } catch (\JsonException $e) { - // If JSON encoding still fails, use fallback with invalid sequences replacement - $this->application_deployment_queue->logs = json_encode($previous_logs, flags: JSON_INVALID_UTF8_SUBSTITUTE); - } + $this->executeCommandWithProcess($command, $hidden, $customType, $append, $ignore_errors); + $commandExecuted = true; + } catch (\RuntimeException $e) { + $lastError = $e; + $errorMessage = $e->getMessage(); + // Only retry if it's an SSH connection error and we haven't exhausted retries + if ($this->isRetryableSshError($errorMessage) && $attempt < $maxRetries - 1) { + $attempt++; + $delay = $this->calculateRetryDelay($attempt - 1); - $this->application_deployment_queue->save(); + // Track SSH retry event in Sentry + $this->trackSshRetryEvent($attempt, $maxRetries, $delay, $errorMessage, [ + 'server' => $this->server->name ?? $this->server->ip ?? 'unknown', + 'command' => remove_iip($command), + 'trait' => 'ExecuteRemoteCommand', + ]); - if ($this->save) { - if (data_get($this->saved_outputs, $this->save, null) === null) { - data_set($this->saved_outputs, $this->save, str()); - } - if ($append) { - $this->saved_outputs[$this->save] .= str($sanitized_output)->trim(); - $this->saved_outputs[$this->save] = str($this->saved_outputs[$this->save]); + // Add log entry for the retry + if (isset($this->application_deployment_queue)) { + $this->addRetryLogEntry($attempt, $maxRetries, $delay, $errorMessage); + } + + sleep($delay); } else { - $this->saved_outputs[$this->save] = str($sanitized_output)->trim(); + // Not retryable or max retries reached + throw $e; } } - }); - $this->application_deployment_queue->update([ - 'current_process_id' => $process->id(), - ]); + } - $process_result = $process->wait(); - if ($process_result->exitCode() !== 0) { - if (! 
$ignore_errors) { - $this->application_deployment_queue->status = ApplicationDeploymentStatus::FAILED->value; - $this->application_deployment_queue->save(); - throw new \RuntimeException($process_result->errorOutput()); - } + // If we exhausted all retries and still failed + if (! $commandExecuted && $lastError) { + throw $lastError; } }); } + + /** + * Execute the actual command with process handling + */ + private function executeCommandWithProcess($command, $hidden, $customType, $append, $ignore_errors) + { + $remote_command = SshMultiplexingHelper::generateSshCommand($this->server, $command); + $process = Process::timeout(3600)->idleTimeout(3600)->start($remote_command, function (string $type, string $output) use ($command, $hidden, $customType, $append) { + $output = str($output)->trim(); + if ($output->startsWith('╔')) { + $output = "\n".$output; + } + + // Sanitize output to ensure valid UTF-8 encoding before JSON encoding + $sanitized_output = sanitize_utf8_text($output); + + $new_log_entry = [ + 'command' => remove_iip($command), + 'output' => remove_iip($sanitized_output), + 'type' => $customType ?? $type === 'err' ? 'stderr' : 'stdout', + 'timestamp' => Carbon::now('UTC'), + 'hidden' => $hidden, + 'batch' => static::$batch_counter, + ]; + if (! 
$this->application_deployment_queue->logs) { + $new_log_entry['order'] = 1; + } else { + try { + $previous_logs = json_decode($this->application_deployment_queue->logs, associative: true, flags: JSON_THROW_ON_ERROR); + } catch (\JsonException $e) { + // If existing logs are corrupted, start fresh + $previous_logs = []; + $new_log_entry['order'] = 1; + } + if (is_array($previous_logs)) { + $new_log_entry['order'] = count($previous_logs) + 1; + } else { + $previous_logs = []; + $new_log_entry['order'] = 1; + } + } + $previous_logs[] = $new_log_entry; + + try { + $this->application_deployment_queue->logs = json_encode($previous_logs, flags: JSON_THROW_ON_ERROR); + } catch (\JsonException $e) { + // If JSON encoding still fails, use fallback with invalid sequences replacement + $this->application_deployment_queue->logs = json_encode($previous_logs, flags: JSON_INVALID_UTF8_SUBSTITUTE); + } + + $this->application_deployment_queue->save(); + + if ($this->save) { + if (data_get($this->saved_outputs, $this->save, null) === null) { + data_set($this->saved_outputs, $this->save, str()); + } + if ($append) { + $this->saved_outputs[$this->save] .= str($sanitized_output)->trim(); + $this->saved_outputs[$this->save] = str($this->saved_outputs[$this->save]); + } else { + $this->saved_outputs[$this->save] = str($sanitized_output)->trim(); + } + } + }); + $this->application_deployment_queue->update([ + 'current_process_id' => $process->id(), + ]); + + $process_result = $process->wait(); + if ($process_result->exitCode() !== 0) { + if (! $ignore_errors) { + $this->application_deployment_queue->status = ApplicationDeploymentStatus::FAILED->value; + $this->application_deployment_queue->save(); + throw new \RuntimeException($process_result->errorOutput()); + } + } + } + + /** + * Add a log entry for SSH retry attempts + */ + private function addRetryLogEntry(int $attempt, int $maxRetries, int $delay, string $errorMessage) + { + $retryMessage = "SSH connection failed. Retrying... 
(Attempt {$attempt}/{$maxRetries}, waiting {$delay}s)\nError: {$errorMessage}"; + + $new_log_entry = [ + 'output' => remove_iip($retryMessage), + 'type' => 'stdout', + 'timestamp' => Carbon::now('UTC'), + 'hidden' => false, + 'batch' => static::$batch_counter, + ]; + + if (! $this->application_deployment_queue->logs) { + $new_log_entry['order'] = 1; + $previous_logs = []; + } else { + try { + $previous_logs = json_decode($this->application_deployment_queue->logs, associative: true, flags: JSON_THROW_ON_ERROR); + } catch (\JsonException $e) { + $previous_logs = []; + $new_log_entry['order'] = 1; + } + if (is_array($previous_logs)) { + $new_log_entry['order'] = count($previous_logs) + 1; + } else { + $previous_logs = []; + $new_log_entry['order'] = 1; + } + } + + $previous_logs[] = $new_log_entry; + + try { + $this->application_deployment_queue->logs = json_encode($previous_logs, flags: JSON_THROW_ON_ERROR); + } catch (\JsonException $e) { + $this->application_deployment_queue->logs = json_encode($previous_logs, flags: JSON_INVALID_UTF8_SUBSTITUTE); + } + + $this->application_deployment_queue->save(); + } } diff --git a/app/Traits/SshRetryable.php b/app/Traits/SshRetryable.php new file mode 100644 index 000000000..a26481056 --- /dev/null +++ b/app/Traits/SshRetryable.php @@ -0,0 +1,174 @@ +<?php + +namespace App\Traits; + +use Illuminate\Support\Facades\Log; + +trait SshRetryable +{ + /** + * Check if an error message indicates a retryable SSH connection error + */ + protected function isRetryableSshError(string $errorOutput): bool + { + $retryablePatterns = [ + 'kex_exchange_identification', + 'Connection reset by peer', + 'Connection refused', + 'Connection timed out', + 'Connection closed by remote host', + 'ssh_exchange_identification', + 'Bad file descriptor', + 'Broken pipe', + 'No route to host', + 'Network is unreachable', + 'Host is down', + 'No buffer space available', + 'Connection reset by', + 'Permission denied, please try again', + 'Received disconnect 
from', + 'Disconnected from', + 'Connection to .* closed', + 'ssh: connect to host .* port .*: Connection', + 'Lost connection', + 'Timeout, server not responding', + 'Cannot assign requested address', + 'Network is down', + 'Host key verification failed', + 'Operation timed out', + 'Connection closed unexpectedly', + 'Remote host closed connection', + 'Authentication failed', + 'Too many authentication failures', + ]; + + $lowerErrorOutput = strtolower($errorOutput); + foreach ($retryablePatterns as $pattern) { + if (str_contains($lowerErrorOutput, strtolower($pattern))) { + return true; + } + } + + return false; + } + + /** + * Calculate delay for exponential backoff + */ + protected function calculateRetryDelay(int $attempt): int + { + $baseDelay = config('constants.ssh.retry_base_delay'); + $maxDelay = config('constants.ssh.retry_max_delay'); + $multiplier = config('constants.ssh.retry_multiplier'); + + $delay = min($baseDelay * pow($multiplier, $attempt), $maxDelay); + + return (int) $delay; + } + + /** + * Execute a callback with SSH retry logic + * + * @param callable $callback The operation to execute + * @param array $context Context for logging (server, command, etc.) 
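The retry machinery this trait introduces — case-insensitive substring matching against a list of retryable SSH error patterns, exponential backoff capped at a maximum delay, and a retry loop that re-raises non-retryable errors immediately — can be sketched as follows. This is an illustrative Python reduction, not the PR's code: the pattern list is abbreviated, and `base_delay`, `multiplier`, and `max_delay` stand in for the `config('constants.ssh.*')` values the PHP reads; the injectable `sleep` parameter is added here purely for testability.

```python
import time

# Abbreviated stand-in for the $retryablePatterns list above.
RETRYABLE_PATTERNS = [
    "kex_exchange_identification",
    "connection reset by peer",
    "connection refused",
    "connection timed out",
    "broken pipe",
]

def is_retryable_ssh_error(error_output: str) -> bool:
    # Case-insensitive substring match, mirroring isRetryableSshError.
    lowered = error_output.lower()
    return any(p in lowered for p in RETRYABLE_PATTERNS)

def calculate_retry_delay(attempt: int, base_delay: int = 2,
                          multiplier: float = 2.0, max_delay: int = 30) -> int:
    # Exponential backoff capped at max_delay; attempt is zero-based.
    # The PHP pulls these three knobs from config('constants.ssh.*').
    return int(min(base_delay * (multiplier ** attempt), max_delay))

def execute_with_retry(callback, max_retries: int = 3, sleep=time.sleep):
    # Simplified shape of executeWithSshRetry: retry only retryable errors,
    # back off between attempts, and re-raise the last error otherwise.
    last_error = None
    for attempt in range(max_retries):
        try:
            return callback()
        except RuntimeError as e:
            last_error = e
            if is_retryable_ssh_error(str(e)) and attempt < max_retries - 1:
                sleep(calculate_retry_delay(attempt))
                continue
            break
    raise last_error
```

Note how a non-retryable error (e.g. a genuine command failure) escapes on the first attempt, while transient connection errors consume up to `max_retries` attempts with growing delays.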
+ * @param bool $throwError Whether to throw error on final failure + * @return mixed The result from the callback + */ + protected function executeWithSshRetry(callable $callback, array $context = [], bool $throwError = true) + { + $maxRetries = config('constants.ssh.max_retries'); + $lastError = null; + $lastErrorMessage = ''; + // Randomly fail the command with a key exchange error for testing + // if (random_int(1, 10) === 1) { // 10% chance to fail + // ray('SSH key exchange failed: kex_exchange_identification: read: Connection reset by peer'); + // throw new \RuntimeException('SSH key exchange failed: kex_exchange_identification: read: Connection reset by peer'); + // } + for ($attempt = 0; $attempt < $maxRetries; $attempt++) { + try { + return $callback(); + } catch (\Throwable $e) { + $lastError = $e; + $lastErrorMessage = $e->getMessage(); + + // Check if it's retryable and not the last attempt + if ($this->isRetryableSshError($lastErrorMessage) && $attempt < $maxRetries - 1) { + $delay = $this->calculateRetryDelay($attempt); + + // Track SSH retry event in Sentry + $this->trackSshRetryEvent($attempt + 1, $maxRetries, $delay, $lastErrorMessage, $context); + + // Add deployment log if available (for ExecuteRemoteCommand trait) + if (isset($this->application_deployment_queue) && method_exists($this, 'addRetryLogEntry')) { + $this->addRetryLogEntry($attempt + 1, $maxRetries, $delay, $lastErrorMessage); + } + + sleep($delay); + + continue; + } + + // Not retryable or max retries reached + break; + } + } + + // All retries exhausted + if ($attempt >= $maxRetries) { + Log::error('SSH operation failed after all retries', array_merge($context, [ + 'attempts' => $attempt, + 'error' => $lastErrorMessage, + ])); + } + + if ($throwError && $lastError) { + // If the error message is empty, provide a more meaningful one + if (empty($lastErrorMessage) || trim($lastErrorMessage) === '') { + $contextInfo = isset($context['server']) ? 
" to server {$context['server']}" : ''; + $attemptInfo = $attempt > 1 ? " after {$attempt} attempts" : ''; + throw new \RuntimeException("SSH connection failed{$contextInfo}{$attemptInfo}", $lastError->getCode()); + } + throw $lastError; + } + + return null; + } + + /** + * Track SSH retry event in Sentry + */ + protected function trackSshRetryEvent(int $attempt, int $maxRetries, int $delay, string $errorMessage, array $context = []): void + { + // Only track in production/cloud instances + if (isDev() || ! config('constants.sentry.sentry_dsn')) { + return; + } + + try { + app('sentry')->captureMessage( + 'SSH connection retry triggered', + \Sentry\Severity::warning(), + [ + 'extra' => [ + 'attempt' => $attempt, + 'max_retries' => $maxRetries, + 'delay_seconds' => $delay, + 'error_message' => $errorMessage, + 'context' => $context, + 'retryable_error' => true, + ], + 'tags' => [ + 'component' => 'ssh_retry', + 'error_type' => 'connection_retry', + ], + ] + ); + } catch (\Throwable $e) { + // Don't let Sentry tracking errors break the SSH retry flow + Log::warning('Failed to track SSH retry event in Sentry', [ + 'error' => $e->getMessage(), + 'original_attempt' => $attempt, + ]); + } + } +} diff --git a/bootstrap/helpers/applications.php b/bootstrap/helpers/applications.php index 919b2bde5..db7767c1e 100644 --- a/bootstrap/helpers/applications.php +++ b/bootstrap/helpers/applications.php @@ -1,12 +1,15 @@ <?php +use App\Actions\Application\StopApplication; use App\Enums\ApplicationDeploymentStatus; use App\Jobs\ApplicationDeploymentJob; +use App\Jobs\VolumeCloneJob; use App\Models\Application; use App\Models\ApplicationDeploymentQueue; use App\Models\Server; use App\Models\StandaloneDocker; use Spatie\Url\Url; +use Visus\Cuid2\Cuid2; function queue_application_deployment(Application $application, string $deployment_uuid, ?int $pull_request_id = 0, string $commit = 'HEAD', bool $force_rebuild = false, bool $is_webhook = false, bool $is_api = false, bool $restart_only 
= false, ?string $git_type = null, bool $no_questions_asked = false, ?Server $server = null, ?StandaloneDocker $destination = null, bool $only_this_server = false, bool $rollback = false) { @@ -68,7 +71,7 @@ function queue_application_deployment(Application $application, string $deployme ApplicationDeploymentJob::dispatch( application_deployment_queue_id: $deployment->id, ); - } elseif (next_queuable($server_id, $application_id, $commit)) { + } elseif (next_queuable($server_id, $application_id, $commit, $pull_request_id)) { ApplicationDeploymentJob::dispatch( application_deployment_queue_id: $deployment->id, ); @@ -93,32 +96,31 @@ function force_start_deployment(ApplicationDeploymentQueue $deployment) function queue_next_deployment(Application $application) { $server_id = $application->destination->server_id; - $next_found = ApplicationDeploymentQueue::where('server_id', $server_id)->where('status', ApplicationDeploymentStatus::QUEUED)->get()->sortBy('created_at')->first(); - if ($next_found) { - $next_found->update([ - 'status' => ApplicationDeploymentStatus::IN_PROGRESS->value, - ]); + $queued_deployments = ApplicationDeploymentQueue::where('server_id', $server_id) + ->where('status', ApplicationDeploymentStatus::QUEUED) + ->get() + ->sortBy('created_at'); - ApplicationDeploymentJob::dispatch( - application_deployment_queue_id: $next_found->id, - ); + foreach ($queued_deployments as $next_deployment) { + // Check if this queued deployment can actually run + if (next_queuable($next_deployment->server_id, $next_deployment->application_id, $next_deployment->commit, $next_deployment->pull_request_id)) { + $next_deployment->update([ + 'status' => ApplicationDeploymentStatus::IN_PROGRESS->value, + ]); + + ApplicationDeploymentJob::dispatch( + application_deployment_queue_id: $next_deployment->id, + ); + } } } -function next_queuable(string $server_id, string $application_id, string $commit = 'HEAD'): bool +function next_queuable(string $server_id, string 
$application_id, string $commit = 'HEAD', int $pull_request_id = 0): bool { - // Check if there's already a deployment in progress for this application and commit - $existing_deployment = ApplicationDeploymentQueue::where('application_id', $application_id) - ->where('commit', $commit) - ->where('status', ApplicationDeploymentStatus::IN_PROGRESS->value) - ->first(); - - if ($existing_deployment) { - return false; - } - - // Check if there's any deployment in progress for this application + // Check if there's already a deployment in progress for this application with the same pull_request_id + // This allows normal deployments and PR deployments to run concurrently $in_progress = ApplicationDeploymentQueue::where('application_id', $application_id) + ->where('pull_request_id', $pull_request_id) ->where('status', ApplicationDeploymentStatus::IN_PROGRESS->value) ->exists(); @@ -142,13 +144,15 @@ function next_queuable(string $server_id, string $application_id, string $commit function next_after_cancel(?Server $server = null) { if ($server) { - $next_found = ApplicationDeploymentQueue::where('server_id', data_get($server, 'id'))->where('status', ApplicationDeploymentStatus::QUEUED)->get()->sortBy('created_at'); + $next_found = ApplicationDeploymentQueue::where('server_id', data_get($server, 'id')) + ->where('status', ApplicationDeploymentStatus::QUEUED) + ->get() + ->sortBy('created_at'); + if ($next_found->count() > 0) { foreach ($next_found as $next) { - $server = Server::find($next->server_id); - $concurrent_builds = $server->settings->concurrent_builds; - $inprogress_deployments = ApplicationDeploymentQueue::where('server_id', $next->server_id)->whereIn('status', [ApplicationDeploymentStatus::QUEUED])->get()->sortByDesc('created_at'); - if ($inprogress_deployments->count() < $concurrent_builds) { + // Use next_queuable to properly check if this deployment can run + if (next_queuable($next->server_id, $next->application_id, $next->commit, $next->pull_request_id)) { 
$next->update([ 'status' => ApplicationDeploymentStatus::IN_PROGRESS->value, ]); @@ -157,8 +161,195 @@ function next_after_cancel(?Server $server = null) application_deployment_queue_id: $next->id, ); } - break; } } } } + +function clone_application(Application $source, $destination, array $overrides = [], bool $cloneVolumeData = false): Application +{ + $uuid = $overrides['uuid'] ?? (string) new Cuid2; + $server = $destination->server; + + // Prepare name and URL + $name = $overrides['name'] ?? 'clone-of-'.str($source->name)->limit(20).'-'.$uuid; + $applicationSettings = $source->settings; + $url = $overrides['fqdn'] ?? $source->fqdn; + + if ($server->proxyType() !== 'NONE' && $applicationSettings->is_container_label_readonly_enabled === true) { + $url = generateUrl(server: $server, random: $uuid); + } + + // Clone the application + $newApplication = $source->replicate([ + 'id', + 'created_at', + 'updated_at', + 'additional_servers_count', + 'additional_networks_count', + ])->fill(array_merge([ + 'uuid' => $uuid, + 'name' => $name, + 'fqdn' => $url, + 'status' => 'exited', + 'destination_id' => $destination->id, + ], $overrides)); + $newApplication->save(); + + // Update custom labels if needed + if ($newApplication->destination->server->proxyType() !== 'NONE' && $applicationSettings->is_container_label_readonly_enabled === true) { + $customLabels = str(implode('|coolify|', generateLabelsApplication($newApplication)))->replace('|coolify|', "\n"); + $newApplication->custom_labels = base64_encode($customLabels); + $newApplication->save(); + } + + // Clone settings + $newApplication->settings()->delete(); + if ($applicationSettings) { + $newApplicationSettings = $applicationSettings->replicate([ + 'id', + 'created_at', + 'updated_at', + ])->fill([ + 'application_id' => $newApplication->id, + ]); + $newApplicationSettings->save(); + } + + // Clone tags + $tags = $source->tags; + foreach ($tags as $tag) { + $newApplication->tags()->attach($tag->id); + } + + // Clone 
scheduled tasks + $scheduledTasks = $source->scheduled_tasks()->get(); + foreach ($scheduledTasks as $task) { + $newTask = $task->replicate([ + 'id', + 'created_at', + 'updated_at', + ])->fill([ + 'uuid' => (string) new Cuid2, + 'application_id' => $newApplication->id, + 'team_id' => currentTeam()->id, + ]); + $newTask->save(); + } + + // Clone previews with FQDN regeneration + $applicationPreviews = $source->previews()->get(); + foreach ($applicationPreviews as $preview) { + $newPreview = $preview->replicate([ + 'id', + 'created_at', + 'updated_at', + ])->fill([ + 'uuid' => (string) new Cuid2, + 'application_id' => $newApplication->id, + 'status' => 'exited', + 'fqdn' => null, + 'docker_compose_domains' => null, + ]); + $newPreview->save(); + + // Regenerate FQDN for the cloned preview + if ($newApplication->build_pack === 'dockercompose') { + $newPreview->generate_preview_fqdn_compose(); + } else { + $newPreview->generate_preview_fqdn(); + } + } + + // Clone persistent volumes + $persistentVolumes = $source->persistentStorages()->get(); + foreach ($persistentVolumes as $volume) { + $newName = ''; + if (str_starts_with($volume->name, $source->uuid)) { + $newName = str($volume->name)->replace($source->uuid, $newApplication->uuid); + } else { + $newName = $newApplication->uuid.'-'.str($volume->name)->afterLast('-'); + } + + $newPersistentVolume = $volume->replicate([ + 'id', + 'created_at', + 'updated_at', + ])->fill([ + 'name' => $newName, + 'resource_id' => $newApplication->id, + ]); + $newPersistentVolume->save(); + + if ($cloneVolumeData) { + try { + StopApplication::dispatch($source, false, false); + $sourceVolume = $volume->name; + $targetVolume = $newPersistentVolume->name; + $sourceServer = $source->destination->server; + $targetServer = $newApplication->destination->server; + + VolumeCloneJob::dispatch($sourceVolume, $targetVolume, $sourceServer, $targetServer, $newPersistentVolume); + + queue_application_deployment( + deployment_uuid: (string) new Cuid2, + 
application: $source, + server: $sourceServer, + destination: $source->destination, + no_questions_asked: true + ); + } catch (\Exception $e) { + \Log::error('Failed to copy volume data for '.$volume->name.': '.$e->getMessage()); + } + } + } + + // Clone file storages + $fileStorages = $source->fileStorages()->get(); + foreach ($fileStorages as $storage) { + $newStorage = $storage->replicate([ + 'id', + 'created_at', + 'updated_at', + ])->fill([ + 'resource_id' => $newApplication->id, + ]); + $newStorage->save(); + } + + // Clone production environment variables without triggering the created hook + $environmentVariables = $source->environment_variables()->get(); + foreach ($environmentVariables as $environmentVariable) { + \App\Models\EnvironmentVariable::withoutEvents(function () use ($environmentVariable, $newApplication) { + $newEnvironmentVariable = $environmentVariable->replicate([ + 'id', + 'created_at', + 'updated_at', + ])->fill([ + 'resourceable_id' => $newApplication->id, + 'resourceable_type' => $newApplication->getMorphClass(), + 'is_preview' => false, + ]); + $newEnvironmentVariable->save(); + }); + } + + // Clone preview environment variables + $previewEnvironmentVariables = $source->environment_variables_preview()->get(); + foreach ($previewEnvironmentVariables as $previewEnvironmentVariable) { + \App\Models\EnvironmentVariable::withoutEvents(function () use ($previewEnvironmentVariable, $newApplication) { + $newPreviewEnvironmentVariable = $previewEnvironmentVariable->replicate([ + 'id', + 'created_at', + 'updated_at', + ])->fill([ + 'resourceable_id' => $newApplication->id, + 'resourceable_type' => $newApplication->getMorphClass(), + 'is_preview' => true, + ]); + $newPreviewEnvironmentVariable->save(); + }); + } + + return $newApplication; +} diff --git a/bootstrap/helpers/docker.php b/bootstrap/helpers/docker.php index f61abc806..5cfddc599 100644 --- a/bootstrap/helpers/docker.php +++ b/bootstrap/helpers/docker.php @@ -1069,9 +1069,9 @@ function 
validateComposeFile(string $compose, int $server_id): string|Throwable } } } - $base64_compose = base64_encode(Yaml::dump($yaml_compose)); + $compose_content = Yaml::dump($yaml_compose); + transfer_file_to_server($compose_content, "/tmp/{$uuid}.yml", $server); instant_remote_process([ - "echo {$base64_compose} | base64 -d | tee /tmp/{$uuid}.yml > /dev/null", "chmod 600 /tmp/{$uuid}.yml", "docker compose -f /tmp/{$uuid}.yml config --no-interpolate --no-path-resolution -q", "rm /tmp/{$uuid}.yml", diff --git a/bootstrap/helpers/parsers.php b/bootstrap/helpers/parsers.php index f7041c3da..3dbfb6b33 100644 --- a/bootstrap/helpers/parsers.php +++ b/bootstrap/helpers/parsers.php @@ -373,7 +373,7 @@ function applicationParser(Application $resource, int $pull_request_id = 0, ?int $fqdnFor = $key->after('SERVICE_FQDN_')->lower()->value(); $originalFqdnFor = str($fqdnFor)->replace('_', '-'); if (str($fqdnFor)->contains('-')) { - $fqdnFor = str($fqdnFor)->replace('-', '_'); + $fqdnFor = str($fqdnFor)->replace('-', '_')->replace('.', '_'); } // Generated FQDN & URL $fqdn = generateFqdn(server: $server, random: "$originalFqdnFor-$uuid", parserVersion: $resource->compose_parsing_version); @@ -409,7 +409,7 @@ function applicationParser(Application $resource, int $pull_request_id = 0, ?int $urlFor = $key->after('SERVICE_URL_')->lower()->value(); $originalUrlFor = str($urlFor)->replace('_', '-'); if (str($urlFor)->contains('-')) { - $urlFor = str($urlFor)->replace('-', '_'); + $urlFor = str($urlFor)->replace('-', '_')->replace('.', '_'); } $url = generateUrl(server: $server, random: "$originalUrlFor-$uuid"); $resource->environment_variables()->firstOrCreate([ @@ -454,6 +454,12 @@ function applicationParser(Application $resource, int $pull_request_id = 0, ?int } } + // generate SERVICE_NAME variables for docker compose services + $serviceNameEnvironments = collect([]); + if ($resource->build_pack === 'dockercompose') { + $serviceNameEnvironments = 
generateDockerComposeServiceName($services, $pullRequestId); + } + // Parse the rest of the services foreach ($services as $serviceName => $service) { $image = data_get_str($service, 'image'); @@ -567,7 +573,7 @@ function applicationParser(Application $resource, int $pull_request_id = 0, ?int } $source = replaceLocalSource($source, $mainDirectory); if ($isPullRequest) { - $source = $source."-pr-$pullRequestId"; + $source = addPreviewDeploymentSuffix($source, $pull_request_id); } LocalFileVolume::updateOrCreate( [ @@ -610,7 +616,7 @@ function applicationParser(Application $resource, int $pull_request_id = 0, ?int $name = "{$uuid}_{$slugWithoutUuid}"; if ($isPullRequest) { - $name = "{$name}-pr-$pullRequestId"; + $name = addPreviewDeploymentSuffix($name, $pull_request_id); } if (is_string($volume)) { $parsed = parseDockerVolumeString($volume); @@ -651,11 +657,11 @@ function applicationParser(Application $resource, int $pull_request_id = 0, ?int $newDependsOn = collect([]); $depends_on->each(function ($dependency, $condition) use ($pullRequestId, $newDependsOn) { if (is_numeric($condition)) { - $dependency = "$dependency-pr-$pullRequestId"; + $dependency = addPreviewDeploymentSuffix($dependency, $pullRequestId); $newDependsOn->put($condition, $dependency); } else { - $condition = "$condition-pr-$pullRequestId"; + $condition = addPreviewDeploymentSuffix($condition, $pullRequestId); $newDependsOn->put($condition, $dependency); } }); @@ -858,13 +864,13 @@ function applicationParser(Application $resource, int $pull_request_id = 0, ?int if ($resource->build_pack !== 'dockercompose') { $domains = collect([]); } - $changedServiceName = str($serviceName)->replace('-', '_')->value(); + $changedServiceName = str($serviceName)->replace('-', '_')->replace('.', '_')->value(); $fqdns = data_get($domains, "$changedServiceName.domain"); // Generate SERVICE_FQDN & SERVICE_URL for dockercompose if ($resource->build_pack === 'dockercompose') { foreach ($domains as $forServiceName => 
$domain) { $parsedDomain = data_get($domain, 'domain'); - $serviceNameFormatted = str($serviceName)->upper()->replace('-', '_'); + $serviceNameFormatted = str($serviceName)->upper()->replace('-', '_')->replace('.', '_'); if (filled($parsedDomain)) { $parsedDomain = str($parsedDomain)->explode(',')->first(); @@ -872,12 +878,12 @@ function applicationParser(Application $resource, int $pull_request_id = 0, ?int $coolifyScheme = $coolifyUrl->getScheme(); $coolifyFqdn = $coolifyUrl->getHost(); $coolifyUrl = $coolifyUrl->withScheme($coolifyScheme)->withHost($coolifyFqdn)->withPort(null); - $coolifyEnvironments->put('SERVICE_URL_'.str($forServiceName)->upper()->replace('-', '_'), $coolifyUrl->__toString()); - $coolifyEnvironments->put('SERVICE_FQDN_'.str($forServiceName)->upper()->replace('-', '_'), $coolifyFqdn); + $coolifyEnvironments->put('SERVICE_URL_'.str($forServiceName)->upper()->replace('-', '_')->replace('.', '_'), $coolifyUrl->__toString()); + $coolifyEnvironments->put('SERVICE_FQDN_'.str($forServiceName)->upper()->replace('-', '_')->replace('.', '_'), $coolifyFqdn); $resource->environment_variables()->updateOrCreate([ 'resourceable_type' => Application::class, 'resourceable_id' => $resource->id, - 'key' => 'SERVICE_URL_'.str($forServiceName)->upper()->replace('-', '_'), + 'key' => 'SERVICE_URL_'.str($forServiceName)->upper()->replace('-', '_')->replace('.', '_'), ], [ 'value' => $coolifyUrl->__toString(), 'is_build_time' => false, @@ -886,7 +892,7 @@ function applicationParser(Application $resource, int $pull_request_id = 0, ?int $resource->environment_variables()->updateOrCreate([ 'resourceable_type' => Application::class, 'resourceable_id' => $resource->id, - 'key' => 'SERVICE_FQDN_'.str($forServiceName)->upper()->replace('-', '_'), + 'key' => 'SERVICE_FQDN_'.str($forServiceName)->upper()->replace('-', '_')->replace('.', '_'), ], [ 'value' => $coolifyFqdn, 'is_build_time' => false, @@ -1082,7 +1088,7 @@ function applicationParser(Application $resource, int 
$pull_request_id = 0, ?int $payload['volumes'] = $volumesParsed; } if ($environment->count() > 0 || $coolifyEnvironments->count() > 0) { - $payload['environment'] = $environment->merge($coolifyEnvironments); + $payload['environment'] = $environment->merge($coolifyEnvironments)->merge($serviceNameEnvironments); } if ($logging) { $payload['logging'] = $logging; @@ -1091,7 +1097,7 @@ function applicationParser(Application $resource, int $pull_request_id = 0, ?int $payload['depends_on'] = $depends_on; } if ($isPullRequest) { - $serviceName = "{$serviceName}-pr-{$pullRequestId}"; + $serviceName = addPreviewDeploymentSuffix($serviceName, $pullRequestId); } $parsedServices->put($serviceName, $payload); diff --git a/bootstrap/helpers/proxy.php b/bootstrap/helpers/proxy.php index 2d479a193..5bc1d005e 100644 --- a/bootstrap/helpers/proxy.php +++ b/bootstrap/helpers/proxy.php @@ -1,6 +1,6 @@ <?php -use App\Actions\Proxy\SaveConfiguration; +use App\Actions\Proxy\SaveProxyConfiguration; use App\Enums\ProxyTypes; use App\Models\Application; use App\Models\Server; @@ -267,7 +267,7 @@ function generate_default_proxy_configuration(Server $server) } $config = Yaml::dump($config, 12, 2); - SaveConfiguration::run($server, $config); + SaveProxyConfiguration::run($server, $config); return $config; } diff --git a/bootstrap/helpers/remoteProcess.php b/bootstrap/helpers/remoteProcess.php index 6c1e2beab..7fa9671e3 100644 --- a/bootstrap/helpers/remoteProcess.php +++ b/bootstrap/helpers/remoteProcess.php @@ -29,11 +29,31 @@ function remote_process( $type = $type ?? ActivityTypes::INLINE->value; $command = $command instanceof Collection ? 
$command->toArray() : $command; - if ($server->isNonRoot()) { - $command = parseCommandsByLineForSudo(collect($command), $server); + // Process commands and handle file transfers + $processed_commands = []; + foreach ($command as $cmd) { + if (is_array($cmd) && isset($cmd['transfer_file'])) { + // Handle file transfer command + $transfer_data = $cmd['transfer_file']; + $content = $transfer_data['content']; + $destination = $transfer_data['destination']; + + // Execute file transfer immediately + transfer_file_to_server($content, $destination, $server, ! $ignore_errors); + + // Add a comment to the command log for visibility + $processed_commands[] = "# File transferred via SCP: $destination"; + } else { + // Regular string command + $processed_commands[] = $cmd; + } } - $command_string = implode("\n", $command); + if ($server->isNonRoot()) { + $processed_commands = parseCommandsByLineForSudo(collect($processed_commands), $server); + } + + $command_string = implode("\n", $processed_commands); if (Auth::check()) { $teams = Auth::user()->teams->pluck('id'); @@ -60,15 +80,86 @@ function remote_process( function instant_scp(string $source, string $dest, Server $server, $throwError = true) { - $scp_command = SshMultiplexingHelper::generateScpCommand($server, $source, $dest); - $process = Process::timeout(config('constants.ssh.command_timeout'))->run($scp_command); - $output = trim($process->output()); - $exitCode = $process->exitCode(); - if ($exitCode !== 0) { - return $throwError ? excludeCertainErrors($process->errorOutput(), $exitCode) : null; - } + return \App\Helpers\SshRetryHandler::retry( + function () use ($source, $dest, $server) { + $scp_command = SshMultiplexingHelper::generateScpCommand($server, $source, $dest); + $process = Process::timeout(config('constants.ssh.command_timeout'))->run($scp_command); - return $output === 'null' ? 
null : $output; + $output = trim($process->output()); + $exitCode = $process->exitCode(); + + if ($exitCode !== 0) { + excludeCertainErrors($process->errorOutput(), $exitCode); + } + + return $output === 'null' ? null : $output; + }, + [ + 'server' => $server->ip, + 'source' => $source, + 'dest' => $dest, + 'function' => 'instant_scp', + ], + $throwError + ); +} + +function transfer_file_to_container(string $content, string $container_path, string $deployment_uuid, Server $server, bool $throwError = true): ?string +{ + $temp_file = tempnam(sys_get_temp_dir(), 'coolify_env_'); + + try { + // Write content to temporary file + file_put_contents($temp_file, $content); + + // Generate unique filename for server transfer + $server_temp_file = '/tmp/coolify_env_'.uniqid().'_'.$deployment_uuid; + + // Transfer file to server + instant_scp($temp_file, $server_temp_file, $server, $throwError); + + // Ensure parent directory exists in container, then copy file + $parent_dir = dirname($container_path); + $commands = []; + if ($parent_dir !== '.' && $parent_dir !== '/') { + $commands[] = executeInDocker($deployment_uuid, "mkdir -p \"$parent_dir\""); + } + $commands[] = "docker cp $server_temp_file $deployment_uuid:$container_path"; + $commands[] = "rm -f $server_temp_file"; // Cleanup server temp file + + return instant_remote_process_with_timeout($commands, $server, $throwError); + + } finally { + // Always cleanup local temp file + if (file_exists($temp_file)) { + unlink($temp_file); + } + } +} + +function transfer_file_to_server(string $content, string $server_path, Server $server, bool $throwError = true): ?string +{ + $temp_file = tempnam(sys_get_temp_dir(), 'coolify_env_'); + + try { + // Write content to temporary file + file_put_contents($temp_file, $content); + + // Ensure parent directory exists on server + $parent_dir = dirname($server_path); + if ($parent_dir !== '.' 
&& $parent_dir !== '/') { + instant_remote_process_with_timeout(["mkdir -p \"$parent_dir\""], $server, $throwError); + } + + // Transfer file directly to server destination + return instant_scp($temp_file, $server_path, $server, $throwError); + + } finally { + // Always cleanup local temp file + if (file_exists($temp_file)) { + unlink($temp_file); + } + } } function instant_remote_process_with_timeout(Collection|array $command, Server $server, bool $throwError = true, bool $no_sudo = false): ?string @@ -79,54 +170,85 @@ function instant_remote_process_with_timeout(Collection|array $command, Server $ } $command_string = implode("\n", $command); - // $start_time = microtime(true); - $sshCommand = SshMultiplexingHelper::generateSshCommand($server, $command_string); - $process = Process::timeout(30)->run($sshCommand); - // $end_time = microtime(true); + return \App\Helpers\SshRetryHandler::retry( + function () use ($server, $command_string) { + $sshCommand = SshMultiplexingHelper::generateSshCommand($server, $command_string); + $process = Process::timeout(30)->run($sshCommand); - // $execution_time = ($end_time - $start_time) * 1000; // Convert to milliseconds - // ray('SSH command execution time:', $execution_time.' ms')->orange(); + $output = trim($process->output()); + $exitCode = $process->exitCode(); - $output = trim($process->output()); - $exitCode = $process->exitCode(); + if ($exitCode !== 0) { + excludeCertainErrors($process->errorOutput(), $exitCode); + } - if ($exitCode !== 0) { - return $throwError ? excludeCertainErrors($process->errorOutput(), $exitCode) : null; - } + // Sanitize output to ensure valid UTF-8 encoding + $output = $output === 'null' ? null : sanitize_utf8_text($output); - // Sanitize output to ensure valid UTF-8 encoding - $output = $output === 'null' ? 
null : sanitize_utf8_text($output); - - return $output; + return $output; + }, + [ + 'server' => $server->ip, + 'command_preview' => substr($command_string, 0, 100), + 'function' => 'instant_remote_process_with_timeout', + ], + $throwError + ); } function instant_remote_process(Collection|array $command, Server $server, bool $throwError = true, bool $no_sudo = false): ?string { $command = $command instanceof Collection ? $command->toArray() : $command; + + // Process commands and handle file transfers + $processed_commands = []; + foreach ($command as $cmd) { + if (is_array($cmd) && isset($cmd['transfer_file'])) { + // Handle file transfer command + $transfer_data = $cmd['transfer_file']; + $content = $transfer_data['content']; + $destination = $transfer_data['destination']; + + // Execute file transfer immediately + transfer_file_to_server($content, $destination, $server, $throwError); + + // Add a comment to the command log for visibility + $processed_commands[] = "# File transferred via SCP: $destination"; + } else { + // Regular string command + $processed_commands[] = $cmd; + } + } + if ($server->isNonRoot() && ! 
$no_sudo) { - $command = parseCommandsByLineForSudo(collect($command), $server); + $processed_commands = parseCommandsByLineForSudo(collect($processed_commands), $server); } - $command_string = implode("\n", $command); + $command_string = implode("\n", $processed_commands); - // $start_time = microtime(true); - $sshCommand = SshMultiplexingHelper::generateSshCommand($server, $command_string); - $process = Process::timeout(config('constants.ssh.command_timeout'))->run($sshCommand); - // $end_time = microtime(true); + return \App\Helpers\SshRetryHandler::retry( + function () use ($server, $command_string) { + $sshCommand = SshMultiplexingHelper::generateSshCommand($server, $command_string); + $process = Process::timeout(config('constants.ssh.command_timeout'))->run($sshCommand); - // $execution_time = ($end_time - $start_time) * 1000; // Convert to milliseconds - // ray('SSH command execution time:', $execution_time.' ms')->orange(); + $output = trim($process->output()); + $exitCode = $process->exitCode(); - $output = trim($process->output()); - $exitCode = $process->exitCode(); + if ($exitCode !== 0) { + excludeCertainErrors($process->errorOutput(), $exitCode); + } - if ($exitCode !== 0) { - return $throwError ? excludeCertainErrors($process->errorOutput(), $exitCode) : null; - } + // Sanitize output to ensure valid UTF-8 encoding + $output = $output === 'null' ? null : sanitize_utf8_text($output); - // Sanitize output to ensure valid UTF-8 encoding - $output = $output === 'null' ? 
null : sanitize_utf8_text($output); - - return $output; + return $output; + }, + [ + 'server' => $server->ip, + 'command_preview' => substr($command_string, 0, 100), + 'function' => 'instant_remote_process', + ], + $throwError + ); } function excludeCertainErrors(string $errorOutput, ?int $exitCode = null) @@ -136,11 +258,18 @@ function excludeCertainErrors(string $errorOutput, ?int $exitCode = null) 'Could not resolve hostname', ]); $ignored = $ignoredErrors->contains(fn ($error) => Str::contains($errorOutput, $error)); + + // Ensure we always have a meaningful error message + $errorMessage = trim($errorOutput); + if (empty($errorMessage)) { + $errorMessage = "SSH command failed with exit code: $exitCode"; + } + if ($ignored) { // TODO: Create new exception and disable in sentry - throw new \RuntimeException($errorOutput, $exitCode); + throw new \RuntimeException($errorMessage, $exitCode); } - throw new \RuntimeException($errorOutput, $exitCode); + throw new \RuntimeException($errorMessage, $exitCode); } function decode_remote_command_output(?ApplicationDeploymentQueue $application_deployment_queue = null): Collection diff --git a/bootstrap/helpers/services.php b/bootstrap/helpers/services.php index cf12a28a5..7d3cb71ff 100644 --- a/bootstrap/helpers/services.php +++ b/bootstrap/helpers/services.php @@ -69,12 +69,11 @@ function getFilesystemVolumesFromServer(ServiceApplication|ServiceDatabase|Appli $fileVolume->content = $content; $fileVolume->is_directory = false; $fileVolume->save(); - $content = base64_encode($content); $dir = str($fileLocation)->dirname(); instant_remote_process([ "mkdir -p $dir", - "echo '$content' | base64 -d | tee $fileLocation", ], $server); + transfer_file_to_server($content, $fileLocation, $server); } elseif ($isFile === 'NOK' && $isDir === 'NOK' && $fileVolume->is_directory && $isInit) { // Does not exists (no dir or file), flagged as directory, is init $fileVolume->content = null; @@ -115,14 +114,14 @@ function 
updateCompose(ServiceApplication|ServiceDatabase $resource) $resource->save(); } - $serviceName = str($resource->name)->upper()->replace('-', '_'); + $serviceName = str($resource->name)->upper()->replace('-', '_')->replace('.', '_'); $resource->service->environment_variables()->where('key', 'LIKE', "SERVICE_FQDN_{$serviceName}%")->delete(); $resource->service->environment_variables()->where('key', 'LIKE', "SERVICE_URL_{$serviceName}%")->delete(); if ($resource->fqdn) { $resourceFqdns = str($resource->fqdn)->explode(','); $resourceFqdns = $resourceFqdns->first(); - $variableName = 'SERVICE_URL_'.str($resource->name)->upper()->replace('-', '_'); + $variableName = 'SERVICE_URL_'.str($resource->name)->upper()->replace('-', '_')->replace('.', '_'); $url = Url::fromString($resourceFqdns); $port = $url->getPort(); $path = $url->getPath(); @@ -149,7 +148,7 @@ function updateCompose(ServiceApplication|ServiceDatabase $resource) 'is_preview' => false, ]); } - $variableName = 'SERVICE_FQDN_'.str($resource->name)->upper()->replace('-', '_'); + $variableName = 'SERVICE_FQDN_'.str($resource->name)->upper()->replace('-', '_')->replace('.', '_'); $fqdn = Url::fromString($resourceFqdns); $port = $fqdn->getPort(); $path = $fqdn->getPath(); diff --git a/bootstrap/helpers/shared.php b/bootstrap/helpers/shared.php index e01f4d58b..6778a0ed1 100644 --- a/bootstrap/helpers/shared.php +++ b/bootstrap/helpers/shared.php @@ -204,7 +204,6 @@ function get_latest_version_of_coolify(): string return data_get($versions, 'coolify.v4.version'); } catch (\Throwable $e) { - ray($e->getMessage()); return '0.0.0'; } @@ -962,7 +961,7 @@ function getRealtime() } } -function validate_dns_entry(string $fqdn, Server $server) +function validateDNSEntry(string $fqdn, Server $server) { // https://www.cloudflare.com/ips-v4/# $cloudflare_ips = collect(['173.245.48.0/20', '103.21.244.0/22', '103.22.200.0/22', '103.31.4.0/22', '141.101.64.0/18', '108.162.192.0/18', '190.93.240.0/20', '188.114.96.0/20', 
'197.234.240.0/22', '198.41.128.0/17', '162.158.0.0/15', '104.16.0.0/13', '172.64.0.0/13', '131.0.72.0/22']); @@ -995,7 +994,7 @@ function validate_dns_entry(string $fqdn, Server $server) } else { foreach ($results as $result) { if ($result->getType() == $type) { - if (ip_match($result->getData(), $cloudflare_ips->toArray(), $match)) { + if (ipMatch($result->getData(), $cloudflare_ips->toArray(), $match)) { $found_matching_ip = true; break; } @@ -1013,7 +1012,7 @@ function validate_dns_entry(string $fqdn, Server $server) return $found_matching_ip; } -function ip_match($ip, $cidrs, &$match = null) +function ipMatch($ip, $cidrs, &$match = null) { foreach ((array) $cidrs as $cidr) { [$subnet, $mask] = explode('/', $cidr); @@ -1027,7 +1026,7 @@ function ip_match($ip, $cidrs, &$match = null) return false; } -function check_ip_against_allowlist($ip, $allowlist) +function checkIPAgainstAllowlist($ip, $allowlist) { if (empty($allowlist)) { return false; @@ -1085,78 +1084,6 @@ function check_ip_against_allowlist($ip, $allowlist) return false; } -function parseCommandsByLineForSudo(Collection $commands, Server $server): array -{ - $commands = $commands->map(function ($line) { - if ( - ! str(trim($line))->startsWith([ - 'cd', - 'command', - 'echo', - 'true', - 'if', - 'fi', - ]) - ) { - return "sudo $line"; - } - - if (str(trim($line))->startsWith('if')) { - return str_replace('if', 'if sudo', $line); - } - - return $line; - }); - - $commands = $commands->map(function ($line) use ($server) { - if (Str::startsWith($line, 'sudo mkdir -p')) { - return "$line && sudo chown -R $server->user:$server->user ".Str::after($line, 'sudo mkdir -p').' 
&& sudo chmod -R o-rwx '.Str::after($line, 'sudo mkdir -p'); - } - - return $line; - }); - - $commands = $commands->map(function ($line) { - $line = str($line); - if (str($line)->contains('$(')) { - $line = $line->replace('$(', '$(sudo '); - } - if (str($line)->contains('||')) { - $line = $line->replace('||', '|| sudo'); - } - if (str($line)->contains('&&')) { - $line = $line->replace('&&', '&& sudo'); - } - if (str($line)->contains(' | ')) { - $line = $line->replace(' | ', ' | sudo '); - } - - return $line->value(); - }); - - return $commands->toArray(); -} -function parseLineForSudo(string $command, Server $server): string -{ - if (! str($command)->startSwith('cd') && ! str($command)->startSwith('command')) { - $command = "sudo $command"; - } - if (Str::startsWith($command, 'sudo mkdir -p')) { - $command = "$command && sudo chown -R $server->user:$server->user ".Str::after($command, 'sudo mkdir -p').' && sudo chmod -R o-rwx '.Str::after($command, 'sudo mkdir -p'); - } - if (str($command)->contains('$(') || str($command)->contains('`')) { - $command = str($command)->replace('$(', '$(sudo ')->replace('`', '`sudo ')->value(); - } - if (str($command)->contains('||')) { - $command = str($command)->replace('||', '|| sudo ')->value(); - } - if (str($command)->contains('&&')) { - $command = str($command)->replace('&&', '&& sudo ')->value(); - } - - return $command; -} - function get_public_ips() { try { @@ -2059,12 +1986,12 @@ function parseDockerComposeFile(Service|Application $resource, bool $isNew = fal $name = $name->replaceFirst('~', $dir); } if ($pull_request_id !== 0) { - $name = $name."-pr-$pull_request_id"; + $name = addPreviewDeploymentSuffix($name, $pull_request_id); } $volume = str("$name:$mount"); } else { if ($pull_request_id !== 0) { - $name = $name."-pr-$pull_request_id"; + $name = addPreviewDeploymentSuffix($name, $pull_request_id); $volume = str("$name:$mount"); if ($topLevelVolumes->has($name)) { $v = $topLevelVolumes->get($name); @@ -2103,7 +2030,7 @@ 
function parseDockerComposeFile(Service|Application $resource, bool $isNew = fal $name = $volume->before(':'); $mount = $volume->after(':'); if ($pull_request_id !== 0) { - $name = $name."-pr-$pull_request_id"; + $name = addPreviewDeploymentSuffix($name, $pull_request_id); } $volume = str("$name:$mount"); } @@ -2122,7 +2049,7 @@ function parseDockerComposeFile(Service|Application $resource, bool $isNew = fal $source = str($source)->replaceFirst('~', $dir); } if ($pull_request_id !== 0) { - $source = $source."-pr-$pull_request_id"; + $source = addPreviewDeploymentSuffix($source, $pull_request_id); } if ($read_only) { data_set($volume, 'source', $source.':'.$target.':ro'); @@ -2131,7 +2058,7 @@ function parseDockerComposeFile(Service|Application $resource, bool $isNew = fal } } else { if ($pull_request_id !== 0) { - $source = $source."-pr-$pull_request_id"; + $source = addPreviewDeploymentSuffix($source, $pull_request_id); } if ($read_only) { data_set($volume, 'source', $source.':'.$target.':ro'); @@ -2183,13 +2110,13 @@ function parseDockerComposeFile(Service|Application $resource, bool $isNew = fal $name = $name->replaceFirst('~', $dir); } if ($pull_request_id !== 0) { - $name = $name."-pr-$pull_request_id"; + $name = addPreviewDeploymentSuffix($name, $pull_request_id); } $volume = str("$name:$mount"); } else { if ($pull_request_id !== 0) { $uuid = $resource->uuid; - $name = $uuid."-$name-pr-$pull_request_id"; + $name = $uuid.'-'.addPreviewDeploymentSuffix($name, $pull_request_id); $volume = str("$name:$mount"); if ($topLevelVolumes->has($name)) { $v = $topLevelVolumes->get($name); @@ -2231,7 +2158,7 @@ function parseDockerComposeFile(Service|Application $resource, bool $isNew = fal $name = $volume->before(':'); $mount = $volume->after(':'); if ($pull_request_id !== 0) { - $name = $name."-pr-$pull_request_id"; + $name = addPreviewDeploymentSuffix($name, $pull_request_id); } $volume = str("$name:$mount"); } @@ -2259,7 +2186,7 @@ function 
parseDockerComposeFile(Service|Application $resource, bool $isNew = fal if ($pull_request_id === 0) { $source = $uuid."-$source"; } else { - $source = $uuid."-$source-pr-$pull_request_id"; + $source = $uuid.'-'.addPreviewDeploymentSuffix($source, $pull_request_id); } if ($read_only) { data_set($volume, 'source', $source.':'.$target.':ro'); @@ -2299,7 +2226,7 @@ function parseDockerComposeFile(Service|Application $resource, bool $isNew = fal if ($pull_request_id !== 0 && count($serviceDependencies) > 0) { $serviceDependencies = $serviceDependencies->map(function ($dependency) use ($pull_request_id) { - return $dependency."-pr-$pull_request_id"; + return addPreviewDeploymentSuffix($dependency, $pull_request_id); }); data_set($service, 'depends_on', $serviceDependencies->toArray()); } @@ -2693,7 +2620,7 @@ function parseDockerComposeFile(Service|Application $resource, bool $isNew = fal }); if ($pull_request_id !== 0) { $services->each(function ($service, $serviceName) use ($pull_request_id, $services) { - $services[$serviceName."-pr-$pull_request_id"] = $service; + $services[addPreviewDeploymentSuffix($serviceName, $pull_request_id)] = $service; data_forget($services, $serviceName); }); } @@ -3073,3 +3000,18 @@ function parseDockerfileInterval(string $something) return $seconds; } + +function addPreviewDeploymentSuffix(string $name, int $pull_request_id = 0): string +{ + return ($pull_request_id === 0) ? 
$name : $name.'-pr-'.$pull_request_id; +} + +function generateDockerComposeServiceName(mixed $services, int $pullRequestId = 0): Collection +{ + $collection = collect([]); + foreach ($services as $serviceName => $_) { + $collection->put('SERVICE_NAME_'.str($serviceName)->replace('-', '_')->replace('.', '_')->upper(), addPreviewDeploymentSuffix($serviceName, $pullRequestId)); + } + + return $collection; +} diff --git a/bootstrap/helpers/sudo.php b/bootstrap/helpers/sudo.php new file mode 100644 index 000000000..ba252c64f --- /dev/null +++ b/bootstrap/helpers/sudo.php @@ -0,0 +1,101 @@ +<?php + +use App\Models\Server; +use Illuminate\Support\Collection; +use Illuminate\Support\Str; + +function shouldChangeOwnership(string $path): bool +{ + $path = trim($path); + + $systemPaths = ['/var', '/etc', '/usr', '/opt', '/sys', '/proc', '/dev', '/bin', '/sbin', '/lib', '/lib64', '/boot', '/root', '/home', '/media', '/mnt', '/srv', '/run']; + + foreach ($systemPaths as $systemPath) { + if ($path === $systemPath || Str::startsWith($path, $systemPath.'/')) { + return false; + } + } + + $isCoolifyPath = Str::startsWith($path, '/data/coolify') || Str::startsWith($path, '/tmp/coolify'); + + return $isCoolifyPath; +} +function parseCommandsByLineForSudo(Collection $commands, Server $server): array +{ + $commands = $commands->map(function ($line) { + if ( + ! 
str(trim($line))->startsWith([ + 'cd', + 'command', + 'echo', + 'true', + 'if', + 'fi', + ]) + ) { + return "sudo $line"; + } + + if (str(trim($line))->startsWith('if')) { + return str_replace('if', 'if sudo', $line); + } + + return $line; + }); + + $commands = $commands->map(function ($line) use ($server) { + if (Str::startsWith($line, 'sudo mkdir -p')) { + $path = trim(Str::after($line, 'sudo mkdir -p')); + if (shouldChangeOwnership($path)) { + return "$line && sudo chown -R $server->user:$server->user $path && sudo chmod -R o-rwx $path"; + } + + return $line; + } + + return $line; + }); + + $commands = $commands->map(function ($line) { + $line = str($line); + if (str($line)->contains('$(')) { + $line = $line->replace('$(', '$(sudo '); + } + if (str($line)->contains('||')) { + $line = $line->replace('||', '|| sudo'); + } + if (str($line)->contains('&&')) { + $line = $line->replace('&&', '&& sudo'); + } + if (str($line)->contains(' | ')) { + $line = $line->replace(' | ', ' | sudo '); + } + + return $line->value(); + }); + + return $commands->toArray(); +} +function parseLineForSudo(string $command, Server $server): string +{ + if (! str($command)->startsWith('cd') && ! 
str($command)->startsWith('command')) { + $command = "sudo $command"; + } + if (Str::startsWith($command, 'sudo mkdir -p')) { + $path = trim(Str::after($command, 'sudo mkdir -p')); + if (shouldChangeOwnership($path)) { + $command = "$command && sudo chown -R $server->user:$server->user $path && sudo chmod -R o-rwx $path"; + } + } + if (str($command)->contains('$(') || str($command)->contains('`')) { + $command = str($command)->replace('$(', '$(sudo ')->replace('`', '`sudo ')->value(); + } + if (str($command)->contains('||')) { + $command = str($command)->replace('||', '|| sudo ')->value(); + } + if (str($command)->contains('&&')) { + $command = str($command)->replace('&&', '&& sudo ')->value(); + } + + return $command; +} diff --git a/config/constants.php b/config/constants.php index 9c1b8b274..0d29c997e 100644 --- a/config/constants.php +++ b/config/constants.php @@ -59,9 +59,16 @@ 'ssh' => [ 'mux_enabled' => env('MUX_ENABLED', env('SSH_MUX_ENABLED', true)), 'mux_persist_time' => env('SSH_MUX_PERSIST_TIME', 3600), + 'mux_health_check_enabled' => env('SSH_MUX_HEALTH_CHECK_ENABLED', true), + 'mux_health_check_timeout' => env('SSH_MUX_HEALTH_CHECK_TIMEOUT', 5), + 'mux_max_age' => env('SSH_MUX_MAX_AGE', 1800), // 30 minutes 'connection_timeout' => 10, 'server_interval' => 20, 'command_timeout' => 7200, + 'max_retries' => env('SSH_MAX_RETRIES', 3), + 'retry_base_delay' => env('SSH_RETRY_BASE_DELAY', 2), // seconds + 'retry_max_delay' => env('SSH_RETRY_MAX_DELAY', 30), // seconds + 'retry_multiplier' => env('SSH_RETRY_MULTIPLIER', 2), ], 'invitation' => [ diff --git a/database/migrations/2025_09_10_172952_remove_is_readonly_from_local_persistent_volumes_table.php b/database/migrations/2025_09_10_172952_remove_is_readonly_from_local_persistent_volumes_table.php new file mode 100644 index 000000000..31398bd35 --- /dev/null +++ b/database/migrations/2025_09_10_172952_remove_is_readonly_from_local_persistent_volumes_table.php @@ -0,0 +1,28 @@ +<?php + +use 
Illuminate\Database\Migrations\Migration; +use Illuminate\Database\Schema\Blueprint; +use Illuminate\Support\Facades\Schema; + +return new class extends Migration +{ + /** + * Run the migrations. + */ + public function up(): void + { + Schema::table('local_persistent_volumes', function (Blueprint $table) { + $table->dropColumn('is_readonly'); + }); + } + + /** + * Reverse the migrations. + */ + public function down(): void + { + Schema::table('local_persistent_volumes', function (Blueprint $table) { + $table->boolean('is_readonly')->default(false); + }); + } +}; diff --git a/database/migrations/2025_09_10_173300_drop_webhooks_table.php b/database/migrations/2025_09_10_173300_drop_webhooks_table.php new file mode 100644 index 000000000..4cb1b4e70 --- /dev/null +++ b/database/migrations/2025_09_10_173300_drop_webhooks_table.php @@ -0,0 +1,31 @@ +<?php + +use Illuminate\Database\Migrations\Migration; +use Illuminate\Database\Schema\Blueprint; +use Illuminate\Support\Facades\Schema; + +return new class extends Migration +{ + /** + * Run the migrations. + */ + public function up(): void + { + Schema::dropIfExists('webhooks'); + } + + /** + * Reverse the migrations. 
+ */ + public function down(): void + { + Schema::create('webhooks', function (Blueprint $table) { + $table->id(); + $table->enum('status', ['pending', 'success', 'failed'])->default('pending'); + $table->string('type'); + $table->longText('payload'); + $table->longText('failure_reason')->nullable(); + $table->timestamps(); + }); + } +}; diff --git a/database/migrations/2025_09_10_173402_drop_kubernetes_table.php b/database/migrations/2025_09_10_173402_drop_kubernetes_table.php new file mode 100644 index 000000000..329ed0e7e --- /dev/null +++ b/database/migrations/2025_09_10_173402_drop_kubernetes_table.php @@ -0,0 +1,28 @@ +<?php + +use Illuminate\Database\Migrations\Migration; +use Illuminate\Database\Schema\Blueprint; +use Illuminate\Support\Facades\Schema; + +return new class extends Migration +{ + /** + * Run the migrations. + */ + public function up(): void + { + Schema::dropIfExists('kubernetes'); + } + + /** + * Reverse the migrations. + */ + public function down(): void + { + Schema::create('kubernetes', function (Blueprint $table) { + $table->id(); + $table->string('uuid')->unique(); + $table->timestamps(); + }); + } +}; diff --git a/docker/coolify-helper/Dockerfile b/docker/coolify-helper/Dockerfile index c66b8d67e..3ea3d8793 100644 --- a/docker/coolify-helper/Dockerfile +++ b/docker/coolify-helper/Dockerfile @@ -10,9 +10,9 @@ ARG DOCKER_BUILDX_VERSION=0.25.0 # https://github.com/buildpacks/pack/releases ARG PACK_VERSION=0.38.2 # https://github.com/railwayapp/nixpacks/releases -ARG NIXPACKS_VERSION=1.39.0 +ARG NIXPACKS_VERSION=1.40.0 # https://github.com/minio/mc/releases -ARG MINIO_VERSION=RELEASE.2025-03-12T17-29-24Z +ARG MINIO_VERSION=RELEASE.2025-08-13T08-35-41Z FROM minio/mc:${MINIO_VERSION} AS minio-client diff --git a/package-lock.json b/package-lock.json index 34b2c1dd5..56e48288c 100644 --- a/package-lock.json +++ b/package-lock.json @@ -22,7 +22,7 @@ "pusher-js": "8.4.0", "tailwind-scrollbar": "4.0.2", "tailwindcss": "4.1.10", - "vite": 
"6.3.5", + "vite": "6.3.6", "vue": "3.5.16" } }, @@ -1131,6 +1131,66 @@ "node": ">=14.0.0" } }, + "node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@emnapi/core": { + "version": "1.4.3", + "dev": true, + "inBundle": true, + "license": "MIT", + "optional": true, + "dependencies": { + "@emnapi/wasi-threads": "1.0.2", + "tslib": "^2.4.0" + } + }, + "node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@emnapi/runtime": { + "version": "1.4.3", + "dev": true, + "inBundle": true, + "license": "MIT", + "optional": true, + "dependencies": { + "tslib": "^2.4.0" + } + }, + "node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@emnapi/wasi-threads": { + "version": "1.0.2", + "dev": true, + "inBundle": true, + "license": "MIT", + "optional": true, + "dependencies": { + "tslib": "^2.4.0" + } + }, + "node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@napi-rs/wasm-runtime": { + "version": "0.2.10", + "dev": true, + "inBundle": true, + "license": "MIT", + "optional": true, + "dependencies": { + "@emnapi/core": "^1.4.3", + "@emnapi/runtime": "^1.4.3", + "@tybys/wasm-util": "^0.9.0" + } + }, + "node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@tybys/wasm-util": { + "version": "0.9.0", + "dev": true, + "inBundle": true, + "license": "MIT", + "optional": true, + "dependencies": { + "tslib": "^2.4.0" + } + }, + "node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/tslib": { + "version": "2.8.0", + "dev": true, + "inBundle": true, + "license": "0BSD", + "optional": true + }, "node_modules/@tailwindcss/oxide-win32-arm64-msvc": { "version": "4.1.10", "resolved": "https://registry.npmjs.org/@tailwindcss/oxide-win32-arm64-msvc/-/oxide-win32-arm64-msvc-4.1.10.tgz", @@ -2635,9 +2695,9 @@ "license": "MIT" }, "node_modules/vite": { - "version": "6.3.5", - "resolved": "https://registry.npmjs.org/vite/-/vite-6.3.5.tgz", - "integrity": "sha512-cZn6NDFE7wdTpINgs++ZJ4N49W2vRp8LCKrn3Ob1kYNtOo21vfDoaV5GzBfLU4MovSAB8uNRm4jgzVQZ+mBzPQ==", + "version": "6.3.6", + 
"resolved": "https://registry.npmjs.org/vite/-/vite-6.3.6.tgz", + "integrity": "sha512-0msEVHJEScQbhkbVTb/4iHZdJ6SXp/AvxL2sjwYQFfBqleHtnCqv1J3sa9zbWz/6kW1m9Tfzn92vW+kZ1WV6QA==", "dev": true, "license": "MIT", "dependencies": { diff --git a/package.json b/package.json index 10ec71415..e29c5e8e6 100644 --- a/package.json +++ b/package.json @@ -16,7 +16,7 @@ "pusher-js": "8.4.0", "tailwind-scrollbar": "4.0.2", "tailwindcss": "4.1.10", - "vite": "6.3.5", + "vite": "6.3.6", "vue": "3.5.16" }, "dependencies": { diff --git a/resources/views/livewire/project/application/general.blade.php b/resources/views/livewire/project/application/general.blade.php index 315385593..f2468c6b7 100644 --- a/resources/views/livewire/project/application/general.blade.php +++ b/resources/views/livewire/project/application/general.blade.php @@ -8,6 +8,9 @@ <form wire:submit='submit' class="flex flex-col pb-32"> <div class="flex items-center gap-2"> <h2>General</h2> + @if (isDev()) + <div>{{ $application->compose_parsing_version }}</div> + @endif <x-forms.button canGate="update" :canResource="$application" type="submit">Save</x-forms.button> </div> <div>General configuration for your application.</div> @@ -462,12 +465,9 @@ class="underline" href="https://coolify.io/docs/knowledge-base/docker/registry" </div> </div> </form> - - <x-domain-conflict-modal - :conflicts="$domainConflicts" - :showModal="$showDomainConflictModal" - confirmAction="confirmDomainUsage" /> - + + <x-domain-conflict-modal :conflicts="$domainConflicts" :showModal="$showDomainConflictModal" confirmAction="confirmDomainUsage" /> + @script <script> $wire.$on('loadCompose', (isInit = true) => { diff --git a/resources/views/livewire/project/application/source.blade.php b/resources/views/livewire/project/application/source.blade.php index 9e746fadb..9d0d53f2e 100644 --- a/resources/views/livewire/project/application/source.blade.php +++ b/resources/views/livewire/project/application/source.blade.php @@ -5,25 +5,25 @@ @can('update', 
$application) <x-forms.button type="submit">Save</x-forms.button> @endcan - <a target="_blank" class="hover:no-underline" href="{{ $application?->gitBranchLocation }}"> - <x-forms.button> + <div class="flex items-center gap-4 px-2"> + <a target="_blank" class="hover:no-underline flex items-center gap-1" + href="{{ $application?->gitBranchLocation }}"> Open Repository <x-external-link /> - </x-forms.button> - </a> - @if (data_get($application, 'source.is_public') === false) - <a target="_blank" class="hover:no-underline" href="{{ getInstallationPath($application->source) }}"> - <x-forms.button> + </a> + @if (data_get($application, 'source.is_public') === false) + <a target="_blank" class="hover:no-underline flex items-center gap-1" + href="{{ getInstallationPath($application->source) }}"> Open Git App <x-external-link /> - </x-forms.button> - </a> - @endif - <a target="_blank" class="flex hover:no-underline" href="{{ $application?->gitCommits }}"> - <x-forms.button>Open Commits on Git + </a> + @endif + <a target="_blank" class="flex hover:no-underline items-center gap-1" + href="{{ $application?->gitCommits }}"> + Open Commits on Git <x-external-link /> - </x-forms.button> - </a> + </a> + </div> </div> <div class="pb-4">Code source of your application.</div> @@ -34,11 +34,13 @@ class="font-bold text-warning">{{ data_get($application, 'source.name', 'No sour </div> @endif <div class="flex gap-2"> - <x-forms.input placeholder="coollabsio/coolify-example" id="gitRepository" label="Repository" canGate="update" :canResource="$application" /> + <x-forms.input placeholder="coollabsio/coolify-example" id="gitRepository" label="Repository" + canGate="update" :canResource="$application" /> <x-forms.input placeholder="main" id="gitBranch" label="Branch" canGate="update" :canResource="$application" /> </div> <div class="flex items-end gap-2"> - <x-forms.input placeholder="HEAD" id="gitCommitSha" placeholder="HEAD" label="Commit SHA" canGate="update" :canResource="$application" 
/> + <x-forms.input placeholder="HEAD" id="gitCommitSha" label="Commit SHA" + canGate="update" :canResource="$application" /> </div> </div> diff --git a/resources/views/livewire/project/shared/storages/all.blade.php b/resources/views/livewire/project/shared/storages/all.blade.php index 4ed1d1b52..d62362562 100644 --- a/resources/views/livewire/project/shared/storages/all.blade.php +++ b/resources/views/livewire/project/shared/storages/all.blade.php @@ -3,11 +3,10 @@ @foreach ($resource->persistentStorages as $storage) @if ($resource->type() === 'service') <livewire:project.shared.storages.show wire:key="storage-{{ $storage->id }}" :storage="$storage" - :resource="$resource" :isFirst="$loop->first" isService='true' /> + :resource="$resource" :isFirst="$storage->id === $this->firstStorageId" isService='true' /> @else <livewire:project.shared.storages.show wire:key="storage-{{ $storage->id }}" :storage="$storage" - :resource="$resource" isReadOnly="{{ data_get($storage, 'is_readonly') }}" - startedAt="{{ data_get($resource, 'started_at') }}" /> + :resource="$resource" :isFirst="$storage->id === $this->firstStorageId" startedAt="{{ data_get($resource, 'started_at') }}" /> @endif @endforeach </div> diff --git a/resources/views/livewire/server/proxy.blade.php b/resources/views/livewire/server/proxy.blade.php index 506b05e87..db2fd2827 100644 --- a/resources/views/livewire/server/proxy.blade.php +++ b/resources/views/livewire/server/proxy.blade.php @@ -7,9 +7,11 @@ <div class="flex items-center gap-2"> <h2>Configuration</h2> @if ($server->proxy->status === 'exited' || $server->proxy->status === 'removing') - <x-forms.button canGate="update" :canResource="$server" wire:click.prevent="changeProxy">Switch Proxy</x-forms.button> + <x-forms.button canGate="update" :canResource="$server" wire:click.prevent="changeProxy">Switch + Proxy</x-forms.button> @else - <x-forms.button canGate="update" :canResource="$server" disabled 
wire:click.prevent="changeProxy">Switch Proxy</x-forms.button> + <x-forms.button canGate="update" :canResource="$server" disabled + wire:click.prevent="changeProxy">Switch Proxy</x-forms.button> @endif <x-forms.button canGate="update" :canResource="$server" type="submit">Save</x-forms.button> </div> @@ -27,11 +29,11 @@ id="server.settings.generate_exact_labels" label="Generate labels only for {{ str($server->proxyType())->title() }}" instantSave /> <x-forms.checkbox canGate="update" :canResource="$server" instantSave="instantSaveRedirect" - id="redirect_enabled" label="Override default request handler" + id="redirectEnabled" label="Override default request handler" helper="Requests to unknown hosts or stopped services will receive a 503 response or be redirected to the URL you set below (need to enable this first)." /> - @if ($redirect_enabled) + @if ($redirectEnabled) <x-forms.input canGate="update" :canResource="$server" placeholder="https://app.coolify.io" - id="redirect_url" label="Redirect to (optional)" /> + id="redirectUrl" label="Redirect to (optional)" /> @endif </div> @if ($server->proxyType() === ProxyTypes::TRAEFIK->value) @@ -50,15 +52,26 @@ <x-loading text="Loading proxy configuration..." /> </div> <div wire:loading.remove wire:target="loadProxyConfiguration"> - @if ($proxy_settings) + @if ($proxySettings) <div class="flex flex-col gap-2 pt-4"> <x-forms.textarea canGate="update" :canResource="$server" useMonacoEditor - monacoEditorLanguage="yaml" label="Configuration file" name="proxy_settings" - id="proxy_settings" rows="30" /> - <x-forms.button canGate="update" :canResource="$server" - wire:click.prevent="reset_proxy_configuration"> - Reset configuration to default - </x-forms.button> + monacoEditorLanguage="yaml" + label="Configuration file ({{ $this->configurationFilePath }})" name="proxySettings" + id="proxySettings" rows="30" /> + @can('update', $server) + <x-modal-confirmation title="Reset Proxy Configuration?" 
+ buttonTitle="Reset configuration to default" isErrorButton + submitAction="resetProxyConfiguration" :actions="[ + 'Reset proxy configuration to default settings', + 'All custom configurations will be lost', + 'Custom ports and entrypoints will be removed', + ]" + confirmationText="{{ $server->name }}" + confirmationLabel="Please confirm by entering the server name below" + shortConfirmationLabel="Server Name" step2ButtonText="Reset Configuration" + :confirmWithPassword="false" :confirmWithText="true"> + </x-modal-confirmation> + @endcan </div> @endif </div> diff --git a/routes/web.php b/routes/web.php index 02b23cc37..e6567daad 100644 --- a/routes/web.php +++ b/routes/web.php @@ -326,7 +326,11 @@ 'root' => '/', ]); if (! $disk->exists($filename)) { - return response()->json(['message' => 'Backup not found.'], 404); + if ($execution->scheduledDatabaseBackup->disable_local_backup === true && $execution->scheduledDatabaseBackup->save_s3 === true) { + return response()->json(['message' => 'Backup not available locally, but available on S3.'], 404); + } + + return response()->json(['message' => 'Backup not found locally on the server.'], 404); } return new StreamedResponse(function () use ($disk, $filename) { diff --git a/templates/compose/appwrite.yaml b/templates/compose/appwrite.yaml index 1645eba84..07f7336e1 100644 --- a/templates/compose/appwrite.yaml +++ b/templates/compose/appwrite.yaml @@ -23,6 +23,7 @@ services: environment: - SERVICE_URL_APPWRITE=/ - _APP_ENV=${_APP_ENV:-production} + - _APP_EDITION=${_APP_EDITION:-self-hosted} - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE:-6} - _APP_LOCALE=${_APP_LOCALE:-en} - _APP_COMPRESSION_MIN_SIZE_BYTES=${_APP_COMPRESSION_MIN_SIZE_BYTES} @@ -41,11 +42,14 @@ services: - _APP_OPTIONS_FORCE_HTTPS=${_APP_OPTIONS_FORCE_HTTPS:-disabled} - _APP_OPTIONS_ROUTER_FORCE_HTTPS=${_APP_OPTIONS_ROUTER_FORCE_HTTPS:-disabled} - _APP_OPENSSL_KEY_V1=$SERVICE_PASSWORD_64_APPWRITE - - _APP_DOMAIN=$SERVICE_URL_APPWRITE + - 
_APP_CONSOLE_DOMAIN=${_APP_CONSOLE_DOMAIN} + - _APP_DOMAIN=${_APP_DOMAIN:-$SERVICE_FQDN_APPWRITE} - _APP_DOMAIN_TARGET_CNAME=${_APP_DOMAIN_TARGET_CNAME:-localhost} - _APP_DOMAIN_TARGET_AAAA=${_APP_DOMAIN_TARGET_AAAA:-::1} - _APP_DOMAIN_TARGET_A=${_APP_DOMAIN_TARGET_A:-127.0.0.1} - - _APP_DOMAIN_FUNCTIONS=$SERVICE_URL_APPWRITE + - _APP_DOMAIN_TARGET_CAA=${_APP_DOMAIN_TARGET_CAA} + - _APP_DOMAIN_FUNCTIONS=${_APP_DOMAIN_FUNCTIONS:-functions.$SERVICE_FQDN_APPWRITE} + - _APP_DNS=${_APP_DNS} - _APP_REDIS_HOST=${_APP_REDIS_HOST:-appwrite-redis} - _APP_REDIS_PORT=${_APP_REDIS_PORT:-6379} - _APP_REDIS_USER=${_APP_REDIS_USER} @@ -96,7 +100,7 @@ services: - _APP_COMPUTE_MEMORY=${_APP_COMPUTE_MEMORY:-0} - _APP_FUNCTIONS_RUNTIMES=${_APP_FUNCTIONS_RUNTIMES:-node-20.0,php-8.2,python-3.11,ruby-3.2} - _APP_SITES_RUNTIMES=${_APP_SITES_RUNTIMES} - - _APP_DOMAIN_SITES=${_APP_DOMAIN_SITES:-appwrite.network} + - _APP_DOMAIN_SITES=${_APP_DOMAIN_SITES:-sites.$SERVICE_FQDN_APPWRITE} - _APP_EXECUTOR_SECRET=$SERVICE_PASSWORD_64_APPWRITE - _APP_EXECUTOR_HOST=${_APP_EXECUTOR_HOST:-http://appwrite-executor/v1} - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG} @@ -124,9 +128,20 @@ services: - _APP_MIGRATIONS_FIREBASE_CLIENT_ID=${_APP_MIGRATIONS_FIREBASE_CLIENT_ID} - _APP_MIGRATIONS_FIREBASE_CLIENT_SECRET=${_APP_MIGRATIONS_FIREBASE_CLIENT_SECRET} - _APP_ASSISTANT_OPENAI_API_KEY=${_APP_ASSISTANT_OPENAI_API_KEY} + - _APP_MESSAGE_SMS_TEST_DSN=${_APP_MESSAGE_SMS_TEST_DSN} + - _APP_MESSAGE_EMAIL_TEST_DSN=${_APP_MESSAGE_EMAIL_TEST_DSN} + - _APP_MESSAGE_PUSH_TEST_DSN=${_APP_MESSAGE_PUSH_TEST_DSN} + - _APP_CONSOLE_COUNTRIES_DENYLIST=${_APP_CONSOLE_COUNTRIES_DENYLIST} + - _APP_EXPERIMENT_LOGGING_PROVIDER=${_APP_EXPERIMENT_LOGGING_PROVIDER} + - _APP_EXPERIMENT_LOGGING_CONFIG=${_APP_EXPERIMENT_LOGGING_CONFIG} + - _APP_DATABASE_SHARED_TABLES=${_APP_DATABASE_SHARED_TABLES} + - _APP_DATABASE_SHARED_TABLES_V1=${_APP_DATABASE_SHARED_TABLES_V1} + - _APP_DATABASE_SHARED_NAMESPACE=${_APP_DATABASE_SHARED_NAMESPACE} + - 
_APP_FUNCTIONS_CREATION_ABUSE_LIMIT=${_APP_FUNCTIONS_CREATION_ABUSE_LIMIT} + - _APP_CUSTOM_DOMAIN_DENY_LIST=${_APP_CUSTOM_DOMAIN_DENY_LIST} appwrite-console: - image: appwrite/console:6.0.13 + image: appwrite/console:6.1.28 container_name: appwrite-console environment: - SERVICE_URL_APPWRITE=/console @@ -156,6 +171,7 @@ services: - _APP_DB_PASS=$SERVICE_PASSWORD_MARIADB - _APP_USAGE_STATS=${_APP_USAGE_STATS:-enabled} - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG} + - _APP_DATABASE_SHARED_TABLES=${_APP_DATABASE_SHARED_TABLES} appwrite-worker-audits: image: appwrite/appwrite:1.7.4 @@ -178,6 +194,7 @@ services: - _APP_DB_USER=$SERVICE_USER_MARIADB - _APP_DB_PASS=$SERVICE_PASSWORD_MARIADB - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG} + - _APP_DATABASE_SHARED_TABLES=${_APP_DATABASE_SHARED_TABLES} appwrite-worker-webhooks: image: appwrite/appwrite:1.7.4 @@ -202,6 +219,8 @@ services: - _APP_REDIS_USER=${_APP_REDIS_USER} - _APP_REDIS_PASS=${_APP_REDIS_PASS} - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG} + - _APP_WEBHOOK_MAX_FAILED_ATTEMPTS=${_APP_WEBHOOK_MAX_FAILED_ATTEMPTS} + - _APP_DATABASE_SHARED_TABLES=${_APP_DATABASE_SHARED_TABLES} appwrite-worker-deletes: image: appwrite/appwrite:1.7.4 @@ -255,12 +274,11 @@ services: - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG} - _APP_EXECUTOR_SECRET=$SERVICE_PASSWORD_64_APPWRITE - _APP_EXECUTOR_HOST=${_APP_EXECUTOR_HOST:-http://appwrite-executor/v1} - - _APP_MAINTENANCE_RETENTION_ABUSE=${_APP_MAINTENANCE_RETENTION_ABUSE:-86400} + - _APP_DATABASE_SHARED_TABLES=${_APP_DATABASE_SHARED_TABLES} + - _APP_DATABASE_SHARED_TABLES_V1=${_APP_DATABASE_SHARED_TABLES_V1} + - _APP_EMAIL_CERTIFICATES=${_APP_EMAIL_CERTIFICATES} - _APP_MAINTENANCE_RETENTION_AUDIT=${_APP_MAINTENANCE_RETENTION_AUDIT:-1209600} - _APP_MAINTENANCE_RETENTION_AUDIT_CONSOLE=${_APP_MAINTENANCE_RETENTION_AUDIT_CONSOLE} - - _APP_MAINTENANCE_RETENTION_EXECUTION=${_APP_MAINTENANCE_RETENTION_EXECUTION:-1209600} - - 
_APP_SYSTEM_SECURITY_EMAIL_ADDRESS=${_APP_SYSTEM_SECURITY_EMAIL_ADDRESS} - - _APP_EMAIL_CERTIFICATES=${_APP_EMAIL_CERTIFICATES} appwrite-worker-databases: image: appwrite/appwrite:1.7.4 @@ -283,6 +301,9 @@ services: - _APP_DB_USER=$SERVICE_USER_MARIADB - _APP_DB_PASS=$SERVICE_PASSWORD_MARIADB - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG} + - _APP_WORKERS_NUM=${_APP_WORKERS_NUM} + - _APP_QUEUE_NAME=${_APP_QUEUE_NAME} + - _APP_DATABASE_SHARED_TABLES=${_APP_DATABASE_SHARED_TABLES} appwrite-worker-builds: image: appwrite/appwrite:1.7.4 @@ -323,7 +344,7 @@ services: - _APP_COMPUTE_SIZE_LIMIT=${_APP_COMPUTE_SIZE_LIMIT:-30000000} - _APP_OPTIONS_FORCE_HTTPS=${_APP_OPTIONS_FORCE_HTTPS:-disabled} - _APP_OPTIONS_ROUTER_FORCE_HTTPS=${_APP_OPTIONS_ROUTER_FORCE_HTTPS:-disabled} - - _APP_DOMAIN=$SERVICE_URL_APPWRITE + - _APP_DOMAIN=${_APP_DOMAIN:-$SERVICE_FQDN_APPWRITE} - _APP_STORAGE_DEVICE=${_APP_STORAGE_DEVICE:-local} - _APP_STORAGE_S3_ACCESS_KEY=${_APP_STORAGE_S3_ACCESS_KEY} - _APP_STORAGE_S3_SECRET=${_APP_STORAGE_S3_SECRET} @@ -346,7 +367,10 @@ services: - _APP_STORAGE_WASABI_SECRET=${_APP_STORAGE_WASABI_SECRET} - _APP_STORAGE_WASABI_REGION=${_APP_STORAGE_WASABI_REGION:-eu-central-1} - _APP_STORAGE_WASABI_BUCKET=${_APP_STORAGE_WASABI_BUCKET} - - _APP_DOMAIN_SITES=${_APP_DOMAIN_SITES} + - _APP_DATABASE_SHARED_TABLES=${_APP_DATABASE_SHARED_TABLES} + - _APP_DOMAIN_SITES=${_APP_DOMAIN_SITES:-sites.$SERVICE_FQDN_APPWRITE} + - _APP_BROWSER_HOST=${_APP_BROWSER_HOST} + - _APP_CONSOLE_DOMAIN=${_APP_CONSOLE_DOMAIN} appwrite-worker-certificates: image: appwrite/appwrite:1.7.4 @@ -362,11 +386,13 @@ services: - _APP_ENV=${_APP_ENV:-production} - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE:-6} - _APP_OPENSSL_KEY_V1=$SERVICE_PASSWORD_64_APPWRITE - - _APP_DOMAIN=$SERVICE_URL_APPWRITE + - _APP_DOMAIN=${_APP_DOMAIN:-$SERVICE_FQDN_APPWRITE} - _APP_DOMAIN_TARGET_CNAME=${_APP_DOMAIN_TARGET_CNAME} - _APP_DOMAIN_TARGET_AAAA=${_APP_DOMAIN_TARGET_AAAA} - _APP_DOMAIN_TARGET_A=${_APP_DOMAIN_TARGET_A} - 
- _APP_DOMAIN_FUNCTIONS=$SERVICE_URL_APPWRITE + - _APP_DOMAIN_TARGET_CAA=${_APP_DOMAIN_TARGET_CAA} + - _APP_DOMAIN_FUNCTIONS=${_APP_DOMAIN_FUNCTIONS:-functions.$SERVICE_FQDN_APPWRITE} + - _APP_DNS=${_APP_DNS} - _APP_EMAIL_CERTIFICATES=${_APP_EMAIL_CERTIFICATES:-enabled} - _APP_REDIS_HOST=${_APP_REDIS_HOST:-appwrite-redis} - _APP_REDIS_PORT=${_APP_REDIS_PORT:-6379} @@ -378,6 +404,7 @@ services: - _APP_DB_USER=$SERVICE_USER_MARIADB - _APP_DB_PASS=$SERVICE_PASSWORD_MARIADB - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG} + - _APP_DATABASE_SHARED_TABLES=${_APP_DATABASE_SHARED_TABLES} appwrite-worker-functions: image: appwrite/appwrite:1.7.4 @@ -391,7 +418,7 @@ services: - _APP_ENV=${_APP_ENV:-production} - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE:-6} - _APP_OPENSSL_KEY_V1=$SERVICE_PASSWORD_64_APPWRITE - - _APP_DOMAIN=$SERVICE_URL_APPWRITE + - _APP_DOMAIN=${_APP_DOMAIN:-$SERVICE_FQDN_APPWRITE} - _APP_OPTIONS_FORCE_HTTPS=${_APP_OPTIONS_FORCE_HTTPS:-disabled} - _APP_REDIS_HOST=${_APP_REDIS_HOST:-appwrite-redis} - _APP_REDIS_PORT=${_APP_REDIS_PORT:-6379} @@ -413,6 +440,8 @@ services: - _APP_DOCKER_HUB_USERNAME=${_APP_DOCKER_HUB_USERNAME} - _APP_DOCKER_HUB_PASSWORD=${_APP_DOCKER_HUB_PASSWORD} - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG} + - _APP_LOGGING_PROVIDER=${_APP_LOGGING_PROVIDER} + - _APP_DATABASE_SHARED_TABLES=${_APP_DATABASE_SHARED_TABLES} appwrite-worker-mails: image: appwrite/appwrite:1.7.4 @@ -420,6 +449,7 @@ services: container_name: appwrite-worker-mails depends_on: - appwrite-redis + - appwrite-mariadb environment: - _APP_ENV=${_APP_ENV:-production} - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE:-6} @@ -441,8 +471,9 @@ services: - _APP_SMTP_USERNAME=${_APP_SMTP_USERNAME} - _APP_SMTP_PASSWORD=${_APP_SMTP_PASSWORD} - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG} - - _APP_DOMAIN=$SERVICE_URL_APPWRITE + - _APP_DOMAIN=${_APP_DOMAIN:-$SERVICE_FQDN_APPWRITE} - _APP_OPTIONS_FORCE_HTTPS=${_APP_OPTIONS_FORCE_HTTPS:-disabled} + - 
_APP_DATABASE_SHARED_TABLES=${_APP_DATABASE_SHARED_TABLES} appwrite-worker-messaging: image: appwrite/appwrite:1.7.4 @@ -468,6 +499,7 @@ services: - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG} - _APP_SMS_FROM=${_APP_SMS_FROM} - _APP_SMS_PROVIDER=${_APP_SMS_PROVIDER} + - _APP_SMS_PROJECTS_DENY_LIST=${_APP_SMS_PROJECTS_DENY_LIST} - _APP_STORAGE_DEVICE=${_APP_STORAGE_DEVICE:-local} - _APP_STORAGE_S3_ACCESS_KEY=${_APP_STORAGE_S3_ACCESS_KEY} - _APP_STORAGE_S3_SECRET=${_APP_STORAGE_S3_SECRET} @@ -490,6 +522,7 @@ services: - _APP_STORAGE_WASABI_SECRET=${_APP_STORAGE_WASABI_SECRET} - _APP_STORAGE_WASABI_REGION=${_APP_STORAGE_WASABI_REGION:-eu-central-1} - _APP_STORAGE_WASABI_BUCKET=${_APP_STORAGE_WASABI_BUCKET} + - _APP_DATABASE_SHARED_TABLES=${_APP_DATABASE_SHARED_TABLES} appwrite-worker-migrations: image: appwrite/appwrite:1.7.4 @@ -503,10 +536,12 @@ services: - _APP_ENV=${_APP_ENV:-production} - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE:-6} - _APP_OPENSSL_KEY_V1=$SERVICE_PASSWORD_64_APPWRITE - - _APP_DOMAIN=$SERVICE_URL_APPWRITE + - _APP_DOMAIN=${_APP_DOMAIN:-$SERVICE_FQDN_APPWRITE} - _APP_DOMAIN_TARGET_CNAME=${_APP_DOMAIN_TARGET_CNAME} - _APP_DOMAIN_TARGET_AAAA=${_APP_DOMAIN_TARGET_AAAA} - _APP_DOMAIN_TARGET_A=${_APP_DOMAIN_TARGET_A} + - _APP_DOMAIN_TARGET_CAA=${_APP_DOMAIN_TARGET_CAA} + - _APP_DNS=${_APP_DNS} - _APP_EMAIL_SECURITY=${_APP_EMAIL_SECURITY:-certs@appwrite.io} - _APP_REDIS_HOST=${_APP_REDIS_HOST:-appwrite-redis} - _APP_REDIS_PORT=${_APP_REDIS_PORT:-6379} @@ -520,6 +555,7 @@ services: - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG} - _APP_MIGRATIONS_FIREBASE_CLIENT_ID=${_APP_MIGRATIONS_FIREBASE_CLIENT_ID} - _APP_MIGRATIONS_FIREBASE_CLIENT_SECRET=${_APP_MIGRATIONS_FIREBASE_CLIENT_SECRET} + - _APP_DATABASE_SHARED_TABLES=${_APP_DATABASE_SHARED_TABLES} appwrite-task-maintenance: image: appwrite/appwrite:1.7.4 @@ -530,11 +566,13 @@ services: environment: - _APP_ENV=${_APP_ENV:-production} - _APP_WORKER_PER_CORE=${_APP_WORKER_PER_CORE:-6} - - 
_APP_DOMAIN=$SERVICE_URL_APPWRITE + - _APP_DOMAIN=${_APP_DOMAIN:-$SERVICE_FQDN_APPWRITE} - _APP_DOMAIN_TARGET_CNAME=${_APP_DOMAIN_TARGET_CNAME} - _APP_DOMAIN_TARGET_AAAA=${_APP_DOMAIN_TARGET_AAAA} - _APP_DOMAIN_TARGET_A=${_APP_DOMAIN_TARGET_A} - - _APP_DOMAIN_FUNCTIONS=$SERVICE_URL_APPWRITE + - _APP_DOMAIN_TARGET_CAA=${_APP_DOMAIN_TARGET_CAA} + - _APP_DOMAIN_FUNCTIONS=${_APP_DOMAIN_FUNCTIONS:-functions.$SERVICE_FQDN_APPWRITE} + - _APP_DNS=${_APP_DNS} - _APP_OPENSSL_KEY_V1=$SERVICE_PASSWORD_64_APPWRITE - _APP_REDIS_HOST=${_APP_REDIS_HOST:-appwrite-redis} - _APP_REDIS_PORT=${_APP_REDIS_PORT:-6379} @@ -545,14 +583,16 @@ services: - _APP_DB_SCHEMA=${_APP_DB_SCHEMA:-appwrite} - _APP_DB_USER=$SERVICE_USER_MARIADB - _APP_DB_PASS=$SERVICE_PASSWORD_MARIADB - - _APP_MAINTENANCE_INTERVAL=${_APP_MAINTENANCE_INTERVAL} - - _APP_MAINTENANCE_RETENTION_EXECUTION=${_APP_MAINTENANCE_RETENTION_EXECUTION} + - _APP_MAINTENANCE_INTERVAL=${_APP_MAINTENANCE_INTERVAL:-86400} + - _APP_MAINTENANCE_RETENTION_EXECUTION=${_APP_MAINTENANCE_RETENTION_EXECUTION:-1209600} - _APP_MAINTENANCE_RETENTION_CACHE=${_APP_MAINTENANCE_RETENTION_CACHE:-2592000} - _APP_MAINTENANCE_RETENTION_ABUSE=${_APP_MAINTENANCE_RETENTION_ABUSE:-86400} - _APP_MAINTENANCE_RETENTION_AUDIT=${_APP_MAINTENANCE_RETENTION_AUDIT:-1209600} - _APP_MAINTENANCE_RETENTION_AUDIT_CONSOLE=${_APP_MAINTENANCE_RETENTION_AUDIT_CONSOLE} - _APP_MAINTENANCE_RETENTION_USAGE_HOURLY=${_APP_MAINTENANCE_RETENTION_USAGE_HOURLY:-8640000} - _APP_MAINTENANCE_RETENTION_SCHEDULES=${_APP_MAINTENANCE_RETENTION_SCHEDULES:-86400} + - _APP_MAINTENANCE_START_TIME=${_APP_MAINTENANCE_START_TIME} + - _APP_DATABASE_SHARED_TABLES=${_APP_DATABASE_SHARED_TABLES} appwrite-task-stats-resources: image: appwrite/appwrite:1.7.4 @@ -626,6 +666,7 @@ services: - _APP_USAGE_STATS=${_APP_USAGE_STATS:-enabled} - _APP_LOGGING_CONFIG=${_APP_LOGGING_CONFIG} - _APP_USAGE_AGGREGATION_INTERVAL=${_APP_USAGE_AGGREGATION_INTERVAL:-30} + - 
_APP_DATABASE_SHARED_TABLES=${_APP_DATABASE_SHARED_TABLES} appwrite-task-scheduler-functions: image: appwrite/appwrite:1.7.4 @@ -647,6 +688,7 @@ services: - _APP_DB_SCHEMA=${_APP_DB_SCHEMA:-appwrite} - _APP_DB_USER=$SERVICE_USER_MARIADB - _APP_DB_PASS=$SERVICE_PASSWORD_MARIADB + - _APP_DATABASE_SHARED_TABLES=${_APP_DATABASE_SHARED_TABLES} appwrite-task-scheduler-executions: image: appwrite/appwrite:1.7.4 @@ -668,6 +710,7 @@ services: - _APP_DB_SCHEMA=${_APP_DB_SCHEMA:-appwrite} - _APP_DB_USER=$SERVICE_USER_MARIADB - _APP_DB_PASS=$SERVICE_PASSWORD_MARIADB + - _APP_DATABASE_SHARED_TABLES=${_APP_DATABASE_SHARED_TABLES} appwrite-task-scheduler-messages: image: appwrite/appwrite:1.7.4 @@ -689,9 +732,10 @@ services: - _APP_DB_SCHEMA=${_APP_DB_SCHEMA:-appwrite} - _APP_DB_USER=$SERVICE_USER_MARIADB - _APP_DB_PASS=$SERVICE_PASSWORD_MARIADB + - _APP_DATABASE_SHARED_TABLES=${_APP_DATABASE_SHARED_TABLES} appwrite-assistant: - image: appwrite/assistant:0.4.0 + image: appwrite/assistant:0.8.3 container_name: appwrite-assistant environment: - _APP_ASSISTANT_OPENAI_API_KEY=${_APP_ASSISTANT_OPENAI_API_KEY} @@ -699,12 +743,13 @@ services: appwrite-browser: image: appwrite/browser:0.2.4 container_name: appwrite-browser + hostname: appwrite-browser openruntimes-executor: container_name: openruntimes-executor hostname: appwrite-executor stop_signal: SIGINT - image: openruntimes/executor:0.7.14 + image: openruntimes/executor:0.8.6 networks: - runtimes volumes: @@ -714,6 +759,7 @@ services: - appwrite-sites:/storage/sites:rw - /tmp:/tmp:rw environment: + - OPR_EXECUTOR_IMAGE_PULL=disabled - OPR_EXECUTOR_INACTIVE_TRESHOLD=${_APP_COMPUTE_INACTIVE_THRESHOLD} - OPR_EXECUTOR_MAINTENANCE_INTERVAL=${_APP_COMPUTE_MAINTENANCE_INTERVAL} - OPR_EXECUTOR_NETWORK=${_APP_COMPUTE_RUNTIMES_NETWORK:-runtimes} diff --git a/tests/Feature/IpAllowlistTest.php b/tests/Feature/IpAllowlistTest.php index 3454a9c9d..959dc757d 100644 --- a/tests/Feature/IpAllowlistTest.php +++ b/tests/Feature/IpAllowlistTest.php @@ 
-8,7 +8,7 @@ ]; foreach ($testCases as $case) { - $result = check_ip_against_allowlist($case['ip'], $case['allowlist']); + $result = checkIPAgainstAllowlist($case['ip'], $case['allowlist']); expect($result)->toBe($case['expected']); } }); @@ -24,7 +24,7 @@ ]; foreach ($testCases as $case) { - $result = check_ip_against_allowlist($case['ip'], $case['allowlist']); + $result = checkIPAgainstAllowlist($case['ip'], $case['allowlist']); expect($result)->toBe($case['expected']); } }); @@ -40,16 +40,16 @@ // Test 0.0.0.0 without subnet foreach ($testIps as $ip) { - $result = check_ip_against_allowlist($ip, ['0.0.0.0']); + $result = checkIPAgainstAllowlist($ip, ['0.0.0.0']); expect($result)->toBeTrue(); } // Test 0.0.0.0 with any subnet notation - should still allow all foreach ($testIps as $ip) { - expect(check_ip_against_allowlist($ip, ['0.0.0.0/0']))->toBeTrue(); - expect(check_ip_against_allowlist($ip, ['0.0.0.0/8']))->toBeTrue(); - expect(check_ip_against_allowlist($ip, ['0.0.0.0/24']))->toBeTrue(); - expect(check_ip_against_allowlist($ip, ['0.0.0.0/32']))->toBeTrue(); + expect(checkIPAgainstAllowlist($ip, ['0.0.0.0/0']))->toBeTrue(); + expect(checkIPAgainstAllowlist($ip, ['0.0.0.0/8']))->toBeTrue(); + expect(checkIPAgainstAllowlist($ip, ['0.0.0.0/24']))->toBeTrue(); + expect(checkIPAgainstAllowlist($ip, ['0.0.0.0/32']))->toBeTrue(); } }); @@ -66,44 +66,44 @@ ]; foreach ($testCases as $case) { - $result = check_ip_against_allowlist($case['ip'], $allowlist); + $result = checkIPAgainstAllowlist($case['ip'], $allowlist); expect($result)->toBe($case['expected']); } }); test('IP allowlist handles empty and invalid entries', function () { // Empty allowlist blocks all - expect(check_ip_against_allowlist('192.168.1.1', []))->toBeFalse(); - expect(check_ip_against_allowlist('192.168.1.1', ['']))->toBeFalse(); + expect(checkIPAgainstAllowlist('192.168.1.1', []))->toBeFalse(); + expect(checkIPAgainstAllowlist('192.168.1.1', ['']))->toBeFalse(); // Handles spaces - 
expect(check_ip_against_allowlist('192.168.1.100', [' 192.168.1.100 ']))->toBeTrue(); - expect(check_ip_against_allowlist('10.0.0.5', [' 10.0.0.0/8 ']))->toBeTrue(); + expect(checkIPAgainstAllowlist('192.168.1.100', [' 192.168.1.100 ']))->toBeTrue(); + expect(checkIPAgainstAllowlist('10.0.0.5', [' 10.0.0.0/8 ']))->toBeTrue(); // Invalid entries are skipped - expect(check_ip_against_allowlist('192.168.1.1', ['invalid.ip']))->toBeFalse(); - expect(check_ip_against_allowlist('192.168.1.1', ['192.168.1.0/33']))->toBeFalse(); // Invalid mask - expect(check_ip_against_allowlist('192.168.1.1', ['192.168.1.0/-1']))->toBeFalse(); // Invalid mask + expect(checkIPAgainstAllowlist('192.168.1.1', ['invalid.ip']))->toBeFalse(); + expect(checkIPAgainstAllowlist('192.168.1.1', ['192.168.1.0/33']))->toBeFalse(); // Invalid mask + expect(checkIPAgainstAllowlist('192.168.1.1', ['192.168.1.0/-1']))->toBeFalse(); // Invalid mask }); test('IP allowlist with various subnet sizes', function () { // /32 - single host - expect(check_ip_against_allowlist('192.168.1.1', ['192.168.1.1/32']))->toBeTrue(); - expect(check_ip_against_allowlist('192.168.1.2', ['192.168.1.1/32']))->toBeFalse(); + expect(checkIPAgainstAllowlist('192.168.1.1', ['192.168.1.1/32']))->toBeTrue(); + expect(checkIPAgainstAllowlist('192.168.1.2', ['192.168.1.1/32']))->toBeFalse(); // /31 - point-to-point link - expect(check_ip_against_allowlist('192.168.1.0', ['192.168.1.0/31']))->toBeTrue(); - expect(check_ip_against_allowlist('192.168.1.1', ['192.168.1.0/31']))->toBeTrue(); - expect(check_ip_against_allowlist('192.168.1.2', ['192.168.1.0/31']))->toBeFalse(); + expect(checkIPAgainstAllowlist('192.168.1.0', ['192.168.1.0/31']))->toBeTrue(); + expect(checkIPAgainstAllowlist('192.168.1.1', ['192.168.1.0/31']))->toBeTrue(); + expect(checkIPAgainstAllowlist('192.168.1.2', ['192.168.1.0/31']))->toBeFalse(); // /16 - class B - expect(check_ip_against_allowlist('172.16.0.1', ['172.16.0.0/16']))->toBeTrue(); - 
expect(check_ip_against_allowlist('172.16.255.255', ['172.16.0.0/16']))->toBeTrue(); - expect(check_ip_against_allowlist('172.17.0.1', ['172.16.0.0/16']))->toBeFalse(); + expect(checkIPAgainstAllowlist('172.16.0.1', ['172.16.0.0/16']))->toBeTrue(); + expect(checkIPAgainstAllowlist('172.16.255.255', ['172.16.0.0/16']))->toBeTrue(); + expect(checkIPAgainstAllowlist('172.17.0.1', ['172.16.0.0/16']))->toBeFalse(); // /0 - all addresses - expect(check_ip_against_allowlist('1.1.1.1', ['0.0.0.0/0']))->toBeTrue(); - expect(check_ip_against_allowlist('255.255.255.255', ['0.0.0.0/0']))->toBeTrue(); + expect(checkIPAgainstAllowlist('1.1.1.1', ['0.0.0.0/0']))->toBeTrue(); + expect(checkIPAgainstAllowlist('255.255.255.255', ['0.0.0.0/0']))->toBeTrue(); }); test('IP allowlist comma-separated string input', function () { @@ -111,10 +111,10 @@ $allowlistString = '192.168.1.100,10.0.0.0/8,172.16.0.0/16'; $allowlist = explode(',', $allowlistString); - expect(check_ip_against_allowlist('192.168.1.100', $allowlist))->toBeTrue(); - expect(check_ip_against_allowlist('10.5.5.5', $allowlist))->toBeTrue(); - expect(check_ip_against_allowlist('172.16.10.10', $allowlist))->toBeTrue(); - expect(check_ip_against_allowlist('8.8.8.8', $allowlist))->toBeFalse(); + expect(checkIPAgainstAllowlist('192.168.1.100', $allowlist))->toBeTrue(); + expect(checkIPAgainstAllowlist('10.5.5.5', $allowlist))->toBeTrue(); + expect(checkIPAgainstAllowlist('172.16.10.10', $allowlist))->toBeTrue(); + expect(checkIPAgainstAllowlist('8.8.8.8', $allowlist))->toBeFalse(); }); test('ValidIpOrCidr validation rule', function () { diff --git a/tests/Unit/PrivateKeyStorageTest.php b/tests/Unit/PrivateKeyStorageTest.php new file mode 100644 index 000000000..00f39e3df --- /dev/null +++ b/tests/Unit/PrivateKeyStorageTest.php @@ -0,0 +1,316 @@ +<?php + +use App\Models\PrivateKey; +use Illuminate\Foundation\Testing\RefreshDatabase; +use Illuminate\Support\Facades\Storage; +use Tests\TestCase; + +class PrivateKeyStorageTest 
extends TestCase +{ + use RefreshDatabase; + + protected function setUp(): void + { + parent::setUp(); + + // Set up a test team for the tests + $this->actingAs(\App\Models\User::factory()->create()); + } + + protected function getValidPrivateKey(): string + { + return '-----BEGIN OPENSSH PRIVATE KEY----- +b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW +QyNTUxOQAAACBbhpqHhqv6aI67Mj9abM3DVbmcfYhZAhC7ca4d9UCevAAAAJi/QySHv0Mk +hwAAAAtzc2gtZWQyNTUxOQAAACBbhpqHhqv6aI67Mj9abM3DVbmcfYhZAhC7ca4d9UCevA +AAAECBQw4jg1WRT2IGHMncCiZhURCts2s24HoDS0thHnnRKVuGmoeGq/pojrsyP1pszcNV +uZx9iFkCELtxrh31QJ68AAAAEXNhaWxANzZmZjY2ZDJlMmRkAQIDBA== +-----END OPENSSH PRIVATE KEY-----'; + } + + /** @test */ + public function it_successfully_stores_private_key_in_filesystem() + { + Storage::fake('ssh-keys'); + + $privateKey = PrivateKey::createAndStore([ + 'name' => 'Test Key', + 'description' => 'Test Description', + 'private_key' => $this->getValidPrivateKey(), + 'team_id' => currentTeam()->id, + ]); + + $this->assertDatabaseHas('private_keys', [ + 'id' => $privateKey->id, + 'name' => 'Test Key', + ]); + + $filename = "ssh_key@{$privateKey->uuid}"; + Storage::disk('ssh-keys')->assertExists($filename); + + $storedContent = Storage::disk('ssh-keys')->get($filename); + $this->assertEquals($privateKey->private_key, $storedContent); + } + + /** @test */ + public function it_throws_exception_when_storage_fails() + { + Storage::fake('ssh-keys'); + + // Mock Storage::put to return false (simulating storage failure) + Storage::shouldReceive('disk') + ->with('ssh-keys') + ->andReturn( + \Mockery::mock() + ->shouldReceive('exists') + ->andReturn(true) + ->shouldReceive('put') + ->with(\Mockery::any(), 'test') + ->andReturn(true) + ->shouldReceive('delete') + ->with(\Mockery::any()) + ->andReturn(true) + ->shouldReceive('put') + ->with(\Mockery::pattern('/ssh_key@/'), \Mockery::any()) + ->andReturn(false) // Simulate storage failure + ->getMock() + ); + + 
$this->expectException(\Exception::class); + $this->expectExceptionMessage('Failed to write SSH key to filesystem'); + + PrivateKey::createAndStore([ + 'name' => 'Test Key', + 'description' => 'Test Description', + 'private_key' => $this->getValidPrivateKey(), + 'team_id' => currentTeam()->id, + ]); + + // Assert that no database record was created due to transaction rollback + $this->assertDatabaseMissing('private_keys', [ + 'name' => 'Test Key', + ]); + } + + /** @test */ + public function it_throws_exception_when_storage_directory_is_not_writable() + { + Storage::fake('ssh-keys'); + + // Mock Storage disk to simulate directory not writable + Storage::shouldReceive('disk') + ->with('ssh-keys') + ->andReturn( + \Mockery::mock() + ->shouldReceive('exists') + ->with('') + ->andReturn(true) + ->shouldReceive('put') + ->with(\Mockery::pattern('/\.test_write_/'), 'test') + ->andReturn(false) // Simulate directory not writable + ->getMock() + ); + + $this->expectException(\Exception::class); + $this->expectExceptionMessage('SSH keys storage directory is not writable'); + + PrivateKey::createAndStore([ + 'name' => 'Test Key', + 'description' => 'Test Description', + 'private_key' => $this->getValidPrivateKey(), + 'team_id' => currentTeam()->id, + ]); + } + + /** @test */ + public function it_creates_storage_directory_if_not_exists() + { + Storage::fake('ssh-keys'); + + // Mock Storage disk to simulate directory not existing, then being created + Storage::shouldReceive('disk') + ->with('ssh-keys') + ->andReturn( + \Mockery::mock() + ->shouldReceive('exists') + ->with('') + ->andReturn(false) // Directory doesn't exist + ->shouldReceive('makeDirectory') + ->with('') + ->andReturn(true) // Successfully create directory + ->shouldReceive('put') + ->with(\Mockery::pattern('/\.test_write_/'), 'test') + ->andReturn(true) // Directory is writable after creation + ->shouldReceive('delete') + ->with(\Mockery::pattern('/\.test_write_/')) + ->andReturn(true) + ->shouldReceive('put') 
+ ->with(\Mockery::pattern('/ssh_key@/'), \Mockery::any()) + ->andReturn(true) + ->shouldReceive('exists') + ->with(\Mockery::pattern('/ssh_key@/')) + ->andReturn(true) + ->shouldReceive('get') + ->with(\Mockery::pattern('/ssh_key@/')) + ->andReturn($this->getValidPrivateKey()) + ->getMock() + ); + + $privateKey = PrivateKey::createAndStore([ + 'name' => 'Test Key', + 'description' => 'Test Description', + 'private_key' => $this->getValidPrivateKey(), + 'team_id' => currentTeam()->id, + ]); + + $this->assertDatabaseHas('private_keys', [ + 'id' => $privateKey->id, + 'name' => 'Test Key', + ]); + } + + /** @test */ + public function it_throws_exception_when_file_content_verification_fails() + { + Storage::fake('ssh-keys'); + + // Mock Storage disk to simulate file being created but with wrong content + Storage::shouldReceive('disk') + ->with('ssh-keys') + ->andReturn( + \Mockery::mock() + ->shouldReceive('exists') + ->with('') + ->andReturn(true) + ->shouldReceive('put') + ->with(\Mockery::pattern('/\.test_write_/'), 'test') + ->andReturn(true) + ->shouldReceive('delete') + ->with(\Mockery::pattern('/\.test_write_/')) + ->andReturn(true) + ->shouldReceive('put') + ->with(\Mockery::pattern('/ssh_key@/'), \Mockery::any()) + ->andReturn(true) // File created successfully + ->shouldReceive('exists') + ->with(\Mockery::pattern('/ssh_key@/')) + ->andReturn(true) // File exists + ->shouldReceive('get') + ->with(\Mockery::pattern('/ssh_key@/')) + ->andReturn('corrupted content') // But content is wrong + ->shouldReceive('delete') + ->with(\Mockery::pattern('/ssh_key@/')) + ->andReturn(true) // Clean up bad file + ->getMock() + ); + + $this->expectException(\Exception::class); + $this->expectExceptionMessage('SSH key file content verification failed'); + + PrivateKey::createAndStore([ + 'name' => 'Test Key', + 'description' => 'Test Description', + 'private_key' => $this->getValidPrivateKey(), + 'team_id' => currentTeam()->id, + ]); + + // Assert that no database record was 
created due to transaction rollback + $this->assertDatabaseMissing('private_keys', [ + 'name' => 'Test Key', + ]); + } + + /** @test */ + public function it_successfully_deletes_private_key_from_filesystem() + { + Storage::fake('ssh-keys'); + + $privateKey = PrivateKey::createAndStore([ + 'name' => 'Test Key', + 'description' => 'Test Description', + 'private_key' => $this->getValidPrivateKey(), + 'team_id' => currentTeam()->id, + ]); + + $filename = "ssh_key@{$privateKey->uuid}"; + Storage::disk('ssh-keys')->assertExists($filename); + + $privateKey->delete(); + + Storage::disk('ssh-keys')->assertMissing($filename); + } + + /** @test */ + public function it_handles_database_transaction_rollback_on_storage_failure() + { + Storage::fake('ssh-keys'); + + // Count initial private keys + $initialCount = PrivateKey::count(); + + // Mock storage failure after database save + Storage::shouldReceive('disk') + ->with('ssh-keys') + ->andReturn( + \Mockery::mock() + ->shouldReceive('exists') + ->with('') + ->andReturn(true) + ->shouldReceive('put') + ->with(\Mockery::pattern('/\.test_write_/'), 'test') + ->andReturn(true) + ->shouldReceive('delete') + ->with(\Mockery::pattern('/\.test_write_/')) + ->andReturn(true) + ->shouldReceive('put') + ->with(\Mockery::pattern('/ssh_key@/'), \Mockery::any()) + ->andReturn(false) // Storage fails + ->getMock() + ); + + try { + PrivateKey::createAndStore([ + 'name' => 'Test Key', + 'description' => 'Test Description', + 'private_key' => $this->getValidPrivateKey(), + 'team_id' => currentTeam()->id, + ]); + } catch (\Exception $e) { + // Expected exception + } + + // Assert that database was rolled back + $this->assertEquals($initialCount, PrivateKey::count()); + $this->assertDatabaseMissing('private_keys', [ + 'name' => 'Test Key', + ]); + } + + /** @test */ + public function it_successfully_updates_private_key_with_transaction() + { + Storage::fake('ssh-keys'); + + $privateKey = PrivateKey::createAndStore([ + 'name' => 'Test Key', + 
'description' => 'Test Description', + 'private_key' => $this->getValidPrivateKey(), + 'team_id' => currentTeam()->id, + ]); + + $newPrivateKey = str_replace('Test', 'Updated', $this->getValidPrivateKey()); + + $privateKey->updatePrivateKey([ + 'name' => 'Updated Key', + 'private_key' => $newPrivateKey, + ]); + + $this->assertDatabaseHas('private_keys', [ + 'id' => $privateKey->id, + 'name' => 'Updated Key', + ]); + + $filename = "ssh_key@{$privateKey->uuid}"; + $storedContent = Storage::disk('ssh-keys')->get($filename); + $this->assertEquals($newPrivateKey, $storedContent); + } +} diff --git a/tests/Unit/SshRetryMechanismTest.php b/tests/Unit/SshRetryMechanismTest.php new file mode 100644 index 000000000..23e1b867f --- /dev/null +++ b/tests/Unit/SshRetryMechanismTest.php @@ -0,0 +1,189 @@ +<?php + +namespace Tests\Unit; + +use App\Helpers\SshRetryHandler; +use App\Traits\SshRetryable; +use Tests\TestCase; + +class SshRetryMechanismTest extends TestCase +{ + public function test_ssh_retry_handler_exists() + { + $this->assertTrue(class_exists(\App\Helpers\SshRetryHandler::class)); + } + + public function test_ssh_retryable_trait_exists() + { + $this->assertTrue(trait_exists(\App\Traits\SshRetryable::class)); + } + + public function test_retry_on_ssh_connection_errors() + { + $handler = new class + { + use SshRetryable; + + // Make methods public for testing + public function test_is_retryable_ssh_error($error) + { + return $this->isRetryableSshError($error); + } + }; + + // Test various SSH error patterns + $sshErrors = [ + 'kex_exchange_identification: read: Connection reset by peer', + 'Connection refused', + 'Connection timed out', + 'ssh_exchange_identification: Connection closed by remote host', + 'Broken pipe', + 'No route to host', + 'Network is unreachable', + ]; + + foreach ($sshErrors as $error) { + $this->assertTrue( + $handler->test_is_retryable_ssh_error($error), + "Failed to identify as retryable: $error" + ); + } + } + + public function 
test_non_ssh_errors_are_not_retryable() + { + $handler = new class + { + use SshRetryable; + + // Make methods public for testing + public function test_is_retryable_ssh_error($error) + { + return $this->isRetryableSshError($error); + } + }; + + // Test non-SSH errors + $nonSshErrors = [ + 'Command not found', + 'Permission denied', + 'File not found', + 'Syntax error', + 'Invalid argument', + ]; + + foreach ($nonSshErrors as $error) { + $this->assertFalse( + $handler->test_is_retryable_ssh_error($error), + "Incorrectly identified as retryable: $error" + ); + } + } + + public function test_exponential_backoff_calculation() + { + $handler = new class + { + use SshRetryable; + + // Make method public for testing + public function test_calculate_retry_delay($attempt) + { + return $this->calculateRetryDelay($attempt); + } + }; + + // Test with default config values + config(['constants.ssh.retry_base_delay' => 2]); + config(['constants.ssh.retry_max_delay' => 30]); + config(['constants.ssh.retry_multiplier' => 2]); + + // Attempt 0: 2 seconds + $this->assertEquals(2, $handler->test_calculate_retry_delay(0)); + + // Attempt 1: 4 seconds + $this->assertEquals(4, $handler->test_calculate_retry_delay(1)); + + // Attempt 2: 8 seconds + $this->assertEquals(8, $handler->test_calculate_retry_delay(2)); + + // Attempt 3: 16 seconds + $this->assertEquals(16, $handler->test_calculate_retry_delay(3)); + + // Attempt 4: Should be capped at 30 seconds + $this->assertEquals(30, $handler->test_calculate_retry_delay(4)); + + // Attempt 5: Should still be capped at 30 seconds + $this->assertEquals(30, $handler->test_calculate_retry_delay(5)); + } + + public function test_retry_succeeds_after_failures() + { + $attemptCount = 0; + + config(['constants.ssh.max_retries' => 3]); + + // Simulate a function that fails twice then succeeds using the public static method + $result = SshRetryHandler::retry( + function () use (&$attemptCount) { + $attemptCount++; + if ($attemptCount < 3) { + throw 
new \RuntimeException('kex_exchange_identification: Connection reset by peer'); + } + + return 'success'; + }, + ['test' => 'retry_test'], + true + ); + + $this->assertEquals('success', $result); + $this->assertEquals(3, $attemptCount); + } + + public function test_retry_fails_after_max_attempts() + { + $attemptCount = 0; + + config(['constants.ssh.max_retries' => 3]); + + $this->expectException(\RuntimeException::class); + $this->expectExceptionMessage('Connection reset by peer'); + + // Simulate a function that always fails using the public static method + SshRetryHandler::retry( + function () use (&$attemptCount) { + $attemptCount++; + throw new \RuntimeException('Connection reset by peer'); + }, + ['test' => 'retry_test'], + true + ); + } + + public function test_non_retryable_errors_fail_immediately() + { + $attemptCount = 0; + + config(['constants.ssh.max_retries' => 3]); + + $this->expectException(\RuntimeException::class); + $this->expectExceptionMessage('Command not found'); + + try { + // Simulate a non-retryable error using the public static method + SshRetryHandler::retry( + function () use (&$attemptCount) { + $attemptCount++; + throw new \RuntimeException('Command not found'); + }, + ['test' => 'non_retryable_test'], + true + ); + } catch (\RuntimeException $e) { + // Should only attempt once since it's not retryable + $this->assertEquals(1, $attemptCount); + throw $e; + } + } +}
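The `SshRetryMechanismTest` above pins down the retry semantics: only SSH transport errors are retried, backoff is exponential (base delay 2s, multiplier 2, capped at 30s), and a call that fails twice on a retryable error succeeds on the third attempt. A minimal sketch of that behavior, written in Python purely for illustration (function names and parameters here are hypothetical, not Coolify's actual `SshRetryHandler` API):

```python
import time

# Substring patterns treated as transient SSH transport failures,
# mirroring the error strings exercised by the test.
RETRYABLE_PATTERNS = (
    "Connection reset by peer",
    "Connection refused",
    "Connection timed out",
    "Broken pipe",
    "No route to host",
    "Network is unreachable",
    "ssh_exchange_identification",
    "kex_exchange_identification",
)

def is_retryable(error: str) -> bool:
    return any(p in error for p in RETRYABLE_PATTERNS)

def calculate_retry_delay(attempt: int, base: int = 2,
                          multiplier: int = 2, max_delay: int = 30) -> int:
    # Exponential backoff: base * multiplier**attempt, capped at max_delay.
    return min(base * multiplier ** attempt, max_delay)

def retry(fn, max_retries: int = 3, sleep=time.sleep):
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError as e:
            # Non-retryable errors and the final attempt propagate immediately.
            if not is_retryable(str(e)) or attempt == max_retries - 1:
                raise
            sleep(calculate_retry_delay(attempt))
```

With the test's config (base 2, multiplier 2, cap 30), the delay schedule for attempts 0..5 is 2, 4, 8, 16, 30, 30 — the same values asserted in `test_exponential_backoff_calculation` — and a function raising `kex_exchange_identification: Connection reset by peer` twice before returning `'success'` completes on the third attempt, as in `test_retry_succeeds_after_failures`.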