v4.0.0-beta.472 (#9492)

Andras Bacsai, 2026-04-09 12:14:09 +02:00, committed by GitHub
commit ec0668ce85 (GPG key ID: B5690EEEBB952194; no known key found for this signature)
90 changed files with 737 additions and 409 deletions


@ -9,9 +9,6 @@ body:
> [!IMPORTANT]
> **Please ensure you are using the latest version of Coolify before submitting an issue, as the bug may have already been fixed in a recent update.** (Of course, if you're experiencing an issue on the latest version that wasn't present in a previous version, please let us know.)
# 💎 Bounty Program (with [algora.io](https://console.algora.io/org/coollabsio/bounties/new))
- If you would like to prioritize the issue resolution, consider adding a bounty to this issue through our [Bounty Program](https://console.algora.io/org/coollabsio/bounties/new).
- type: textarea
attributes:
label: Error Message and Logs


@ -1,31 +0,0 @@
name: 💎 Enhancement Bounty
description: "Propose a new feature, service, or improvement with an attached bounty."
title: "[Enhancement]: "
labels: ["✨ Enhancement", "🔍 Triage"]
body:
- type: markdown
attributes:
value: |
> [!IMPORTANT]
> **This issue template is exclusively for proposing new features, services, or improvements with an attached bounty.** Enhancements without a bounty can be discussed in the appropriate category of [Github Discussions](https://github.com/coollabsio/coolify/discussions).
# 💎 Add a Bounty (with [algora.io](https://console.algora.io/org/coollabsio/bounties/new))
- [Click here to add the required bounty](https://console.algora.io/org/coollabsio/bounties/new)
- type: dropdown
attributes:
label: Request Type
description: Select the type of request you are making.
options:
- New Feature
- New Service
- Improvement
validations:
required: true
- type: textarea
attributes:
label: Description
description: Provide a detailed description of the feature, improvement, or service you are proposing.
validations:
required: true


@ -22,7 +22,7 @@ ## Category
## Preview
<!-- Screenshot or short video showing your changes in action. Mandatory for bounty claims and new features. -->
<!-- Screenshot or short video showing your changes in action. Mandatory for new features. -->
## AI Assistance


@ -212,7 +212,7 @@ #### Review Process
- Duplicate or superseded work
- Security or quality concerns
#### Code Quality, Testing, and Bounty Submissions
#### Code Quality and Testing
All contributions must adhere to the highest standards of code quality and testing:
- **Testing Required**: Every PR must include steps to test your changes. Untested code will not be reviewed or merged.
@ -220,15 +220,6 @@ #### Code Quality, Testing, and Bounty Submissions
- **Code Standards**: Follow the existing code style, conventions, and patterns in the codebase.
- **No AI-Generated Code**: Do not submit code generated by AI tools without fully understanding and verifying it. AI-generated submissions that are untested or incorrect will be rejected immediately.
**For PRs that claim bounties:**
- **Eligibility**: Bounty PRs must strictly follow all guidelines above. Untested, poorly described, or non-compliant PRs will not qualify for bounty rewards.
- **Original Work**: Bounties are for genuine contributions. Submitting AI-generated or copied code solely for bounty claims will result in disqualification and potential removal from contributing.
- **Quality Standards**: Bounty submissions are held to even higher standards. Ensure comprehensive testing, clear documentation, and alignment with project goals. When maintainers review the changes, they should work as expected (the things mentioned in the PR description plus what the bounty issuer needs).
- **Claim Process**: Only successfully merged PRs that pass all reviews (core maintainers + bounty issuer) and meet bounty criteria will be awarded. Follow the issue's bounty guidelines precisely.
- **Prioritization**: Contributor PRs are prioritized over first-time or new contributors.
- **Developer Experience**: We highly advise beginners to avoid participating in bug bounties for our codebase. Most of the time, they don't know what they are changing, how it affects other parts of the system, or if their changes are even correct.
- **Review Comments**: When maintainers ask questions, you should be able to respond properly without generic or AI-generated fluff.
## Development Notes


@ -4,7 +4,7 @@ # Coolify
An open-source & self-hostable Heroku / Netlify / Vercel alternative.
![Latest Release Version](https://img.shields.io/badge/dynamic/json?labelColor=grey&color=6366f1&label=Latest%20released%20version&url=https%3A%2F%2Fcdn.coollabs.io%2Fcoolify%2Fversions.json&query=coolify.v4.version&style=for-the-badge
) [![Bounty Issues](https://img.shields.io/static/v1?labelColor=grey&color=6366f1&label=Algora&message=%F0%9F%92%8E+Bounty+issues&style=for-the-badge)](https://console.algora.io/org/coollabsio/bounties/new)
)
</div>
## About the Project
@ -65,7 +65,6 @@ ### Huge Sponsors
### Big Sponsors
* [23M](https://23m.com?ref=coolify.io) - Your experts for high-availability hosting solutions!
* [Algora](https://algora.io?ref=coolify.io) - Open source contribution platform
* [American Cloud](https://americancloud.com?ref=coolify.io) - US-based cloud infrastructure services
* [Arcjet](https://arcjet.com?ref=coolify.io) - Advanced web security and performance solutions
* [BC Direct](https://bc.direct?ref=coolify.io) - Your trusted technology consulting partner


@ -40,10 +40,11 @@ class ValidationPatterns
* Blocks dangerous shell metacharacters: ; | ` $ ( ) > < newlines and carriage returns
* Allows & for command chaining (&&) which is common in multi-step build commands
* Allows double quotes for build args with spaces (e.g. --build-arg KEY="value")
* Blocks backslashes and single quotes to prevent escape-sequence attacks
* Blocks backslashes to prevent escape-sequence attacks
* Allows single and double quotes for quoted arguments (e.g. --entrypoint "sh -c 'npm start'")
* Uses [ \t] instead of \s to explicitly exclude \n and \r (which act as command separators)
*/
public const SHELL_SAFE_COMMAND_PATTERN = '/^[a-zA-Z0-9 \t._\-\/=:@,+\[\]{}#%^~&"]+$/';
public const SHELL_SAFE_COMMAND_PATTERN = '/^[a-zA-Z0-9 \t._\-\/=:@,+\[\]{}#%^~&"\']+$/';
/**
* Pattern for Docker volume names
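The hunk above relaxes SHELL_SAFE_COMMAND_PATTERN so that single quotes are allowed alongside double quotes, while backslashes and shell metacharacters stay blocked. A quick sketch (Python, not part of the repo) translating the PCRE character class shows the effect of the change; `is_shell_safe` is an illustrative helper name, not an API from the codebase:

```python
import re

# Python translation of the updated SHELL_SAFE_COMMAND_PATTERN from
# ValidationPatterns. The only change in this commit is that single
# quotes (') are now permitted in the allow-list.
SHELL_SAFE_COMMAND_PATTERN = re.compile(
    r"""^[a-zA-Z0-9 \t._\-/=:@,+\[\]{}#%^~&"']+$"""
)

def is_shell_safe(command: str) -> bool:
    # fullmatch: every character must be in the allow-list; \n and \r are
    # excluded because the class uses an explicit space and \t, not \s.
    return SHELL_SAFE_COMMAND_PATTERN.fullmatch(command) is not None

# Newly allowed: quoted entrypoints with nested single quotes
assert is_shell_safe('--entrypoint "sh -c \'npm start\'"')
# Still blocked: separators, substitution, escapes, newlines
assert not is_shell_safe("echo hi; rm -rf /")   # ';' separator
assert not is_shell_safe("echo $(whoami)")      # '$' and parens
assert not is_shell_safe("foo \\bar")           # backslash
assert not is_shell_safe("line1\nline2")        # newline separator
```

Command chaining with `&&` still matches, since `&` remains in the allow-list.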


@ -2,9 +2,9 @@
return [
'coolify' => [
'version' => '4.0.0-beta.471',
'helper_version' => '1.0.12',
'realtime_version' => '1.0.11',
'version' => '4.0.0-beta.472',
'helper_version' => '1.0.13',
'realtime_version' => '1.0.12',
'self_hosted' => env('SELF_HOSTED', true),
'autoupdate' => env('AUTOUPDATE'),
'base_config_path' => env('BASE_CONFIG_PATH', '/data/coolify'),


@ -60,7 +60,7 @@ services:
retries: 10
timeout: 2s
soketi:
image: '${REGISTRY_URL:-ghcr.io}/coollabsio/coolify-realtime:1.0.11'
image: '${REGISTRY_URL:-ghcr.io}/coollabsio/coolify-realtime:1.0.12'
ports:
- "${SOKETI_PORT:-6001}:6001"
- "6002:6002"


@ -96,7 +96,7 @@ services:
retries: 10
timeout: 2s
soketi:
image: 'ghcr.io/coollabsio/coolify-realtime:1.0.10'
image: 'ghcr.io/coollabsio/coolify-realtime:1.0.12'
pull_policy: always
container_name: coolify-realtime
restart: always


@ -28,7 +28,8 @@ ARG NIXPACKS_VERSION
USER root
WORKDIR /artifacts
RUN apk add --no-cache bash curl git git-lfs openssh-client tar tini
RUN apk upgrade --no-cache && \
apk add --no-cache bash curl git git-lfs openssh-client tar tini
RUN mkdir -p ~/.docker/cli-plugins
RUN if [[ ${TARGETPLATFORM} == 'linux/amd64' ]]; then \
curl -sSL https://github.com/docker/buildx/releases/download/v${DOCKER_BUILDX_VERSION}/buildx-v${DOCKER_BUILDX_VERSION}.linux-amd64 -o ~/.docker/cli-plugins/docker-buildx && \


@ -10,7 +10,8 @@ ARG TARGETPLATFORM
ARG CLOUDFLARED_VERSION
WORKDIR /terminal
RUN apk add --no-cache openssh-client make g++ python3 curl
RUN apk upgrade --no-cache && \
apk add --no-cache openssh-client make g++ python3 curl
COPY docker/coolify-realtime/package.json ./
RUN npm i
RUN npm rebuild node-pty --update-binary


@ -33,7 +33,8 @@ RUN docker-php-serversideup-set-id www-data $USER_ID:$GROUP_ID && \
docker-php-serversideup-set-file-permissions --owner $USER_ID:$GROUP_ID --service nginx
# Install PostgreSQL repository and keys
RUN apk add --no-cache gnupg && \
RUN apk upgrade --no-cache && \
apk add --no-cache gnupg && \
mkdir -p /usr/share/keyrings && \
curl -fSsL https://www.postgresql.org/media/keys/ACCC4CF8.asc | gpg --dearmor > /usr/share/keyrings/postgresql.gpg


@ -60,7 +60,7 @@ services:
retries: 10
timeout: 2s
soketi:
image: '${REGISTRY_URL:-ghcr.io}/coollabsio/coolify-realtime:1.0.11'
image: '${REGISTRY_URL:-ghcr.io}/coollabsio/coolify-realtime:1.0.12'
ports:
- "${SOKETI_PORT:-6001}:6001"
- "6002:6002"


@ -96,7 +96,7 @@ services:
retries: 10
timeout: 2s
soketi:
image: 'ghcr.io/coollabsio/coolify-realtime:1.0.10'
image: 'ghcr.io/coollabsio/coolify-realtime:1.0.12'
pull_policy: always
container_name: coolify-realtime
restart: always


@ -1,16 +1,16 @@
{
"coolify": {
"v4": {
"version": "4.0.0-beta.471"
"version": "4.0.0-beta.472"
},
"nightly": {
"version": "4.0.0"
},
"helper": {
"version": "1.0.12"
"version": "1.0.13"
},
"realtime": {
"version": "1.0.11"
"version": "1.0.12"
},
"sentinel": {
"version": "0.0.21"

package-lock.json (generated)

@ -23,7 +23,7 @@
"pusher-js": "8.4.0",
"tailwind-scrollbar": "4.0.2",
"tailwindcss": "4.1.18",
"vite": "7.3.0",
"vite": "7.3.2",
"vue": "3.5.26"
}
},
@ -2709,9 +2709,9 @@
"license": "MIT"
},
"node_modules/vite": {
"version": "7.3.0",
"resolved": "https://registry.npmjs.org/vite/-/vite-7.3.0.tgz",
"integrity": "sha512-dZwN5L1VlUBewiP6H9s2+B3e3Jg96D0vzN+Ry73sOefebhYr9f94wwkMNN/9ouoU8pV1BqA1d1zGk8928cx0rg==",
"version": "7.3.2",
"resolved": "https://registry.npmjs.org/vite/-/vite-7.3.2.tgz",
"integrity": "sha512-Bby3NOsna2jsjfLVOHKes8sGwgl4TT0E6vvpYgnAYDIF/tie7MRaFthmKuHx1NSXjiTueXH3do80FMQgvEktRg==",
"dev": true,
"license": "MIT",
"dependencies": {


@ -16,7 +16,7 @@
"pusher-js": "8.4.0",
"tailwind-scrollbar": "4.0.2",
"tailwindcss": "4.1.18",
"vite": "7.3.0",
"vite": "7.3.2",
"vue": "3.5.26"
},
"dependencies": {

public/svgs/grimmory.svg (new file)

@ -0,0 +1,4 @@
<svg width="126" height="126" viewBox="0 0 126 126" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M59 4.79297C71.5051 11.5557 80 24.7854 80 40C80 40.5959 79.987 41.1888 79.9609 41.7783C79.8609 44.0406 81.7355 46 84 46C106.091 46 124 63.9086 124 86C124 108.091 106.091 126 84 126H10C4.47715 126 0 121.523 0 116V39.0068L0.0126953 38.9941C0.357624 25.0252 7.86506 12.8347 19 5.95215V63.832C19 64.8345 20.0676 65.4391 20.9121 64.9902L21.0771 64.8867L38.2227 52.3428C38.6819 52.0068 39.3064 52.0068 39.7656 52.3428L56.9229 64.8945L57.0879 64.998C57.9324 65.447 59 64.8423 59 63.8398V4.79297Z" fill="#818cf8"/>
<path d="M40 0C43.8745 0 47.6199 0.552381 51.1631 1.58008V50.9697L44.3926 46.0176L44.0879 45.8037C40.9061 43.6679 36.7098 43.7393 33.5957 46.0176L26.8369 50.9619V2.21875C30.9593 0.782634 35.3881 0 40 0Z" fill="white"/>
</svg>



@ -1,6 +1,6 @@
# documentation: https://actualbudget.org/docs/install/docker
# slogan: A local-first personal finance app.
# category: productivity
# category: finance
# tags: budgeting,actual,finance,budget,money,expenses,income
# logo: svgs/actualbudget.png
# port: 5006


@ -7,7 +7,7 @@
services:
frontend:
image: ghcr.io/smaug6739/alexandrie-frontend:v8.4.1
image: ghcr.io/smaug6739/alexandrie-frontend:v8.7.2
environment:
- SERVICE_URL_FRONTEND_8200
- PORT=8200
@ -21,7 +21,7 @@ services:
- backend
backend:
image: ghcr.io/smaug6739/alexandrie-backend:v8.4.1
image: ghcr.io/smaug6739/alexandrie-backend:v8.7.2
environment:
- SERVICE_URL_BACKEND_8201
- BACKEND_PORT=8201
@ -74,7 +74,7 @@ services:
retries: 5
rustfs:
image: rustfs/rustfs:1.0.0-alpha.81
image: rustfs/rustfs:1.0.0-alpha.90
environment:
- SERVICE_URL_RUSTFS_9000
- RUSTFS_ACCESS_KEY=${SERVICE_USER_RUSTFS}


@ -4,7 +4,6 @@
# tags: workflow, orchestration, data-pipeline, python, argilla, ai, elasticsearch, datasets, data, machine-learning, data-science, nlp
# logo: svgs/argilla.png
# port: 6900
# category: productivity
services:
argilla:


@ -1,5 +1,6 @@
# documentation: https://autobase.tech/docs/
# slogan: Autobase for PostgreSQL® is an open-source alternative to cloud-managed databases (self-hosted DBaaS).
# category: database
# tags: database, postgres, automation, self-hosted, dbaas
# logo: svgs/autobase.svg
# port: 80


@ -1,6 +1,6 @@
# documentation: https://github.com/linuxserver/budge
# slogan: A budgeting personal finance app.
# category: productivity
# category: finance
# tags: personal finance, budgeting, expense tracking
# logo: svgs/budge.png


@ -1,5 +1,6 @@
# documentation: https://github.com/calibrain/calibre-web-automated-book-downloader
# slogan: An intuitive web interface for searching and requesting book downloads, designed to work seamlessly with Calibre-Web-Automated.
# category: media
# tags: calibre,calibre-web,ebook,library,epub,ereader,kindle,book,reader,download,downloader
# logo: svgs/calibre-web-automated-with-downloader.png
# port: 8083


@ -1,5 +1,6 @@
# documentation: https://cap.so
# slogan: Cap is the open source alternative to Loom. Lightweight, powerful, and cross-platform. Record and share in seconds.
# category: media
# tags: cap,loom,open,source,low,code
# logo: svgs/cap.svg
# port: 5679
@ -72,4 +73,4 @@ services:
timeout: 10s
retries: 5
volumes:
- 'cap_db:/var/lib/mysql'
- 'cap_db:/var/lib/mysql'


@ -1,6 +1,6 @@
# documentation: https://chaskiq.io
# slogan: Chaskiq is a messaging platform for marketing, support & sales
# category: cms
# category: helpdesk
# tags: chaskiq,messaging,chat,marketing,support,sales,open,source,rails,redis,postgresql,sidekiq
# logo: svgs/chaskiq.png
# port: 3000


@ -1,6 +1,6 @@
# documentation: https://www.chatwoot.com/docs/self-hosted/
# slogan: Delightful customer relationships at scale.
# category: cms
# category: helpdesk
# tags: chatwoot,chat,api,open,source,rails,redis,postgresql,sidekiq
# logo: svgs/chatwoot.svg
# port: 3000


@ -1,5 +1,6 @@
# documentation: https://chibisafe.app/docs/intro
# slogan: A beautiful and performant vault to save all your files in the cloud.
# category: storage
# tags: storage,file-sharing,upload,sharing
# logo: svgs/chibisafe.svg
# port: 80


@ -7,7 +7,7 @@
services:
backend:
image: ghcr.io/get-convex/convex-backend:00bd92723422f3bff968230c94ccdeb8c1719832
image: ghcr.io/get-convex/convex-backend:a9a760ca10399ed42e1b4bb87c78539a235488c7
volumes:
- data:/convex/data
environment:
@ -47,7 +47,7 @@ services:
start_period: 10s
dashboard:
image: ghcr.io/get-convex/convex-dashboard:33cef775a8a6228cbacee4a09ac2c4073d62ed13
image: ghcr.io/get-convex/convex-dashboard:a9a760ca10399ed42e1b4bb87c78539a235488c7
environment:
- SERVICE_URL_DASHBOARD_6791
# URL of the Convex API as accessed by the dashboard (browser).
@ -56,6 +56,6 @@ services:
backend:
condition: service_healthy
healthcheck:
test: wget -qO- http://127.0.0.1:6791/
test: curl -f http://127.0.0.1:6791/
interval: 5s
start_period: 5s


@ -27,6 +27,11 @@ services:
- REDIS_HOST=redis
- REDIS_PORT=6379
- WEBSOCKETS_ENABLED=true
- CORS_ENABLED=${CORS_ENABLED:-true}
- CORS_ORIGIN=${CORS_ORIGIN}
- CORS_METHODS=${CORS_METHODS:-GET,POST,PATCH,DELETE,OPTIONS}
- CORS_ALLOWED_HEADERS=${CORS_ALLOWED_HEADERS:-Content-Type,Authorization}
- CORS_CREDENTIALS=${CORS_CREDENTIALS:-true}
healthcheck:
test: ["CMD", "wget", "-q", "--spider", "http://127.0.0.1:8055/admin/login"]
interval: 5s
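The new CORS entries above lean on docker-compose's `${VAR:-default}` interpolation: the environment value wins when set and non-empty, otherwise the default applies (only `CORS_ORIGIN` has no default). A minimal sketch of that defaulting rule, illustrative only and covering just the `:-` form:

```python
import re

def expand(template: str, env: dict) -> str:
    # docker-compose style "${VAR:-default}" expansion: use the env value
    # when it is set and non-empty, otherwise fall back to the default.
    def repl(m):
        name, default = m.group(1), m.group(2)
        value = env.get(name, "")
        return value if value else default
    return re.sub(r"\$\{(\w+):-([^}]*)\}", repl, template)

# Unset variable -> default applies
assert expand("${CORS_METHODS:-GET,POST,PATCH,DELETE,OPTIONS}", {}) \
    == "GET,POST,PATCH,DELETE,OPTIONS"
# Explicit value overrides the default
assert expand("${CORS_ENABLED:-true}", {"CORS_ENABLED": "false"}) == "false"
```

Note that compose distinguishes `${VAR:-default}` (unset *or* empty falls back) from `${VAR-default}` (only unset falls back); the sketch implements the first form, which is what these service files use.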


@ -22,6 +22,11 @@ services:
- DB_CLIENT=sqlite3
- DB_FILENAME=/directus/database/data.db
- WEBSOCKETS_ENABLED=true
- CORS_ENABLED=${CORS_ENABLED:-true}
- CORS_ORIGIN=${CORS_ORIGIN}
- CORS_METHODS=${CORS_METHODS:-GET,POST,PATCH,DELETE,OPTIONS}
- CORS_ALLOWED_HEADERS=${CORS_ALLOWED_HEADERS:-Content-Type,Authorization}
- CORS_CREDENTIALS=${CORS_CREDENTIALS:-true}
healthcheck:
test:
["CMD", "wget", "-q", "--spider", "http://127.0.0.1:8055/admin/login"]


@ -1,6 +1,6 @@
# documentation: https://www.dolibarr.org/documentation-home.php
# slogan: Dolibarr is a modern software package to manage your organization's activity (contacts, quotes, invoices, orders, stocks, agenda, hr, expense reports, accountancy, ecm, manufacturing, ...).
# category: cms
# category: productivity
# tags: crm,ERP
# logo: svgs/dolibarr.png
# port: 80


@ -1,6 +1,6 @@
# documentation: https://gateway.drizzle.team/
# slogan: Free self-hosted Drizzle Studio on steroids
# category: backend
# category: devtools
# tags: drizzle,gateway,self-hosted,open-source,low-code
# logo: svgs/drizzle.jpeg
# port: 4983


@ -1,5 +1,6 @@
# documentation: https://www.elastic.co/docs/deploy-manage/deploy/self-managed/install-kibana-with-docker
# slogan: Elastic + Kibana is a Free and Open Source Search, Monitoring, and Visualization Stack
# category: monitoring
# tags: elastic,kibana,elasticsearch,search,visualization,logging,monitoring,observability,analytics,stack,devops
# logo: svgs/elasticsearch.svg
# port: 5601


@ -1,6 +1,6 @@
# documentation: https://docs.espocrm.com
# slogan: EspoCRM is a free and open-source CRM platform.
# category: cms
# category: productivity
# tags: crm, self-hosted, open-source, workflow, automation, project management
# logo: svgs/espocrm.svg
# port: 80


@ -1,6 +1,6 @@
# documentation: https://firefly-iii.org
# slogan: A personal finances manager that can help you save money.
# category: productivity
# category: finance
# tags: finance, money, personal, manager
# logo: svgs/firefly.svg
# port: 8080


@ -1,6 +1,6 @@
# documentation: https://formbricks.com/docs/self-hosting/setup/docker
# slogan: Open Source Survey Platform
# category: analytics
# category: productivity
# tags: form, builder, forms, survey
# logo: svgs/formbricks.png
# port: 3000


@ -1,6 +1,6 @@
# documentation: https://foundryvtt.com/kb/
# slogan: Foundry Virtual Tabletop is a self-hosted & modern roleplaying platform
# category: media
# category: games
# tags: foundryvtt,foundry,vtt,ttrpg,roleplaying
# logo: svgs/foundryvtt.png
# port: 30000


@ -1,6 +1,6 @@
# documentation: https://github.com/freescout-help-desk/freescout/wiki/
# slogan: FreeScout is the super lightweight and powerful free open source help desk and shared inbox written in PHP (Laravel framework).
# category: cms
# category: helpdesk
# tags: helpdesk, support, ticketing, customer-support
# logo: svgs/freescout.png
# port: 80


@ -1,6 +1,6 @@
# documentation: https://freshrss.org/index.html
# slogan: A free, self-hostable feed aggregator.
# category: cms
# category: RSS
# tags: rss, feed
# logo: svgs/freshrss.png
# port: 80


@ -1,6 +1,6 @@
# documentation: https://freshrss.org/index.html
# slogan: A free, self-hostable feed aggregator.
# category: cms
# category: RSS
# tags: rss, feed
# logo: svgs/freshrss.png
# port: 80


@ -1,6 +1,6 @@
# documentation: https://freshrss.org/index.html
# slogan: A free, self-hostable feed aggregator.
# category: cms
# category: RSS
# tags: rss, feed
# logo: svgs/freshrss.png
# port: 80


@ -1,6 +1,6 @@
# documentation: https://freshrss.org/index.html
# slogan: A free, self-hostable feed aggregator.
# category: cms
# category: RSS
# tags: rss, feed
# logo: svgs/freshrss.png
# port: 80


@ -1,6 +1,6 @@
# documentation: https://github.com/glanceapp/glance
# slogan: A self-hosted dashboard that puts all your feeds in one place.
# category: monitoring
# category: productivity
# tags: dashboard, server, applications, interface, rrss
# logo: svgs/glance.png
# port: 8080


@ -1,6 +1,6 @@
# documentation: https://gotify.net/docs/install
# slogan: Gotify is an open-source self-hosted notification server.
# category: productivity
# category: messaging
# tags: productivity,notification,collaboration
# logo: svgs/gotify.png
# port: 80
@ -26,4 +26,4 @@ services:
interval: 5s
timeout: 20s
retries: 10


@ -1,6 +1,6 @@
# documentation: https://github.com/aldinokemal/go-whatsapp-web-multidevice
# slogan: Golang WhatsApp - Built with Go for efficient memory use
# category: cms
# category: messaging
# tags: whatsapp,golang,multidevice,api,go-whatsapp
# logo: svgs/gowa.svg
# port: 3000


@ -0,0 +1,49 @@
# documentation: https://github.com/grimmory-tools/grimmory
# slogan: Grimmory is a self-hosted application for managing your entire book collection in one place. Organize, read, annotate, sync across devices, and share without relying on third-party services.
# tags: books,ebooks,library,reader
# logo: svgs/grimmory.svg
# port: 80
services:
grimmory:
image: 'grimmory/grimmory:nightly-20260403-3a371f7' # Released on April 3 2026
environment:
- SERVICE_URL_GRIMMORY_80
- 'USER_ID=${GRIMMORY_USER_ID:-0}'
- 'GROUP_ID=${GRIMMORY_GROUP_ID:-0}'
- 'TZ=${TZ:-UTC}'
- 'DATABASE_URL=jdbc:mariadb://mariadb:3306/${MARIADB_DATABASE:-grimmory-db}'
- 'DATABASE_USERNAME=${SERVICE_USER_MARIADB}'
- 'DATABASE_PASSWORD=${SERVICE_PASSWORD_MARIADB}'
- BOOKLORE_PORT=80
volumes:
- 'grimmory-data:/app/data'
- 'grimmory-books:/books'
- 'grimmory-bookdrop:/bookdrop'
healthcheck:
test: 'wget --no-verbose --tries=1 --spider http://127.0.0.1/health || exit 1'
interval: 10s
timeout: 5s
retries: 10
depends_on:
mariadb:
condition: service_healthy
mariadb:
image: 'mariadb:12'
environment:
- 'MARIADB_USER=${SERVICE_USER_MARIADB}'
- 'MARIADB_PASSWORD=${SERVICE_PASSWORD_MARIADB}'
- 'MARIADB_ROOT_PASSWORD=${SERVICE_PASSWORD_MARIADBROOT}'
- 'MARIADB_DATABASE=${MARIADB_DATABASE:-grimmory-db}'
volumes:
- 'mariadb-data:/var/lib/mysql'
healthcheck:
test:
- CMD
- healthcheck.sh
- '--connect'
- '--innodb_initialized'
interval: 10s
timeout: 5s
retries: 10


@ -1,5 +1,6 @@
# documentation: https://docs.hatchet.run/self-hosting/docker-compose
# slogan: Hatchet allows you to run background tasks at scale with a high-throughput, low-latency computing service built on an open-source, fault-tolerant queue.
# category: automation
# tags: ai-agents,background-tasks,data-pipelines,scheduling
# logo: svgs/hatchet.svg
# port: 80


@ -1,6 +1,6 @@
# documentation: https://github.com/sysadminsmedia/homebox
# slogan: Homebox is the inventory and organization system built for the Home User.
# category: storage
# category: productivity
# tags: inventory, home, organize
# logo: svgs/homebox.svg
# port: 7745


@ -1,6 +1,6 @@
# documentation: https://invoiceninja.github.io/selfhost.html
# slogan: The leading open-source invoicing platform
# category: productivity
# category: finance
# tags: invoicing, billing, accounting, finance, self-hosted
# logo: svgs/invoiceninja.png
# port: 9000


@ -7,7 +7,7 @@
services:
librechat:
image: ghcr.io/danny-avila/librechat-dev-api:latest
image: ghcr.io/danny-avila/librechat-dev-api:6ecd1b510faaa593ad954fb6276c18e5f12a8e53 # Released on April 2
environment:
- SERVICE_URL_LIBRECHAT_3080
- DOMAIN_CLIENT=${SERVICE_URL_LIBRECHAT}
@ -64,7 +64,7 @@ services:
"--no-verbose",
"--tries=1",
"--spider",
"http://127.0.0.1:3080/api/health",
"http://127.0.0.1:3080/health",
]
interval: 5s
timeout: 10s
@ -92,7 +92,7 @@ services:
retries: 3
meilisearch:
image: getmeili/meilisearch:v1.12.3
image: getmeili/meilisearch:v1.35.1
environment:
- MEILI_MASTER_KEY=${SERVICE_PASSWORD_MEILI}
- MEILI_NO_ANALYTICS=${MEILI_NO_ANALYTICS:-false}
@ -107,7 +107,7 @@ services:
retries: 15
vectordb:
image: ankane/pgvector:latest
image: ankane/pgvector:v0.5.1 # pgvector by ankane is archived and not maintained, in future we have to swap this image to something else that is well maintained
environment:
- POSTGRES_DB=rag
- POSTGRES_USER=${SERVICE_USER_POSTGRES}
@ -129,7 +129,7 @@ services:
start_period: 10s
rag-api:
image: ghcr.io/danny-avila/librechat-rag-api-dev-lite:latest
image: ghcr.io/danny-avila/librechat-rag-api-dev-lite:v0.7.3
environment:
- POSTGRES_DB=rag
- POSTGRES_USER=${SERVICE_USER_POSTGRES}


@ -1,6 +1,6 @@
# documentation: https://docs.mattermost.com
# slogan: Mattermost is an open source, self-hosted Slack-alternative.
# category: mattermost
# category: messaging
# tags: mattermost,slack,alternative
# logo: svgs/mattermost.svg
# port: 8065


@ -1,6 +1,6 @@
# documentation: https://github.com/itzg/docker-minecraft-server
# slogan: Minecraft Server that will automatically download selected version at startup.
# category: media
# category: games
# tags: minecraft
# logo: svgs/minecraft.svg
# port: 25565


@ -1,6 +1,6 @@
# documentation: https://miniflux.app/docs/index.html
# slogan: Miniflux is a minimalist and opinionated feed reader.
# category: cms
# category: RSS
# tags: miniflux,rss,feed,self,hosted
# logo: svgs/miniflux.svg
# port: 8080


@ -48,7 +48,7 @@ services:
redis:
condition: service_healthy
healthcheck:
test: ["CMD-SHELL", "wget -qO- http://127.0.0.1:5678/"]
test: ["CMD-SHELL", "wget -qO- http://127.0.0.1:5678/healthz"]
interval: 5s
timeout: 20s
retries: 10
@ -133,7 +133,7 @@ services:
healthcheck:
test:
- CMD-SHELL
- 'wget -qO- http://127.0.0.1:5680/'
- 'wget -qO- http://127.0.0.1:5680/healthz'
interval: 5s
timeout: 20s
retries: 10


@ -41,7 +41,7 @@ services:
postgresql:
condition: service_healthy
healthcheck:
test: ["CMD-SHELL", "wget -qO- http://127.0.0.1:5678/"]
test: ["CMD-SHELL", "wget -qO- http://127.0.0.1:5678/healthz"]
interval: 5s
timeout: 20s
retries: 10
@ -58,7 +58,7 @@ services:
healthcheck:
test:
- CMD-SHELL
- 'wget -qO- http://127.0.0.1:5680/'
- 'wget -qO- http://127.0.0.1:5680/healthz'
interval: 5s
timeout: 20s
retries: 10


@ -32,7 +32,7 @@ services:
volumes:
- n8n-data:/home/node/.n8n
healthcheck:
test: ["CMD-SHELL", "wget -qO- http://127.0.0.1:5678/"]
test: ["CMD-SHELL", "wget -qO- http://127.0.0.1:5678/healthz"]
interval: 5s
timeout: 20s
retries: 10
@ -49,7 +49,7 @@ services:
healthcheck:
test:
- CMD-SHELL
- 'wget -qO- http://127.0.0.1:5680/'
- 'wget -qO- http://127.0.0.1:5680/healthz'
interval: 5s
timeout: 20s
retries: 10


@ -13,7 +13,7 @@ services:
- 'NB_ENABLE_ROSENPASS=${NB_ENABLE_ROSENPASS:-false}'
- 'NB_ENABLE_EXPERIMENTAL_LAZY_CONN=${NB_ENABLE_EXPERIMENTAL_LAZY_CONN:-false}'
volumes:
- 'netbird-client:/etc/netbird'
- 'netbird-client:/var/lib/netbird'
cap_add:
- NET_ADMIN
- SYS_ADMIN


@ -1,5 +1,6 @@
# documentation: https://docs.digpangolin.com/manage/sites/install-site
# slogan: Pangolin tunnels your services to the internet so you can access anything from anywhere.
# category: proxy
# tags: wireguard, reverse-proxy, zero-trust-network-access, open source
# logo: svgs/pangolin-logo.png


@ -28,8 +28,8 @@ services:
redis:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://127.0.0.1:80"]
interval: 2s
test: ["CMD", "curl", "-f", "http://127.0.0.1:80/status.php"]
interval: 30s
timeout: 10s
retries: 15


@ -28,8 +28,8 @@ services:
redis:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://127.0.0.1:80"]
interval: 2s
test: ["CMD", "curl", "-f", "http://127.0.0.1:80/status.php"]
interval: 30s
timeout: 10s
retries: 15


@ -28,8 +28,8 @@ services:
redis:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://127.0.0.1:80"]
interval: 2s
test: ["CMD", "curl", "-f", "http://127.0.0.1:80/status.php"]
interval: 30s
timeout: 10s
retries: 15


@ -17,7 +17,7 @@ services:
- nextcloud-config:/config
- nextcloud-data:/data
healthcheck:
test: ["CMD", "curl", "-f", "http://127.0.0.1:80"]
interval: 2s
test: ["CMD", "curl", "-f", "http://127.0.0.1:80/status.php"]
interval: 30s
timeout: 10s
retries: 15
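The Nextcloud healthchecks above switch from probing `/` to `/status.php` (a lightweight endpoint that answers without authentication) and back off from a 2s to a 30s interval. A rough sketch, assumptions mine, of how `interval`, `timeout`, and `retries` bound the time before Docker flags a container unhealthy:

```python
def time_to_unhealthy(interval_s: int, timeout_s: int, retries: int) -> int:
    # Rough upper bound: the container is marked unhealthy only after
    # `retries` consecutive probe failures; each failed probe may consume
    # the full `timeout`, and probes are spaced `interval` apart.
    return retries * (interval_s + timeout_s)

# New settings (30s interval, 10s timeout, 15 retries): tolerant of
# slow starts and long upgrades, at the cost of slower failure detection.
assert time_to_unhealthy(30, 10, 15) == 600
# Old settings (2s interval, 10s timeout, 15 retries): much faster to
# flag failure, but prone to marking a busy instance unhealthy.
assert time_to_unhealthy(2, 10, 15) == 180
```

This is a back-of-envelope bound, not the engine's exact scheduling; the point is that the new values trade ~3 minutes of worst-case detection for ~10, in exchange for far fewer false negatives on heavyweight PHP apps.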


@ -1,6 +1,6 @@
# documentation: https://nocodb.com/
# slogan: NocoDB is an open source Airtable alternative. Turns any MySQL, PostgreSQL, SQL Server, SQLite & MariaDB into a smart-spreadsheet.
# category: automation
# category: productivity
# tags: nocodb,airtable,mysql,postgresql,sqlserver,sqlite,mariadb
# logo: svgs/nocodb.svg
# port: 8080


@ -1,6 +1,6 @@
# documentation: https://www.odoo.com/
# slogan: Odoo is a suite of open-source business apps that cover all your company needs.
# category: cms
# category: productivity
# tags: business, apps, CRM, eCommerce, accounting, inventory, point of sale, project management, open-source
# logo: svgs/odoo.svg
# port: 8069


@ -1,5 +1,6 @@
# documentation: https://docs.openarchiver.com/
# slogan: A self-hosted, open-source email archiving solution with full-text search capability.
# category: email
# tags: email archiving,email,compliance,search
# logo: svgs/openarchiver.svg
# port: 3000


@ -1,6 +1,6 @@
# documentation: https://openpanel.dev/docs
# slogan: Open source alternative to Mixpanel and Plausible for product analytics
# category: devtools
# category: analytics
# tags: analytics, insights, privacy, mixpanel, plausible, google, alternative
# logo: svgs/openpanel.svg
# port: 3000


@ -1,5 +1,6 @@
# documentation: https://docs.opnform.com/introduction
# slogan: OpnForm is an open-source form builder that lets you create beautiful forms and share them anywhere. It's super fast, you don't need to know how to code
# category: productivity
# tags: opnform, form, survey, cloud, open-source, self-hosted, docker, no-code, embeddable
# logo: svg/opnform.svg
# port: 80


@ -1,6 +1,6 @@
# documentation: https://starterhelp.orangehrm.com/hc/en-us
# slogan: OrangeHRM open source HR management software.
# category: cms
# category: productivity
# tags: HR, HRIS, HRMS, human resource management, OrangeHRM, HR management
# logo: svgs/orangehrm.svg
# port: 80


@ -1,6 +1,6 @@
# documentation: https://docs.osticket.com/en/latest/
# slogan: osTicket is a widely-used open source support ticket system.
# category: cms
# category: helpdesk
# tags: helpdesk, ticketing, support, open-source
# logo: svgs/osticket.png
# port: 80


@ -1,3 +1,4 @@
# category: games
services:
palworld:
image: thijsvanloef/palworld-server-docker:v1.4.6


@ -1,6 +1,6 @@
# documentation: https://paymenter.org/docs/guides/docker
# slogan: Open-Source Billing, Built for Hosting
# category: cms
# category: finance
# tags: automation, billing, open source
# logo: svgs/paymenter.svg
# port: 80


@ -1,6 +1,6 @@
# documentation: https://docs.useplunk.com/getting-started/introduction
# slogan: Plunk, The Open-Source Email Platform for AWS
# category: automation
# category: email
# tags: plunk,email,automation,aws
# logo: svgs/plunk.svg
# port: 3000


@ -1,5 +1,6 @@
# documentation: https://github.com/hoppscotch/proxyscotch
# slogan: A simple proxy server created for https://hoppscotch.io - CORS proxy
# category: proxy
# tags: proxy,hoppscotch,cors
# logo: svgs/hoppscotch.png
# port: 9159


@ -1,5 +1,6 @@
# documentation: https://docs.pydio.com/
# slogan: High-performance large file sharing, native no-code automation, and a collaboration-centric architecture that simplifies access control without compromising security or compliance.
# category: storage
# tags: storage
# logo: svgs/cells.svg
# port: 8080


@ -1,5 +1,6 @@
# documentation: https://www.redmine.org/
# slogan: Redmine is a flexible project management web application.
# category: productivity
# tags: redmine,project management
# logo: svgs/redmine.svg
# port: 3000


@ -7,14 +7,13 @@
services:
rivet-engine:
image: rivetkit/engine:25.8.0
image: rivetdev/engine:2.2.0
environment:
- SERVICE_URL_RIVET_6420
- 'RIVET__AUTH__ADMIN_TOKEN=${SERVICE_PASSWORD_RIVET}'
- RIVET__POSTGRES__URL=postgresql://$SERVICE_USER_POSTGRESQL:$SERVICE_PASSWORD_POSTGRESQL@postgresql:5432/${POSTGRESQL_DATABASE-rivet}
depends_on:
postgresql:
condition: service_healthy
- RIVET__FILE_SYSTEM__PATH=/data
- 'RIVET__AUTH__ADMIN_TOKEN=${SERVICE_BASE64_TOKEN}'
volumes:
- 'rivet-data:/data'
healthcheck:
test:
- CMD
@ -24,19 +23,4 @@ services:
interval: 2s
timeout: 10s
retries: 10
start_period: 30s
postgresql:
image: postgres:17-alpine
volumes:
- rivet-postgresql-data:/var/lib/postgresql/data
environment:
- POSTGRES_USER=${SERVICE_USER_POSTGRESQL}
- POSTGRES_PASSWORD=${SERVICE_PASSWORD_POSTGRESQL}
- POSTGRES_DB=${POSTGRESQL_DATABASE-rivet}
healthcheck:
test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}"]
interval: 5s
timeout: 20s
retries: 10
start_period: 30s

View file

@ -1,5 +1,6 @@
# documentation: https://docs.sftpgo.com/2.7/
# slogan: SFTPGo is an event-driven SFTP, FTP/S, HTTP/S and WebDAV server.
# category: storage
# tags: sftpgo,sftp,ftp,file,webdav
# logo: svgs/sftpgo.png
# port: 8080

View file

@ -1,5 +1,6 @@
# documentation: https://signoz.io/docs/introduction/
# slogan: An observability platform native to OpenTelemetry with logs, traces and metrics.
# category: monitoring
# tags: telemetry, server, applications, interface, logs, monitoring, traces, metrics
# logo: svgs/signoz.svg
# port: 8080

View file

@ -1,5 +1,6 @@
# documentation: https://v2.silverbullet.md/Install/Configuration
# slogan: SilverBullet is a tool to develop, organize, and structure your personal knowledge and to make it universally accessible across all your devices.
# category: productivity
# tags: note-taking,markdown,pkm
# logo: svgs/silverbullet.png
# port: 3000

View file

@ -1,5 +1,6 @@
# documentation: https://github.com/rahulhaque/soketi-app-manager-filament
# slogan: Manage soketi websocket server and apps with ease.
# category: devtools
# tags: soketi,websockets,app-manager,dashboard
# logo: svgs/soketi-app-manager.svg
# port: 8080
@ -29,4 +30,4 @@ services:
- METRICS_HOST=$METRICS_HOST
healthcheck:
test: ["CMD", "php-fpm-healthcheck"]
start_period: 10s
start_period: 10s

View file

@ -8,33 +8,77 @@
services:
supabase-kong:
image: kong:2.8.1
# https://unix.stackexchange.com/a/294837
entrypoint: bash -c 'eval "echo \"$$(cat ~/temp.yml)\"" > ~/kong.yml && /docker-entrypoint.sh kong docker-start'
image: kong/kong:3.9.1
entrypoint: /home/kong/kong-entrypoint.sh
depends_on:
supabase-analytics:
condition: service_healthy
healthcheck:
test: ["CMD", "kong", "health"]
interval: 5s
timeout: 5s
retries: 5
environment:
- SERVICE_URL_SUPABASEKONG_8000
- KONG_PORT_MAPS=443:8000
- JWT_SECRET=${SERVICE_PASSWORD_JWT}
- KONG_DATABASE=off
- KONG_DECLARATIVE_CONFIG=/home/kong/kong.yml
- KONG_DECLARATIVE_CONFIG=/usr/local/kong/kong.yml
# https://github.com/supabase/cli/issues/14
- KONG_DNS_ORDER=LAST,A,CNAME
- KONG_PLUGINS=request-transformer,cors,key-auth,acl,basic-auth
- KONG_DNS_NOT_FOUND_TTL=1
- KONG_PLUGINS=request-transformer,cors,key-auth,acl,basic-auth,request-termination,ip-restriction,post-function
- KONG_NGINX_PROXY_PROXY_BUFFER_SIZE=160k
- KONG_NGINX_PROXY_PROXY_BUFFERS=64 160k
- 'KONG_PROXY_ACCESS_LOG=/dev/stdout combined'
- SUPABASE_ANON_KEY=${SERVICE_SUPABASEANON_KEY}
- SUPABASE_SERVICE_KEY=${SERVICE_SUPABASESERVICE_KEY}
- SUPABASE_PUBLISHABLE_KEY=${SUPABASE_PUBLISHABLE_KEY:-}
- SUPABASE_SECRET_KEY=${SUPABASE_SECRET_KEY:-}
- ANON_KEY_ASYMMETRIC=${ANON_KEY_ASYMMETRIC:-}
- SERVICE_ROLE_KEY_ASYMMETRIC=${SERVICE_ROLE_KEY_ASYMMETRIC:-}
- DASHBOARD_USERNAME=${SERVICE_USER_ADMIN}
- DASHBOARD_PASSWORD=${SERVICE_PASSWORD_ADMIN}
- 'KONG_STORAGE_CONNECT_TIMEOUT=${KONG_STORAGE_CONNECT_TIMEOUT:-60}'
- 'KONG_STORAGE_WRITE_TIMEOUT=${KONG_STORAGE_WRITE_TIMEOUT:-3600}'
- 'KONG_STORAGE_READ_TIMEOUT=${KONG_STORAGE_READ_TIMEOUT:-3600}'
- 'KONG_STORAGE_REQUEST_BUFFERING=${KONG_STORAGE_REQUEST_BUFFERING:-false}'
- 'KONG_STORAGE_RESPONSE_BUFFERING=${KONG_STORAGE_RESPONSE_BUFFERING:-false}'
volumes:
- type: bind
source: ./volumes/api/kong-entrypoint.sh
target: /home/kong/kong-entrypoint.sh
content: |
#!/bin/bash
# Custom entrypoint for Kong that builds Lua expressions for request-transformer
# and performs environment variable substitution in the declarative config.
if [ -n "$SUPABASE_SECRET_KEY" ] && [ -n "$SUPABASE_PUBLISHABLE_KEY" ]; then
export LUA_AUTH_EXPR="\$((headers.authorization ~= nil and headers.authorization:sub(1, 10) ~= 'Bearer sb_' and headers.authorization) or (headers.apikey == '$SUPABASE_SECRET_KEY' and 'Bearer $SERVICE_ROLE_KEY_ASYMMETRIC') or (headers.apikey == '$SUPABASE_PUBLISHABLE_KEY' and 'Bearer $ANON_KEY_ASYMMETRIC') or headers.apikey)"
export LUA_RT_WS_EXPR="\$((query_params.apikey == '$SUPABASE_SECRET_KEY' and '$SERVICE_ROLE_KEY_ASYMMETRIC') or (query_params.apikey == '$SUPABASE_PUBLISHABLE_KEY' and '$ANON_KEY_ASYMMETRIC') or query_params.apikey)"
else
export LUA_AUTH_EXPR="\$((headers.authorization ~= nil and headers.authorization:sub(1, 10) ~= 'Bearer sb_' and headers.authorization) or headers.apikey)"
export LUA_RT_WS_EXPR="\$(query_params.apikey)"
fi
awk '{
result = ""
rest = $0
while (match(rest, /\$[A-Za-z_][A-Za-z_0-9]*/)) {
varname = substr(rest, RSTART + 1, RLENGTH - 1)
if (varname in ENVIRON) {
result = result substr(rest, 1, RSTART - 1) ENVIRON[varname]
} else {
result = result substr(rest, 1, RSTART + RLENGTH - 1)
}
rest = substr(rest, RSTART + RLENGTH)
}
print result rest
}' /home/kong/temp.yml > "$KONG_DECLARATIVE_CONFIG"
sed -i '/^[[:space:]]*- key:[[:space:]]*$/d' "$KONG_DECLARATIVE_CONFIG"
exec /entrypoint.sh kong docker-start
# https://github.com/supabase/supabase/issues/12661
- type: bind
source: ./volumes/api/kong.yml
@ -51,9 +95,11 @@ services:
- username: anon
keyauth_credentials:
- key: $SUPABASE_ANON_KEY
- key: $SUPABASE_PUBLISHABLE_KEY
- username: service_role
keyauth_credentials:
- key: $SUPABASE_SERVICE_KEY
- key: $SUPABASE_SECRET_KEY
###
### Access Control List
@ -69,8 +115,8 @@ services:
###
basicauth_credentials:
- consumer: DASHBOARD
username: $DASHBOARD_USERNAME
password: $DASHBOARD_PASSWORD
username: '$DASHBOARD_USERNAME'
password: '$DASHBOARD_PASSWORD'
###
@ -106,6 +152,36 @@ services:
- /auth/v1/authorize
plugins:
- name: cors
- name: auth-v1-open-jwks
_comment: 'Auth: /auth/v1/.well-known/jwks.json -> http://supabase-auth:9999/.well-known/jwks.json'
url: http://supabase-auth:9999/.well-known/jwks.json
routes:
- name: auth-v1-open-jwks
strip_path: true
paths:
- /auth/v1/.well-known/jwks.json
plugins:
- name: cors
- name: auth-v1-open-sso-acs
url: "http://supabase-auth:9999/sso/saml/acs"
routes:
- name: auth-v1-open-sso-acs
strip_path: true
paths:
- /sso/saml/acs
plugins:
- name: cors
- name: auth-v1-open-sso-metadata
url: "http://supabase-auth:9999/sso/saml/metadata"
routes:
- name: auth-v1-open-sso-metadata
strip_path: true
paths:
- /sso/saml/metadata
plugins:
- name: cors
## Secure Auth routes
- name: auth-v1
@ -121,6 +197,14 @@ services:
- name: key-auth
config:
hide_credentials: false
- name: request-transformer
config:
add:
headers:
- "Authorization: $LUA_AUTH_EXPR"
replace:
headers:
- "Authorization: $LUA_AUTH_EXPR"
- name: acl
config:
hide_groups_header: true
@ -141,7 +225,15 @@ services:
- name: cors
- name: key-auth
config:
hide_credentials: true
hide_credentials: false
- name: request-transformer
config:
add:
headers:
- "Authorization: $LUA_AUTH_EXPR"
replace:
headers:
- "Authorization: $LUA_AUTH_EXPR"
- name: acl
config:
hide_groups_header: true
@ -162,12 +254,17 @@ services:
- name: cors
- name: key-auth
config:
hide_credentials: true
hide_credentials: false
- name: request-transformer
config:
add:
headers:
- Content-Profile:graphql_public
- "Content-Profile: graphql_public"
- "Authorization: $LUA_AUTH_EXPR"
replace:
headers:
- "Content-Profile: graphql_public"
- "Authorization: $LUA_AUTH_EXPR"
- name: acl
config:
hide_groups_header: true
@ -190,6 +287,14 @@ services:
- name: key-auth
config:
hide_credentials: false
- name: request-transformer
config:
add:
querystring:
- "apikey: $LUA_RT_WS_EXPR"
replace:
querystring:
- "apikey: $LUA_RT_WS_EXPR"
- name: acl
config:
hide_groups_header: true
@ -197,7 +302,7 @@ services:
- admin
- anon
- name: realtime-v1-rest
_comment: 'Realtime: /realtime/v1/* -> ws://realtime:4000/socket/*'
_comment: 'Realtime: /realtime/v1/api/* -> http://realtime:4000/api/*'
url: http://realtime-dev:4000/api
protocol: http
routes:
@ -210,6 +315,14 @@ services:
- name: key-auth
config:
hide_credentials: false
- name: request-transformer
config:
add:
headers:
- "Authorization: $LUA_AUTH_EXPR"
replace:
headers:
- "Authorization: $LUA_AUTH_EXPR"
- name: acl
config:
hide_groups_header: true
@ -217,7 +330,8 @@ services:
- admin
- anon
## Storage routes: the storage server manages its own auth
## Storage API endpoint
## No key-auth - S3 protocol requests don't carry an apikey header.
- name: storage-v1
_comment: 'Storage: /storage/v1/* -> http://supabase-storage:5000/*'
connect_timeout: $KONG_STORAGE_CONNECT_TIMEOUT
@ -233,11 +347,20 @@ services:
response_buffering: $KONG_STORAGE_RESPONSE_BUFFERING
plugins:
- name: cors
- name: post-function
config:
access:
- |
local auth = kong.request.get_header("authorization")
if auth == nil or auth == "" or auth:find("^%s*$") then
kong.service.request.clear_header("authorization")
end
## Edge Functions routes
- name: functions-v1
_comment: 'Edge Functions: /functions/v1/* -> http://supabase-edge-functions:9000/*'
url: http://supabase-edge-functions:9000/
read_timeout: 150000
routes:
- name: functions-v1-all
strip_path: true
@ -246,15 +369,28 @@ services:
plugins:
- name: cors
## Analytics routes
- name: analytics-v1
_comment: 'Analytics: /analytics/v1/* -> http://logflare:4000/*'
url: http://supabase-analytics:4000/
## OAuth 2.0 Authorization Server Metadata (RFC 8414)
- name: well-known-oauth
_comment: 'Auth: /.well-known/oauth-authorization-server -> http://supabase-auth:9999/.well-known/oauth-authorization-server'
url: http://supabase-auth:9999/.well-known/oauth-authorization-server
routes:
- name: analytics-v1-all
- name: well-known-oauth
strip_path: true
paths:
- /analytics/v1/
- /.well-known/oauth-authorization-server
plugins:
- name: cors
## Analytics routes
## Not used - Studio and Vector talk directly to analytics via Docker networking.
# - name: analytics-v1
# _comment: 'Analytics: /analytics/v1/* -> http://logflare:4000/*'
# url: http://supabase-analytics:4000/
# routes:
# - name: analytics-v1-all
# strip_path: true
# paths:
# - /analytics/v1/
## Secure Database routes
- name: meta
@ -275,6 +411,48 @@ services:
allow:
- admin
## Block access to /api/mcp
- name: mcp-blocker
_comment: 'Block direct access to /api/mcp'
url: http://supabase-studio:3000/api/mcp
routes:
- name: mcp-blocker-route
strip_path: true
paths:
- /api/mcp
plugins:
- name: request-termination
config:
status_code: 403
message: "Access is forbidden."
## MCP endpoint - local access
- name: mcp
_comment: 'MCP: /mcp -> http://supabase-studio:3000/api/mcp (local access)'
url: http://supabase-studio:3000/api/mcp
routes:
- name: mcp
strip_path: true
paths:
- /mcp
plugins:
# Block access to /mcp by default
- name: request-termination
config:
status_code: 403
message: "Access is forbidden."
# Enable local access (danger zone!)
# 1. Comment out the 'request-termination' section above
# 2. Uncomment the entire section below, including 'deny'
# 3. Add your local IPs to the 'allow' list
#- name: cors
#- name: ip-restriction
# config:
# allow:
# - 127.0.0.1
# - ::1
# deny: []
## Protected Dashboard - catch all remaining routes
- name: dashboard
_comment: 'Studio: /* -> http://studio:3000/*'
@ -290,7 +468,7 @@ services:
config:
hide_credentials: true
supabase-studio:
image: supabase/studio:2026.01.07-sha-037e5f9
image: supabase/studio:2026.03.16-sha-5528817
healthcheck:
test:
[
@ -310,7 +488,11 @@ services:
- STUDIO_PG_META_URL=http://supabase-meta:8080
- POSTGRES_PASSWORD=${SERVICE_PASSWORD_POSTGRES}
- POSTGRES_HOST=${POSTGRES_HOST:-supabase-db}
- CURRENT_CLI_VERSION=2.67.1
- POSTGRES_PORT=${POSTGRES_PORT:-5432}
- POSTGRES_DB=${POSTGRES_DB:-postgres}
- 'PGRST_DB_SCHEMAS=${PGRST_DB_SCHEMAS:-public,storage,graphql_public}'
- PGRST_DB_MAX_ROWS=${PGRST_DB_MAX_ROWS:-1000}
- PGRST_DB_EXTRA_SEARCH_PATH=${PGRST_DB_EXTRA_SEARCH_PATH:-public}
- DEFAULT_ORGANIZATION_NAME=${STUDIO_DEFAULT_ORGANIZATION:-Default Organization}
- DEFAULT_PROJECT_NAME=${STUDIO_DEFAULT_PROJECT:-Default Project}
@ -320,10 +502,12 @@ services:
- SUPABASE_ANON_KEY=${SERVICE_SUPABASEANON_KEY}
- SUPABASE_SERVICE_KEY=${SERVICE_SUPABASESERVICE_KEY}
- AUTH_JWT_SECRET=${SERVICE_PASSWORD_JWT}
- PG_META_CRYPTO_KEY=${SERVICE_PASSWORD_PGMETACRYPTO}
- LOGFLARE_API_KEY=${SERVICE_PASSWORD_LOGFLARE}
- LOGFLARE_PUBLIC_ACCESS_TOKEN=${SERVICE_PASSWORD_LOGFLARE}
- LOGFLARE_PRIVATE_ACCESS_TOKEN=${SERVICE_PASSWORD_LOGFLAREPRIVATE}
- LOGFLARE_URL=http://supabase-analytics:4000
- 'SUPABASE_PUBLIC_API=${SERVICE_URL_SUPABASEKONG}'
# Next.js client-side environment variables (required for browser access)
- 'NEXT_PUBLIC_SUPABASE_URL=${SERVICE_URL_SUPABASEKONG}'
- NEXT_PUBLIC_SUPABASE_ANON_KEY=${SERVICE_SUPABASEANON_KEY}
@ -333,8 +517,13 @@ services:
# Uncomment to use Big Query backend for analytics
# NEXT_ANALYTICS_BACKEND_PROVIDER=bigquery
- 'OPENAI_API_KEY=${OPENAI_API_KEY}'
- SNIPPETS_MANAGEMENT_FOLDER=/app/snippets
- EDGE_FUNCTIONS_MANAGEMENT_FOLDER=/app/edge-functions
volumes:
- ./volumes/snippets:/app/snippets
- ./volumes/functions:/app/edge-functions
supabase-db:
image: supabase/postgres:15.8.1.048
image: supabase/postgres:15.8.1.085
healthcheck:
test: pg_isready -U postgres -h 127.0.0.1
interval: 5s
@ -365,7 +554,7 @@ services:
source: ./volumes/db/realtime.sql
target: /docker-entrypoint-initdb.d/migrations/99-realtime.sql
content: |
\set pguser `echo "supabase_admin"`
\set pguser `echo "$POSTGRES_USER"`
create schema if not exists _realtime;
alter schema _realtime owner to :pguser;
@ -380,7 +569,7 @@ services:
source: ./volumes/db/pooler.sql
target: /docker-entrypoint-initdb.d/migrations/99-pooler.sql
content: |
\set pguser `echo "supabase_admin"`
\set pguser `echo "$POSTGRES_USER"`
\c _supabase
create schema if not exists _supavisor;
alter schema _supavisor owner to :pguser;
@ -624,7 +813,7 @@ services:
source: ./volumes/db/logs.sql
target: /docker-entrypoint-initdb.d/migrations/99-logs.sql
content: |
\set pguser `echo "supabase_admin"`
\set pguser `echo "$POSTGRES_USER"`
\c _supabase
create schema if not exists _analytics;
alter schema _analytics owner to :pguser;
@ -633,7 +822,7 @@ services:
- supabase-db-config:/etc/postgresql-custom
supabase-analytics:
image: supabase/logflare:1.4.0
image: supabase/logflare:1.31.2
healthcheck:
test: ["CMD", "curl", "http://127.0.0.1:4000/health"]
timeout: 5s
@ -655,11 +844,10 @@ services:
- DB_PORT=${POSTGRES_PORT:-5432}
- DB_PASSWORD=${SERVICE_PASSWORD_POSTGRES}
- DB_SCHEMA=_analytics
- LOGFLARE_API_KEY=${SERVICE_PASSWORD_LOGFLARE}
- LOGFLARE_PUBLIC_ACCESS_TOKEN=${SERVICE_PASSWORD_LOGFLARE}
- LOGFLARE_PRIVATE_ACCESS_TOKEN=${SERVICE_PASSWORD_LOGFLAREPRIVATE}
- LOGFLARE_SINGLE_TENANT=true
- LOGFLARE_SINGLE_TENANT_MODE=true
- LOGFLARE_SUPABASE_MODE=true
- LOGFLARE_MIN_CLUSTER_SIZE=1
# Comment variables to use Big Query backend for analytics
- POSTGRES_BACKEND_URL=postgresql://supabase_admin:${SERVICE_PASSWORD_POSTGRES}@${POSTGRES_HOSTNAME:-supabase-db}:${POSTGRES_PORT:-5432}/_supabase
@ -670,7 +858,7 @@ services:
# GOOGLE_PROJECT_ID=${GOOGLE_PROJECT_ID}
# GOOGLE_PROJECT_NUMBER=${GOOGLE_PROJECT_NUMBER}
supabase-vector:
image: timberio/vector:0.28.1-alpine
image: timberio/vector:0.53.0-alpine
healthcheck:
test:
[
@ -722,13 +910,13 @@ services:
inputs:
- project_logs
route:
kong: 'starts_with(string!(.appname), "supabase-kong")'
auth: 'starts_with(string!(.appname), "supabase-auth")'
rest: 'starts_with(string!(.appname), "supabase-rest")'
realtime: 'starts_with(string!(.appname), "realtime-dev")'
storage: 'starts_with(string!(.appname), "supabase-storage")'
functions: 'starts_with(string!(.appname), "supabase-functions")'
db: 'starts_with(string!(.appname), "supabase-db")'
kong: 'contains(string!(.appname), "supabase-kong")'
auth: 'contains(string!(.appname), "supabase-auth")'
rest: 'contains(string!(.appname), "supabase-rest")'
realtime: 'contains(string!(.appname), "supabase-realtime")'
storage: 'contains(string!(.appname), "supabase-storage")'
functions: 'contains(string!(.appname), "supabase-edge-functions")'
db: 'contains(string!(.appname), "supabase-db")'
# Ignore non-nginx errors since they are related to Kong booting up
kong_logs:
type: remap
@ -741,10 +929,13 @@ services:
.metadata.request.headers.referer = req.referer
.metadata.request.headers.user_agent = req.agent
.metadata.request.headers.cf_connecting_ip = req.client
.metadata.request.method = req.method
.metadata.request.path = req.path
.metadata.request.protocol = req.protocol
.metadata.response.status_code = req.status
url, split_err = split(req.request, " ")
if split_err == null {
.metadata.request.method = url[0]
.metadata.request.path = url[1]
.metadata.request.protocol = url[2]
}
}
if err != null {
abort
@ -793,14 +984,20 @@ services:
parsed, err = parse_regex(.event_message, r'^(?P<time>.*): (?P<msg>.*)$')
if err == null {
.event_message = parsed.msg
.timestamp = to_timestamp!(parsed.time)
.timestamp = parse_timestamp!(value: parsed.time, format: "%d/%b/%Y:%H:%M:%S %z")
.metadata.host = .project
}
# Filter out healthcheck logs from Realtime
realtime_logs_filtered:
type: filter
inputs:
- router.realtime
condition: '!contains(string!(.event_message), "/health")'
# Realtime logs are structured, so we parse the severity level using regex (the time is ignored because it has no date)
realtime_logs:
type: remap
inputs:
- router.realtime
- realtime_logs_filtered
source: |-
.metadata.project = del(.project)
.metadata.external_id = .metadata.project
@ -825,6 +1022,13 @@ services:
.metadata.context[0].host = parsed.hostname
.metadata.context[0].pid = parsed.pid
}
# Function logs are unstructured messages on stderr
functions_logs:
type: remap
inputs:
- router.functions
source: |-
.metadata.project_ref = del(.project)
# Postgres logs some messages to stderr, which we map to the warning severity level
db_logs:
type: remap
@ -839,8 +1043,8 @@ services:
if err != null || parsed == null {
.metadata.parsed.error_severity = "info"
}
if parsed != null {
.metadata.parsed.error_severity = parsed.level
if parsed.level != null {
.metadata.parsed.error_severity = parsed.level
}
if .metadata.parsed.error_severity == "info" {
.metadata.parsed.error_severity = "log"
@ -856,8 +1060,11 @@ services:
codec: 'json'
method: 'post'
request:
retry_max_duration_secs: 10
uri: 'http://supabase-analytics:4000/api/logs?source_name=gotrue.logs.prod&api_key=${LOGFLARE_API_KEY?LOGFLARE_API_KEY is required}'
retry_max_duration_secs: 30
retry_initial_backoff_secs: 1
headers:
x-api-key: '${LOGFLARE_PUBLIC_ACCESS_TOKEN?LOGFLARE_PUBLIC_ACCESS_TOKEN is required}'
uri: 'http://supabase-analytics:4000/api/logs?source_name=gotrue.logs.prod'
logflare_realtime:
type: 'http'
inputs:
@ -866,8 +1073,11 @@ services:
codec: 'json'
method: 'post'
request:
retry_max_duration_secs: 10
uri: 'http://supabase-analytics:4000/api/logs?source_name=realtime.logs.prod&api_key=${LOGFLARE_API_KEY?LOGFLARE_API_KEY is required}'
retry_max_duration_secs: 30
retry_initial_backoff_secs: 1
headers:
x-api-key: '${LOGFLARE_PUBLIC_ACCESS_TOKEN?LOGFLARE_PUBLIC_ACCESS_TOKEN is required}'
uri: 'http://supabase-analytics:4000/api/logs?source_name=realtime.logs.prod'
logflare_rest:
type: 'http'
inputs:
@ -876,8 +1086,11 @@ services:
codec: 'json'
method: 'post'
request:
retry_max_duration_secs: 10
uri: 'http://supabase-analytics:4000/api/logs?source_name=postgREST.logs.prod&api_key=${LOGFLARE_API_KEY?LOGFLARE_API_KEY is required}'
retry_max_duration_secs: 30
retry_initial_backoff_secs: 1
headers:
x-api-key: '${LOGFLARE_PUBLIC_ACCESS_TOKEN?LOGFLARE_PUBLIC_ACCESS_TOKEN is required}'
uri: 'http://supabase-analytics:4000/api/logs?source_name=postgREST.logs.prod'
logflare_db:
type: 'http'
inputs:
@ -886,21 +1099,24 @@ services:
codec: 'json'
method: 'post'
request:
retry_max_duration_secs: 10
# We must route the sink through kong because ingesting logs before logflare is fully initialised will
# lead to broken queries from studio. This works by the assumption that containers are started in the
# following order: vector > db > logflare > kong
uri: 'http://supabase-kong:8000/analytics/v1/api/logs?source_name=postgres.logs&api_key=${LOGFLARE_API_KEY?LOGFLARE_API_KEY is required}'
retry_max_duration_secs: 30
retry_initial_backoff_secs: 1
headers:
x-api-key: '${LOGFLARE_PUBLIC_ACCESS_TOKEN?LOGFLARE_PUBLIC_ACCESS_TOKEN is required}'
uri: 'http://supabase-analytics:4000/api/logs?source_name=postgres.logs'
logflare_functions:
type: 'http'
inputs:
- router.functions
- functions_logs
encoding:
codec: 'json'
method: 'post'
request:
retry_max_duration_secs: 10
uri: 'http://supabase-analytics:4000/api/logs?source_name=deno-relay-logs&api_key=${LOGFLARE_API_KEY?LOGFLARE_API_KEY is required}'
retry_max_duration_secs: 30
retry_initial_backoff_secs: 1
headers:
x-api-key: '${LOGFLARE_PUBLIC_ACCESS_TOKEN?LOGFLARE_PUBLIC_ACCESS_TOKEN is required}'
uri: 'http://supabase-analytics:4000/api/logs?source_name=deno-relay-logs'
logflare_storage:
type: 'http'
inputs:
@ -909,8 +1125,11 @@ services:
codec: 'json'
method: 'post'
request:
retry_max_duration_secs: 10
uri: 'http://supabase-analytics:4000/api/logs?source_name=storage.logs.prod.2&api_key=${LOGFLARE_API_KEY?LOGFLARE_API_KEY is required}'
retry_max_duration_secs: 30
retry_initial_backoff_secs: 1
headers:
x-api-key: '${LOGFLARE_PUBLIC_ACCESS_TOKEN?LOGFLARE_PUBLIC_ACCESS_TOKEN is required}'
uri: 'http://supabase-analytics:4000/api/logs?source_name=storage.logs.prod.2'
logflare_kong:
type: 'http'
inputs:
@ -920,16 +1139,19 @@ services:
codec: 'json'
method: 'post'
request:
retry_max_duration_secs: 10
uri: 'http://supabase-analytics:4000/api/logs?source_name=cloudflare.logs.prod&api_key=${LOGFLARE_API_KEY?LOGFLARE_API_KEY is required}'
retry_max_duration_secs: 30
retry_initial_backoff_secs: 1
headers:
x-api-key: '${LOGFLARE_PUBLIC_ACCESS_TOKEN?LOGFLARE_PUBLIC_ACCESS_TOKEN is required}'
uri: 'http://supabase-analytics:4000/api/logs?source_name=cloudflare.logs.prod'
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
- LOGFLARE_API_KEY=${SERVICE_PASSWORD_LOGFLARE}
command: ["--config", "etc/vector/vector.yml"]
- LOGFLARE_PUBLIC_ACCESS_TOKEN=${SERVICE_PASSWORD_LOGFLARE}
command: ["--config", "/etc/vector/vector.yml"]
supabase-rest:
image: postgrest/postgrest:v12.2.12
image: postgrest/postgrest:v14.6
depends_on:
supabase-db:
# Disable this if you are using an external Postgres database
@ -939,6 +1161,8 @@ services:
environment:
- PGRST_DB_URI=postgres://authenticator:${SERVICE_PASSWORD_POSTGRES}@${POSTGRES_HOSTNAME:-supabase-db}:${POSTGRES_PORT:-5432}/${POSTGRES_DB:-postgres}
- 'PGRST_DB_SCHEMAS=${PGRST_DB_SCHEMAS:-public,storage,graphql_public}'
- PGRST_DB_MAX_ROWS=${PGRST_DB_MAX_ROWS:-1000}
- PGRST_DB_EXTRA_SEARCH_PATH=${PGRST_DB_EXTRA_SEARCH_PATH:-public}
- PGRST_DB_ANON_ROLE=anon
- PGRST_JWT_SECRET=${SERVICE_PASSWORD_JWT}
- PGRST_DB_USE_LEGACY_GUCS=false
@ -947,7 +1171,7 @@ services:
command: "postgrest"
exclude_from_hc: true
supabase-auth:
image: supabase/gotrue:v2.174.0
image: supabase/gotrue:v2.186.0
depends_on:
supabase-db:
# Disable this if you are using an external Postgres database
@ -975,7 +1199,7 @@ services:
- GOTRUE_DB_DRIVER=postgres
- GOTRUE_DB_DATABASE_URL=postgres://supabase_auth_admin:${SERVICE_PASSWORD_POSTGRES}@${POSTGRES_HOSTNAME:-supabase-db}:${POSTGRES_PORT:-5432}/${POSTGRES_DB:-postgres}
# The base URL your site is located at. Currently used in combination with other settings to construct URLs used in emails.
- GOTRUE_SITE_URL=${SERVICE_URL_SUPABASEKONG}
- GOTRUE_SITE_URL=${GOTRUE_SITE_URL:-${SERVICE_URL_SUPABASEKONG}}
# A comma separated list of URIs (e.g. "https://foo.example.com,https://*.foo.example.com,https://bar.example.com") which are permitted as valid redirect_to destinations.
- GOTRUE_URI_ALLOW_LIST=${ADDITIONAL_REDIRECT_URLS}
- GOTRUE_DISABLE_SIGNUP=${DISABLE_SIGNUP:-false}
@ -1038,7 +1262,7 @@ services:
realtime-dev:
# This container name looks inconsistent but is correct because realtime constructs tenant id by parsing the subdomain
image: supabase/realtime:v2.34.47
image: supabase/realtime:v2.76.5
container_name: realtime-dev.supabase-realtime
depends_on:
supabase-db:
@ -1062,6 +1286,7 @@ services:
timeout: 5s
interval: 5s
retries: 3
start_period: 10s
environment:
- PORT=4000
- DB_HOST=${POSTGRES_HOSTNAME:-supabase-db}
@ -1072,11 +1297,9 @@ services:
- DB_AFTER_CONNECT_QUERY=SET search_path TO _realtime
- DB_ENC_KEY=supabaserealtime
- API_JWT_SECRET=${SERVICE_PASSWORD_JWT}
- FLY_ALLOC_ID=fly123
- FLY_APP_NAME=realtime
- SECRET_KEY_BASE=${SECRET_PASSWORD_REALTIME}
- METRICS_JWT_SECRET=${SERVICE_PASSWORD_JWT}
- ERL_AFLAGS=-proto_dist inet_tcp
- ENABLE_TAILSCALE=false
- DNS_NODES=''
- RLIMIT_NOFILE=10000
- APP_NAME=realtime
@ -1084,6 +1307,7 @@ services:
- LOG_LEVEL=error
- RUN_JANITOR=true
- JANITOR_INTERVAL=60000
- DISABLE_HEALTHCHECK_LOGGING=true
command: >
sh -c "/app/bin/migrate && /app/bin/realtime eval 'Realtime.Release.seeds(Realtime.Repo)' && /app/bin/server"
supabase-minio:
@ -1121,7 +1345,7 @@ services:
exit 0
supabase-storage:
image: supabase/storage-api:v1.14.6
image: supabase/storage-api:v1.44.2
depends_on:
supabase-db:
# Disable this if you are using an external Postgres database
@ -1160,7 +1384,7 @@ services:
- UPLOAD_FILE_SIZE_LIMIT=524288000
- UPLOAD_FILE_SIZE_LIMIT_STANDARD=524288000
- UPLOAD_SIGNED_URL_EXPIRATION_TIME=120
- TUS_URL_PATH=upload/resumable
- TUS_URL_PATH=/upload/resumable
- TUS_MAX_SIZE=3600000
- ENABLE_IMAGE_TRANSFORMATION=true
- IMGPROXY_URL=http://imgproxy:8080
@ -1168,46 +1392,32 @@ services:
- DATABASE_SEARCH_PATH=storage
- NODE_ENV=production
- REQUEST_ALLOW_X_FORWARDED_PATH=true
# - ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.ewogICJyb2xlIjogImFub24iLAogICJpc3MiOiAic3VwYWJhc2UiLAogICJpYXQiOiAxNzA4OTg4NDAwLAogICJleHAiOiAxODY2ODQxMjAwCn0.jCDqsoXGT58JnAjf27KOowNQsokkk0aR7rdbGG18P-8
# - SERVICE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.ewogICJyb2xlIjogInNlcnZpY2Vfcm9sZSIsCiAgImlzcyI6ICJzdXBhYmFzZSIsCiAgImlhdCI6IDE3MDg5ODg0MDAsCiAgImV4cCI6IDE4NjY4NDEyMDAKfQ.GA7yF2BmqTzqGkP_oqDdJAQVt0djjIxGYuhE0zFDJV4
# - POSTGREST_URL=http://supabase-rest:3000
# - PGRST_JWT_SECRET=${SERVICE_PASSWORD_JWT}
# - DATABASE_URL=postgres://supabase_storage_admin:${SERVICE_PASSWORD_POSTGRES}@${POSTGRES_HOSTNAME:-supabase-db}:${POSTGRES_PORT:-5432}/${POSTGRES_DB:-postgres}
# - FILE_SIZE_LIMIT=52428800
# - STORAGE_BACKEND=s3
# - STORAGE_S3_BUCKET=stub
# - STORAGE_S3_ENDPOINT=http://supabase-minio:9000
# - STORAGE_S3_PROTOCOL=http
# - STORAGE_S3_REGION=stub
# - STORAGE_S3_FORCE_PATH_STYLE=true
# - AWS_ACCESS_KEY_ID=${SERVICE_USER_MINIO}
# - AWS_SECRET_ACCESS_KEY=${SERVICE_PASSWORD_MINIO}
# - AWS_DEFAULT_REGION=stub
# - FILE_STORAGE_BACKEND_PATH=/var/lib/storage
# - TENANT_ID=stub
# # TODO: https://github.com/supabase/storage-api/issues/55
# - REGION=stub
# - ENABLE_IMAGE_TRANSFORMATION=true
# - IMGPROXY_URL=http://imgproxy:8080
- ANON_KEY=${SERVICE_SUPABASEANON_KEY}
- SERVICE_KEY=${SERVICE_SUPABASESERVICE_KEY}
- POSTGREST_URL=http://supabase-rest:3000
- PGRST_JWT_SECRET=${SERVICE_PASSWORD_JWT}
- STORAGE_PUBLIC_URL=${SERVICE_URL_SUPABASEKONG}
- TENANT_ID=${STORAGE_TENANT_ID:-storage-single-tenant}
volumes:
- ./volumes/storage:/var/lib/storage
imgproxy:
image: darthsim/imgproxy:v3.8.0
image: darthsim/imgproxy:v3.30.1
healthcheck:
test: ["CMD", "imgproxy", "health"]
timeout: 5s
interval: 5s
retries: 3
environment:
- IMGPROXY_BIND=:8080
- IMGPROXY_LOCAL_FILESYSTEM_ROOT=/
- IMGPROXY_USE_ETAG=true
- IMGPROXY_ENABLE_WEBP_DETECTION=${IMGPROXY_ENABLE_WEBP_DETECTION:-true}
- IMGPROXY_AUTO_WEBP=${IMGPROXY_AUTO_WEBP:-true}
- IMGPROXY_MAX_SRC_RESOLUTION=16.8
volumes:
- ./volumes/storage:/var/lib/storage
supabase-meta:
image: supabase/postgres-meta:v0.89.3
image: supabase/postgres-meta:v0.95.2
depends_on:
supabase-db:
# Disable this if you are using an external Postgres database
@ -1221,9 +1431,10 @@ services:
- PG_META_DB_NAME=${POSTGRES_DB:-postgres}
- PG_META_DB_USER=supabase_admin
- PG_META_DB_PASSWORD=${SERVICE_PASSWORD_POSTGRES}
- CRYPTO_KEY=${SERVICE_PASSWORD_PGMETACRYPTO}
supabase-edge-functions:
image: supabase/edge-runtime:v1.67.4
image: supabase/edge-runtime:v1.71.2
depends_on:
supabase-analytics:
condition: service_healthy
@ -1234,26 +1445,40 @@ services:
retries: 3
environment:
- JWT_SECRET=${SERVICE_PASSWORD_JWT}
- SUPABASE_URL=${SERVICE_URL_SUPABASEKONG}
- SUPABASE_URL=http://supabase-kong:8000
- SUPABASE_PUBLIC_URL=${SERVICE_URL_SUPABASEKONG}
- SUPABASE_ANON_KEY=${SERVICE_SUPABASEANON_KEY}
- SUPABASE_SERVICE_ROLE_KEY=${SERVICE_SUPABASESERVICE_KEY}
- SUPABASE_DB_URL=postgresql://postgres:${SERVICE_PASSWORD_POSTGRES}@${POSTGRES_HOSTNAME:-supabase-db}:${POSTGRES_PORT:-5432}/${POSTGRES_DB:-postgres}
# TODO: Allow configuring VERIFY_JWT per function. This PR might help: https://github.com/supabase/cli/pull/786
# TODO: Allow configuring VERIFY_JWT per function.
- VERIFY_JWT=${FUNCTIONS_VERIFY_JWT:-false}
volumes:
- ./volumes/functions:/home/deno/functions
- deno-cache:/root/.cache/deno
- type: bind
source: ./volumes/functions/main/index.ts
target: /home/deno/functions/main/index.ts
content: |
import { serve } from 'https://deno.land/std@0.131.0/http/server.ts'
import * as jose from 'https://deno.land/x/jose@v4.14.4/index.ts'
console.log('main function started')
const JWT_SECRET = Deno.env.get('JWT_SECRET')
const SUPABASE_URL = Deno.env.get('SUPABASE_URL')
const VERIFY_JWT = Deno.env.get('VERIFY_JWT') === 'true'
// Create JWKS for ES256/RS256 tokens (newer tokens)
let SUPABASE_JWT_KEYS: ReturnType<typeof jose.createRemoteJWKSet> | null = null
if (SUPABASE_URL) {
try {
SUPABASE_JWT_KEYS = jose.createRemoteJWKSet(
new URL('/auth/v1/.well-known/jwks.json', SUPABASE_URL)
)
} catch (e) {
console.error('Failed to fetch JWKS from SUPABASE_URL:', e)
}
}
function getAuthToken(req: Request) {
const authHeader = req.headers.get('authorization')
if (!authHeader) {
@ -1266,23 +1491,61 @@ services:
return token
}
async function verifyJWT(jwt: string): Promise<boolean> {
const encoder = new TextEncoder()
const secretKey = encoder.encode(JWT_SECRET)
try {
await jose.jwtVerify(jwt, secretKey)
} catch (err) {
console.error(err)
async function isValidLegacyJWT(jwt: string): Promise<boolean> {
if (!JWT_SECRET) {
console.error('JWT_SECRET not available for HS256 token verification')
return false
}
return true
const encoder = new TextEncoder();
const secretKey = encoder.encode(JWT_SECRET)
try {
await jose.jwtVerify(jwt, secretKey);
} catch (e) {
console.error('Symmetric Legacy JWT verification error', e);
return false;
}
return true;
}
serve(async (req: Request) => {
async function isValidJWT(jwt: string): Promise<boolean> {
if (!SUPABASE_JWT_KEYS) {
console.error('JWKS not available for ES256/RS256 token verification')
return false
}
try {
await jose.jwtVerify(jwt, SUPABASE_JWT_KEYS)
} catch (e) {
console.error('Asymmetric JWT verification error', e);
return false
}
return true;
}
async function isValidHybridJWT(jwt: string): Promise<boolean> {
const { alg: jwtAlgorithm } = jose.decodeProtectedHeader(jwt)
if (jwtAlgorithm === 'HS256') {
console.log(`Legacy token type detected, attempting ${jwtAlgorithm} verification.`)
return await isValidLegacyJWT(jwt)
}
if (jwtAlgorithm === 'ES256' || jwtAlgorithm === 'RS256') {
return await isValidJWT(jwt)
}
return false;
}
Deno.serve(async (req: Request) => {
if (req.method !== 'OPTIONS' && VERIFY_JWT) {
try {
const token = getAuthToken(req)
const isValidJWT = await verifyJWT(token)
const isValidJWT = await isValidHybridJWT(token);
if (!isValidJWT) {
return new Response(JSON.stringify({ msg: 'Invalid JWT' }), {
@ -1348,9 +1611,7 @@ services:
// https://deno.land/manual/getting_started/setup_your_environment
// This enables autocomplete, go to definition, etc.
import { serve } from "https://deno.land/std@0.177.1/http/server.ts"
serve(async () => {
Deno.serve(async () => {
return new Response(
`"Hello from Edge Functions!"`,
{ headers: { "Content-Type": "application/json" } },
@@ -1367,7 +1628,7 @@ services:
- /home/deno/functions/main
supabase-supavisor:
image: 'supabase/supavisor:2.5.1'
image: 'supabase/supavisor:2.7.4'
healthcheck:
test:
- CMD
@@ -1379,13 +1640,14 @@ services:
timeout: 5s
interval: 5s
retries: 10
start_period: 30s
depends_on:
supabase-db:
condition: service_healthy
supabase-analytics:
condition: service_healthy
environment:
- POOLER_TENANT_ID=dev_tenant
- POOLER_TENANT_ID=${POOLER_TENANT_ID:-dev_tenant}
- POOLER_POOL_MODE=transaction
- POOLER_DEFAULT_POOL_SIZE=${POOLER_DEFAULT_POOL_SIZE:-20}
- POOLER_MAX_CLIENT_CONN=${POOLER_MAX_CLIENT_CONN:-100}
@@ -1402,10 +1664,20 @@ services:
- 'METRICS_JWT_SECRET=${SERVICE_PASSWORD_JWT}'
- REGION=local
- 'ERL_AFLAGS=-proto_dist inet_tcp'
- 'DB_POOL_SIZE=${POOLER_DB_POOL_SIZE:-5}'
# TLS for downstream connections (fixes Supabase CLI TLS requirement)
- GLOBAL_DOWNSTREAM_CERT_PATH=/etc/ssl/server.crt
- GLOBAL_DOWNSTREAM_KEY_PATH=/etc/ssl/server.key
command:
- /bin/sh
- "-c"
- '/app/bin/migrate && /app/bin/supavisor eval "$$(cat /etc/pooler/pooler.exs)" && /app/bin/server'
- |
if [ ! -f /etc/ssl/server.crt ]; then
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
-keyout /etc/ssl/server.key -out /etc/ssl/server.crt \
-subj "/CN=supabase-pooler"
fi
/app/bin/migrate && /app/bin/supavisor eval "$$(cat /etc/pooler/pooler.exs)" && /app/bin/server
volumes:
- type: bind
source: ./volumes/pooler/pooler.exs
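The reworked supavisor command bootstraps TLS for downstream connections by generating a self-signed certificate only when one is not already present (the `$$` doubles are Compose escaping, so a literal `$(cat …)` reaches the shell). The same idempotent guard can be sketched outside the container with Python's stdlib driving the `openssl` CLI; the paths and the `ensure_self_signed_cert` helper are illustrative, not part of the image:

```python
import os
import subprocess
import tempfile

def ensure_self_signed_cert(cert: str, key: str, cn: str = "supabase-pooler") -> bool:
    """Generate a 10-year self-signed cert pair only if none exists yet.

    Mirrors the guard added to the supavisor start command: the existence
    check makes the bootstrap safe to re-run. Returns True if a new
    certificate was generated, False if an existing one was kept.
    """
    if os.path.isfile(cert):
        return False
    subprocess.run(
        ["openssl", "req", "-x509", "-nodes", "-days", "3650",
         "-newkey", "rsa:2048", "-keyout", key, "-out", cert,
         "-subj", f"/CN={cn}"],
        check=True, capture_output=True,
    )
    return True

with tempfile.TemporaryDirectory() as d:
    cert, key = os.path.join(d, "server.crt"), os.path.join(d, "server.key")
    print(ensure_self_signed_cert(cert, key))  # first call generates the pair
    print(ensure_self_signed_cert(cert, key))  # second call is a no-op
```

Because the guard keys off the certificate file, restarts of the same container reuse the first boot's certificate rather than rotating it on every start.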


@@ -1,6 +1,6 @@
# documentation: https://docs.twenty.com
# slogan: Twenty is a CRM designed to fit your unique business needs.
# category: cms
# category: productivity
# tags: crm, self-hosted, dashboard
# logo: svgs/twenty.svg
# port: 3000


@@ -1,6 +1,6 @@
# documentation: https://docs.getunleash.io
# slogan: Open source feature flag management for enterprises.
# category: productivity
# category: devtools
# tags: unleash,feature flags,feature toggles,ab testing,open source
# logo: svgs/unleash.svg
# port: 4242


@@ -1,6 +1,6 @@
# documentation: https://docs.getunleash.io
# slogan: Open source feature flag management for enterprises.
# category: productivity
# category: devtools
# tags: unleash,feature flags,feature toggles,ab testing,open source
# logo: svgs/unleash.svg
# port: 4242


@@ -1,6 +1,6 @@
# documentation: https://docs.usesend.com/self-hosting/overview
# slogan: Usesend is an open-source alternative to Resend, Sendgrid, Mailgun, Postmark, etc.
# category: messaging
# category: email
# tags: resend, mailer, marketing emails, transaction emails, self-hosting, postmark
# logo: svgs/usesend.svg
# port: 3000

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -512,6 +512,9 @@
'--cap-add=NET_ADMIN --cap-add=NET_RAW',
'--privileged --init',
'--memory=512m --cpus=2',
'--entrypoint "sh -c \'npm start\'"',
'--entrypoint "sh -c \'php artisan schedule:work\'"',
'--hostname "my-host"',
]);
});


@@ -1,16 +1,16 @@
{
"coolify": {
"v4": {
"version": "4.0.0-beta.471"
"version": "4.0.0-beta.472"
},
"nightly": {
"version": "4.0.0"
},
"helper": {
"version": "1.0.12"
"version": "1.0.13"
},
"realtime": {
"version": "1.0.11"
"version": "1.0.12"
},
"sentinel": {
"version": "0.0.21"