
Overview

Kaneo supports private uploads in task descriptions and task comments. Uploads use an S3-compatible object storage backend:
  • the Kaneo API creates presigned upload URLs
  • the browser uploads files directly to the storage backend
  • Kaneo finalizes uploads into private asset records
  • Kaneo serves uploaded files back through its own API
This means Kaneo can work with any backend that exposes a compatible S3-style API.

Current behavior:
  • images render inline
  • other files such as CSV, PDF, or ZIP are inserted as attachment cards/links
  • uploaded files are private by default and are not meant to be served from public bucket URLs
If you want backend-specific setup examples, see the storage backends guide.

For local or self-hosted deployments, MinIO is the recommended storage backend. It gives you:
  • a stable S3-compatible API
  • a web console for bucket management
  • a simple Docker setup
  • better compatibility for direct browser uploads
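The points above are easy to try locally. As a sketch (the ports, volume name, and credentials are examples, not Kaneo defaults), a single `docker run` starts MinIO with its web console:

```shell
# Run MinIO locally: S3 API on :9000, web console on :9001.
# Credentials here match the example .env below; change them in production.
docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  -v minio-data:/data \
  minio/minio server /data --console-address ":9001"
```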
Example .env values:
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=minioadmin

S3_ENDPOINT=http://minio:9000
S3_BUCKET=kaneo-uploads
S3_ACCESS_KEY_ID=minioadmin
S3_SECRET_ACCESS_KEY=minioadmin
S3_REGION=us-east-1
S3_FORCE_PATH_STYLE=true
Create the bucket before using uploads. You do not need to make the bucket public.
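You can create the bucket from the MinIO web console, or with the MinIO client `mc`. A minimal sketch, assuming the alias name `local` and the example credentials from the `.env` above:

```shell
# Register the MinIO endpoint under an alias, then create the bucket.
# The bucket stays private; Kaneo serves assets through its own API.
mc alias set local http://localhost:9000 minioadmin minioadmin
mc mb local/kaneo-uploads
```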

Important: internal Docker URLs are not enough in production

When Kaneo generates a presigned upload URL, the browser uploads directly to your storage backend. That means S3_ENDPOINT must be reachable by the browser, not just by Docker containers. For example:
  • http://minio:9000 works only inside Docker
  • https://files.example.com works from the browser
If your Kaneo deployment is public, do not leave S3_ENDPOINT set to http://minio:9000. Use a public MinIO hostname instead, for example:
S3_ENDPOINT=https://files.cloud.kaneo.app
S3_BUCKET=kaneo-uploads
S3_ACCESS_KEY_ID=minioadmin
S3_SECRET_ACCESS_KEY=minioadmin
S3_REGION=us-east-1
S3_FORCE_PATH_STYLE=true
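As a sanity check before deploying, a small script can flag endpoints that a browser cannot resolve. This is a rough heuristic and not part of Kaneo: it assumes a hostname without a dot is a Docker-internal service name.

```shell
#!/bin/sh
# Heuristic: browsers can only resolve public, dotted hostnames.
# A bare name like "minio" is a Docker service name.
endpoint_looks_public() {
  # Strip the scheme, then the port/path, leaving only the hostname.
  host=$(printf '%s' "$1" | sed -E 's#^[a-z]+://##; s#[:/].*$##')
  case "$host" in
    *.*) return 0 ;;   # dotted hostname: resolvable by a browser
    *)   return 1 ;;   # bare name: Docker-internal only
  esac
}

endpoint_looks_public "http://minio:9000" && echo public || echo internal              # prints "internal"
endpoint_looks_public "https://files.cloud.kaneo.app" && echo public || echo internal  # prints "public"
```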
The simplest production setup is:
  • cloud.kaneo.app for Kaneo
  • files.cloud.kaneo.app for MinIO
Expose MinIO on its own public hostname through your reverse proxy. Why this is needed:
  • Kaneo signs uploads against S3_ENDPOINT
  • the browser uses that signed URL directly
  • Docker-internal names like minio are not resolvable from a user’s browser
Using a dedicated subdomain is recommended over proxying MinIO under a path. Reads do not need a public bucket URL, because Kaneo serves uploaded assets back through /api/asset/:id.
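To confirm the public hostname is actually reachable from outside the Docker network, you can hit MinIO's liveness endpoint from any external machine (the hostname below is the example from this guide):

```shell
# MinIO exposes /minio/health/live as a liveness check.
# A correctly proxied public endpoint should return HTTP 200.
curl -fsS -o /dev/null -w '%{http_code}\n' \
  https://files.cloud.kaneo.app/minio/health/live
```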

Other S3-compatible backends

Kaneo is not tied to MinIO. Any deployment can point Kaneo at another S3-compatible backend by changing the S3_* environment variables. Examples:
  • AWS S3
  • Cloudflare R2
  • MinIO
  • fs

Using fs

fs is another S3-compatible option if you want a lightweight local object store. It is built by friends of Kaneo. When using fs, configure Kaneo like this:
S3_ENDPOINT=http://fs:2600
S3_BUCKET=kaneo-uploads
S3_ACCESS_KEY_ID=<your-access-key>
S3_SECRET_ACCESS_KEY=<your-secret-key>
S3_REGION=us-east-1
S3_FORCE_PATH_STYLE=true
Important notes for fs:
  • buckets are created through the S3 API or CLI, not through a web dashboard
  • Kaneo still expects the bucket to exist before uploads start
  • direct browser uploads may need reverse-proxy CORS headers, because fs does not currently implement S3 CORS APIs
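You can verify your CORS setup by simulating the browser's preflight request by hand. A sketch, with a placeholder hostname and object key:

```shell
# Simulate the preflight the browser sends before a direct PUT upload.
# Replace the origin, hostname, and key with your own values.
curl -is -X OPTIONS \
  -H 'Origin: https://cloud.kaneo.app' \
  -H 'Access-Control-Request-Method: PUT' \
  'https://files.example.com/kaneo-uploads/example-key' | head -n 20
# A working setup includes Access-Control-Allow-Origin in the response.
```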

Required Kaneo storage variables

Set these in your Kaneo deployment:
  • S3_ENDPOINT: S3-compatible API endpoint used by Kaneo
  • S3_BUCKET: Bucket used for uploaded files
  • S3_ACCESS_KEY_ID: Access key used to create presigned upload URLs
  • S3_SECRET_ACCESS_KEY: Secret key used to create presigned upload URLs
  • S3_REGION: Region used for signing
  • S3_FORCE_PATH_STYLE: Usually true for MinIO and fs
Optional:
  • S3_PUBLIC_BASE_URL: Public asset base URL. Kaneo does not require this for the current private asset flow.
  • S3_MAX_IMAGE_UPLOAD_BYTES: Maximum allowed upload size in bytes for images and other uploaded files
  • S3_PRESIGN_TTL_SECONDS: Lifetime of presigned upload URLs, in seconds
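Kaneo validates its own configuration, but a deployment script can fail fast before starting the API. A minimal standalone helper, purely illustrative:

```shell
#!/bin/sh
# Check that the required S3_* variables are set before starting Kaneo.
# Prints the missing names and fails if any are absent.
check_s3_env() {
  missing=""
  for var in S3_ENDPOINT S3_BUCKET S3_ACCESS_KEY_ID S3_SECRET_ACCESS_KEY S3_REGION; do
    eval "val=\${$var:-}"            # read the variable named by $var
    [ -n "$val" ] || missing="$missing $var"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "ok"
}

check_s3_env || echo "refusing to start Kaneo"
```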

Testing uploads

Once storage is configured:
  1. Open a task.
  2. Paste, drag, or select a file inside the description or a comment.
  3. Confirm images render inline and other files render as attachment cards/links.
If uploads fail:
  • confirm the bucket exists
  • confirm the credentials can write to the bucket
  • confirm the storage endpoint is reachable from the browser
  • confirm S3_ENDPOINT is a public URL, not an internal Docker hostname
  • if using direct browser uploads, confirm CORS is configured correctly for your storage endpoint
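The checklist above can be walked through from the shell with the MinIO client. A sketch (the alias name `kaneo` and the test object key are examples); each command isolates one failure mode:

```shell
# Point mc at the same endpoint and credentials Kaneo uses.
mc alias set kaneo "$S3_ENDPOINT" "$S3_ACCESS_KEY_ID" "$S3_SECRET_ACCESS_KEY"

# Bucket exists and is listable with these credentials.
mc ls kaneo/kaneo-uploads

# Credentials can write to the bucket.
echo test | mc pipe kaneo/kaneo-uploads/uploads-healthcheck.txt
```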