Self-hosting setup and maintenance
A minimal “development” setup (e.g. for a staging or a QA environment) is:
This setup has no redundancy. If the replica set fails, you may need to recreate it from scratch, which will cause all clients to re-sync.
For production, we recommend running a high-availability setup:
For scaling up, add 1x PowerSync API container per 100 concurrent connections. The MongoDB replica set should be scaled based on CPU and memory usage.
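As an illustrative sketch only, a minimal development deployment along these lines could be expressed with Docker Compose as follows. The image name, config mount path, API port and MongoDB setup are assumptions to adapt to your environment:

```yaml
# Minimal development sketch (no redundancy): one replication container,
# one API container, and a single-node MongoDB replica set for bucket storage.
services:
  mongo:
    image: mongo:7
    command: ["mongod", "--replSet", "rs0"]   # the replica set must still be initiated with rs.initiate()
    volumes:
      - mongo-data:/data/db

  powersync-replication:
    image: powersync                          # placeholder image name, as used in the commands below
    command: ["start", "-r", "sync"]
    environment:
      NODE_OPTIONS: --max-old-space-size=800
    volumes:
      - ./powersync.yaml:/config/powersync.yaml   # assumed config mount path
    depends_on:
      - mongo

  powersync-api:
    image: powersync
    command: ["start", "-r", "api"]
    environment:
      NODE_OPTIONS: --max-old-space-size=800
    ports:
      - "8080:8080"                           # assumed API port, behind a TLS-terminating load balancer
    volumes:
      - ./powersync.yaml:/config/powersync.yaml
    depends_on:
      - mongo

volumes:
  mongo-data:
```

For production, each of these components would instead run with redundancy and be scaled as described above.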
The replication container handles replicating from the source database to PowerSync’s bucket storage.
The replication process is run using the docker command start -r sync, for example docker run powersync start -r sync.
Only one process can replicate at a time. If multiple replication processes run concurrently, you may see the error [PSYNC_S1003] Sync rules have been locked by another process for replication.
If you use rolling deploys, it is normal to see this error for a short duration while multiple processes are running.
Memory and CPU usage of the replication container is primarily driven by write load on the source database. A good starting point is 1GB memory and 1 vCPU for the container, but this may be scaled down depending on the load patterns.
Set the environment variable NODE_OPTIONS=--max-old-space-size=800 for 800MB, or set it to 80% of the total assigned memory if scaling up or down.
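For example, when running the replication container directly with Docker, the limits could be passed like this (the 1GB container limit and image name are illustrative):

```sh
# Give the container 1GB of memory, and ~80% of that to the Node.js heap
docker run -m 1g -e NODE_OPTIONS=--max-old-space-size=800 powersync start -r sync
```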
The API container handles streaming sync connections, as well as any other API calls.
The API process is run using the docker command start -r api, for example docker run powersync start -r api.
Each API container is limited to 200 concurrent connections, but we recommend targeting 100 concurrent connections or less per container. This may change as we implement additional performance optimizations.
Memory and CPU usage of API containers is driven primarily by the number of concurrent connections and the amount of data synced over those connections.
A good starting point is 1GB memory and 1 vCPU per container, but this may be scaled up or down depending on the specific load patterns.
Set the environment variable NODE_OPTIONS=--max-old-space-size=800 for 800MB, or set it to 80% of the total assigned memory if scaling up or down.
We recommend running a compact job daily as a cron job, or after any large maintenance jobs. For details, see the documentation on Compacting Buckets.
Run the compact job using the docker command compact, for example docker run powersync compact.
The compact job uses up to 1GB of memory for compacting, if available. Set the environment variable NODE_OPTIONS=--max-old-space-size=800 for 800MB, or set it to 80% of the total assigned memory if scaling up or down.
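For example, on a host that runs the containers with Docker, a crontab entry along these lines could schedule the daily compact job; the config mount path and schedule are illustrative:

```
# Run the PowerSync compact job every day at 03:00
0 3 * * * docker run --rm -v /etc/powersync:/config -e NODE_OPTIONS=--max-old-space-size=800 powersync compact
```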
A load balancer is required in front of the API containers to provide TLS support and load balancing. Most cloud providers have built-in options for load balancing, such as ALB on AWS.
It is currently required to host the API container on a dedicated subdomain; running it on the same subdomain as another service is not supported.
For self-hosted deployments, nginx is a good option. A basic nginx configuration could look like this:
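The sketch below is illustrative rather than a drop-in configuration: the domain, certificate paths and upstream port 8080 are assumptions to adapt to your deployment. The important detail is that proxy buffering is disabled, so streaming sync responses reach clients immediately.

```nginx
server {
    listen 443 ssl;
    server_name powersync.example.com;          # dedicated subdomain for the PowerSync API

    ssl_certificate     /etc/nginx/certs/powersync.crt;
    ssl_certificate_key /etc/nginx/certs/powersync.key;

    location / {
        proxy_pass http://127.0.0.1:8080;       # PowerSync API container (port assumed)
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Connection "";
        # Streaming sync responses must not be buffered by the proxy
        proxy_buffering off;
        proxy_read_timeout 300s;
    }
}
```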
When using nginx as a Kubernetes ingress, set the proxy buffering option as an annotation on the ingress:
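For example, with the commonly used ingress-nginx controller (an assumption; other nginx-based ingress controllers may use different annotation keys), this could look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: powersync
  annotations:
    # Disable proxy buffering so streaming sync responses are not delayed
    nginx.ingress.kubernetes.io/proxy-buffering: "off"
spec:
  rules:
    - host: powersync.example.com   # illustrative hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: powersync-api # illustrative service name
                port:
                  number: 8080      # assumed API port
```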
If the load balancer supports health checks, it may be configured to poll the API container at /probes/liveness. This endpoint is expected to return a 200 response when the container is healthy. See Healthchecks for details.
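As one illustration, a Kubernetes-style container probe against this endpoint could be sketched as follows; the port is an assumption and should match the port configured for the API container:

```yaml
# Probes the PowerSync API container; an HTTP 200 response indicates a healthy container
livenessProbe:
  httpGet:
    path: /probes/liveness
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
```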
Occasionally, new versions of the PowerSync service image may require migrations on the underlying storage database. This is also specifically required the first time the service starts up on a new storage database.
By default, migrations are run as part of the replication and API containers. In some cases, a migration may add significant delay to the container startup.
To avoid this startup delay, the migrations may be run as a separate job on each update, before replacing the rest of the containers. To run the migrations, use the docker command migrate up, for example docker run powersync migrate up.
In this case, disable automatic migrations in the config:
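As a sketch, assuming automatic migrations are controlled by a migrations section in the powersync.yaml configuration file, this could look like:

```yaml
# powersync.yaml (sketch): prevent the replication and API containers from
# running storage migrations automatically on startup
migrations:
  disable_auto_migration: true
```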
Note that if you disable automatic migrations, and do not run the migration job manually, the service may run with an outdated storage schema version. This may lead to unexpected and potentially difficult-to-debug errors in the service.
We recommend using Git to backup your configuration files.
None of the containers use any local storage, so no backups are required there.
The sync bucket storage database may be backed up following the standard backup recommendations for the storage database system. This is not a strong requirement, since the data can be recovered by re-replicating from the source database.