Here are the recommended items you should implement as part of supporting PowerSync in a production environment.
  1. Client SDK Diagnostics - Implement a sync diagnostics screen/view in your client application that provides critical sync information.
  2. Client logging - Implement logging in your client application to capture sync events and errors.
  3. Issue Alerts - Trigger notifications when the PowerSync replicator runs into errors.
  4. Database - Make sure your database is ready for production when integrated with PowerSync.

Client specific

SDK Diagnostics

It’s important to know what’s going on in a PowerSync-enabled client application; this becomes especially useful when debugging issues with end users. We recommend adding a view/screen in your application that offers diagnostic information about a client. It should include the following client-specific information:
  1. connected - Boolean; True if the client is connected to the PowerSync Service instance. False if not.
  2. connecting - Boolean; True if the client is attempting to connect to the PowerSync Service instance. False if not.
  3. uploading - Boolean; True while the client has a network connection and is uploading changes from the upload queue to the backend API via the uploadData function; false otherwise. This field is found on the dataFlowStatus object.
  4. downloading - Boolean; True while the client is connected to the PowerSync Service and is downloading new data; false otherwise. This field is found on the dataFlowStatus object.
  5. hasSynced - Boolean; True if the client completed a full sync at least once. False if the client never completed a full sync.
  6. lastSyncedAt - DateTime; Timestamp of when the client last completed a full sync.
Each of the PowerSync Client SDKs has a SyncStatus class that can be used to access the fields mentioned above. In addition to the SyncStatus fields, it’s also a good idea to surface the current length of the upload queue. The upload queue contains all local mutations that still need to be processed by the client-specific uploadData implementation. To get this information you can simply count the number of rows in the internal ps_crud SQLite table, e.g.
SELECT COUNT(*) AS row_count FROM ps_crud;
If you’re interested in learning more about the internal PowerSync SQLite schema, see the Client Architecture section of the docs.
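Putting the above together, here’s a minimal sketch of a diagnostics helper (shown with the JavaScript SDK; adapt the import and API surface to your platform). The helper name and return shape are illustrative, not part of the SDK:
import { PowerSyncDatabase } from '@powersync/web';

// Illustrative helper: gathers the SyncStatus fields plus the upload queue size.
export async function getSyncDiagnostics(powerSync: PowerSyncDatabase) {
  const status = powerSync.currentStatus;

  // Count pending local mutations still waiting for uploadData()
  const { row_count } = await powerSync.get<{ row_count: number }>(
    'SELECT COUNT(*) AS row_count FROM ps_crud'
  );

  return {
    connected: status.connected,
    connecting: status.connecting,
    uploading: status.dataFlowStatus.uploading,
    downloading: status.dataFlowStatus.downloading,
    hasSynced: status.hasSynced,
    lastSyncedAt: status.lastSyncedAt,
    uploadQueueCount: row_count
  };
}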

Client logging

Using Sentry logging for Log Aggregation

This is just an example of how to implement logging with Sentry; the actual implementation is up to you as the developer. You don’t have to use Sentry, but we do recommend using some form of log aggregation service in production.
App Entry Point
import { StrictMode } from 'react';
import { createRoot } from 'react-dom/client';
import * as Sentry from '@sentry/react';
// App and SystemProvider come from your own application code
import { App } from './App';
import { SystemProvider } from './SystemProvider';

createRoot(document.getElementById("root")!, {
  // Callback called when React encounters an uncaught error.
  onUncaughtError: Sentry.reactErrorHandler((error, errorInfo) => {
    console.warn('Uncaught error', error, errorInfo.componentStack);
  }),
  // Callback called when React catches an error in an ErrorBoundary.
  onCaughtError: Sentry.reactErrorHandler(),
  // Callback called when React automatically recovers from errors.
  onRecoverableError: Sentry.reactErrorHandler(),
}).render(
  <StrictMode>
    <SystemProvider>
      <App />
    </SystemProvider>
  </StrictMode>
);
System.ts
import * as Sentry from '@sentry/react';
import { createBaseLogger, LogLevel, PowerSyncDatabase } from '@powersync/react-native';
// AppSchema (your PowerSync schema) and connector (your backend connector)
// are assumed to be defined elsewhere in your application.

// Initialize Sentry
Sentry.init({
  dsn: 'YOUR_SENTRY_DSN_HERE',
  transport: Sentry.makeBrowserOfflineTransport(Sentry.makeFetchTransport), // Handle offline scenarios
  enableLogs: true // Enable Sentry logging
});

const logger = createBaseLogger();
logger.useDefaults();
logger.setLevel(LogLevel.WARN);

logger.setHandler((messages, context) => {
  if (!context?.level) return;

  // Get the main message and combine any additional data
  const messageArray = Array.from(messages);
  const mainMessage = String(messageArray[0] || 'Empty log message');
  const extraData = messageArray.slice(1).reduce((acc, curr) => ({ ...acc, ...curr }), {});

  const level = context.level.name.toLowerCase();

  // addBreadcrumb creates a trail of events leading up to errors
  // This helps debug by showing PowerSync state/operations before crashes
  // Breadcrumbs appear in Sentry error reports for context
  // A breadcrumb is added for every message that reaches this handler; lower
  // the log level above (e.g. LogLevel.DEBUG) if you also want info/debug breadcrumbs
  Sentry.addBreadcrumb({
    message: mainMessage,
    level: level as Sentry.SeverityLevel,
    data: extraData,
    timestamp: Date.now() / 1000 // Sentry breadcrumb timestamps are in seconds, not milliseconds
  });

  // Only send warnings and errors to Sentry
  if (level === 'warn' || level === 'error') {
    console[level](`PowerSync ${level.toUpperCase()}:`, mainMessage, extraData);
    Sentry.logger[level](mainMessage, extraData);
  }
});

// Create PowerSync instance
export const powerSync = new PowerSyncDatabase({
  schema: AppSchema,
  database: {
    dbFilename: 'example.db'
  },
  logger: logger // Pass the logger to PowerSync
});

// Register a listener to monitor PowerSync status changes and log upload/download errors that are not handled directly by the SDK
powerSync.registerListener({
  statusChanged: (status) => {
    // Check for download errors and log them with context
    if(status.dataFlowStatus?.downloadError) {
      logger.error("PowerSync sync download failed", {
        userSession: connector.currentSession,    // Current user session for tracking
        lastSyncAt: status?.lastSyncedAt,        // When the last successful sync occurred
        connected: status?.connected,            // Network connection status
        sdkVersion: powerSync.sdkVersion || 'unknown', // PowerSync SDK version for debugging
        downloadError: status.dataFlowStatus?.downloadError // The actual download error details
      });
    }

    // Check for upload errors and log them with context
    if(status.dataFlowStatus?.uploadError) {
      logger.error("PowerSync sync upload failed", {
        userSession: connector.currentSession,    // Current user session for tracking
        lastSyncAt: status?.lastSyncedAt,        // When the last successful sync occurred
        connected: status?.connected,            // Network connection status
        sdkVersion: powerSync.sdkVersion || 'unknown', // PowerSync SDK version for debugging
        uploadError: status.dataFlowStatus?.uploadError   // The actual upload error details
      });
    }
  }
});

// Example usage with additional context (userID and status come from your application code)
logger.error('PowerSync sync failed', {
  userId: userID,
  lastSyncAt: status?.lastSyncedAt,
  connected: status?.connected,
  sdkVersion: powerSync.sdkVersion || 'unknown',
});

Best Practices

  • Log Level Management: Use appropriate log levels (WARN/ERROR) in production
  • Structured Logging: Include relevant context like user IDs, operation types, timestamps
  • Offline Resilience: Always have a local fallback for critical logs
  • Performance: Be mindful of log volume to avoid performance impacts
  • Privacy: Ensure sensitive data is not logged or is properly sanitized (see the sketch after this list)
  • Retention: Implement log rotation/cleanup for local storage to manage device storage (if applicable)
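To illustrate the structured logging and privacy points above, here’s a minimal sketch of a helper that masks sensitive fields before they reach the logger. The helper and the list of sensitive keys are illustrative, not part of any SDK:
// Illustrative helper: mask sensitive fields before logging.
// Adjust the list of sensitive keys for your data model.
const SENSITIVE_KEYS = ['password', 'accessToken', 'refreshToken', 'email'];

export function sanitizeLogData(data: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(data).map(([key, value]) =>
      SENSITIVE_KEYS.includes(key) ? [key, '[REDACTED]'] : [key, value]
    )
  );
}

// Usage: structured context with sensitive values masked
logger.error('PowerSync sync failed', sanitizeLogData({
  userId: userID,
  email: 'person@example.com', // will be redacted
  lastSyncAt: status?.lastSyncedAt
}));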

Issue Alerts

PowerSync Cloud

The PowerSync Cloud dashboard offers features that make it easy to monitor the replication process from your source DB to your PowerSync Service instance and to raise alerts when issues occur. We highly recommend reading the sections below and configuring alerts as suggested.

Replication Issue Alerts

At a minimum we recommend creating an issue alert for replication issues. For detailed instructions on how to configure Issue Alerts, see the Issue Alerts section of the Monitoring and Alerting docs. Here’s a quick example of what the Issue Alert should look like to catch replication issues:
Example replication issue alert setup
Once configured, set up a Webhook alert or Email notifications to ensure you are notified when replication issues arise.

PowerSync Self-Host

To view the health and errors for a self-hosted PowerSync Service there are a few different options:

Health Check Endpoints

The PowerSync Service offers a few HTTP endpoints you can probe to perform health checks on an instance. These endpoints return a specific HTTP status code depending on the current health of the instance, but do not give specific error information. For more information, see the Health checks docs.
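As a rough sketch, a monitoring job could probe one of these endpoints on an interval and alert when it stops returning a successful status. The probe path and alerting hook below are assumptions; check the Health checks docs for the exact endpoints your version exposes:
// Illustrative health probe: the endpoint path and sendAlert are placeholders.
const HEALTH_URL = 'https://your-powersync-host/probes/liveness';

async function checkHealth(): Promise<void> {
  try {
    const response = await fetch(HEALTH_URL);
    if (!response.ok) {
      // A non-2xx status means the instance is reporting itself as unhealthy
      await sendAlert(`PowerSync health check failed with HTTP ${response.status}`);
    }
  } catch (error) {
    // Network-level failure: the instance may be down or unreachable
    await sendAlert(`PowerSync health check request failed: ${error}`);
  }
}

// Placeholder for your notification channel (email, Slack, PagerDuty, etc.)
async function sendAlert(message: string): Promise<void> {
  console.error(message);
}

// Probe every 60 seconds
setInterval(checkHealth, 60_000);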

Diagnostics API

The PowerSync Service Diagnostics API is an easy way to get details about specific errors occurring on an instance. To set up replication issue alerts for self-hosted instances, we recommend using the Diagnostics API, which ships with the PowerSync Service, as the source of replication issues. First, make sure the Diagnostics API is configured for your PowerSync Service by following the steps in the PowerSync Self-Host Diagnostics docs. Once enabled, send a request to the Diagnostics API to see the current status. A response from the Diagnostics API looks something like this:
{
	"data": {
		"connections": [
			{
				"id": "default",
				"postgres_uri": "postgresql://powersync:5432/postgres",
				"connected": true,
				"errors": []
			}
		],
		"active_sync_rules": {
			"connections": [
				{
					"id": "default",
					"tag": "default",
					"slot_name": "powersync_1_6489",
					"initial_replication_done": true,
					"last_lsn": "00000000/0AB81970",
					"last_keepalive_ts": "2025-08-26T15:51:49.746Z",
					"last_checkpoint_ts": "2025-08-26T15:44:10.624Z",
					"replication_lag_bytes": 0,
					"tables": [
						{
							"schema": "public",
							"name": "counters",
							"replication_id": [
								"id"
							],
							"data_queries": true,
							"parameter_queries": false,
							"errors": []
						}
					]
				}
			],
			"errors": []
		}
	}
}
The easiest way to check for replication issues is to poll the Diagnostics endpoint on an interval and keep an eye on the errors arrays; these are populated as errors arise on the service.
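As a rough sketch of such a check (the Diagnostics API URL, authentication and alerting hook are assumptions; see the Self-Host Diagnostics docs for the actual request format), a small job could poll the API and alert whenever any errors array is non-empty:
// Illustrative poller: the URL and sendAlert are placeholders; the response shape
// mirrors the example above.
const DIAGNOSTICS_URL = 'https://your-powersync-host/diagnostics';

interface DiagnosticsResponse {
  data: {
    connections: Array<{ id: string; connected: boolean; errors: unknown[] }>;
    active_sync_rules: {
      connections: Array<{ id: string; tables: Array<{ name: string; errors: unknown[] }> }>;
      errors: unknown[];
    };
  };
}

async function checkDiagnostics(): Promise<void> {
  const response = await fetch(DIAGNOSTICS_URL);
  const body = (await response.json()) as DiagnosticsResponse;

  const errors = [
    ...body.data.connections.flatMap((c) => c.errors),
    ...body.data.active_sync_rules.errors,
    ...body.data.active_sync_rules.connections.flatMap((c) =>
      c.tables.flatMap((t) => t.errors)
    )
  ];

  if (errors.length > 0) {
    await sendAlert(`PowerSync replication errors detected: ${JSON.stringify(errors)}`);
  }
}

// Placeholder for your notification channel
async function sendAlert(message: string): Promise<void> {
  console.error(message);
}

// Poll every 5 minutes
setInterval(checkDiagnostics, 5 * 60_000);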

Database Best Practices

Postgres

Managing & Monitoring Replication Lag

Because PowerSync relies on Postgres logical replication, it’s important to set an appropriate max_slot_wal_keep_size and to monitor the lag of the replication slots used by PowerSync in a production environment, so that the slot lag does not exceed max_slot_wal_keep_size.
The max_slot_wal_keep_size Postgres configuration parameter limits the size of the Write-Ahead Log (WAL) files that replication slots can hold.
The WAL growth rate is expected to increase substantially during the initial replication of large datasets with high update frequency, particularly for tables included in the PowerSync publication. During normal operation (after Sync Rules are deployed) the WAL growth rate is much smaller than during the initial replication period, since the PowerSync Service can replicate roughly 5k operations per second, meaning the WAL lag is typically in the MB range as opposed to the GB range. When deciding what to set the max_slot_wal_keep_size configuration parameter to, take the following into account:
  1. Database size - This impacts the time it takes to complete the initial replication from the source Postgres database.
  2. Sync Rules complexity - This also impacts the time it takes to complete the initial replication.
  3. Postgres update frequency - The frequency of updates (of tables included in the publication you create for PowerSync) during initial replication. The WAL growth rate is directly proportional to this.
To view the current replication slots that are being used by PowerSync you can run the following query:
SELECT slot_name,
    plugin,
    slot_type,
    active,
    pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS replication_lag
FROM pg_replication_slots;
To view the current configured value of the max_slot_wal_keep_size you can run the following query:
SELECT setting AS max_slot_wal_keep_size
FROM pg_settings
WHERE name = 'max_slot_wal_keep_size';
It’s recommended to check the current replication slot lag and max_slot_wal_keep_size when deploying Sync Rules changes to your PowerSync Service instance, especially when you’re working with large data volumes. If you notice that the replication lag is greater than the current max_slot_wal_keep_size, it’s recommended to increase the value of max_slot_wal_keep_size on the connected source Postgres database to accommodate the lag and to ensure the PowerSync Service can complete initial replication without further delays.
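As a rough sketch of automating this check (using the node-postgres pg client; the connection string, threshold and alerting are assumptions), a scheduled job could compare each slot’s lag in bytes against the configured limit:
import { Client } from 'pg';

// Illustrative monitor: alert when lag reaches 80% of max_slot_wal_keep_size.
const ALERT_THRESHOLD_RATIO = 0.8;

async function checkReplicationLag(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    // Lag per replication slot, in bytes
    const slots = await client.query(`
      SELECT slot_name,
             pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) AS lag_bytes
      FROM pg_replication_slots
    `);

    // Configured limit converted to bytes (-1 means unlimited)
    const limit = await client.query(`
      SELECT CASE WHEN setting::bigint = -1 THEN -1
                  ELSE pg_size_bytes(current_setting('max_slot_wal_keep_size'))
             END AS max_bytes
      FROM pg_settings
      WHERE name = 'max_slot_wal_keep_size'
    `);

    const maxBytes = Number(limit.rows[0].max_bytes);
    if (maxBytes < 0) return; // no limit configured

    for (const row of slots.rows) {
      if (Number(row.lag_bytes) > maxBytes * ALERT_THRESHOLD_RATIO) {
        // Replace with your alerting channel
        console.error(`Replication slot ${row.slot_name} lag is approaching max_slot_wal_keep_size`);
      }
    }
  } finally {
    await client.end();
  }
}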

Managing Replication Slots

Under normal operating conditions, when new Sync Rules are deployed to a PowerSync Service instance a new replication slot is created and used for replication. The old replication slot from the previous version of the Sync Rules remains until Sync Rules reprocessing is complete, at which point the old replication slot is removed by the PowerSync Service. In some cases, however, a replication slot may remain without being used. This usually happens when a PowerSync Service instance is de-provisioned, stopped intentionally, or halted due to unexpected errors, and it results in excessive disk usage due to the continued growth of the WAL. To check which replication slots used by a PowerSync Service are no longer active, run the following query against the source Postgres database:
SELECT slot_name,
    pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS replication_lag
FROM pg_replication_slots WHERE active = false;
If you have inactive replication slots that need to be cleaned up, you can drop them using the following query:
-- Note: this drops every inactive slot; add "AND slot_name LIKE 'powersync_%'"
-- if you only want to remove slots created by PowerSync
SELECT slot_name,
    pg_drop_replication_slot(slot_name)
FROM pg_replication_slots
WHERE active = false;
An alternative to manually checking for inactive replication slots is to configure the idle_replication_slot_timeout parameter on the source Postgres database.
The idle_replication_slot_timeout configuration parameter is only available from PostgreSQL 18 and above.
The idle_replication_slot_timeout parameter invalidates replication slots that have remained inactive for longer than its configured value. It’s recommended to configure this parameter for source Postgres databases, as it prevents runaway WAL growth from replication slots that are no longer active or used by the PowerSync Service.