Instead of syncing entire tables, you tell PowerSync exactly which data each user/client can sync. You write simple SQL-like queries to define streams of data, and your client app subscribes to the streams it needs. PowerSync handles the rest, keeping data in sync in real-time and making it available offline. For example, you might create a stream that syncs only the current user’s to-do items, another for shared projects they have access to, and another for reference data that everyone needs. Your app subscribes to these streams on demand, and only that data syncs to the client-side SQLite database. Offline-first apps that need all relevant data available upfront can use auto_subscribe: true so streams sync automatically when clients connect.
Beta Release: Sync Streams are now in beta and production-ready. We recommend Sync Streams for all new projects, and encourage existing projects to migrate from Sync Rules. We welcome your feedback; please share it with us on Discord.

Defining Streams

Streams are defined in a configuration file. Each stream has a name and a query that specifies which rows to sync using SQL-like syntax. The query can reference parameters like the authenticated user’s ID to personalize what each user receives.
In the PowerSync Dashboard:
  1. Select your project and instance
  2. Go to Sync Streams
  3. Edit the YAML directly in the dashboard
  4. Click Deploy to validate and deploy
config:
  edition: 3

streams:
  todos:
    query: SELECT * FROM todos WHERE owner_id = auth.user_id()
Available stream options:
config:
  edition: 3

streams:
  <stream_name>:
    # CTEs (optional) - define with block inside each stream
    with:
      <cte_name>: SELECT ... FROM ...

    # Behavior options (place above query/queries)
    auto_subscribe: true    # Auto-subscribe clients on connect (default: false)
    priority: 1             # Sync priority (optional). Lower number -> higher priority
    accept_potentially_dangerous_queries: true  # Silence security warnings (default: false)

    # Query options (use one)
    query: SELECT * FROM <table> WHERE ...         # Single query
    queries:                                       # Multiple queries
      - SELECT * FROM <table_a> WHERE ...
      - SELECT * FROM <table_b> WHERE ...

    
  • query: SQL-like query defining which data to sync. Use either query or queries, not both. See Writing Queries.
  • queries: Array of queries defining which data to sync. More efficient than defining separate streams: the client manages one subscription and PowerSync merges the data from all queries (see Multiple Queries per Stream).
  • with: CTEs available to this stream’s queries. Define the with block inside each stream.
  • auto_subscribe (default: false): When true, clients automatically subscribe on connect.
  • priority: Sync priority (lower value = higher priority). See Prioritized Sync.
  • accept_potentially_dangerous_queries (default: false): Silences security warnings when queries use client-controlled parameters (i.e. connection parameters and subscription parameters), as opposed to authentication parameters that are signed as part of the JWT. Set to true only if you’ve verified the query is safe. See Using Parameters.
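As an illustration of combining the with and queries options, the following sketch defines one stream that syncs a user’s projects together with their tasks. The projects and tasks table names, and the owner_id and project_id columns, are hypothetical:

```yaml
config:
  edition: 3

streams:
  # One stream covering a user's projects and their tasks. The client manages a
  # single subscription and PowerSync merges the results of both queries.
  project_workspace:
    with:
      # CTE available to both queries below
      my_projects: SELECT id FROM projects WHERE owner_id = auth.user_id()
    queries:
      - SELECT * FROM projects WHERE id IN (SELECT id FROM my_projects)
      - SELECT * FROM tasks WHERE project_id IN (SELECT id FROM my_projects)
```

Using queries here instead of two separate streams also keeps the bucket count lower (see Bucket Limits under Developer Notes).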

Basic Examples

There are two independent concepts to understand:
  • What data the stream returns. For example:
    • Global data: No parameters. Same data for all users (e.g. reference tables like categories).
    • Filtered data: Filters the data by a parameter value. This can make use of auth parameters from the JWT token (such as the user ID or other JWT claims), subscription parameters (specified by the client when it subscribes to a stream at any time), or connection parameters (specified at connection). Different users will get different sets of data based on the parameters. See Using Parameters for the full reference.
  • When the client syncs the data:
    • Auto-subscribe: Client automatically subscribes on connect (auto_subscribe: true)
    • On-demand: Client explicitly subscribes when needed (default behavior)

Global Data

Data without parameters is “global” data, meaning the same data goes to all users/clients. This is useful for reference tables:
config:
  edition: 3

streams:
  # Same categories for everyone
  categories:
    query: SELECT * FROM categories

  # Same active products for everyone
  products:
    query: SELECT * FROM products WHERE active = true
Global data streams still require clients to subscribe explicitly unless you set auto_subscribe: true.

Filtering Data by User

Use auth.user_id() or other JWT claims to return different data per user:
config:
  edition: 3

streams:
  # Each user gets their own lists
  my_lists:
    query: SELECT * FROM lists WHERE owner_id = auth.user_id()

  # Each user gets their own orders
  my_orders:
    query: SELECT * FROM orders WHERE user_id = auth.user_id()

Filtering Data Based on Subscription Parameters

Use subscription.parameter() for data that clients subscribe to explicitly:
config:
  edition: 3

streams:
  # Sync todos for a specific list when the client subscribes with a list_id
  list_todos:
    query: |
      SELECT * FROM todos 
      WHERE list_id = subscription.parameter('list_id')
        AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
// Client subscribes with the list they want to view
const sub = await db.syncStream('list_todos', { list_id: 'abc123' }).subscribe();

Using Auto-Subscribe

Set auto_subscribe: true to sync data automatically when clients connect. This is useful for:
  • Reference data that all users need, or that is needed on many screens in the app
  • User data that should always be available offline
  • Maintaining the Sync Rules default behavior (“sync everything upfront”) when migrating to Sync Streams
config:
  edition: 3

streams:
  # Global data, synced automatically
  categories:
    auto_subscribe: true
    query: SELECT * FROM categories

  # User-scoped data, synced automatically
  my_orders:
    auto_subscribe: true
    query: SELECT * FROM orders WHERE user_id = auth.user_id()

  # Parameterized data, subscribed on-demand (no auto_subscribe)
  order_items:
    query: |
      SELECT * FROM order_items 
      WHERE order_id = subscription.parameter('order_id')
        AND order_id IN (SELECT id FROM orders WHERE user_id = auth.user_id())

Client-Side Usage

Subscribe to streams from your client app:
const sub = await db.syncStream('list_todos', { list_id: 'abc123' })
  .subscribe({ ttl: 3600 });

// Wait for this subscription to have synced
await sub.waitForFirstSync();

// When the component needing the subscription is no longer active...
sub.unsubscribe();
React hooks:
const stream = useSyncStream({ name: 'list_todos', parameters: { list_id: 'abc123' } });
// Check download progress or subscription information
stream?.progress;
stream?.subscription.hasSynced;
The useQuery hook can wait for Sync Streams before running queries:
const { data } = useQuery(
  'SELECT * FROM todos WHERE list_id = ?',
  [listId],
  { streams: [{ name: 'list_todos', parameters: { list_id: listId }, waitForStream: true }] }
);

TTL (Time-To-Live)

Each subscription has a ttl that keeps data cached after unsubscribing. This enables warm cache behavior — when users return to a screen and you re-subscribe to relevant streams, data is already available on the client. Default TTL is 24 hours. See Client-Side Usage for details.
// Set TTL in seconds when subscribing
const sub = await db.syncStream('todos', { list_id: 'abc' })
  .subscribe({ ttl: 3600 }); // Cache for 1 hour after unsubscribe

Developer Notes

  • SQL Syntax: Stream queries use a SQL-like syntax with SELECT statements. You can use subqueries, INNER JOIN, and CTEs for filtering. GROUP BY, ORDER BY, and LIMIT are not supported. See Writing Queries for details on joins, multiple queries per stream, and other features.
  • Type Conversion: Data types from your source database (Postgres, MongoDB, MySQL, SQL Server) are converted when synced to the client’s SQLite database. SQLite has a limited type system, so most types become text and you may need to parse or cast values in your app code. See Type Mapping for details on how each type is handled.
  • Primary Key: PowerSync requires every synced table to have a primary key column named id of type text. If your backend uses a different column name or type, you’ll need to map it. For MongoDB, collections use _id as the ID field; you must alias it in your stream queries (e.g. SELECT _id as id, * FROM your_collection).
  • Case Sensitivity: To avoid issues across different databases and platforms, use lowercase identifiers for all table and column names in your Sync Streams. If your backend uses mixed case, see Case Sensitivity for how to handle it.
  • Bucket Limits: PowerSync uses internal partitions called buckets to efficiently sync data. There’s a default limit of 1,000 buckets per user/client. Each unique combination of a stream and its parameters creates one bucket, so keep this in mind when designing streams that use subscription parameters. You can use multiple queries per stream to reduce bucket count.
  • Troubleshooting: If data isn’t syncing as expected, the Sync Diagnostics Client helps you inspect what’s happening for a specific user — you can see which buckets the user has and what data is being synced.
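To sketch the primary key note above for MongoDB, a stream can alias _id to id directly in its query, following the pattern shown in that note (the notes collection and owner_id field are illustrative):

```yaml
config:
  edition: 3

streams:
  # MongoDB: collections use _id, so alias it to the required id column
  my_notes:
    query: SELECT _id as id, * FROM notes WHERE owner_id = auth.user_id()
```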

Examples & Demos

See Examples & Demos for working demo apps and complete application patterns.

Migrating from Legacy Sync Rules

If you have an existing project using legacy Sync Rules, see the Migration Guide for step-by-step instructions, syntax changes, and examples.