PowerSync supports JSON/JSONB and array columns. They are synced as JSON text and can be queried with SQLite JSON functions on the client. Other custom Postgres types can be synced by serializing their values to text in the client-side schema. When updating client data, you have the option to replace the entire column value with a string or enable advanced schema options to track more granular changes and include custom metadata.

JSON and JSONB

The PowerSync Service treats JSON and JSONB columns as text and provides many helpers for working with JSON in Sync Streams (or legacy Sync Rules). Note: Native Postgres arrays, JSON arrays, and JSONB arrays are effectively all equivalent in PowerSync.

Postgres

JSON columns are defined in Postgres using the following syntax:
ALTER TABLE todos
ADD COLUMN custom_payload json;

Sync Streams

PowerSync treats JSON columns as text. Use json_extract() and other JSON functions in stream queries. Subscribe per list to sync only that list’s todos:
config:
  edition: 3
streams:
  my_json_todos:
    auto_subscribe: true
    with:
      owned_lists: SELECT id AS list_id FROM lists WHERE owner_id = auth.user_id()
    query: SELECT * FROM todos WHERE json_extract(custom_payload, '$.json_list') IN owned_lists
The client subscribes once per list (e.g. db.syncStream('my_json_todos', { list_id: listId }).subscribe()).

Client SDK

Schema

Add your JSON column as a text column in your client-side schema definition. For advanced update tracking, see Advanced Schema Options.
Table(
  name: 'todos',
  columns: [
    Column.text('custom_payload'),
    // ... other columns ...
  ],
  // Optionally, enable advanced update tracking options (see details at the end of this page):
  trackPreviousValues: true, 
  trackMetadata: true, 
  ignoreEmptyUpdates: true, 
)
Writing Changes

You can write the entire updated column value as a string, or, with trackPreviousValues enabled, compare the previous and new values to process only the changes you care about:
// Full replacement (basic):
await db.execute('UPDATE todos SET custom_payload = ?, _metadata = ? WHERE id = ?', [
  '{"foo": "bar", "baz": 123}',
  'op-metadata-example', // Example metadata value
  '00000000-0000-0000-0000-000000000000'
]);

// Diffing columns in uploadData (advanced):
// See details about these advanced schema options at the end of this page
import 'dart:convert';

if (op.op == UpdateType.put && op.previousValues != null) {
  var oldJson = jsonDecode(op.previousValues['custom_payload'] ?? '{}');
  var newJson = jsonDecode(op.opData['custom_payload'] ?? '{}');
  var metadata = op.metadata; // Access metadata here
  // Compare oldJson and newJson to determine what changed
  // Use metadata as needed as you process the upload
}
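
The comparison itself can be sketched as a small standalone helper. The function name is illustrative and it is shown in JavaScript for brevity; a Dart version is analogous, using jsonDecode:

```javascript
// Sketch: compute which top-level keys changed between two JSON text values,
// e.g. the previous and new values of custom_payload.
function changedKeys(oldJsonText, newJsonText) {
  const oldObj = JSON.parse(oldJsonText ?? '{}');
  const newObj = JSON.parse(newJsonText ?? '{}');
  const keys = new Set([...Object.keys(oldObj), ...Object.keys(newObj)]);
  // A key changed if its serialized value differs between old and new.
  return [...keys].filter(
    (k) => JSON.stringify(oldObj[k]) !== JSON.stringify(newObj[k])
  );
}

const changed = changedKeys('{"foo":"bar","baz":123}', '{"foo":"bar","baz":456}');
// only "baz" changed
```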

Arrays

PowerSync treats array columns as JSON text. This means that the SQLite JSON operators can be used on any array columns. Additionally, array membership is supported in Sync Streams (or legacy Sync Rules) so you can sync rows based on whether a parameter value appears in an array column. Note: Native Postgres arrays, JSON arrays, and JSONB arrays are effectively all equivalent in PowerSync.

Postgres

Array columns are defined in Postgres using the following syntax:
ALTER TABLE todos
ADD COLUMN unique_identifiers text[];

Sync Streams

Array columns are converted to text by the PowerSync Service. A text array as defined above would be synced to clients as the following string:
["00000000-0000-0000-0000-000000000000", "12345678-1234-1234-1234-123456789012"]

Array Membership

Sync rows where a subscription parameter value appears in the row’s array column using the IN operator:
config:
  edition: 3
streams:
  custom_todos:
    query: SELECT * FROM todos WHERE subscription.parameter('list_id') IN unique_identifiers
The client subscribes per list (e.g. db.syncStream('custom_todos', { list_id: listId }).subscribe()).
For additional details on using the IN operator, see Operators.
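
On the client, the synced value is plain JSON text, so it can also be handled directly in application code (the values below are illustrative):

```javascript
// The array column arrives as JSON text; parse it to get a real array.
const syncedText =
  '["00000000-0000-0000-0000-000000000000", "12345678-1234-1234-1234-123456789012"]';
const identifiers = JSON.parse(syncedText);

console.log(identifiers.length); // 2
console.log(identifiers.includes('12345678-1234-1234-1234-123456789012')); // true
```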

Client SDK

Schema

Add your array column as a text column in your client-side schema definition. For advanced update tracking, see Advanced Schema Options.
const todos = new Table(
  {
    unique_identifiers: column.text,
    // ... other columns ...
  },
  {
    // Optionally, enable advanced update tracking options (see details at the end of this page):
    trackPrevious: true,
    trackMetadata: true,
    ignoreEmptyUpdates: true,
  }
);
Writing Changes

You can write the entire updated column value as a string, or, with trackPrevious enabled, compare the previous and new values to process only the changes you care about:
// Full replacement (basic):
await db.execute(
  'UPDATE todos SET unique_identifiers = ?, _metadata = ? WHERE id = ?',
  ['["DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF", "ABCDEFAB-ABCD-ABCD-ABCD-ABCDEFABCDEF"]', 'op-metadata-example', '00000000-0000-0000-0000-000000000000']
);
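
Rather than hand-writing the JSON text, the column value can be built with JSON.stringify:

```javascript
// Build the JSON text for the array column from a real array.
const ids = [
  'DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF',
  'ABCDEFAB-ABCD-ABCD-ABCD-ABCDEFABCDEF',
];
const value = JSON.stringify(ids); // pass this as the bound parameter
```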

// Diffing columns in uploadData (advanced):
// See details about these advanced schema options at the end of this page
if (op.op === UpdateType.PUT && op.previousValues) {
  const oldArray = JSON.parse(op.previousValues['unique_identifiers'] ?? '[]');
  const newArray = JSON.parse(op.opData['unique_identifiers'] ?? '[]');
  const metadata = op.metadata; // Access metadata here
  // Compare oldArray and newArray to determine what changed
  // Use metadata as needed as you process the upload
}
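
A sketch of how the changed items could be derived from those two values (the function name is illustrative):

```javascript
// Sketch: compute which items were added to and removed from a JSON array
// column between the previous and new values.
function diffArrays(oldText, newText) {
  const oldItems = new Set(JSON.parse(oldText ?? '[]'));
  const newItems = new Set(JSON.parse(newText ?? '[]'));
  return {
    added: [...newItems].filter((item) => !oldItems.has(item)),
    removed: [...oldItems].filter((item) => !newItems.has(item)),
  };
}
```

For example, pass op.previousValues['unique_identifiers'] and op.opData['unique_identifiers'] to process only the membership changes.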
Attention Supabase users: Supabase can handle writes with arrays, but you must convert the value from a JSON string back to an array (JSON.parse in JavaScript, jsonDecode in Dart) in the connector’s uploadData function. The default implementation of uploadData does not handle complex types like arrays automatically.
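
One way to do that conversion before handing records to Supabase (a minimal sketch; the function and column names are illustrative):

```javascript
// Sketch: convert JSON-text array columns back into real arrays so the
// Supabase client can write them as Postgres arrays.
function toSupabaseRecord(opData, arrayColumns) {
  const record = { ...opData };
  for (const col of arrayColumns) {
    if (record[col] != null) {
      record[col] = JSON.parse(record[col]);
    }
  }
  return record;
}

// e.g. toSupabaseRecord(op.opData, ['unique_identifiers'])
```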

Custom Types

PowerSync respects Postgres custom types: DOMAIN types sync as their inner type, custom type columns as JSON objects, arrays of custom types as JSON arrays, and ranges (and multi-ranges) as structured JSON. This behavior is the default for Sync Streams. For configuration and legacy behavior, see Compatibility. For type handling in queries, see Types.

Postgres

Postgres allows developers to create custom data types for columns. For example:
CREATE TYPE location_address AS (
    street text,
    city text,
    state text,
    zip numeric
);

Sync Streams

The custom type column is serialized as JSON and you can use json_extract() and other JSON functions in stream queries:
config:
  edition: 3
streams:
  todos_by_city:
    query: SELECT * FROM todos WHERE json_extract(location, '$.city') = subscription.parameter('city')
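
Since the custom type column syncs as a JSON object in text form, clients can also parse it directly (the value below is illustrative):

```javascript
// A location_address value as it might arrive on the client: JSON text.
const syncedLocation =
  '{"street":"1234 Update Street","city":"Denver","state":"CO","zip":80212}';
const location = JSON.parse(syncedLocation);

console.log(location.city); // "Denver"
```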

Client SDK

Schema

Add your custom type column as a text column in your client-side schema definition. For advanced update tracking, see Advanced Schema Options.
const todos = new Table(
  {
    location: column.text,
    // ... other columns ...
  },
  {
    // Optionally, enable advanced update tracking options (see details at the end of this page):
    trackPrevious: true,
    trackMetadata: true,
    ignoreEmptyUpdates: true,
  }
);
Writing Changes

You can write the entire updated column value as a string, or, with trackPrevious enabled, compare the previous and new values to process only the changes you care about:
// Full replacement (basic):
await db.execute(
  'UPDATE todos SET location = ?, _metadata = ? WHERE id = ?',
  ['("1234 Update Street",Denver,CO,80212)', 'op-metadata-example', 'faffcf7a-75f9-40b9-8c5d-67097c6b1c3b']
);

// Diffing columns in uploadData (advanced):
// See details about these advanced schema options at the end of this page
if (op.op === UpdateType.PUT && op.previousValues) {
  const oldCustomType = op.previousValues['location'] ?? 'null';
  const newCustomType = op.opData['location'] ?? 'null';
  const metadata = op.metadata; // Access metadata here
  // Compare oldCustomType and newCustomType to determine what changed
  // Use metadata as needed as you process the upload
}

Bonus: Mashup

What if we had a column defined as an array of custom types, where a field in the custom type was JSON? Consider the following Postgres schema:
-- define custom type
CREATE TYPE extended_location AS (
    address_label text,
    json_address json
);

-- add column
ALTER TABLE todos
ADD COLUMN custom_locations extended_location[];
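
Such a column still syncs as JSON text: an array with one object per extended_location element. Whether the nested json field arrives as a nested object or as an escaped JSON string may vary, so this sketch (with an illustrative value) handles both:

```javascript
// Sketch: parse a synced extended_location[] value. The outer value is a JSON
// array of objects; json_address may itself be a nested object or JSON text.
const syncedText =
  '[{"address_label":"home","json_address":{"street":"1 Main St","city":"Denver"}}]';
const locations = JSON.parse(syncedText);
const first = locations[0];
const address =
  typeof first.json_address === 'string'
    ? JSON.parse(first.json_address)
    : first.json_address;

console.log(first.address_label); // "home"
console.log(address.city); // "Denver"
```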

Advanced Schema Options to Process Writes

With arrays and JSON fields, it’s common for only part of the value to change during an update. To make handling these writes easier, you can enable advanced schema options that let you track exactly what changed in each row—not just the new state.
  • trackPreviousValues (or trackPrevious in our JS SDKs): Access previous values for diffing JSON or array fields. Accessible later via CrudEntry.previousValues.
  • trackMetadata: Adds a _metadata column for storing custom metadata. Value of the column is accessible later via CrudEntry.metadata.
  • ignoreEmptyUpdates: Skips updates when no data has actually changed.
These advanced schema options were introduced in the following SDK versions:
  • Flutter v1.13.0
  • React Native v1.20.1
  • JavaScript/Web v1.20.1
  • Kotlin v1.1.0
  • Swift v1.1.0
  • Node.js v0.4.0
  • .NET v0.0.6-alpha.1