PowerSync is compatible with advanced Postgres types, including arrays and JSON/JSONB. These types are represented as text columns in the client-side schema. When updating client data, you can either replace the entire column value with a string, or enable advanced schema options to track more granular changes and include custom metadata.
With arrays and JSON fields, it’s common for only part of the value to change during an update. To make handling these writes easier, you can enable advanced schema options that let you track exactly what changed in each row, not just the new state:

- `trackPreviousValues`: Access previous values for diffing custom types, arrays, or JSON fields. The previous values are accessible later via `CrudEntry.previousValues`.
- `trackMetadata`: Adds a `_metadata` column for storing custom metadata. The column's value is accessible later via `CrudEntry.metadata`.
- `ignoreEmptyUpdates`: Skips updates when no data has actually changed.

These advanced schema options are available in the following SDK versions: Flutter v1.13.0, React Native v1.20.1, JavaScript/Web v1.20.1, Kotlin Multiplatform v1.1.0, Swift v1.1.0, and Node.js v0.4.0.
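As a rough sketch, enabling these options in a client-side schema could look like the following (shown for the Flutter SDK; the exact parameter names and types vary slightly between SDKs, so treat them as assumptions and check the Advanced Schema Options reference for your SDK):

```dart
import 'package:powersync/powersync.dart';

final schema = Schema([
  Table(
    'todos',
    [
      Column.text('description'),
      // Arrays, JSON and custom types are all stored as text columns client-side.
      Column.text('tags'),
    ],
    // Assumed option names for this sketch; verify against your SDK version.
    trackPreviousValues: true, // populates CrudEntry.previousValues
    trackMetadata: true, // adds a _metadata column, read via CrudEntry.metadata
    ignoreEmptyUpdates: true, // skips writes where nothing actually changed
  ),
]);
```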
Custom Types
PowerSync serializes custom types as text. For details, see types in Sync Rules.
Postgres allows developers to create custom data types for columns. For example:
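A composite type along these lines would produce the synced value shown below (the field names are illustrative, inferred from that value):

```sql
-- Illustrative composite type; field names are assumptions based on the synced value below.
CREATE TYPE location_address AS (
  street text,
  city text,
  state text,
  zip numeric
);
```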
Custom type columns are converted to text by the PowerSync Service. A column of type `location_address`, as defined above, would be synced to clients as the following string:
("1000 S Colorado Blvd.",Denver,CO,80211)
It is not currently possible to extract fields from custom types in Sync Rules, so the entire column must be synced as text.
Schema
Add your custom type column as a `text` column in your client-side schema definition. For advanced update tracking, see Advanced Schema Options.
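Continuing the `location_address` example, a minimal sketch of such a schema in the Flutter SDK (the table and column names are assumptions):

```dart
import 'package:powersync/powersync.dart';

final schema = Schema([
  Table('users', [
    Column.text('name'),
    // The composite value arrives as a single string, e.g.
    // ("1000 S Colorado Blvd.",Denver,CO,80211)
    Column.text('address'),
  ]),
]);
```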
Writing Changes
You can write the entire updated column value as a string, or, with `trackPreviousValues` enabled, compare the previous and new values to process only the changes you care about:
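A sketch of that comparison inside a connector's `uploadData`, assuming an `address` column of the custom type (the backend call is a hypothetical placeholder):

```dart
import 'package:powersync/powersync.dart';

// Inside your PowerSyncBackendConnector implementation:
Future<void> uploadData(PowerSyncDatabase database) async {
  final transaction = await database.getNextCrudTransaction();
  if (transaction == null) return;

  for (final entry in transaction.crud) {
    final newAddress = entry.opData?['address'];
    final oldAddress = entry.previousValues?['address'];

    if (newAddress != oldAddress) {
      // The address actually changed - write the full string value upstream.
      await backendApi.updateAddress(entry.id, newAddress); // hypothetical helper
    }
    // ...handle the remaining columns/operations as usual.
  }

  await transaction.complete();
}
```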
Arrays
PowerSync treats array columns as JSON text. This means that the SQLite JSON operators can be used on any array columns.
Additionally, some helper methods such as array membership are available in Sync Rules.
Note: Native Postgres arrays, JSON arrays, and JSONB arrays are effectively all equivalent in PowerSync.
Array columns are defined in Postgres using the following syntax:
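A sketch of such a definition (the table and column names are assumptions):

```sql
-- Illustrative: a text array column on an existing table.
ALTER TABLE todos
ADD COLUMN unique_identifiers text[];
```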
Array columns are converted to text by the PowerSync Service. A text array as defined above would be synced to clients as the following string:
["00000000-0000-0000-0000-000000000000", "12345678-1234-1234-1234-123456789012"]
Array Membership
It’s possible to sync rows dynamically based on the contents of array columns using the IN operator. For example:
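A sketch of a Sync Rules data query using array membership (the bucket parameter, table, and column names are assumptions):

```sql
-- Sync each todo to users whose ID appears in the row's collaborator_ids array.
SELECT * FROM todos WHERE bucket.user_id IN collaborator_ids
```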
For additional details on using the IN operator, see Operators.
Schema
Add your array column as a `text` column in your client-side schema definition. For advanced update tracking, see Advanced Schema Options.
Writing Changes
You can write the entire updated column value as a string, or, with `trackPreviousValues` enabled, compare the previous and new values to process only the changes you care about.
Attention Supabase users: Supabase can handle writes with arrays, but you must convert the value from a string to an array using `jsonDecode` in the connector's `uploadData` function. The default implementation of `uploadData` does not handle complex types like arrays automatically.
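A sketch of that conversion, assuming a `todos` table with a `unique_identifiers` array column (names are illustrative, and the sketch upserts every change for brevity rather than switching on the operation type):

```dart
import 'dart:convert';

import 'package:powersync/powersync.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

// Inside your PowerSyncBackendConnector implementation:
Future<void> uploadData(PowerSyncDatabase database) async {
  final transaction = await database.getNextCrudTransaction();
  if (transaction == null) return;

  final supabase = Supabase.instance.client;

  for (final entry in transaction.crud) {
    final data = Map<String, dynamic>.of(entry.opData ?? {});

    // PowerSync stores the array as a JSON string, e.g.
    // '["00000000-...", "12345678-..."]' - decode it so Supabase receives a real array.
    if (data['unique_identifiers'] is String) {
      data['unique_identifiers'] = jsonDecode(data['unique_identifiers'] as String);
    }

    await supabase.from(entry.table).upsert({'id': entry.id, ...data});
  }

  await transaction.complete();
}
```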
JSON
The PowerSync Service treats JSON and JSONB columns as text and provides many helpers for working with JSON in Sync Rules.
Note: Native Postgres arrays, JSON arrays, and JSONB arrays are effectively all equivalent in PowerSync.
On the client, JSON columns are represented as text. In Sync Rules, PowerSync provides transformation functions such as `json_extract()` for working with their contents.
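For example, a data query could project a single field out of a JSON column (a sketch; the table, column, and path are assumptions):

```sql
-- Extract one field from a JSON column while syncing.
SELECT id, json_extract(settings, '$.theme') AS theme FROM users
```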
Schema
Add your JSON column as a `text` column in your client-side schema definition. For advanced update tracking, see Advanced Schema Options.
Writing Changes
You can write the entire updated column value as a string, or, with `trackPreviousValues` enabled, compare the previous and new values to process only the changes you care about:
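A sketch of field-level diffing for a JSON `settings` column with `trackPreviousValues` enabled (the column name and backend helper are hypothetical):

```dart
import 'dart:convert';

import 'package:powersync/powersync.dart';

// Inside your PowerSyncBackendConnector implementation:
Future<void> uploadData(PowerSyncDatabase database) async {
  final transaction = await database.getNextCrudTransaction();
  if (transaction == null) return;

  for (final entry in transaction.crud) {
    final newJson = jsonDecode(entry.opData?['settings'] as String? ?? '{}') as Map<String, dynamic>;
    final oldJson = jsonDecode(entry.previousValues?['settings'] as String? ?? '{}') as Map<String, dynamic>;

    // Keep only the keys whose values actually changed.
    final changed = {
      for (final key in newJson.keys)
        if (newJson[key] != oldJson[key]) key: newJson[key],
    };

    if (changed.isNotEmpty) {
      await backendApi.patchSettings(entry.id, changed); // hypothetical helper
    }
  }

  await transaction.complete();
}
```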
What if we had a column defined as an array of custom types, where a field in the custom type was JSON? Consider the following Postgres schema:
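One possible shape for such a schema, purely as an illustration (all names here are made up):

```sql
-- A composite type in which one field holds JSON...
CREATE TYPE member AS (
  name text,
  preferences json
);

-- ...used as an array column on a table.
CREATE TABLE teams (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  members member[]
);
```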