We can modify compress.pgbench and nocompress.pgbench so that only documents with id between 1 and 3000 are requested. This simulates the case when all the data *does* fit into memory. Here we see 141K TPS (JSONB) vs 134K TPS (ZSON), i.e. ZSON is about 5% slower.
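The modified benchmark script could look roughly like this; note that the table and column names (`zson_docs`, `id`, `doc`) are assumptions for illustration, not the actual names used in the benchmark:

```sql
-- hypothetical nocompress.pgbench variant: pick ids only from the
-- range that fits into memory, so every read hits cached pages
\set id random(1, 3000)
SELECT doc FROM zson_docs WHERE id = :id;
```

Run with `pgbench -f nocompress.pgbench` as before; the only change is the upper bound of the id range.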
The compression ratio can vary depending on the documents, the database schema, the number of rows, etc. But in general ZSON compression is much better than the built-in PostgreSQL compression (PGLZ):
```
before | after | ratio
...
14204420096 | 9832841216 | 0.692238130775149
```
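Figures like the ones above can be gathered with `pg_total_relation_size`. A minimal sketch, assuming two tables `jsonb_docs` (plain JSONB) and `zson_docs` (ZSON) holding the same documents (these table names are placeholders):

```sql
-- compare the on-disk size of a JSONB table vs. its ZSON counterpart
SELECT pg_total_relation_size('jsonb_docs') AS before,
       pg_total_relation_size('zson_docs')  AS after,
       pg_total_relation_size('zson_docs')::numeric
         / pg_total_relation_size('jsonb_docs') AS ratio;
```

`pg_total_relation_size` includes TOAST data and indexes, which is what matters here since large JSONB values are stored TOASTed.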
Not only is disk space saved. Data loaded into shared buffers is not decompressed either, which means memory is also saved and more data can be accessed without loading it from disk.