Commit 13cd5a6

benchmark.md - cosmetic changes and minor fixes

1 parent: 4b34c41
1 file changed: +14 -5 lines changed

docs/benchmark.md (+14 -5)
@@ -1,6 +1,7 @@
 # Benchmark
 
-**Disclaimer**: Synthetic benchmarks could not be trusted. Re-check everything on specific hardware, configuration, data and workload!
+**Disclaimer**: Synthetic benchmarks cannot be trusted. Re-check everything
+on specific hardware, configuration, data and workload!
 
 We used the following server:
 
@@ -9,7 +10,8 @@ We used the following server:
 * HDD
 * swap is off
 
-To simulate scenario when database doesn't fit into memory we used `stress`:
+To simulate the scenario when the database doesn't fit into memory we used
+`stress`:
 
 ```
 sudo stress --vm-bytes 21500m --vm-keep -m 1 --vm-hang 0
@@ -319,9 +321,14 @@ tps = 1086.396431 (excluding connections establishing)
 
 In this case ZSON gives about 11.8% more TPS.
 
-We can modify compress.pgbench and nocompress.pgbench so only the documents with id between 1 and 3000 will be requested. It will simulate a case when all data *does* fit into memory. In this case we see 141K TPS (JSONB) vs 134K TPS (ZSON) which is 5% slower.
+We can modify compress.pgbench and nocompress.pgbench so that only documents
+with id between 1 and 3000 are requested. This simulates a case when all the
+data *does* fit into memory. In this case we see 141K TPS (JSONB) vs 134K
+TPS (ZSON), which is about 5% slower.
 
-The compression ratio could be different depending on the documents, the database schema, the number of rows, etc. But in general ZSON compression is much better than build-in PostgreSQL compression (PGLZ):
+The compression ratio may differ depending on the documents, the
+database schema, the number of rows, etc. But in general ZSON compression is
+much better than the built-in PostgreSQL compression (PGLZ):
 
 ```
 before | after | ratio
@@ -339,4 +346,6 @@ The compression ratio could be different depending on the documents, the databas
 14204420096 | 9832841216 | 0.692238130775149
 ```
 
-Not only disk space is saved. Data loaded to shared buffers is not decompressed. It means that memory is also saved and more data could be accessed without loading it from the disk.
+Not only is disk space saved: data loaded into shared buffers is not
+decompressed. This means that memory is also saved, and more data can be
+accessed without loading it from disk.
