Commit 1077b2b

refactor: get rid of stages concept, use only jobs (#152)

1 parent 088c103 · commit 1077b2b
33 files changed: +487 -582 lines

cmd/database-lab/main.go (+1 -1)

@@ -53,7 +53,7 @@ func main() {
 		log.Fatalf(errors.WithMessage(err, "failed to parse config"))
 	}
 
-	log.DEBUG = cfg.Debug
+	log.DEBUG = cfg.Global.Debug
 	log.Dbg("Config loaded", cfg)
 
 	// TODO(anatoly): Annotate envs in configs. Use different lib for flags/configs?
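The one-line fix above follows a reshuffle of the configuration struct: the debug flag moved from the root of the config into a nested global section, matching the `global:` block of the example YAML changed in this same commit. A minimal Go sketch of that layout (struct and field names here are illustrative assumptions, not the project's actual types):

```go
package main

import "fmt"

// GlobalConfig mirrors the "global" section of the YAML config.
// Hypothetical type; the real structs live in the Database Lab source.
type GlobalConfig struct {
	DataDir string `yaml:"dataDir"`
	Debug   bool   `yaml:"debug"`
}

// Config is the root configuration object; Debug used to live here
// directly, which is why call sites changed from cfg.Debug to
// cfg.Global.Debug.
type Config struct {
	Global GlobalConfig `yaml:"global"`
}

func newExampleConfig() Config {
	return Config{Global: GlobalConfig{
		DataDir: "/var/lib/dblab/data",
		Debug:   false,
	}}
}

func main() {
	cfg := newExampleConfig()
	// Before the refactor: log.DEBUG = cfg.Debug
	// After the refactor:  log.DEBUG = cfg.Global.Debug
	fmt.Println(cfg.Global.Debug)
}
```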

configs/config.example.logical_generic.yml (+104 -105)

@@ -44,9 +44,9 @@ global:
   # data initialization is configured (see below).
   dataDir: "/var/lib/dblab/data"
 
-    # Debugging, when enabled, allows to see more in the Database Lab logs
-    # (not PostgreSQL logs). Enable in the case of troubleshooting.
-    debug: false
+  # Debugging, when enabled, allows to see more in the Database Lab logs
+  # (not PostgreSQL logs). Enable in the case of troubleshooting.
+  debug: false
 
 # Details of provisioning – where data is located,
 # thin cloning method, etc.
@@ -131,110 +131,109 @@ provision:
 # blocks location. Not supported for managed cloud Postgres services
 # such as Amazon RDS.
 retrieval:
-  stages:
-    - initialize
+  # The jobs section must not contain physical and logical restore jobs simultaneously.
+  jobs:
+    - logical-dump
+    - logical-restore
+    - logical-snapshot
 
   spec:
-    # The initialize stage provides declarative initialization of the PostgreSQL data directory used by Database Lab Engine.
-    # The stage must not contain physical and logical restore jobs simultaneously.
-    initialize:
-      jobs:
-        # Dumps PostgreSQL database from provided source.
-        - name: logical-dump
-          options:
-            # The dump file will be automatically created on this location and then used to restore.
-            # Ensure that there is enough disk space.
-            dumpLocation: "/var/lib/dblab/db.dump"
-
-            # The Docker image containing the tools required to get a dump.
-            dockerImage: "postgres:12-alpine"
-
-            # Source of data.
-            source:
-              # Source types: "local", "remote", "rds"
-              type: remote
-
-              # Connection parameters of the database to be dumped.
-              connection:
-                # Database connection parameters.
-                # Currently, only password can be specified via environment variable (PGPASSWORD),
-                # everything else needs to be specified here.
-                dbname: postgres
-                host: 34.56.78.90
-                port: 5432
-                username: postgres
-
-                # Connection password. The environment variable PGPASSWORD can be used instead of this option.
-                # The environment variable has a higher priority.
-                password: postgres
-
-            # Options for a partial dump.
-            # partial:
-            #   tables:
-            #     - test
-
-            # Use parallel jobs to dump faster.
-            # It's ignored if "restore" is present because "pg_dump | pg_restore" is always single-threaded.
-            parallelJobs: 2
-
-            # Options for direct restore to Database Lab Engine instance.
-            # Uncomment this if you prefer restoring from the dump on the fly. In this case,
-            # you do not need to use "logical-restore" job. Keep in mind that unlike "logical-restore",
-            # this option does not support parallelization, it is always a single-threaded (both for
-            # dumping on the source, and restoring on the destination end).
-            # restore:
-            #   # Restore data even if the Postgres directory (`global.dataDir`) is not empty.
-            #   # Note the existing data might be overwritten.
-            #   forceInit: false
-
-        # Restores PostgreSQL database from the provided dump. If you use this block, do not use
-        # "restore" option in the "logical-dump" job.
-        - name: logical-restore
-          options:
-            dbname: "test"
-
-            # The location of the archive file (or directory, for a directory-format archive) to be restored.
-            dumpLocation: "/var/lib/dblab/db.dump"
-
-            # The Docker image containing the tools required to restore.
-            dockerImage: "postgres:12-alpine"
-
-            # Use parallel jobs to restore faster.
-            parallelJobs: 2
-
-
-            # Restore data even if the Postgres directory (`global.dataDir`) is not empty.
-            # Note the existing data might be overwritten.
-            forceInit: false
-
-            # Options for a partial dump.
-            # partial:
-            #   tables:
-            #     - test
-
-        - name: logical-snapshot
-          options:
-            # It is possible to define a pre-precessing script. For example, "/tmp/scripts/custom.sh".
-            # Default: empty string (no pre-processing defined).
-            # This can be used for scrubbing eliminating PII data, to define data masking, etc.
-            preprocessingScript: ""
-
-            # Adjust PostgreSQL configuration
-            configs:
-              # In order to match production plans with Database Lab plans set parameters related to Query Planning as on production.
-              shared_buffers: 1GB
-              # shared_preload_libraries – copy the value from the source
-              shared_preload_libraries: "pg_stat_statements"
-              # work_mem and all the Query Planning parameters – copy the values from the source.
-              # To do it, use this query:
-              #   select format($$%s = '%s'$$, name, setting)
-              #   from pg_settings
-              #   where
-              #     name ~ '(work_mem$|^enable_|_cost$|scan_size$|effective_cache_size|^jit)'
-              #     or name ~ '(^geqo|default_statistics_target|constraint_exclusion|cursor_tuple_fraction)'
-              #     or name ~ '(collapse_limit$|parallel|plan_cache_mode)';
-              work_mem: "100MB"
-              # ... put Query Planning parameters here
+    # Dumps PostgreSQL database from provided source.
+    logical-dump:
+      options:
+        # The dump file will be automatically created on this location and then used to restore.
+        # Ensure that there is enough disk space.
+        dumpLocation: "/var/lib/dblab/db.dump"
+
+        # The Docker image containing the tools required to get a dump.
+        dockerImage: "postgres:12-alpine"
+
+        # Source of data.
+        source:
+          # Source types: "local", "remote", "rds"
+          type: remote
+
+          # Connection parameters of the database to be dumped.
+          connection:
+            # Database connection parameters.
+            # Currently, only password can be specified via environment variable (PGPASSWORD),
+            # everything else needs to be specified here.
+            dbname: postgres
+            host: 34.56.78.90
+            port: 5432
+            username: postgres
+
+            # Connection password. The environment variable PGPASSWORD can be used instead of this option.
+            # The environment variable has a higher priority.
+            password: postgres
+
+        # Options for a partial dump.
+        # partial:
+        #   tables:
+        #     - test
+
+        # Use parallel jobs to dump faster.
+        # It's ignored if "restore" is present because "pg_dump | pg_restore" is always single-threaded.
+        parallelJobs: 2
+
+        # Options for direct restore to Database Lab Engine instance.
+        # Uncomment this if you prefer restoring from the dump on the fly. In this case,
+        # you do not need to use "logical-restore" job. Keep in mind that unlike "logical-restore",
+        # this option does not support parallelization, it is always a single-threaded (both for
+        # dumping on the source, and restoring on the destination end).
+        # restore:
+        #   # Restore data even if the Postgres directory (`global.dataDir`) is not empty.
+        #   # Note the existing data might be overwritten.
+        #   forceInit: false
+
+    # Restores PostgreSQL database from the provided dump. If you use this block, do not use
+    # "restore" option in the "logical-dump" job.
+    logical-restore:
+      options:
+        dbname: "test"
+
+        # The location of the archive file (or directory, for a directory-format archive) to be restored.
+        dumpLocation: "/var/lib/dblab/db.dump"
+
+        # The Docker image containing the tools required to restore.
+        dockerImage: "postgres:12-alpine"
+
+        # Use parallel jobs to restore faster.
+        parallelJobs: 2
+
+
+        # Restore data even if the Postgres directory (`global.dataDir`) is not empty.
+        # Note the existing data might be overwritten.
+        forceInit: false
+
+        # Options for a partial dump.
+        # partial:
+        #   tables:
+        #     - test
+
+    logical-snapshot:
+      options:
+        # It is possible to define a pre-precessing script. For example, "/tmp/scripts/custom.sh".
+        # Default: empty string (no pre-processing defined).
+        # This can be used for scrubbing eliminating PII data, to define data masking, etc.
+        preprocessingScript: ""
+
+        # Adjust PostgreSQL configuration
+        configs:
+          # In order to match production plans with Database Lab plans set parameters related to Query Planning as on production.
+          shared_buffers: 1GB
+          # shared_preload_libraries – copy the value from the source
+          shared_preload_libraries: "pg_stat_statements"
+          # work_mem and all the Query Planning parameters – copy the values from the source.
+          # To do it, use this query:
+          #   select format($$%s = '%s'$$, name, setting)
+          #   from pg_settings
+          #   where
+          #     name ~ '(work_mem$|^enable_|_cost$|scan_size$|effective_cache_size|^jit)'
+          #     or name ~ '(^geqo|default_statistics_target|constraint_exclusion|cursor_tuple_fraction)'
+          #     or name ~ '(collapse_limit$|parallel|plan_cache_mode)';
+          work_mem: "100MB"
+          # ... put Query Planning parameters here
 
 cloning:
   # Deprecated field. Default: "base".
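Condensed, the refactor replaces the single `initialize` stage (which wrapped a list of `- name:` job entries) with a flat `jobs` list plus per-job keys under `spec`. The resulting shape of the retrieval section, with option bodies elided, looks like this:

```yaml
retrieval:
  # The jobs section must not contain physical and logical restore jobs simultaneously.
  jobs:
    - logical-dump
    - logical-restore
    - logical-snapshot

  spec:
    logical-dump:
      options: {}       # dump source, dumpLocation, dockerImage, ...
    logical-restore:
      options: {}       # dumpLocation, parallelJobs, forceInit, ...
    logical-snapshot:
      options: {}       # preprocessingScript, configs, ...
```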
