
I have tried to import the entire planet several times on very powerful VMs.

Each time a different problem occurred.

The last time I got very close, and the run ended with this log:

2024-12-15 03:22:56: Starting indexing postcodes using 96 threads
2024-12-15 03:22:56: Starting postcodes (location_postcode) (using batch size 20)
2024-12-15 03:22:56: Done 0/0 in 0 @ 0.000 per second - FINISHED postcodes (location_postcode)

+ sudo -E -u nominatim nominatim admin --check-database
2024-12-15 03:22:57: Using project directory: /nominatim
2024-12-15 03:22:57: Checking database
Checking database connection ... OK
Checking database_version matches Nominatim software version ... OK
Checking for placex table ... OK
Checking for placex content ... OK
Checking that tokenizer works ... OK
Checking for wikipedia/wikidata data ... OK
Checking indexing status ... OK
Checking that database indexes are complete ... OK
Checking that all database indexes are valid ... OK
Checking TIGER external data table. ... OK
Freezing database
+ '[' '' '!=' '' ']'
+ '[' true = true ']'
+ echo 'Freezing database'
+ sudo -E -u nominatim nominatim freeze
2024-12-15 03:24:00: Using project directory: /nominatim
Traceback (most recent call last):
  File "/usr/local/bin/nominatim", line 5, in <module>
    exit(cli.nominatim(module_dir=None, osm2pgsql_path=None))
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/nominatim_db/cli.py", line 260, in nominatim
    return get_set_parser().run(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/nominatim_db/cli.py", line 122, in run
    ret = args.command.run(args)
          ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/nominatim_db/clicmd/freeze.py", line 40, in run
    freeze.drop_update_tables(conn)
  File "/usr/local/lib/python3.12/dist-packages/nominatim_db/tools/freeze.py", line 42, in drop_update_tables
    drop_tables(conn, *tables, cascade=True)
  File "/usr/local/lib/python3.12/dist-packages/nominatim_db/db/connection.py", line 94, in drop_tables
    cur.execute(sql.format(pysql.Identifier(name)))
  File "/usr/local/lib/python3.12/dist-packages/psycopg/cursor.py", line 97, in execute
    raise ex.with_traceback(None)
psycopg.errors.OutOfMemory: out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
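For context: PostgreSQL's shared lock table holds roughly max_locks_per_transaction × (max_connections + max_prepared_transactions) locks, and dropping many tables with CASCADE in a single transaction (which is what `nominatim freeze` does here) can exhaust it. A minimal sketch of raising the limit inside the running container follows; it assumes the container is named `nominatim` as in my run command, that `psql` can run as the `postgres` user, and that Postgres is managed via `service` in the image. ALTER SYSTEM only takes effect after a restart:

```shell
# Raise the lock-table limit inside the running container (assumed name: nominatim).
sudo docker exec -it nominatim sudo -u postgres psql -c \
  "ALTER SYSTEM SET max_locks_per_transaction = 1024;"

# max_locks_per_transaction requires a server restart to take effect.
sudo docker exec -it nominatim sudo service postgresql restart
```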

The next time I start Docker, it downloads the entire .pbf again, and then the import fails with yet another error...

I made a snapshot of my disk after the first error.

Can you recommend what I should do to continue the process after the "out of shared memory" error?
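For reference, what I was hoping is possible: after restoring the disk snapshot and starting the container again without triggering a fresh import, retry only the step that failed by hand. This is a sketch using the exact command from the log above, assuming the container is named `nominatim`:

```shell
# Retry only the failed freeze step (command taken verbatim from the log),
# instead of re-running the whole multi-day planet import.
sudo docker exec -it nominatim sudo -E -u nominatim nominatim freeze
```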

I started the import with:

sudo docker run \
-v ${my_FLATNODE_PATH}:/nominatim/flatnode \
-v ${my_DATABASE_PATH}:/var/lib/postgresql/${my_POSTGRES_VERSION}/main \
-e POSTGRES_SHARED_BUFFERS=168GB \
-e POSTGRES_MAINTENANCE_WORK_MEM=100GB \
-e POSTGRES_AUTOVACUUM_WORK_MEM=50GB \
-e POSTGRES_WORK_MEM=512MB \
-e POSTGRES_EFFECTIVE_CACHE_SIZE=400GB \
-e POSTGRES_SYNCHRONOUS_COMMIT=on \
-e POSTGRES_MAX_WAL_SIZE=32GB \
-e POSTGRES_CHECKPOINT_TIMEOUT=60min \
-e POSTGRES_CHECKPOINT_COMPLETION_TARGET=0.9 \
-e PBF_URL=${my_PBF_URL} \
-e UPDATE_MODE=none \
-e FREEZE=true \
-e REVERSE_ONLY=false \
-e IMPORT_WIKIPEDIA=true \
-e IMPORT_US_POSTCODES=true \
-e IMPORT_GB_POSTCODES=true \
-e IMPORT_STYLE=full \
-e IMPORT_TIGER_ADDRESSES=true \
-e THREADS=$(nproc) \
--shm-size=256g \
-e NOMINATIM_PASSWORD=${my_NOMINATIM_PASSWORD} \
-p 8080:8080 \
--name nominatim \
mediagis/nominatim:${my_NOMINATIM_VERSION}
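To avoid re-downloading the planet file on every restart, the mediagis/nominatim image also accepts a local file via PBF_PATH instead of PBF_URL (per the image README). A sketch of that variant, where ${my_PBF_FILE} is a hypothetical path to an already-downloaded planet file on the host:

```shell
# Mount a locally stored planet file and point the image at it with PBF_PATH,
# so a container restart does not fetch the whole .pbf again.
sudo docker run \
  -v ${my_PBF_FILE}:/nominatim/data/planet.osm.pbf \
  -e PBF_PATH=/nominatim/data/planet.osm.pbf \
  ...
```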