Conversation

maflcko
Member

@maflcko maflcko commented Oct 14, 2025

Storage is cheap, so there was little value in extracting build layers in the CI images.

However, the GHA cache is limited to 10GB, so extracting a shared base layer could help reduce the overall footprint. Possibly, it could even speed up image building, because installation is only done once.
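
Roughly, the intended split looks like this (a sketch, not the exact commands from this PR; it assumes the existing per-task imagefile is changed to start FROM ci_native_base):

# Build the shared base layer once (package installation etc.):
docker buildx build --file ci/test_imagefile_base --tag ci_native_base .

# Each task then only builds its task-specific layers on top of that base:
docker buildx build --file ci/test_imagefile --tag ci_win64 .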

@DrahtBot DrahtBot changed the title ci: Build ci_native_base image layer ci: Build ci_native_base image layer Oct 14, 2025
@DrahtBot DrahtBot added the Tests label Oct 14, 2025
@maflcko maflcko marked this pull request as draft October 14, 2025 08:13
@DrahtBot
Contributor

DrahtBot commented Oct 14, 2025

The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Code Coverage & Benchmarks

For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/33620.

Reviews

See the guideline for information on the review process.
A summary of reviews will appear here.

Conflicts

Reviewers, this pull request conflicts with the following ones:

  • #33562 (DRAFT: add a freebsd job using systemlibs by willcl-ark)
  • #33549 (ci: Add macOS cross task for arm64-apple-darwin by maflcko)
  • #33185 (guix: update time-machine to 5cb84f2013c5b1e48a7d0e617032266f1e6059e2 by fanquake)
  • #32953 ([POC] ci: Skip compilation when running static code analysis by hebasto)
  • #32162 (depends: Switch from multilib to platform-specific toolchains by hebasto)

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

@maflcko
Member Author

maflcko commented Oct 14, 2025

(currently a draft, because this is based on some other pulls)

@maflcko
Member Author

maflcko commented Oct 14, 2025

Also, it won't work anyway, due to --cache-to: https://github.com/maflcko/bitcoin-core-with-ci/actions/runs/18490105374/job/52681542962#step:6:149:

+ docker buildx ls --format '{{.DriverEndpoint}} {{.Name}}'
Using existing docker based buildx: default
Building ci_native_base image layer
+ docker buildx build --file=/home/runner/work/bitcoin-core-with-ci/bitcoin-core-with-ci/ci/test_imagefile_base --platform=linux --label=bitcoin-ci-test --tag=ci_native_base --cache-from type=gha,scope=ci_win64 --cache-to type=gha,mode=max,ignore-error=true,scope=ci_win64 --load /home/runner/work/bitcoin-core-with-ci/bitcoin-core-with-ci
ERROR: failed to build: Cache export is not supported for the docker driver.
Switch to a different driver, or turn on the containerd image store, and try again.
Learn more at https://docs.docker.com/go/build-cache-backends/
Command '['docker', 'buildx', 'build', '--file=/home/runner/work/bitcoin-core-with-ci/bitcoin-core-with-ci/ci/test_imagefile_base', '--platform=linux', '--label=bitcoin-ci-test', '--tag=ci_native_base', '--cache-from', 'type=gha,scope=ci_win64', '--cache-to', 'type=gha,mode=max,ignore-error=true,scope=ci_win64', '--load', '/home/runner/work/bitcoin-core-with-ci/bitcoin-core-with-ci']' returned non-zero exit status 1.
Error: Process completed with exit code 1.
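
For reference, the error points at the default docker driver not supporting cache export; a sketch of one possible workaround (the builder name here is made up) would be to create a docker-container builder:

# Create and select a buildx builder backed by the docker-container driver,
# which supports --cache-to (unlike the default "docker" driver):
docker buildx create --name ci-builder --driver docker-container --use
# With that driver, --load is then needed to get the built image back into
# the local docker image store after the build.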

@willcl-ark
Member

I think getting something like this to work could be tricky without using a registry to host the base image, which all other jobs would pull from (on a cache hit).

It may be possible without one, but each job is scoped to itself, which is what keeps the caches separate: https://github.com/bitcoin/bitcoin/actions/runs/18489967967/job/52681101678?pr=33620#step:9:151 ... I think the current approach would end up with one base_image per job, which is not quite what we are after here.

One possible approach I have in mind which could work (rough sketch below):

  • have a new (second) docker builder (with a fixed/shared scope) to build the base_image/imports it into the docker store
  • use a second build (with current ci config) which builds based on this base_image (which is now in the local docker store)
  • ?
  • profit

cc @m3dwards in case you have any other ideas for an approach here?
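
Very roughly, the sketch I have in mind (builder name and cache scopes here are made up, and this is untested):

# 1) Dedicated docker-container builder with a fixed, shared cache scope for the base layer:
docker buildx create --name base-builder --driver docker-container --use
docker buildx build --file ci/test_imagefile_base --tag ci_native_base \
  --cache-from type=gha,scope=ci_native_base \
  --cache-to type=gha,mode=max,ignore-error=true,scope=ci_native_base \
  --load .

# 2) Per-task build with the current CI config (default docker driver, which can
#    read the local image store), assuming the per-task imagefile starts FROM ci_native_base:
docker buildx build --builder default --file ci/test_imagefile \
  --tag "${CONTAINER_NAME}" \
  --cache-from "type=gha,scope=${CONTAINER_NAME}" \
  --load .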

@maflcko maflcko force-pushed the 2510-ci-base-image branch from a145313 to 5d12fe0 Compare October 14, 2025 09:33
@maflcko
Member Author

maflcko commented Oct 14, 2025

  • have a new (second) docker builder (with a fixed/shared scope) to build the base_image/imports it into the docker store

  • use a second build (with current ci config) which builds based on this base_image (which is now in the local docker store)

yeah, I think this makes sense. I just can't figure out how to support both the gha cache backend and the local storage at the same time. edit: Upstream bug: moby/buildkit#2343

We want to support:

  • GHA "read-only" on branch pushes or pull requests, using the cache if available, and building all layers locally (optionally falling back to the local storage for the base layer)
  • GHA "write-push" on main pushes, using the cache if available, and pushing to it (the base layer and the final image)
  • Cirrus (both of the above)
  • Fully local docker buildx (ideally, but optionally falling back to the local store for the base layer)
  • Fully local podman build (I think this should work trivially)

Maybe all of this is too complicated and not worth it?

@willcl-ark
Member

We want to support:

* GHA "read-only" on branch pushes or pull requests, using the cache if available, and building all layers locally (optionally falling back to the local storage for the base layer)

* GHA "write-push" on main pushes, using the cache if available, and pushing to it (the base layer and the final image)

* Cirrus (both of the above)

* Fully local docker buildx (ideally, but optionally falling back to the local store for the base layer)

* Fully local podman build (I think this should work trivially)

So the approach we currently use is to have CI-only config here:

# Configure docker build cache backend
#
# On forks the gha cache will work but will use Github's cache backend.
# Docker will check for variables $ACTIONS_CACHE_URL, $ACTIONS_RESULTS_URL and $ACTIONS_RUNTIME_TOKEN
# which are set automatically when running on GitHub infra: https://docs.docker.com/build/cache/backends/gha/#synopsis
# Use cirrus cache host
if [[ ${{ inputs.cache-provider }} == 'cirrus' ]]; then
  url_args="url=${CIRRUS_CACHE_HOST},url_v2=${CIRRUS_CACHE_HOST}"
else
  url_args=""
fi
# Always optimistically --cache-from in case a cache blob exists
args=(--cache-from "type=gha${url_args:+,${url_args}},scope=${CONTAINER_NAME}")
# If this is a push to the default branch, also add --cache-to to save the cache
if [[ ${{ github.event_name }} == "push" && ${{ github.ref_name }} == ${{ github.event.repository.default_branch }} ]]; then
  args+=(--cache-to "type=gha${url_args:+,${url_args}},mode=max,ignore-error=true,scope=${CONTAINER_NAME}")
fi
# Always `--load` into docker images (needed when using the `docker-container` build driver).
args+=(--load)
echo "DOCKER_BUILD_CACHE_ARG=${args[*]}" >> $GITHUB_ENV

which appends flags to the docker build only in CI. We always add --cache-from with the correct scope for the job, then --cache-to if we are on the master branch, along with --load to always load built images into the local store.

If we are on Cirrus we also update the required URLs.
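
On the consumer side this roughly boils down to word-splitting that variable into the build invocation (simplified sketch with an illustrative file name; in the actual CI the env var ends up being split via shlex.split() in Python, per the commit message further down):

# Simplified: DOCKER_BUILD_CACHE_ARG (set via $GITHUB_ENV above) is split back
# into individual arguments when invoking the build.
# shellcheck disable=SC2086  # intentional word splitting of the cache args
docker buildx build --file ci/test_imagefile --tag "${CONTAINER_NAME}" \
  ${DOCKER_BUILD_CACHE_ARG} .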

Maybe all of this is too complicated and not worth it?

I agree this could be tricky to support all configurations you list though.

This has a few benefits:

* The raw python3 -c "..." command to calculate --cpuset-cpus lives in a
  Python context.
* The shellcheck SC2086 warning is disabled for the whole command, but
  is only needed for the DOCKER_BUILD_CACHE_ARG env var.  So in Python,
  only pass this one env var to shlex.split() for proper word splitting.

The comments are moved, which can be checked via the git options:
--color-moved=dimmed-zebra --color-moved-ws=ignore-all-space
@maflcko maflcko force-pushed the 2510-ci-base-image branch from 5d12fe0 to 63a063f Compare October 15, 2025 10:54
@maflcko maflcko force-pushed the 2510-ci-base-image branch from 63a063f to 21cfc05 Compare October 15, 2025 10:55
@maflcko
Member Author

maflcko commented Oct 15, 2025

Hmm, then we should probably revert 6c4fe40 (or disable docker caching for GHA explicitly), because with the ~20 tasks here, and each task having a ccache size of 500MB, the GHA limit of 10GB should already be reached.

With the 10GB limit, it seems better to use it for ccache+depends than to cache installed packages?
