How can I bundle a shared library in a wheel with mesonpy? #410

Unanswered
oscarbenjamin asked this question in Q&A

I am trying to use meson-python to build a cython extension module that links against GMP. What I have so far is here:
https://github.com/oscarbenjamin/mesontest

It can be tested with:

git clone https://github.com/oscarbenjamin/mesontest.git
cd mesontest/
python -m build

Currently, if there is a system install of GMP, that should succeed. However, if there is no system install of GMP, it will download GMP and build it as a meson subproject. While meson builds GMP correctly, mesonpy fails to package the files into a wheel:

$ python -m build
...
meson-python: error: Could not map installation path to an equivalent wheel directory: '{prefix}'

ERROR Backend subprocess exited when trying to invoke get_requires_for_build_wheel

Since GMP is an autotools project I'm using meson's unstable-external_project feature to build it as a subproject with:

project('gmp', 'c',
  version: '6.2.1',
  meson_version: '>= 0.65.0',
  license: ['LGPL-3.0-only', 'GPL-2.0-only'],
)

mod = import('unstable-external_project')

p = mod.add_project('configure',
  configure_options: [
    '--prefix=@PREFIX@',
    '--libdir=@PREFIX@/@LIBDIR@',
    '--includedir=@PREFIX@/@INCLUDEDIR@',
    '--enable-fat',
    '--enable-shared=yes',
    '--enable-static=no',
  ],
)

gmp_dep = p.dependency('gmp')
meson.override_dependency('gmp', gmp_dep)

I can build fine with meson/ninja:

$ meson setup build --wipe
...
$ ninja -C build
ninja: Entering directory `build'
[0/4] Generating subprojects/gmp-6.2.1/gmp-6.2.1 with a custom command
[4/4] Linking target meson_test/_meson_test.cpython-311-x86_64-linux-gnu.so

That creates the expected files:

$ tree build/subprojects/gmp-6.2.1/dist/
build/subprojects/gmp-6.2.1/dist/
└── usr
    └── local
        ├── include
        │   └── gmp.h
        ├── lib
        │   └── x86_64-linux-gnu
        │       ├── libgmp.la
        │       ├── libgmp.so -> libgmp.so.10.4.1
        │       ├── libgmp.so.10 -> libgmp.so.10.4.1
        │       ├── libgmp.so.10.4.1
        │       └── pkgconfig
        │           └── gmp.pc
        └── share
            └── info
                ├── dir
                ├── gmp.info
                ├── gmp.info-1
                └── gmp.info-2

8 directories, 10 files

I think mesonpy somehow needs to be told what to do with these files. Ideally the GMP .so files would get bundled into the wheel, or at least that is what I would want to happen when running this in cibuildwheel, although maybe there it should be done by cibuildwheel rather than by mesonpy...

The examples for mesonpy only show two very complicated projects and then a simple one that links a library statically:
https://meson-python.readthedocs.io/en/latest/projects-using-meson-python.html#projects-using-meson-python

I am actually not sure if the build I have is linking statically or dynamically because I have tried many configurations and have never seen the GMP .so get bundled into the wheel. I can actually patch mesonpy to make it work:

--- mesonpy/__init__.py.backup	2023-04-23 11:50:55.660441459 +0100
+++ mesonpy/__init__.py	2023-04-23 13:23:36.872080467 +0100
@@ -160,7 +160,7 @@ def _map_to_wheel(sources: Dict[str, Dic
 
             path = _INSTALLATION_PATH_MAP.get(anchor)
             if path is None:
-                raise BuildError(f'Could not map installation path to an equivalent wheel directory: {str(destination)!r}')
+                continue
 
             if path == 'purelib' or path == 'platlib':
                 package = destination.parts[1]

With that I can build a wheel and install and use it:

$ cd tmp/
$ pip install ../dist/meson_test-0.0.1-cp311-cp311-linux_x86_64.whl 
$ python -c 'import meson_test; print(meson_test.pow1000(2))'
b'10715086071862673209484250490600018105614048117055336074437503883703510511249361224931983788156958581275946729175531468251871452856923140435984577574698574803934567774824230985421074605062371141877954182153046474983581941267398767559165543946077062914571196477686542167660429831652624386837205668069376'

Am I misunderstanding here how mesonpy is supposed to work or should it be able to handle this?

Is it better not to try to have meson build GMP and just say that it is the user's responsibility to have GMP installed before building the extension module?


Replies: 7 comments · 20 replies


Good question @oscarbenjamin. A few initial thoughts:

  • I think we'd like to support this use case and want it to work. I'm thinking about doing this for SciPy too at some point, to avoid the pain of "can't find a BLAS library".
  • Building a shared library and then using it from a Python extension module is supposed to work on Linux and macOS as of today. Not on Windows yet, see the feature request at Linking against libraries from meson on Windows #265.
    • However, the one test case we have (see tests/packages/link-against-local-lib/meson.build) uses link_with, which is not quite right for this use case.
    • We need a new test case using subproject, and then using it as dependency(...) rather than an explicit link_with.
  • What is then supposed to happen is that building a wheel completes. However, vendoring external dependencies into a wheel is not the job of meson-python but rather of auditwheel/delocate/delvewheel.

Your test package looks quite good to me, the way it handles GMP as dependency seems right. The question is how to fix it - there's probably a better way than your patch above.

@oscarbenjamin

  • What is then supposed to happen is that building a wheel completes. However, vendoring external dependencies into a wheel is not the job of meson-python but rather of auditwheel/delocate/delvewheel.

Okay, yes I see that. There are other cases than cibuildwheel though. It would be really nice to use the meson subprojects feature or similar to be able to manage and build the dependencies when building locally from sdist or VCS. For cibuildwheel I would want to build GMP in the CIBW_BEFORE_ALL stage though before mesonpy gets called. Ideally the build configuration could somehow be shared for all of these things.


The reason your patched meson-python produces a working wheel is that either the Python extension module is linked statically against GMP, or the Python extension is linked dynamically and you have GMP installed on the system where you are running it.

If you don't need GMP bundled with the Python wheel (if you can assume a compatible version of GMP is installed on the target system, or if you link statically to it), the way to make your current solution work is to not install the GMP subproject. I'm not sure how to do this in practice when you are dealing with an external_project subproject.

@dnicolodi

I don't think this is the case, unless @oscarbenjamin passes a builddir to meson-python: the build directory containing the subproject is gone once the wheel is built, so the extension has an RPATH pointing to the location of the subproject build directory, but the library there is long gone when the extension is loaded. Indeed, to include the library in the wheel we would need to treat it as a local library. For that, what seems to be missing is the ability to map the installation paths of the subproject's parts into wheel locations.

@oscarbenjamin

It is linking dynamically but I have GMP installed on the system:

$ ldd meson_test/_meson_test.cpython-311-x86_64-linux-gnu.so 
	linux-vdso.so.1 (0x00007ffce7b6f000)
	libgmp.so.10 => /lib/x86_64-linux-gnu/libgmp.so.10 (0x00007facf2ee2000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007facf2c00000)
	/lib64/ld-linux-x86-64.so.2 (0x00007facf2f7f000)

$ ls -l /lib/x86_64-linux-gnu/libgmp.so.10
lrwxrwxrwx 1 root root 16 Apr  5 21:00 /lib/x86_64-linux-gnu/libgmp.so.10 -> libgmp.so.10.4.1

I was getting confused by the fact that libgmpy3-dev is needed in order to build the wheel but at runtime there is already a libgmp.so even if I uninstall the -dev files.

@dnicolodi

The libgmp3-dev package (there is a typo in the package name in your comment) is required for the C header to be available for compilation. Most distributions split libraries into runtime and development packages, where the -dev package essentially contains the static version of the library and the header files.

@oscarbenjamin

So how does the linker end up picking the system gmp? It is a bit worrying that we've built a local GMP and then ended up linking to the system one.

During configuration meson can tell that the GMP headers are not installed although it reports this as a "run-time dependency":

Run-time dependency gmp found: NO (tried pkgconfig and cmake)
Looking for a fallback subproject for the dependency gmp

Executing subproject gmp 

It therefore decides to download and build GMP:

$ meson setup build --wipe
The Meson build system
Version: 1.1.0
Source dir: /home/oscar/current/active/tmp/mesontest
Build dir: /home/oscar/current/active/tmp/mesontest/build
Build type: native build
Project name: meson-test
Project version: undefined
C compiler for the host machine: cc (gcc 11.3.0 "cc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0")
C linker for the host machine: cc ld.bfd 2.38
Cython compiler for the host machine: cython (cython 0.29.34)
Host machine cpu family: x86_64
Host machine cpu: x86_64
Program python3 found: YES (/home/oscar/.pyenv/versions/3.11.3/envs/mesontest-3.11/bin/python3.11)
Found pkg-config: /usr/bin/pkg-config (0.29.2)
Found CMake: /usr/bin/cmake (3.22.1)
WARNING: CMake Toolchain: Failed to determine CMake compilers state
Run-time dependency gmp found: NO (tried pkgconfig and cmake)
Looking for a fallback subproject for the dependency gmp

Executing subproject gmp 

gmp| Project name: gmp
gmp| Project version: 6.2.1
gmp| C compiler for the host machine: cc (gcc 11.3.0 "cc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0")
gmp| C linker for the host machine: cc ld.bfd 2.38
gmp| subprojects/gmp-6.2.1/meson.build:7: WARNING: Module External build system has no backwards or forwards compatibility and might not exist in future releases.
gmp| Program /home/oscar/current/active/tmp/mesontest/subprojects/gmp-6.2.1/configure found: YES (/home/oscar/current/active/tmp/mesontest/subprojects/gmp-6.2.1/configure)
gmp| Program make found: YES (/usr/bin/make)
gmp| External project gmp-6.2.1: configure
gmp| Build targets in project: 1
gmp| Subproject gmp finished.

Dependency gmp found: YES 6.2.1 (overridden)
Build targets in project: 2

meson-test undefined

  Subprojects
    gmp: YES 1 warnings

Found ninja-1.10.1 at /usr/bin/ninja

Then when I actually build, it links against the system-installed GMP even though we just built a local GMP (and they are not guaranteed to be compatible):

$ ninja -v -C build
ninja: Entering directory `build'
[0/4] /home/oscar/.pyenv/versions/mesontest-3.11/bin/meson --internal externalproject --name gmp-6.2.1 --srcdir /home/oscar/current/active/tmp/mesontest/subprojects/gmp-6.2.1 --builddir /home/oscar/current/active/tmp/mesontest/build/subprojects/gmp-6.2.1/build --installdir /home/oscar/current/active/tmp/mesontest/build/subprojects/gmp-6.2.1/dist --logdir /home/oscar/current/active/tmp/mesontest/build/meson-logs --make /usr/bin/make subprojects/gmp-6.2.1/gmp-6.2.1.stamp subprojects/gmp-6.2.1/gmp-6.2.1.d
[2/4] cython -M --fast-fail -3 /home/oscar/current/active/tmp/mesontest/meson_test/_meson_test.pyx -o meson_test/_meson_test.cpython-311-x86_64-linux-gnu.so.p/meson_test/_meson_test.pyx.c
[3/4] cc -Imeson_test/_meson_test.cpython-311-x86_64-linux-gnu.so.p -Imeson_test -I../meson_test -Isubprojects/gmp-6.2.1 -I/home/oscar/current/active/tmp/mesontest/build/subprojects/gmp-6.2.1/dist/usr/local/include -I/home/oscar/.pyenv/versions/3.11.3/include/python3.11 -fvisibility=hidden -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -w -std=c99 -O2 -g -fPIC -MD -MQ meson_test/_meson_test.cpython-311-x86_64-linux-gnu.so.p/meson-generated_meson_test__meson_test.pyx.c.o -MF meson_test/_meson_test.cpython-311-x86_64-linux-gnu.so.p/meson-generated_meson_test__meson_test.pyx.c.o.d -o meson_test/_meson_test.cpython-311-x86_64-linux-gnu.so.p/meson-generated_meson_test__meson_test.pyx.c.o -c meson_test/_meson_test.cpython-311-x86_64-linux-gnu.so.p/meson_test/_meson_test.pyx.c
[4/4] cc  -o meson_test/_meson_test.cpython-311-x86_64-linux-gnu.so meson_test/_meson_test.cpython-311-x86_64-linux-gnu.so.p/meson-generated_meson_test__meson_test.pyx.c.o -Wl,--as-needed -Wl,--allow-shlib-undefined -shared -fPIC -L/home/oscar/current/active/tmp/mesontest/build/subprojects/gmp-6.2.1/dist/usr/local/lib/x86_64-linux-gnu -Wl,--start-group -lgmp -Wl,--end-group
$ ldd build/meson_test/_meson_test.cpython-311-x86_64-linux-gnu.so
	linux-vdso.so.1 (0x00007ffe3b567000)
	libgmp.so.10 => /lib/x86_64-linux-gnu/libgmp.so.10 (0x00007f8161c1f000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f8161800000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f8161cbc000)

It is using -L/home/oscar/current/active/tmp/mesontest/build/subprojects/gmp-6.2.1/dist/usr/local/lib/x86_64-linux-gnu and that directory contains the files:

$ ls /home/oscar/current/active/tmp/mesontest/build/subprojects/gmp-6.2.1/dist/usr/local/lib/x86_64-linux-gnu
libgmp.la  libgmp.so  libgmp.so.10  libgmp.so.10.4.1  pkgconfig

However the .so ends up linking against the absolute path /lib/x86_64-linux-gnu/libgmp.so.10 (not /usr/local/lib/... or some local path).

@dnicolodi

So how does the linker end up picking the system gmp? It is a bit worrying that we've built a local GMP and then ended up linking to the system one.

This is how dynamic linking works. The Python extension module knows that it needs symbols contained in a shared library named libgmp.so.10, and at most it has a list of locations to look in, in addition to the standard locations configured for the shared library loader. All locations are probed in order, and the first libgmp.so.10 found is loaded and dynamically linked to the extension module.

Simplifying a bit, for compiling you need the header files. In this case, Meson tries to find them using pkg-config and cmake. It does not find them and thus falls back to compiling the gmp subproject, which provides them. However, the libgmp.so.10 built as part of the subproject is not installed anywhere, so when you load the Python extension module, the dynamic loader resolves the symbols via the system-installed libgmp.so.10.
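The first-match rule described above can be sketched in Python (a toy model of the loader's search, not its actual implementation; the directory list and library name are illustrative):

```python
import os

def resolve_shared_library(soname, search_paths):
    """Return the first file named soname found in search_paths.

    Mimics the dynamic loader's first-match rule: RPATH entries,
    LD_LIBRARY_PATH and the system default directories are probed
    in order, and the first hit wins, even if another copy exists
    later in the list.
    """
    for directory in search_paths:
        candidate = os.path.join(directory, soname)
        if os.path.isfile(candidate):
            return candidate
    return None
```

In the situation above, the RPATH entry points at a build directory that no longer contains the library, so the probe falls through to the system directory and picks up the system libgmp.so.10.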

So far everything is working as expected. What does not work is that the subproject adds some files to the Meson installation manifest, and meson-python does not handle these correctly. It looks like something specific to external_project subprojects.

What you need to decide is whether you want to include libgmp.so.10 in the Python wheel or require the users of your package to provide a system-installed libgmp.so.10. Both are valid solutions. Which one to prefer depends on the nature of your project and which kind of users you expect for the package.

If you expect the users to provide the library via a system dependency, I would drop the subproject fallback and require the development package to be installed to compile the Python wheel. Alternatively, you can still use the subproject fallback to get what is required for the compilation, but exclude the content of the subproject from the installation. I don't know what the easiest way to achieve that is in the case of an external_project subproject. What is almost certain to work is to use installation tags to filter the installed components via the tool.meson-python.args.install setting in pyproject.toml.

Adding libgmp.so.10 into the wheel and adjusting the RPATH for the extension module to look for it in the right place is supported transparently by meson-python only on Linux and macOS. If you need this to work on Windows, there is some manual setup required.

A third option is to link GMP statically. In this case you just need to exclude the subproject from the installed components, for example as explained above.
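The installation-tag filtering mentioned above would be configured roughly like this in pyproject.toml (a sketch; whether these two tags are sufficient depends on what the project installs):

```toml
[tool.meson-python.args]
# Pass --tags to `meson install` so that only components tagged as
# runtime code are installed; headers, pkg-config files and info
# pages from the subproject are then skipped.
install = ['--tags', 'runtime,python-runtime']
```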


Looking at the source of the error reported by meson-python, this is the install manifest for the project:

$ meson introspect build --install-plan -i
{
    "targets": {
        ".../mesontest/build/meson_test/_meson_test.cpython-311-darwin.so": {
            "destination": "{py_platlib}/meson_test/_meson_test.cpython-311-darwin.so",
            "tag": "runtime"
        }
    },
    "python": {
        ".../mesontest/meson_test/__init__.py": {
            "destination": "{py_platlib}/meson_test/__init__.py",
            "tag": "python-runtime"
        }
    },
    "install_subdirs": {
        ".../mesontest/build/subprojects/gmp-6.2.1/dist/usr/local": {
            "destination": "{prefix}/.",
            "tag": null,
            "exclude_dirs": [],
            "exclude_files": []
        }
    }
}

meson-python does not know where to place the files that are copied into the generic {prefix} directory. Fixing this for projects that install only a shared library requires some assumptions, but is not too hard. However, something would need to be done about all the other files installed by the subproject that do not belong in the Python wheel. Even in the case of a library as simple (in terms of deployment) as GMP, there are quite a few:

build/subprojects/gmp-6.2.1/dist/
└── usr
    └── local
        ├── include
        │   └── gmp.h
        ├── lib
        │   ├── libgmp.10.dylib
        │   ├── libgmp.dylib -> libgmp.10.dylib
        │   ├── libgmp.la
        │   └── pkgconfig
        │       └── gmp.pc
        └── share
            └── info
                ├── dir
                ├── gmp.info
                ├── gmp.info-1
                └── gmp.info-2

At the moment I don't see what a solution could look like.
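The failing mapping step can be sketched as follows (a simplification; the real table is _INSTALLATION_PATH_MAP in mesonpy/__init__.py and the entries shown here are an illustrative subset):

```python
# Illustrative subset of the anchor -> wheel-directory table.
INSTALLATION_PATH_MAP = {
    '{py_platlib}': 'platlib',
    '{py_purelib}': 'purelib',
    '{bindir}': 'scripts',
    '{datadir}': 'data',
}

def map_to_wheel(destination):
    """Map an install-plan destination such as
    '{py_platlib}/meson_test/__init__.py' to a
    (wheel_category, relative_path) pair."""
    anchor, _, rest = destination.partition('/')
    category = INSTALLATION_PATH_MAP.get(anchor)
    if category is None:
        # This is the branch the subproject's '{prefix}/.' entry hits.
        raise ValueError('Could not map installation path to an '
                         f'equivalent wheel directory: {destination!r}')
    return category, rest
```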

@oscarbenjamin

From my perspective it would be fine if I needed to add some configuration somewhere that tells mesonpy which of these files I want to keep and where they should go.


What you need to decide is whether you want to include libgmp.so.10 in the Python wheel or require the users of your package to provide a system-installed libgmp.so.10.

The problem is I want different things in different situations:

  • Building wheels for PyPI. This will take place in CI using cibuildwheel. There I will build GMP but I will do it in the CIBW_BEFORE_ALL stage and subsequently auditwheel, delocate, delvewheel will bundle the shared library with name mangling as needed. Since I will build GMP before building the wheels the situation is effectively like having a system installed GMP. However if I'm using meson/meson-python as my build system then I would ideally like to use those to manage doing this building and I don't see yet how to do that.
  • The project will also be packaged for conda and linux distros etc and they will definitely want to link with the system installed dependencies.
  • Some users will need to build from source for whatever reason and will start from the sdist (perhaps implicitly e.g. via pip). In this case I would like to use the system provided dependencies by default but I would also like to make it so that GMP can be built if needed. These users are not being given a wheel but rather the sdist that is generated by mesonpy and uses mesonpy as its build-system. That sdist potentially needs to be able to build a wheel that could include GMP so that it can be installed locally. Ideally the sdist contains the information needed to build a compatible version of GMP in a way that is compatible with the extension module(s) in the project.
  • For development it really needs to be possible to build the main dependencies in a way that is isolated from the system installed libraries. I want to build and install them local to the venv/project somehow and I want to be able to switch to different versions of the dependencies e.g. to test new releases.

Potentially the answer to all of these is that it is not really for mesonpy to build/bundle the library in the wheel and I am just not quite using it in the expected way. Perhaps I can open the question up a bit:

  • If I am using meson and meson-python as the build system how can I do all of the things that are not just about building a wheel against system installed libs?
  • How do I make it easy for a contributor to build the dependencies locally in a development setup?
  • How do I provide configuration for someone building from source to be able to build the dependencies in the correct way?
  • How can I have an editable install?
  • How do I scope the versions/builds of these dependencies to a venv rather than installing them globally?

Looking at meson + meson-python made me wonder if it can somehow help with some of these things. The approach that I went for here seemed to give reasonable answers to some of the questions above because it means I can manage dependencies in the meson build configuration, users can have them be built if needed, the dependencies are contained to a venv etc.

Linking statically also answers those questions in the same way but has the downside that if I have many extension modules depending on GMP (and other C libraries) then I will end up duplicating the GMP etc library code across those many extension modules.

@minrk

I'm exploring updating pyzmq's build system to either meson or scikit-build-core, and I think this describes my use case exactly - find libzmq if it's around, download and build it if it's not. This is complicated by the fact that the latest libzmq release itself has added a dependency on libsodium, so I need to build two libraries. The hard part has been making sure it's available at runtime, especially for users who pip install from source, such as a platform that doesn't have a wheel (e.g. Python prerelease testers, some folks cross-compiling for phones I think).

Our solution thus far has been to build libzmq as a distutils Extension to inherit all the compiler configuration, which has worked surprisingly well for many years, but this has all kinds of subtle, hairy consequences that I'd like to lose.

@rgommers

Hi @minrk, thanks for sharing your use case. We've made some progress here I think - or at least gained more experience on how to best do this - since the last post on this thread.

The main question I have for pyzmq now is whether, if libzmq is not already installed and you must download and build it, you have to use it as a shared library, or you can build it as a static library (libsodium too) and fold it into a Python extension module? The latter is simpler, because it avoids issues with things like Windows not having RPATH support.

@minrk

A static library might work for bundled libzmq/libsodium. An issue has been that it's used in quite a few extensions, which would make static linking complicated. But that's no longer the case: there is now only one Extension to build, so static linking may well be an option.

I'll need to learn quite a bit about how to make this work for Windows. libzmq/libsodium both use autotools on non-Windows, but for Windows libsodium has visual studio solutions while ZeroMQ uses CMake.

pyzmq is also a little complicated because the backend is compiled with Cython for CPython and CFFI for PyPy.

So I need something like:

if libzmq_found:
    locate libs, includes with pkg-config/etc. (standard, simple)
else:
    fetch libsodium, libzmq
    if Windows:
        build static libsodium with msbuild
        build static libzmq against static libsodium with cmake
    else:
        build static libsodium with autotools
        build static libzmq against static libsodium with autotools

if PyPy:
    _zmq = cffi extension
else:
    _zmq = cython extension

if libzmq_found:
    dynamically_link(_zmq, libzmq)
else:
    statically_link(_zmq, local libzmq)
@rgommers

Great, static libraries should make this easier. I think I can answer most of how to do the above, but probably this is the point to transfer this to a new discussion to keep this one for the shared library case. Would you mind copying this comment over?

@rgommers

pyzmq conversation continues in gh-556


Having looked into this in more detail I don't think that the approach that I was attempting to follow of building and bundling external dependency shared libraries into the wheel with mesonpy is workable. At least I think that significant changes in mesonpy would be needed to make this work.

Currently the expectation is that building uses a PEP 517 build frontend like python -m build or pip install .... The frontend creates an isolated environment and installs the Python build dependencies before invoking the build backend, which is mesonpy. The expectation in this setup is that any non-Python dependencies are installed "system-wide" before the PEP 517 frontend is even invoked.

When we want to ship wheels that bundle the dependencies we arrange to have the non-Python dependencies installed system-wide first, invoke the PEP 517 build and then repair afterwards. The repair step bundles the shared libraries but also performs the necessary name mangling or RPATH modifications so that the bundled libraries can be used without conflicting with system libs.

Even if the user does not have GMP installed when the wheel is built, it is possible that they would install GMP later. It is not appropriate for mesonpy to install the libraries system-wide. It is also not enough just to bundle the shared library files without also taking steps to ensure that those libraries and the extension modules linking to them are isolated from system libs. If we want mesonpy to build the non-Python dependencies, then we need it to include part of what the repair step is doing, so it needs to perform a subset of what auditwheel/delocate/delvewheel currently do.
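The name-mangling part of that repair step can be sketched like this (modeled on what auditwheel does; the exact hashing scheme here is illustrative, not auditwheel's):

```python
import hashlib
import pathlib

def mangled_name(library_path):
    """Derive a unique vendored name for a shared library, e.g.
    'libgmp.so.10' -> 'libgmp-1a2b3c4d.so.10', so the bundled copy
    can never collide with, or be shadowed by, a system install."""
    data = pathlib.Path(library_path).read_bytes()
    tag = hashlib.sha256(data).hexdigest()[:8]
    name = pathlib.Path(library_path).name
    stem, _, suffix = name.partition('.so')
    return f'{stem}-{tag}.so{suffix}'
```

A repair tool then copies the library into the wheel under this name and rewrites the DT_NEEDED and RPATH entries of the extension module (patchelf on Linux, install_name_tool on macOS) to point at it.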

This seems like a change of scope for mesonpy and its role as a PEP 517 backend that is invoked by commands like pip install python-flint. Maybe that change of scope is reasonable but I don't think that I can make it work for python-flint without the support being added to mesonpy first.

What could work is if this was all using static linking but I think that is not a good fit for python-flint which has many extension modules all linking to the fairly large libflint.so.

I would like to use meson to manage building the dependencies in a development checkout and also as an option for users who want to setup the dependencies easily when building from source. I think though that what we need for that is a separate meson build configuration like having a subdirectory dependencies with a separate dependencies/meson.build file. This build config would not be used by mesonpy and the PEP 517 build but could be run as a separate step beforehand (like CIBW_BEFORE_ALL):

cd dependencies
meson setup build -Dflint_ver=3.1.0 --prefix=$(pwd)/../.local
meson install -C build
cd ..
PKG_CONFIG_PATH=$(pwd)/.local/lib/pkgconfig pip install --no-build-isolation --editable .

Then perhaps if using spin as a development frontend we could provide a coherent interface for managing this local dependency stack when installing/building python-flint or running the tests etc.

@rgommers

I think though that what we need for that is a separate meson build configuration like having a subdirectory dependencies with a separate dependencies/meson.build file.

That'd be https://mesonbuild.com/Subprojects.html; subprojects can be built automatically before building the main project by meson-python.

The one thing that is missing I think is the auditwheel-like step, if you're getting a shared library dependency that itself depends on other shared libraries. I don't have time for a more well thought out response right now, but I think if this works:

python -m build --wheel  # including subprojects built automatically
auditwheel repair dist/*.whl

and this does too:

pip install -e . --no-build-isolation  # works because rpath's not yet stripped

then you're not in such a bad place right?

the frontend creates an isolated environment and installs the Python build dependencies before invoking the build backend which is in mesonpy. The expectation in this setup is that any non-Python dependencies are installed "system-wide" before even the PEP 517 frontend is invoked.

I'll note that:

  1. non-isolated builds are certainly a first class citizen, as they're used heavily for all distro packaging and for (e.g.) local development in a conda env
  2. non system-wide native dependency installs work too on Linux and macOS, with the limitation on transitive rpaths you ran into. And on Windows it always requires delvewheel, because no RPATH support at all there.
@oscarbenjamin

I think if this works:

python -m build --wheel  # including subprojects built automatically
auditwheel dist/*whl

If we have that setup then what happens if a user installs from source from PyPI?

I am thinking of:

pip install --no-binary python-flint python-flint

The backend would build the libraries as subprojects but would not install them and would not bundle them in the wheel. Then the user is left with a broken install. I think that if the backend can't bundle the libraries then it is important to fail the build unless those libraries are already available externally.

From a development perspective we are in a good place but it is the user-install-from-source case that motivates the design I attempted in the OP. If we can't make that part work transparently then I think it is better if the mesonpy backend does not attempt to build the libs.

@eli-schwartz

Having users build from source when running pip install can be fraught with risk. The most blatant risk is on Windows, where it's relatively unlikely that the user has any compiler installed at all, whether that's mingw or MSVC. Projects with Fortran code have it even worse...

I think that's part of why @rgommers and others are actually sort of hoping to make pip default to erroring out if there are wheels available for a package but no wheel available for your environment.


I am revisiting this after a discussion with someone trying to install python-flint (flintlib/python-flint#193).

A user wants to install a newer version of python-flint but we don't provide wheels for their platform (Linux aarch64 because GitHub Actions does not support this yet). They have previously built from source fine though and would be happy to do so again.

However they have the wrong version of Flint installed. Their Flint installation is part of a bundle of other software that uses an older version of Flint. They can't just install a newer version of Flint system-wide because it would break all of the other software.

We handle this situation nicely with the binary wheels on PyPI because auditwheel mangles and bundles libflint.so into the wheel. We have all of the code needed to be able to build Flint for the user and the build would succeed on their system but meson-python does not have any facility to mangle and bundle the binary into a wheel.

I said in my last comment that it seems like meson-python should possibly not be expected to do this building and bundling of non-Python libraries. However I should amend that statement at least to say that it is perhaps not a good idea to do that by default. At the very least it would be good to be able to have an option like:

# Build and bundle Flint 3.1 into the Python installation:
pip install --config-settings=setup-args=-Dbuild_flint=3.1 python-flint

I think that this can't work right now though because meson-python does not provide a facility to mangle and bundle the shared libraries but it is clear to me now that this would be a useful thing to be able to do.


Hi, has there been any progress on this?

I have a similar use case.

I have a subproject (mylib), which always builds a shared library dependency (mylib_dep). I want to link my Python extension against it and include it in the wheel.

Example:

python = import('python').find_installation(pure: false)

mylib_proj = subproject('mylib', ...)
mylib_dep = mylib_proj.get_variable('mylib_dep') # Shared mylib lib dependency

python.extension_module(..., dependencies: mylib_dep, install: true, subdir: 'myModule')

The subproject's call to library(), which produces the mylib_lib variable, does set install: true. mylib_lib is then used to produce the mylib_dep variable via declare_dependency(). Then, as seen above, mylib_dep is extracted from the subproject and used in the current project.

However, when I build the wheel only the Python extension .so is present. How can I include the shared library from the subproject?

@kivkiv12345

Depending on your use-case, a workaround could be linking the dependency statically by setting default_library=static.
But this breaks the use-cases where the dependency symbols should be found outside of the extension module itself.
EDIT: just found a way to bundle the dependency in the .whl, using custom_target(). See gh-556 for the details.
