A tool for testing Ceph locally using nested rootless podman containers
ceph-devstack is a tool that can deploy and manage containerized versions of teuthology and its associated services, to test Ceph (or just teuthology) on your local machine. It lets you avoid:
- Accessing Ceph's Sepia lab
- Needing dedicated storage devices to test Ceph OSDs
It is currently under active development and has not yet had a formal release.
☑︎ CentOS Stream 9 should work out of the box
☑︎ CentOS Stream 8 mostly works, but has not yet passed a Ceph test
☐ A recent Fedora should work but has not been tested
☒ Ubuntu does not currently ship a new enough podman
☒ macOS will require special effort to support, since podman operations are done inside a VM
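For a quick manual sanity check of the version-sensitive requirements listed below:

```
# Check the versions the requirements below care about
podman --version   # want 4.0+
uname -r           # want 5.12+, or 4.15+ with fuse-overlayfs
```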
- A supported operating system
- podman 4.0+ using the `crun` runtime
  - On CentOS 8, modify `/etc/containers/containers.conf` to set the runtime
- Linux kernel 5.12+, or 4.15+ and `fuse-overlayfs`
- cgroup v2
  - On CentOS 8, see ./docs/cgroup_v2.md
- podman's DNS plugin, from the `podman-plugins` package
- A user account that has `sudo` access and is also a member of the `disk` group
- The following sysctl settings:
  - `fs.aio-max-nr=1048576`
  - `kernel.pid_max=4194304`
- If using SELinux in enforcing mode:
  - `setsebool -P container_manage_cgroup=true`
  - `setsebool -P container_use_devices=true`
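One way to satisfy the sysctl and runtime requirements is sketched below. The `90-ceph-devstack.conf` file name is illustrative, and the user-level containers.conf shown is an alternative to editing `/etc/containers/containers.conf` directly:

```
# Persist the required sysctl settings across reboots
# (file name under /etc/sysctl.d is illustrative)
sudo tee /etc/sysctl.d/90-ceph-devstack.conf >/dev/null <<'EOF'
fs.aio-max-nr = 1048576
kernel.pid_max = 4194304
EOF
sudo sysctl --system

# On CentOS 8: select crun as the runtime via a user-level
# containers.conf; the same [engine] setting also works in
# /etc/containers/containers.conf
mkdir -p ~/.config/containers
cat > ~/.config/containers/containers.conf <<'EOF'
[engine]
runtime = "crun"
EOF
```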
`ceph-devstack doctor` will check the above and report any issues.
```
sudo usermod -a -G disk $(whoami)  # and re-login afterward
git clone -b ceph-devstack https://github.com/ceph/teuthology/
cd teuthology && ./bootstrap
python3 -m venv venv
source ./venv/bin/activate
python3 -m pip install git+https://github.com/zmc/ceph-devstack.git
```

Note: ceph-devstack expects to find a teuthology repository at
`~/src/teuthology`; it's possible to override that location with the
`--teuthology-repo` flag.
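For example, if your teuthology checkout lives somewhere else, something like the following should work (the `~/git/teuthology` path is illustrative, and this assumes the flag is accepted ahead of the subcommand):

```
ceph-devstack --teuthology-repo ~/git/teuthology doctor
```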
First, you'll want to build all the containers:

```
ceph-devstack build
```

Next, you can start them with:

```
ceph-devstack start
```

Once everything is started, a message similar to this will be logged:

```
View test results at http://smithi065.front.sepia.ceph.com:8081/
```
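You can also confirm the containers came up using podman directly:

```
# List running containers and their status
podman ps --format '{{.Names}}\t{{.Status}}'
```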
This link points to the running Pulpito instance. Test archives are also stored in the `--data-dir` (default: `~/.local/share/ceph-devstack`).
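For example, to poke at the archives on disk (the layout beneath the data dir isn't covered here):

```
ls ~/.local/share/ceph-devstack
```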
To watch teuthology's output, you can:

```
podman logs -f teuthology
```

If you want testnode containers to be replaced as they are stopped and destroyed, you can:

```
ceph-devstack watch
```

When finished, this command removes all the resources that were created:

```
ceph-devstack remove
```

By default, we run the `teuthology:no-ceph` suite to self-test teuthology. If we wanted to test Ceph itself, we could use the `orch:cephadm:smoke-small` suite:

```
export TEUTHOLOGY_SUITE=orch:cephadm:smoke-small
```

It's possible to skip the automatic suite-scheduling behavior:

```
export TEUTHOLOGY_SUITE=none
```

We default to providing three testnode containers. If you want more, you can:

```
ceph-devstack create --testnode-count N
```

This value will be stored as a label on the teuthology container when it is created, so subsequent `start`, `watch`, `stop`, and `remove` invocations won't require the flag to be passed again.
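Since the count lives in a container label, one way to verify it is to dump all labels on the teuthology container (printing them all avoids guessing the label's key, which isn't documented here):

```
podman inspect teuthology --format '{{ .Config.Labels }}'
```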
If you need to use "real" testnodes and have access to a lab, there are a few additional steps to take. We will use the Sepia lab as an example below:
To give the teuthology container access to your SSH private key (via podman secret):

```
export SSH_PRIVKEY_PATH=$HOME/.ssh/id_rsa
```

To use an ansible "secrets" or "inventory" repo:

```
export ANSIBLE_INVENTORY_PATH=$HOME/src/ceph-sepia-secrets
```

To lock machines from the lab:
```
ssh teuthology.front.sepia.ceph.com
~/teuthology/virtualenv/bin/teuthology-lock \
  --lock-many 1 \
  --machine-type smithi \
  --desc "teuthology dev testing"
```

Once you have your machines locked, you need to provide a list of their hostnames and their machine type:
```
export TESTNODES="smithiXXX.front.sepia.ceph.com,smithiYYY.front.sepia.ceph.com"
export MACHINE_TYPE="smithi"
```
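Putting the lab-mode settings together, a session might look like the sketch below; the hostnames are the placeholders from above, and it assumes `ceph-devstack start` reads these variables when creating the teuthology container:

```
# Example lab-mode session (hostnames are placeholders)
export SSH_PRIVKEY_PATH=$HOME/.ssh/id_rsa
export ANSIBLE_INVENTORY_PATH=$HOME/src/ceph-sepia-secrets
export TESTNODES="smithiXXX.front.sepia.ceph.com,smithiYYY.front.sepia.ceph.com"
export MACHINE_TYPE="smithi"
ceph-devstack start
```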