Bug #5365
repo-install-test runs systemd inside docker, doesn't work with newer systemd versions on host
Status: Closed, 100% done
Description
repo-install-test runs systemd and Osmocom systemd services to ensure they start up properly (#3369).
The way this is implemented works on our jenkins nodes, but not when trying locally with a more recent systemd version on the host.
Related:
https://github.com/systemd/systemd/issues/19245
I've researched this recently and found that it does work when using podman instead of docker, and that running systemd in containers is officially supported there. Here's a reference article; I tried it out and it worked:
https://www.redhat.com/sysadmin/improved-systemd-podman
So we could migrate repo-install-test to use podman instead of docker to resolve this.
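As a rough sketch of what the podman approach looks like (the image name and the osmo-example.service unit below are placeholders, not the actual repo-install-test setup):

```shell
# Hypothetical example of running a systemd-based image under podman.
# podman has first-class systemd support: when the container command is an
# init binary, it sets up /run, /sys/fs/cgroup etc. automatically
# (--systemd=true is the default, shown here for clarity).
podman run -d --name repo-install-test --systemd=true systemd-image /sbin/init

# Then check that a service came up inside the container
# (osmo-example.service is a placeholder unit name):
podman exec repo-install-test systemctl is-active osmo-example.service
```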
Until this is done: if repo-install-test fails with current master of docker-playground.git and osmo-ci.git and patches need to be tested against it, use the build parameters of the jenkins job to specify a branch.
Updated by osmith over 1 year ago
- Status changed from New to In Progress
- % Done changed from 0 to 60
Updated by laforge over 1 year ago
is this related to
https://jenkins.osmocom.org/jenkins/job/gerrit-libosmocore-build/89/a2=default,a3=default,a4=default,arch=arm-none-eabi,label=osmocom-gerrit/console ?
I've disabled host2-deb9build for now.
Updated by osmith over 1 year ago
laforge wrote in #note-3:
is this related to
https://jenkins.osmocom.org/jenkins/job/gerrit-libosmocore-build/89/a2=default,a3=default,a4=default,arch=arm-none-eabi,label=osmocom-gerrit/console ?
I've disabled host2-deb9build for now.
It isn't; it seems the host2-deb9build lxc config wasn't compatible with cgroupsv1 anymore. But in any case, since the plan was to disable the deb9 lxcs soon, this doesn't matter much.
Updated by osmith over 1 year ago
- % Done changed from 60 to 90
Updated by osmith over 1 year ago
- % Done changed from 90 to 50
Harald wrote in https://gerrit.osmocom.org/c/osmo-ci/+/30471/2:
I'm in general quite sceptical. We're replacing docker with podman, supposedly because we can run more things there without container-specific hacks. But it looks like we're adding various new workarounds here, so I'm wondering if this really is the way to go. Maybe we simply want to run those tests in a qemu-kvm virtual machine after all? We could always start from the same clean disk image, with COW for the changes done by the job. And once it's completed, we delete the COW and the next execution starts from a clean rootfs again.
Agreed, that sounds better. Changing the job to run in qemu-kvm.
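For reference, the clean-image-plus-COW workflow Harald describes can be sketched like this (image file names are made up; the actual job uses its own images and qemu options):

```shell
# Create a copy-on-write overlay on top of the clean base image.
# The base image stays untouched; all writes go to overlay.qcow2.
qemu-img create -f qcow2 -b base-debian.qcow2 -F qcow2 overlay.qcow2

# Boot the test VM against the overlay (options here are illustrative).
qemu-system-x86_64 -enable-kvm -m 1024 \
    -drive file=overlay.qcow2,format=qcow2 -nographic

# After the job completes, delete the overlay;
# the next run starts from a clean rootfs again.
rm overlay.qcow2
```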
Updated by osmith over 1 year ago
Enabled kvm inside build3-deb11build-ansible by appending to /var/lib/lxc/deb11build-ansible/config on build3:
# kvm device access for OS#5365
lxc.mount.entry = /dev/kvm dev/kvm none bind,create=file
lxc.cgroup2.devices.allow = c 10:232 rw
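With that in place, access can be sanity-checked from inside the container (a quick manual check, not part of the job itself):

```shell
# Inside the deb11build-ansible container: the device node should exist,
# and its major/minor numbers should match the cgroup rule above (c 10:232).
ls -l /dev/kvm

# A write test confirms the cgroup device rule actually permits access:
test -w /dev/kvm && echo "kvm accessible"
```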
Updated by osmith over 1 year ago
- % Done changed from 80 to 90
Updated by osmith over 1 year ago
- Status changed from In Progress to Resolved
- % Done changed from 90 to 100