Bug #5365 (closed)

repo-install-test runs systemd inside docker, doesn't work with newer systemd versions on host

Added by osmith over 2 years ago. Updated over 1 year ago.

Status: Resolved
Priority: Normal
Assignee:
Target version: -
Start date: 12/21/2021
Due date:
% Done: 100%
Spec Reference:

Description

repo-install-test runs systemd and Osmocom systemd services to ensure they start up properly (#3369).

The way this is implemented works on our jenkins nodes, but not when trying it locally with a more recent systemd version on the host.

Related:
https://github.com/systemd/systemd/issues/19245
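For context, the usual pattern for running systemd inside docker is to bind-mount the host's cgroup hierarchy into the container and start /sbin/init as PID 1, roughly like this (a sketch only; the exact flags used by repo-install-test may differ, and the image name is a placeholder):

# classic systemd-in-docker pattern (illustrative): hand the host's
# cgroup hierarchy to the container and run systemd as PID 1
docker run --rm -t \
    --tmpfs /run --tmpfs /run/lock \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    <image> /sbin/init

On a host that uses the unified cgroup v2 hierarchy, this read-only bind mount typically no longer matches what systemd inside the container expects, which is the class of problem the systemd issue above is about.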

I've looked into this recently and found that it does work when using podman instead of docker, where running systemd in a container is officially supported. Here's a reference article; I tried it out and it worked:
https://www.redhat.com/sysadmin/improved-systemd-podman

So we could migrate repo-install-test to use podman instead of docker to resolve this.
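A minimal sketch of what the podman-based run could look like (the image name is again a placeholder; --systemd=always forces systemd mode even if the command is not literally systemd or init):

# in systemd mode, podman mounts /run, /run/lock, /tmp and the cgroup
# file system itself, so no manual bind mounts are needed
podman run --rm -t --systemd=always <image> /sbin/init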

Until this is done: if repo-install-test is failing with the current master of docker-playground.git and osmo-ci.git, and patches need to be tested against it, use the build parameters of the jenkins job to specify a branch.

#2

Updated by osmith over 1 year ago

  • Status changed from New to In Progress
  • % Done changed from 0 to 60
#4

Updated by osmith over 1 year ago

laforge wrote in #note-3:

is this related to
https://jenkins.osmocom.org/jenkins/job/gerrit-libosmocore-build/89/a2=default,a3=default,a4=default,arch=arm-none-eabi,label=osmocom-gerrit/console ?

I've disabled host2-deb9build for now.

It isn't; it seems the host2-deb9build lxc config was no longer compatible with cgroupsv1? In any case, since the plan was to disable the deb9 lxcs soon, this doesn't matter much.

#5

Updated by osmith over 1 year ago

  • % Done changed from 60 to 90
#7

Updated by osmith over 1 year ago

  • % Done changed from 90 to 50

Harald wrote in https://gerrit.osmocom.org/c/osmo-ci/+/30471/2:

I'm in general quite sceptical. We're replacing docker with podman because supposedly we can run more things there without container-specific hacks. But it looks like we're adding various new workarounds here, so I'm wondering if this really is the way to go. Maybe we simply want to run those tests in a qemu-kvm virtual machine after all? We could always start from the same clean disk image, with COW for the changes done by the job. And once it's completed, we delete the COW and the next execution starts from a clean rootfs again.

Agreed, that sounds better. Changing the job to run in qemu-kvm.
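A sketch of the copy-on-write setup Harald describes, using qcow2 overlays (file names are placeholders, not necessarily what the job ended up using):

# per-run overlay on top of the clean base image
qemu-img create -f qcow2 -b clean-base.qcow2 -F qcow2 run.qcow2

# run the test VM; all writes go into run.qcow2, the base stays clean
qemu-system-x86_64 -enable-kvm -m 2048 -nographic \
    -drive file=run.qcow2,format=qcow2,if=virtio

# discard the overlay so the next run starts from a clean rootfs again
rm run.qcow2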

#8

Updated by osmith over 1 year ago

  • % Done changed from 50 to 80
#9

Updated by osmith over 1 year ago

Enabled kvm inside build3-deb11build-ansible by appending to /var/lib/lxc/deb11build-ansible/config on build3:

# kvm device access for OS#5365
lxc.mount.entry = /dev/kvm dev/kvm none bind,create=file
lxc.cgroup2.devices.allow = c 10:232 rw
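To check that the passthrough works from inside the container, it should be enough to verify that the device node exists with the expected major/minor numbers (10:232 is /dev/kvm):

# run inside the container; expected output looks like
# crw-rw---- 1 root kvm 10, 232 ... /dev/kvm
ls -l /dev/kvm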
#10

Updated by osmith over 1 year ago

  • % Done changed from 80 to 90
#11

Updated by osmith over 1 year ago

  • Status changed from In Progress to Resolved
  • % Done changed from 90 to 100