Feature #2642 (closed)

Jenkins job to verify PKG_CHECK_MODULES correctness

Added by laforge over 6 years ago. Updated over 5 years ago.

Status: Resolved
Priority: High
Assignee: osmith
Target version: -
Start date: 11/15/2017
Due date:
% Done: 100%
Spec Reference:

Description

Right before tagging a new version of a given program (e.g. osmo-bts), we would want to have a manually-triggered jenkins job that verifies the correctness of our PKG_CHECK_MODULES statements.

So this jenkins job would have to scan/parse configure.ac and find lines like

PKG_CHECK_MODULES(LIBOSMOCORE, libosmocore  >= 0.10.0)

and then check out the 0.10.0 tag of libosmocore, build and install that dependency, and finally attempt to do a "make check" on the program we triggered the jenkins job for (osmo-bts in the example above).
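As an illustration, parsing those statements could look roughly like the following Python sketch (the regex and function name are assumptions for illustration, not the eventual implementation):

import re

# Hypothetical sketch: collect (library, minimum version) pairs such as
# ("libosmocore", "0.10.0") from PKG_CHECK_MODULES lines in configure.ac.
PKG_CHECK_RE = re.compile(r'PKG_CHECK_MODULES\(\s*\w+\s*,\s*([\w.-]+)\s*>=\s*([\d.]+)')

def parse_configure_ac(path):
    depends = {}
    with open(path) as handle:
        for match in PKG_CHECK_RE.finditer(handle.read()):
            library, version = match.groups()
            depends[library] = version
    return depends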

The difficulties/challenges I see are:
  • ensure we start from a clean build system that has none of our libs installed
  • have a list of mappings between library name and git repository name, so the git repo for a given project can be determined
  • find out a way to deal with the fact that we have one source repo libosmocore which installs libosmo{core,gsm,ctrl,vty,codec,coding}.
    • simply detect this and use the latest (newest) version required by them for checkout+build+install?
    • actually find a way how we can only do a "make install" of one of the libraries from libosmocore.git?
  • it's sad that we'd have to have jenkins jobs for each of our projects, which seems a bit like bloat. Maybe it could also simply be a script in osmo-ci.git that we manually execute in a well-defined build environment such as a docker container? This way we wouldn't need yet another dozen jenkins jobs.

Checklist

  • make it easy to change the git url (jenkins job and command line)
  • merge all CI jobs into one
  • allow specifying the git revision
  • adjust code to gerrit review from neels
  • add a project parameter (or similar) to the CI job (default: all projects)
  • update code in gerrit review
  • WARNING messages if another project asks for an older version
  • allow the job to be only triggered manually

Related issues

Related to OsmoTRX - Bug #3578: osmo-trx-uhd, osmo-trx-usrp1 lack a --version cmdline option (Resolved, osmith, 09/20/2018)

Related to osmo-sip-connector - Bug #3577: osmo-sip-connector lacks a --version cmdline option (Resolved, osmith, 09/20/2018)

Related to OsmoSGSN - Bug #3576: osmo-gtphub lacks a --version option (Resolved, osmith, 09/20/2018)
Actions #1

Updated by laforge over 5 years ago

  • Assignee changed from 4368 to osmith
Actions #2

Updated by laforge over 5 years ago

  • Priority changed from Normal to High
Actions #3

Updated by osmith over 5 years ago

  • Status changed from New to In Progress

it's sad that we'd have to have jenkins jobs for each of our projects, which seems a bit like bloat. Maybe it could also simply be a script in osmo-ci.git that we manually execute in a well-defined build environment such as a docker container? This way we wouldn't need yet another dozen jenkins jobs.

+1 for that, I will implement it that way.

A few questions:
  • Will there be a Jenkins job at all, or will there just be a script that gets manually executed before each release?
  • In which repository should the Docker setup that runs the script go (and its additional files, similar to this folder structure)? I would have put it in docker-playground.git, and I will start with that, but since you've mentioned osmocom-ci.git, maybe that's the desired location?
Actions #4

Updated by osmith over 5 years ago

  • % Done changed from 0 to 10

I have started with the implementation, in Python this time because the logic will be too complex for a shell script from what I can tell.

So far parsing the configure.ac file is working, I can get all osmocom libraries listed with PKG_CHECK_MODULES along with their version numbers.

However, I have realized that this needs to work recursively: we won't have any osmocom programs installed, so when checking the versions of osmo-bts, we would not only need to build libosmocore at a certain version, but also the libraries it depends on, e.g. libosmo-abis.

So I am coding roughly the following algorithm (a rough sketch follows after the list):
  • clone the program
    • figure out its dependencies
    • for each dependency, clone it and recurse
  • now all dependencies are downloaded and a stack with the build order exists
  • print the build stack
  • build one library after another from the stack, with the given version
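A rough Python sketch of that recursion (clone_or_update() and parse_configure_ac() are assumed helpers, not the actual code):

# Hypothetical sketch: dependencies are visited depth-first and appended to the
# stack only after their own dependencies, so the list is already a build order.
def resolve(project, version, gitdir, stack, seen):
    if project in seen:
        return
    seen.add(project)
    srcdir = clone_or_update(project, version, gitdir)  # assumed helper
    for dep, dep_version in parse_configure_ac(srcdir + "/configure.ac").items():
        resolve(dep, dep_version, gitdir, stack, seen)
    stack.append((project, version))

# usage: stack = []; resolve("osmo-bts", "master", "code", stack, set())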
Actions #5

Updated by laforge over 5 years ago

On Tue, Sep 11, 2018 at 03:00:02PM +0000, osmith [REDMINE] wrote:

I have started with the implementation, in Python this time because the logic will be too complex for a shell script from what I can tell.

that's fine, of course. In the end, we care about the functionality, and not in which language it is written. Python is of course a good choice.

So far parsing the configure.ac file is working, I can get all osmocom libraries listed with PKG_CHECK_MODULES along with their version numbers.

good.

However, I have realized that this needs to work recursively: we won't have any osmocom programs installed, so when checking the versions of osmo-bts, we would not only need to build libosmocore at a certain version, but also the libraries it depends on, e.g. libosmo-abis.

We don't generally do automatic dependency discovery. Everywhere where we build osmocom code, we state
dependencies more or less explicitly. Think of the jenkins/gerrit build verification jobs. When you push a patch against osmo-bts, then the build verification script for osmo-bts contains the knowledge that you must first build libosmocore.git and libosmo-abis.git before building osmo-bts.

btw: your above example is wrong, as libosmocore doesn't depend on libosmo-abis. However,
both libosmo-abis and osmo-bts (and virtually anything else) depend on libosmocore

So I am coding roughly the following algorithm:
  • clone the program
    • figure out its dependencies
    • for each dependency, clone it and recurse
  • now all dependencies are downloaded and a stack with the build order exists
  • print the build stack
  • build one library after another from the stack, with the given version

that could be done, but I think it's too much effort. The set of osmocom programs and the
dependencies between them doesn't change too often. So having that hard-coded in the script and
just obtaining the version numbers from configure.ac would probably be sufficient.

But then, maybe the clean version you propose isn't all that much extra effort...
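For reference, the hard-coded alternative mentioned above could be as small as a static map from project to dependency repositories, with only the version numbers still read from configure.ac (a sketch under that assumption, not an agreed-on layout):

# Hypothetical sketch: the repositories a program needs are spelled out here;
# only the minimum versions would be taken from its configure.ac at runtime.
DEPENDS = {
    "libosmocore":  [],
    "libosmo-abis": ["libosmocore"],
    "osmo-bts":     ["libosmocore", "libosmo-abis"],
}

def build_order(program):
    order = []
    for dep in DEPENDS.get(program, []):
        for item in build_order(dep):
            if item not in order:
                order.append(item)
    order.append(program)
    return order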

Actions #6

Updated by laforge over 5 years ago

On Tue, Sep 11, 2018 at 09:44:42AM +0000, osmith [REDMINE] wrote:

A few questions:
  • Will there be a Jenkins job at all, or will there just be a script that gets manually executed before each release?

up for debate. I think having a jenkins job is a good idea. Whether one then just manually triggers
it, or whether one adds automatic triggers, is then a policy decision.

What is clear is that any new jenkins jobs should be specified in jenkins-job-builder and not be created
manually (see other examples in osmo-ci.git)

  • In which repository should the Docker setup that runs the script go (and its additional files, similar to this folder structure)? I would have put it in docker-playground.git, and I will start with that, but since you've mentioned osmocom-ci.git, maybe that's the desired location?
It could go either way. In docker-playground.git we currently have
  • Dockerfiles for containers we use in our own osmocom.org hosting
  • Dockerfiles for the TTCN3 test automatization

whereas osmo-ci.git contains Docker containers for build testing and jenkins related scripts + job
definitions.

I'd probably lean a bit more towards osmo-ci.git.

Actions #8

Updated by osmith over 5 years ago

  • % Done changed from 10 to 20

btw: your above example is wrong, as libosmocore doesn't depend on libosmo-abis. However,
both libosmo-abis and osmo-bts (and virtually anything else) depend on libosmocore

Right, thanks for pointing that out.

But then, maybe the clean version you propose isn't all that much extra effort...

These words sounded like you wouldn't tear my head off if I did some more experiments in the recursive direction I had already begun coding. So I've stuck to that for now, but we can of course replace that with a static list if you guys find that more elegant.

Anyway, here's the progress:
  • Proof of concept is working
  • It generates the following output:
Looking at osmo-bts:master
 * libosmocore:0.11.0
 * libosmovty:0.11.0 (part of libosmocore)
 * libosmogsm:0.11.0 (part of libosmocore)
 * libosmoctrl:0.11.0 (part of libosmocore)
 * libosmocodec:0.11.0 (part of libosmocore)
 * libosmocoding:0.11.0 (part of libosmocore)
 * libosmoabis:0.5.0 (part of libosmo-abis)
 * libosmotrau:0.5.0 (part of libosmo-abis)
Looking at libosmocore:0.11.0
Looking at libosmo-abis:0.5.0
Cloning git repo: git://git.osmocom.org/libosmo-abis
 * libosmocore:0.11.0
 * libosmovty:0.11.0 (part of libosmocore)
 * libosmogsm:0.11.0 (part of libosmocore)
Dependency graph:
 * osmo-bts:master depends: {'libosmocore': '0.11.0', 'libosmo-abis': '0.5.0'}
 * libosmocore:0.11.0 depends: {}
 * libosmo-abis:0.5.0 depends: {'libosmocore': '0.11.0'}
Build order:
 * libosmocore:0.11.0
 * libosmo-abis:0.5.0
 * osmo-bts:master
Building libosmocore:0.11.0
 + autoreconf -fi
libtoolize: putting auxiliary files in AC_CONFIG_AUX_DIR, '.'.
libtoolize: copying file './ltmain.sh'
...
+ ./configure (also followed by the output, the same goes for the ones below)
+ make
+ make check (this is probably not needed?)
+ sudo make install
+ sudo ldconfig
Building libosmo-abis:0.5.0
...
Building osmo-bts:master
...
  CC       rsl.o
rsl.c: In function ‘encr_info2lchan’:
rsl.c:900:43: error: ‘gsm0808_chosen_enc_alg_names’ undeclared (first use in this function); did you mean ‘gsm48_chan_mode_names’?
  const char *ciph_name = get_value_string(gsm0808_chosen_enc_alg_names, *val);
                                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                           gsm48_chan_mode_names
rsl.c:900:43: note: each undeclared identifier is reported only once for each function it appears in
make[3]: *** [Makefile:429: rsl.o] Error 1

So it looks like it already finds one of those errors that it is supposed to catch :)

The code is on Gerrit already, but still heavily WIP (missing lots of comments, hardcoded paths, ...). I've learned from #osmocom that I can also push code to a private branch, so I'll use that feature in the future when the code is not ready for review yet.

One more question came up:

Some git repositories hold multiple libraries, like libosmocore. Should we make sure that all references in one configure.ac to one git repository use the same tag? Otherwise we can't really test it with this approach.
For example: what if osmo-bts depended on libosmovty:0.1 and libosmogsm:0.2?
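One way to enforce that, assuming a repo_of() helper that maps a library to its source repository (e.g. libosmovty -> libosmocore), is sketched here:

# Hypothetical sketch: all libraries coming from the same git repository must be
# requested at the same version, since only one tag can be checked out per repo.
def check_same_repo_versions(depends, repo_of):
    versions_by_repo = {}
    for library, version in depends.items():
        versions_by_repo.setdefault(repo_of(library), set()).add(version)
    for repo, versions in versions_by_repo.items():
        if len(versions) > 1:
            raise RuntimeError("conflicting versions requested for %s: %s"
                               % (repo, ", ".join(sorted(versions))))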

Actions #9

Updated by osmith over 5 years ago

  • % Done changed from 20 to 40

Pushed my current WIP version to osmith/dependency-check of osmo-ci.git.

I'm making good progress. Unless anything else comes up, it should be ready for review some time on Monday.

Quick rundown of what I did today:
  • figured out why the script couldn't parse osmo-msc properly and fixed it
  • the dependency parsing is pretty solid now, tested and working with: osmo-bts, osmo-pcu, osmo-hlr, osmo-mgw, osmo-msc, osmo-bsc, osmo-sgsn, osmo-ggsn
  • using docker is almost working. The only problem is that sudo is not installed in the container, and I figured that instead of calling sudo make install, it would be better not to install these test builds to /usr/local/... anyway, but rather to a temporary folder somewhere (similar to what neels' osmo-dev scripts are doing)
  • code refactoring
  • use Python's argparse module to parse the arguments. The help output is as follows (a small argparse sketch follows below the help output). That should make it pretty user-friendly for the case where it may not run in CI (and it also makes it much easier to develop; I can just disable building by not passing -b, for example)
    usage: dependency-check.py [-h] [-g GITDIR] [-b] [-d] [-j JOBS] program
    
    This script verifies that Osmocom programs really build with the dependency
    versions they claim to support in configure.ac. In order to do that, it clones
    the dependency repositories if they don't exist in gitdir already, and checks
    out the minimum version tag. This happens recursively for their dependencies
    as well.
    
    positional arguments:
      program               which Osmocom program to look at (e.g. 'osmo-bts')
    
    optional arguments:
      -h, --help            show this help message and exit
      -g GITDIR, --gitdir GITDIR
                            folder to which the sources will be cloned (default:
                            /home/user/code)
      -b, --build           don't only parse the dependencies, but also try to
                            build the program
      -d, --docker          run the test in a fresh 'debian-stretch-build'
                            instance
      -j JOBS, --jobs JOBS  parallel build jobs (for make)
    
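For illustration, the argparse wiring behind that help output could look roughly like this (a sketch; defaults and exact help strings are assumptions):

import argparse
import os

def parse_args():
    parser = argparse.ArgumentParser(
        description="This script verifies that Osmocom programs really build "
                    "with the dependency versions they claim to support in "
                    "configure.ac.")
    parser.add_argument("program",
                        help="which Osmocom program to look at (e.g. 'osmo-bts')")
    parser.add_argument("-g", "--gitdir", default=os.path.expanduser("~/code"),
                        help="folder to which the sources will be cloned")
    parser.add_argument("-b", "--build", action="store_true",
                        help="don't only parse the dependencies, but also try "
                             "to build the program")
    parser.add_argument("-d", "--docker", action="store_true",
                        help="run the test in a fresh 'debian-stretch-build' "
                             "instance")
    parser.add_argument("-j", "--jobs", type=int, default=1,  # default assumed
                        help="parallel build jobs (for make)")
    return parser.parse_args()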

Another edge case I have been wondering about:

let's say osmo-msc:master depends on:
  • libosmocore 0.10.0
  • libosmo-abis 0.5.0
and libosmo-abis:0.5.0 depends on:
  • libosmocore 0.11.0

I think we need to throw an error in that case: osmo-msc must then also depend on at least libosmocore 0.11.0. Does this make sense?
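Sketching that check in Python (assuming dotted integer version strings and dependency dictionaries like the ones printed above):

# Hypothetical sketch: a dependency must not require a newer version of a
# library than the top-level program itself declares in its configure.ac.
def parse_version(version):
    return tuple(int(part) for part in version.split("."))

def check_conflicts(program, program_depends, dependency, dependency_depends):
    for library, version in dependency_depends.items():
        wanted = program_depends.get(library)
        if wanted is not None and parse_version(wanted) < parse_version(version):
            raise RuntimeError("%s needs %s >= %s, but %s only asks for %s"
                               % (dependency, library, version, program, wanted))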

Actions #10

Updated by osmith over 5 years ago

  • % Done changed from 40 to 70

I've done some more fixes and polished the code. It is ready for review and testing now:
https://gerrit.osmocom.org/#/c/osmo-ci/+/10932/

Here's the updated help output:

usage: osmo-depcheck.py [-h] [-g GITDIR | -d] [-b] [-j JOBS] program

This script verifies that Osmocom programs really build with the dependency
versions they claim to support in configure.ac. In order to do that, it clones
the dependency repositories if they don't exist in gitdir already, and checks
out the minimum version tag. This happens recursively for their dependencies
as well.

positional arguments:
  program               which Osmocom program to look at (e.g. 'osmo-bts')

optional arguments:
  -h, --help            show this help message and exit
  -g GITDIR, --gitdir GITDIR
                        folder to which the sources will be cloned (default:
                        /home/user/code)
  -d, --docker          run the test in a fresh 'debian-stretch-build'
                        instance
  -b, --build           don't only parse the dependencies, but also try to
                        build the program
  -j JOBS, --jobs JOBS  parallel build jobs (for make)

Actions #11

Updated by osmith over 5 years ago

  • % Done changed from 70 to 80

I had a misconception about the Docker containers, because I thought they were always present on (at least some) Jenkins bots. That's why there is the -d switch, which executes the script in a debian-stretch-build container.

After asking in #osmocom, pespin commented:

as far as I know they are built in steps, and each step can be cached, so if that step didn't change since last built, it doesn't need to be run

So it would only be possible to use the Docker containers if the Jenkins script used multiple build steps. The only benefit of the Docker containers would have been that the build dependencies are already present, and that the source code checkouts are not in the host's file system. So with all that considered, running it in docker didn't seem worth it from my perspective.

Instead I've set the --gitdir (the folder where the sources will be checked out to) relative to the current Jenkins workspace, which will be cleared anyway. The configure prefixes are already in temporary folders that will be deleted when the script exits (even if it fails, using atexit).

After spending the whole day figuring out how the Jenkins stuff works, the jobs exist now. All of them are reporting errors so far; some are valid, and some are not. I'll look into this more tomorrow.

(I need to use --with-systemdsystemunitdir=$prefix/lib/systemd/system/ for example.)

Actions #12

Updated by laforge over 5 years ago

thanks for the update.

I think one of the larger questions now is whether we would actually run this job on
the "master" branch, as per our workflow we know that the version requirements are
broken until just immediately before we tag a release.

Example:
As we introduce new ABI or API quite frequently, we cannot bump the package version
of a library all the time with each of those commits. This in turn means that the
applications cannot update their dependencies until such new versions are tagged,
etc.

It would be best for pespin to comment on this, as he's taken on the hat of being
in charge of doing most of the actual release work. I guess it's up to him to
decide when he would run such a job, and on which branch/source.

In any case, in terms of the output of the osmo-depcheck script, I like it.

Copying from one of the build runs for osmo-bsc

Looking at osmo-bsc:master
Cloning git repo: git://git.osmocom.org/osmo-bsc
 * libosmocore:0.12.0
 * libosmovty:0.12.0 (part of libosmocore)
 * libosmoctrl:0.12.0 (part of libosmocore)
 * libosmogsm:0.12.0 (part of libosmocore)
 * libosmoabis:0.5.1 (part of libosmo-abis)
 * libosmo-netif:0.3.0
 * libosmo-sigtran:0.10.0 (part of libosmo-sccp)
 * libosmo-sccp:0.10.0
 * libosmo-mgcp-client:1.4.0 (part of osmo-mgw)
Looking at libosmo-abis:0.5.1
Cloning git repo: git://git.osmocom.org/libosmo-abis
 * libosmocore:0.12.0
 * libosmovty:0.12.0 (part of libosmocore)
 * libosmogsm:0.12.0 (part of libosmocore)
Looking at libosmocore:0.12.0
Cloning git repo: git://git.osmocom.org/libosmocore
Looking at osmo-mgw:1.4.0
Cloning git repo: git://git.osmocom.org/osmo-mgw
 * libosmocore:0.12.0
 * libosmogsm:0.12.0 (part of libosmocore)
 * libosmovty:0.12.0 (part of libosmocore)
 * libosmo-netif:0.3.0
Looking at libosmo-netif:0.3.0
Cloning git repo: git://git.osmocom.org/libosmo-netif
 * libosmocore:0.12.0
 * libosmogsm:0.12.0 (part of libosmocore)
 * libosmoabis:0.5.1 (part of libosmo-abis)
Looking at libosmo-sccp:0.10.0
Cloning git repo: git://git.osmocom.org/libosmo-sccp
 * libosmocore:0.12.0
 * libosmovty:0.12.0 (part of libosmocore)
 * libosmogsm:0.12.0 (part of libosmocore)
 * libosmo-netif:0.3.0
---
Dependency graph:
 * osmo-bsc:master depends: {'libosmo-abis': '0.5.1', 'libosmocore': '0.12.0', 'osmo-mgw': '1.4.0', 'libosmo-netif': '0.3.0', 'libosmo-sccp': '0.10.0'}
 * libosmo-abis:0.5.1 depends: {'libosmocore': '0.12.0'}
 * libosmocore:0.12.0 depends: {}
 * osmo-mgw:1.4.0 depends: {'libosmocore': '0.12.0', 'libosmo-netif': '0.3.0'}
 * libosmo-netif:0.3.0 depends: {'libosmo-abis': '0.5.1', 'libosmocore': '0.12.0'}
 * libosmo-sccp:0.10.0 depends: {'libosmocore': '0.12.0', 'libosmo-netif': '0.3.0'}
---
Build order:
 * libosmocore:0.12.0
 * libosmo-abis:0.5.1
 * libosmo-netif:0.3.0
 * osmo-mgw:1.4.0
 * libosmo-sccp:0.10.0
 * osmo-bsc:master
---
Actions #13

Updated by osmith over 5 years ago

So there were two bugs that caused failures for the wrong reason, but now all the Jenkins jobs are working as expected! \o/

pespin gave some feedback in #osmocom:

pespin_: from what I read in the ticket, I probably miss possibility to change the git_url from where projects are cloned
pespin_: this way while I'm releasing stuff (and I have some projects with tags only locally, not yet pushed) I can clone from a local path
...
pespin_: osmith, latest master requires latest master of dependencies, until a release is done and then dependencies are figured out by checking how API/ABIs changed
pespin_: osmith, otherwise we'd be taking care of this kind of stuff every day for several hours with each change ;)
osmith: pespin_: yeah, LaF0rge mentioned this also in the issue and asked for your opinion on this. maybe it makes sense to trigger the job only manually right before the release? https://osmocom.org/issues/2642
pespin_: osmith, yes, and I think having only 1 jenkins job is enough
pespin_: with a parameter "PROJECT" or similar
pespin_: and you can run it with PROJECT=osmo-bsc for instance
pespin_: and if PROJECT is empty (default), run it for all projects in a default list
pespin_: and perhaps also another parameter REF where you can pass a git ref to use for that project
pespin_: osmith, most interesting for me is the ability to clone from a defined git_url

pespin also mentioned a new idea, but from the way laforge answered I'm not sure if this should be implemented:

pespin_: another feature which would be interesting: detect newest release number for each project and then print fat WARNING messages if another project asks for an older version in pkgcfg (so I can check that, but it can still be OK using an older version)
LaF0rge: pespin_: why would that last point be something to investigate?
LaF0rge: pespin_: the requirement should always be the oldest still-working version of the dependency
pespin_: LaF0rge, indeed, but it may provide me a list of deps for which I need to bump the PKGCFG version in case there was some API change or similar. Or we should only care in case compilation fails in this case?
pespin_: LaF0rge: that warning stuff doesn't need to be enabled by default, it can be enabled through a cmd arg

Actions #14

Updated by osmith over 5 years ago

  • Checklist item make it easy to change the git url (jenkins job and command line) added
  • Checklist item merge all CI jobs into one added
  • Checklist item allow specifying the git revision added
  • Checklist item adjust code to gerrit review from neels added
  • Checklist item add a project parameter (or similar) to the CI job (default: all projects) added
  • Checklist item update code in gerrit review added
Actions #15

Updated by laforge over 5 years ago

I'll defer to what pespin needs. He's likely going to use it...

Actions #16

Updated by osmith over 5 years ago

  • Checklist item WARNING messages if another project asks for an older version added
Actions #17

Updated by osmith over 5 years ago

  • Checklist item allow the job to be only triggered manually added
  • Checklist item merge all CI jobs into one set to Done
  • Checklist item allow specifying the git revision set to Done
  • Checklist item add a project parameter (or similar) to the CI job (default: all projects) set to Done
Actions #18

Updated by osmith over 5 years ago

Now that there's only one job, it was weird to have it named "depcheck". I've recreated it as "Osmocom-depcheck", so it fits the naming scheme of the existing jobs better.

https://jenkins.osmocom.org/jenkins/job/Osmocom-depcheck/

I'm working through the remaining code change TODOs, ending with what neels pointed out in the review, and then I'll submit it for review again.

Actions #19

Updated by osmith over 5 years ago

  • Checklist item make it easy to change the git url (jenkins job and command line) set to Done
Actions #20

Updated by osmith over 5 years ago

  • Checklist item WARNING messages if another project asks for an older version set to Done

Specifying --old on the command line, or checking the PRINT_OLD_DEPENDS checkbox in the Jenkins job, reports old versions now:

Dependencies on old releases:
 * osmo-hlr:0.2.1 -> libosmocore:0.11.0 (latest: 0.12.0)
 * osmo-hlr:0.2.1 -> libosmo-abis:0.5.0 (latest: 0.5.1)
 * libosmo-abis:0.5.0 -> libosmocore:0.11.0 (latest: 0.12.0)
Actions #21

Updated by osmith over 5 years ago

  • Checklist item adjust code to gerrit review from neels set to Done
  • Checklist item update code in gerrit review set to Done
  • % Done changed from 80 to 90
Actions #22

Updated by osmith over 5 years ago

  • Checklist item allow the job to be only triggered manually set to Done
Actions #23

Updated by neels over 5 years ago

  • Related to Bug #3578: osmo-trx-uhd, osmo-trx-usrp1 lack a --version cmdline option added
Actions #24

Updated by neels over 5 years ago

  • Related to Bug #3577: osmo-sip-connector lacks a --version cmdline option added
Actions #25

Updated by neels over 5 years ago

  • Related to Bug #3576: osmo-gtphub lacks a --version option added
Actions #26

Updated by pespin over 5 years ago

Some stuff which you could improve:

When trying to fetch a tag that doesn't exist:

Looking at osmo-hlr:0.4.1
error: pathspec '0.4.1' did not match any file(s) known to git
Traceback (most recent call last):
  File "./osmo-depcheck.py", line 101, in <module>
    main()
  File "./osmo-depcheck.py", line 77, in main
    depends = dependencies.generate(args.gitdir, args.prefix, project, rev)
  File "/home/pespin/dev/sysmocom/git/osmo-ci/scripts/osmo-depcheck/dependencies.py", line 68, in generate
    git_clone(gitdir, prefix, program, version)
  File "/home/pespin/dev/sysmocom/git/osmo-ci/scripts/osmo-depcheck/dependencies.py", line 35, in git_clone
    version, "-q"], check=True)
  File "/usr/lib/python3.7/subprocess.py", line 468, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['git', '-C', '/home/pespin/dev/sysmocom/git/osmo-ci/scripts/osmo-depcheck/code/osmo-hlr', 'checkout', '0.4.1', '-q']' returned non-zero exit status 1.

Probably we want to check if the tag exists before trying to check it out, and if it doesn't exist, print and exit with a meaningful error.

I also noticed that it doesn't pull in new tags (git fetch --tags) before re-running the command. So for instance if I check out the repo, then create the tag and rerun osmo-depcheck, the new tag is still not found because it hasn't been fetched. As a result, you need to wipe your code/ subdir every time. I don't know if that's expected, but it's annoying if I need to use this each time while creating tags for releases (which are not yet pushed, only available through -u /path/to/my/local/git/repos/).
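Both points could be handled along these lines (a sketch using plain subprocess calls; the function name is an assumption, not the current code):

import subprocess

# Hypothetical sketch: refresh the tags first, verify the requested tag exists,
# and only then check it out, so a missing tag yields a clear error message.
def checkout_version(repo_path, version):
    subprocess.run(["git", "-C", repo_path, "fetch", "--tags", "-q"], check=True)
    tags = subprocess.run(["git", "-C", repo_path, "tag", "-l", version],
                          check=True, stdout=subprocess.PIPE).stdout.decode()
    if version not in tags.split():
        raise SystemExit("ERROR: no tag '%s' in %s" % (version, repo_path))
    subprocess.run(["git", "-C", repo_path, "checkout", version, "-q"], check=True)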

Next thing: I see it installs stuff to /tmp. I recall reading somewhere you were installing somewhere in jenkins workspace? IMHO you should do that. There should be --install-prefix or similar arg, which by default should be "code/install" (as the git repos are checked out into code/. They could be checked out in code/git for cleaner file structure). Then, I think you are building stuff in the git repo dir itself, probably good to build in a separate directory (code/build/$reponame?), since that's what we usually do and want to support, and we want to catch if building in a different builddir is broken.

Other than that, everything looks fine for me, good job!

Actions #27

Updated by osmith over 5 years ago

pespin wrote:

Probably we want to check if the tag exists before trying to checkout, and if doesn't exist, print and exit with a meaningful error.

I've updated the code to catch the error from git and print a meaningful error instead of the Python stack trace. Now it looks like this:

$ ./osmo-depcheck.py osmo-hlr:0.4.1
Looking at osmo-hlr:0.4.1
Fetching tags...
error: pathspec '0.4.1' did not match any file(s) known to git.
ERROR: git checkout failed! Invalid version specified?
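Presumably the friendlier message comes from catching the subprocess error, roughly like this (an assumption about the implementation; repo path and version are example values):

import subprocess
import sys

repo_path = "code/osmo-hlr"  # example values, not taken from the actual script
version = "0.4.1"

# Hypothetical sketch: turn the CalledProcessError from a failed checkout into
# the short ERROR line shown above instead of a Python stack trace.
try:
    subprocess.run(["git", "-C", repo_path, "checkout", version, "-q"], check=True)
except subprocess.CalledProcessError:
    print("ERROR: git checkout failed! Invalid version specified?")
    sys.exit(1)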

I also noticed that it doesn't pull in new tags (git fetch --tags) before re-running the command.

Fixed.

Next thing: I see it installs stuff to /tmp. I recall reading somewhere you were installing somewhere in jenkins workspace?

When running with jenkins, the source code gets downloaded to the jenkins workspace (using the --gitdir argument). The installation folder is a random temp folder, but it does get deleted when the script either runs through or crashes with an exception:

# create a unique temporary install prefix and make sure it gets removed
# again when the script exits (atexit also runs on unhandled exceptions)
ret = tempfile.mkdtemp(prefix="depcheck_")
atexit.register(shutil.rmtree, ret)
print("Temporary install folder: " + ret)
return ret

There should be --install-prefix or similar arg, which by default should be "code/install" (as the git repos are checked out into code/. They could be checked out in code/git for cleaner file structure).

Sure, I can implement it like that. Do you still want to have this considering that the folders do get deleted properly as described above?

Then, I think you are building stuff in the git repo dir itself, probably good to build in a separate directory (code/build/$reponame?), since that's what we usually do and want to support, and we want to catch if building in a different builddir is broken.

How do you do that, with a flag for configure or autoconf?

Other than that, everything looks fine for me, good job!

Thank you for the detailed feedback!


Current WIP version of the code is in osmith/dependency-check.

Actions #28

Updated by pespin over 5 years ago

osmith wrote:

When running with jenkins, the source code gets downloaded to the jenkins workspace (using the --gitdir argument). The installation folder is a random temp folder, but it does get deleted for sure when the script either ran through or even if the Python script crashed with an exception:
Sure, I can implement it like that. Do you still want to have this considering that the folders do get deleted properly as described above?

That directory is for sure not deleted if you kill -9 the process, which jenkins is known to do in some cases (like when you stop the job). In any case, it may be fine for our machines, but I still think it's not really nice for users to get their /tmp ramfs filled with megabytes of binary files without asking them or providing them with the possibility to use some other path.

Then, I think you are building stuff in the git repo dir itself, probably good to build in a separate directory (code/build/$reponame?), since that's what we usually do and want to support, and we want to catch if building in a different builddir is broken.

How do you do that, with a flag for configure or autoconf?

Something like this. So basically it depends on where you run ./configure from:

cd $prefix/code/git/libosmocore
autoreconf -fi
mkdir -p $prefix/code/build/libosmocore
cd $prefix/code/build/libosmocore
$prefix/code/git/libosmocore/configure --prefix="$prefix/code/install/" 
make clean
make
make install

Actions #29

Updated by neels over 5 years ago

It would be better to place all tmp folders inside the workspace, because /tmp might in fact fill up with aborted jobs.

pespin wrote:

That directory is for sure not deleted if you kill -9 the prcess, which jenkins is known to do in some cases (like when you stop the job). In any case, it may be fine for our machines, but I still think it's not really nice for users to get their /tmp ramfs filled with megabytes of binary files without asking them or providing them with the possibility to use some other path.

We are not considering anyone else to use this besides us. There are no "users".

Then, I think you are building stuff in the git repo dir itself, probably good to build in a separate directory (code/build/$reponame?), since that's what we usually do and want to support, and we want to catch if building in a different builddir is broken.

How do you do that, with a flag for configure or autoconf?

Now, let's hold the feature creep. It's totally fine to build in the git dir for this!
Even when building in a different dir, the source dir gets "contaminated" with ./configure and .version files, etc.
So you need a 'git clean -dxf' anyway.

Actions #30

Updated by osmith over 5 years ago

meanwhile in IRC:

osmith: pespin_: what would you say if I removed --gitdir, and added --workdir as parameter? the workdir would then have subfolders called "git", "build" and "install". so you would only configure one folder (to make it less complex), but this one holds all the data
pespin_: osmith, fine for me


neels wrote:

Now, let's hold the feature creep. It's totally fine to build in the git dir for this!
Even when building in a different dir, the source dir gets "contaminated" with ./configure and .version files, etc.
So you need a 'git clean -dxf' anyway.

Well, "catch if building in a different builddir is broken" sounds like a valid use case to me, and the way pespin described it, basically only the build commands would need to be changed there. I would not call that feature creep (as long as we keep everything in one "workdir" as described above).

Actions #31

Updated by pespin over 5 years ago

We are not considering anyone else to use this besides us. There are no "users".

I tend to consider us as people using it, so I like considering at least myself as a user, and I don't like having that kind of stuff going into my /tmp :)

Actions #32

Updated by osmith over 5 years ago

Code updated and ready for review:
https://gerrit.osmocom.org/#/c/osmo-ci/+/11053/

Actions #33

Updated by neels over 5 years ago

Well, "catch if building in a different builddir is broken" sounds like a valid use case to me

It is, but absolutely not for this issue!
If we want to test for that, it would be in the master or gerrit build jobs, or maybe completely separate ones.

This here is supposed to test whether building with old versions works, not whether some old versions fail to build in another dir.
So I'd argue that testing that here is even potentially harmful, because you're needlessly compounding several failure causes.
If it's already in, then never mind for now, but we may need to go back to a same-dir build for clarity if such a failure comes up.

Actions #34

Updated by neels over 5 years ago

pespin wrote:

We are not considering anyone else to use this besides us. There are no "users".

I tend to consider us as people using it

We're talking about a jenkins job here.

Nevertheless I agree to keep /tmp out of it.

Actions #35

Updated by osmith over 5 years ago

So I'd argue that testing that here is even potentially harmful, because you're needlessly compounding several failure causes.
If it's already in, then never mind for now, but we may need to go back to a same-dir build for clarity if such a failure comes up.

It is already in there (see the latest gerrit link posted above).

Lesson learned: better to wait for a clear decision before implementing it, and to make smaller patches, so this would be easier to revert. However, if we do need to change it, it should be easy to do so (just change the function with the configure, make, make install calls; it's all in one place).

(Current status of this issue is: waiting for code review)

Actions #36

Updated by osmith over 5 years ago

  • Status changed from In Progress to Resolved
  • % Done changed from 90 to 100