Bug #3135 (closed)

ttcn3-bts-test: timeout on missing osmo-bsc

Added by neels almost 6 years ago. Updated over 4 years ago.

Status: Rejected
Priority: Normal
Assignee:
Category: -
Target version: -
Start date: 04/04/2018
Due date:
% Done: 0%
Spec Reference:
Tags:

Description

Recently, we managed to break our osmo-bsc-main docker container with a broken config file.
The result was that the BTS_Test got stuck upon starting the first test, never timing out and never reporting an error.
Make sure that the test suite notices that something is wrong within a sensible timeout when osmo-bsc.cfg is broken.
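
A minimal sketch of the intended guard in TTCN-3, assuming a timer around the initial bring-up; the port and event names below are illustrative placeholders, not the actual osmo-ttcn3-hacks definitions:

  // Hypothetical sketch: guard the initial IPA/OML bring-up with a timer so a
  // broken or missing osmo-bsc yields a fail verdict instead of a hang.
  // IPA_PT and ASP_IPA_EVENT_UP are placeholder names.
  timer T_guard := 30.0;
  T_guard.start;
  alt {
    [] IPA_PT.receive(ASP_IPA_EVENT_UP) {
      setverdict(pass);
    }
    [] T_guard.timeout {
      setverdict(fail, "Timeout waiting for ASP_IPA_EVENT_UP");
      mtc.stop;
    }
  }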

Actions #1

Updated by laforge almost 6 years ago

  • Project changed from Cellular Network Infrastructure to OsmoBTS
  • Assignee set to 4368
Actions #2

Updated by laforge almost 6 years ago

  • Tags set to TTCN3
Actions #3

Updated by laforge over 5 years ago

  • Assignee changed from 4368 to stsp
Actions #4

Updated by stsp over 5 years ago

This is likely a duplicate of issue #3149.

Actions #5

Updated by stsp over 5 years ago

With current osmo-ttcn3-hacks master, the BTS tests do not hang forever if osmo-bsc is missing:

MC2> MTC@fintan: Test case TC_chan_act_stress started.
MTC@fintan: Test case TC_chan_act_stress finished. Verdict: fail reason: Timeout waiting for ASP_IPA_EVENT_UP
MTC@fintan: Test case TC_chan_act_react started.
MTC@fintan: Test case TC_chan_act_react finished. Verdict: fail reason: Timeout waiting for ASP_IPA_EVENT_UP
MTC@fintan: Test case TC_chan_deact_not_active started.

Could this problem be specific to the docker setup? Can it still be provoked there?

Actions #6

Updated by stsp over 5 years ago

I don't have ready access to a Docker setup which I could use to quickly check if this is still a problem.
Could someone else check this on my behalf or should I invest time in building my own dockerized setup and try to reproduce?

Actions #7

Updated by stsp over 5 years ago

  • Status changed from New to In Progress
Actions #8

Updated by laforge over 5 years ago

On Tue, Aug 07, 2018 at 10:20:38AM +0000, stsp [REDMINE] wrote:

I don't have ready access to a Docker setup which I could use to quickly check if this is still a problem.
Could someone else check this on my behalf or should I invest time in building my own dockerized setup and try to reproduce?

The setup should be rather trivial and straightforward. You will have to build a handful of docker images
by running their respective "make" commands in docker-playground, and then use ./jenkins.sh to execute the test
suite. I think it can be expected of everyone on the team to do this.
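
For reference, a rough sketch of that sequence; the subdirectory names are assumptions based on this thread, so check docker-playground.git for the exact targets:

  # Rough sketch, assuming a docker-playground.git checkout; the directory
  # names (osmo-bsc-master, osmo-bts-master, ttcn3-bts-test) are assumptions.
  cd docker-playground
  make -C osmo-bsc-master       # build the osmo-bsc container image
  make -C osmo-bts-master       # build the osmo-bts container image
  make -C ttcn3-bts-test        # build the test suite container image
  cd ttcn3-bts-test
  ./jenkins.sh                  # execute the BTS test suite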

Actions #9

Updated by fixeria over 5 years ago

  • Status changed from In Progress to Feedback

I just upgraded to the latest source code, and tested two things (separately):

  • IPA unit-id mismatch: 1234 vs 1200,
  • OML remote-ip mismatch: 172.18.9.11 vs 172.18.9.100.

In both cases, all tests (excluding TC_lapdm_selftest) have been failing; no hangs were observed.
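
For context, a rough illustration of where those two settings live on the osmo-bts side; the values are taken from the list above, while the surrounding node layout is an assumption, so check the example configs:

  ! Illustrative osmo-bts.cfg fragment; node layout is an assumption.
  ! If "ipa unit-id" or "oml remote-ip" does not match what osmo-bsc expects,
  ! the A-bis link never comes up, which is exactly the condition the tests
  ! must detect within a bounded time.
  bts 0
   oml remote-ip 172.18.9.11
   ipa unit-id 1234 0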

Actions #10

Updated by laforge over 5 years ago

On Tue, Aug 07, 2018 at 12:13:42PM +0000, fixeria [REDMINE] wrote:

I just upgraded to the latest source code, and tested two things (separately):

- IPA unit-id mismatch: 1234 vs 1200,
- OML remote-ip mismatch 172.18.9.11 vs 172.18.9.100.

I'm not sure what you mean by 'mismatch'. We are executing the tests from the docker-playground.git
repository automatically via jenkins every night, and the config files contained in docker-playground.git
should definitely work.

Actions #11

Updated by laforge over 4 years ago

  • Status changed from Feedback to Rejected