
Redundancy between GbProxy

In commercial setups with redundant networks and SGSNs (through SGSN pooling), having OsmoGbProxy be a single point of failure is not desired. However, there is no official specification for providing redundancy at that level, because a Gb proxy is not covered by any specification in the first place; it simply exists.

OsmoGbProxy sits between the BSS and the SGSN and terminates the NS connections while transparently routing BSSGP messages back and forth.

To provide redundancy towards the SGSN, multiple OsmoGbProxy processes need to appear as belonging to the same NS Entity. Either the SGSN needs to have different NS-VCs configured pointing to the different GbProxies, or the GbProxy advertises (through IP-SNS) the other GbProxy as an additional endpoint. This should be entirely transparent to the SGSN.

This means:
  • NS needs to be able to announce "foreign" IP endpoints to the SGSN in SNS-CONFIG
  • NS needs to be able to disable/enable the transmission of SNS-SIZE to the SGSN at runtime
  • the SNS-CONFIG from the SGSN (listing its IP endpoints) is only received by the "master" gbproxy that has started the SNS-SIZE/CONFIG procedure
    • we will likely have to replicate that SGSN-originated SNS-CONFIG to the slave gbproxy, maybe by simply spoofing that UDP packet (and suppressing the response); at least this way we would not need to invent new parsers, etc. (see the sketch after this list)
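To illustrate the first point, here is a minimal sketch of announcing a "foreign" IP endpoint in an outbound SNS-CONFIG, assuming the IPv4 element layout of 3GPP TS 48.016. All struct and function names are hypothetical placeholders, not the existing libosmogb API:

```c
/* Sketch only: build the IPv4 element list for an outbound SNS-CONFIG,
 * including a "foreign" endpoint owned by the redundant peer gbproxy.
 * Element layout per 3GPP TS 48.016: IPv4 address, UDP port,
 * signalling weight, data weight. */
#include <stdint.h>
#include <arpa/inet.h>

struct sns_ip4_elem {
	uint32_t ip_addr;	/* network byte order */
	uint16_t udp_port;	/* network byte order */
	uint8_t sig_weight;
	uint8_t data_weight;
} __attribute__((packed));

/* Fill 'elems' with our own endpoint plus the endpoint of the redundant
 * peer gbproxy, so the SGSN sees both as one NSE. Returns element count. */
static int sns_build_elems(struct sns_ip4_elem *elems,
			   const char *local_ip, uint16_t local_port,
			   const char *peer_ip, uint16_t peer_port)
{
	elems[0].ip_addr = inet_addr(local_ip);
	elems[0].udp_port = htons(local_port);
	elems[0].sig_weight = 1;
	elems[0].data_weight = 1;

	/* The foreign endpoint: traffic to it is handled entirely by the
	 * other gbproxy process; we only announce it here. */
	elems[1].ip_addr = inet_addr(peer_ip);
	elems[1].udp_port = htons(peer_port);
	elems[1].sig_weight = 1;
	elems[1].data_weight = 1;
	return 2;
}
```

The per-endpoint weights could presumably also be used for failover, e.g. by re-announcing changed weights via SNS-CHANGEWEIGHT when one gbproxy goes down, but that is speculation beyond what is described here.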
On the BSS-side we also need to share an NSE:
  • each BSS is one NSE with multiple NS-VC (otherwise no redundancy is possible), no way to split that
  • a likely implementation would implement a 1:1 mapping of NS-VCs from BSS to SGSN side (thus also a 1:1 mapping between BSS NSE and SGSN NSE)
  • this also ensures downlink load sharing is performed inside the SGSN and gbproxy doesn't have to re-route user plane traffic
  • if one NS-VC on the BSS side fails, we block the corresponding NS-VC on the SGSN side. This causes the SGSN to send the traffic over the remaining NS-VCs, as expected (see the sketch after this list)
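A minimal sketch of this mirroring rule; all types and helpers are hypothetical placeholders rather than existing libosmogb symbols:

```c
#include <stdbool.h>
#include <stdio.h>

struct mirrored_nsvc {
	const char *name;
	struct mirrored_nsvc *peer;	/* 1:1 counterpart on the other Gb side */
	bool blocked;
};

/* Hypothetical stand-in for asking the NS layer to send NS-BLOCK/NS-UNBLOCK
 * on a given NS-VC. */
static void nsvc_set_blocked(struct mirrored_nsvc *nsvc, bool blocked)
{
	nsvc->blocked = blocked;
	printf("%s: sending %s\n", nsvc->name,
	       blocked ? "NS-BLOCK" : "NS-UNBLOCK");
}

/* Called when the liveness of a BSS-side NS-VC changes: mirror the state
 * to the mapped SGSN-side NS-VC, so the SGSN itself moves traffic to the
 * remaining NS-VCs. */
static void bss_nsvc_status_cb(struct mirrored_nsvc *bss_nsvc, bool alive)
{
	if (bss_nsvc->peer->blocked == !alive)
		return;			/* state already mirrored */
	nsvc_set_blocked(bss_nsvc->peer, !alive);
}
```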

Performing this 1:1 NSE mapping and 1:1 NS-VC mapping on the SGSN side will introduce the following externally visible changes:

  • not just one NSE per gbproxy, but one NSE per BSS-side NSE
  • one IP endpoint on the SGSN-facing gbproxy side per BSS NS-VC (one IP endpoint maps to one BSS-side NS-VC)
  • there will be multiple SGSN-side NS-VCs for each of those endpoints, as the SGSN has several IP endpoints itself
    (typically at least one EP for user traffic and one for signalling traffic); see the mapping sketch below
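The resulting object model could look roughly like this; a sketch under the assumptions above, with hypothetical structure names:

```c
#include <stdint.h>

#define MAX_SGSN_EPS 4	/* e.g. at least one signalling and one data endpoint */

struct sgsn_nsvc;	/* NS-VC towards one SGSN IP endpoint (opaque here) */

/* One SGSN-facing local endpoint; maps 1:1 to one BSS-side NS-VC. */
struct sgsn_side_ep {
	uint16_t local_udp_port;
	struct sgsn_nsvc *nsvc[MAX_SGSN_EPS];	/* one NS-VC per SGSN endpoint */
	unsigned int num_nsvc;
};

/* One SGSN-facing NSE per BSS-side NSE (the 1:1 NSE mapping). */
struct mapped_nse {
	uint16_t nsei;
	struct sgsn_side_ep *eps;	/* one entry per BSS-side NS-VC */
	unsigned int num_eps;
};
```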
Changes in OsmoGbProxy:
  • osmo-gbproxy and possibly libosmogb will need some support to give the application (gbproxy) fine-grained control over which NS-VC a given packet will go to (see the sketch after this list)
  • on BSSGP-level we need some state replication
  • per-BVC state for all BVCs needs to be replicated
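As a rough illustration of the first point, the NS layer could expose a selection hook like the following; every name here is hypothetical, this is not an existing libosmogb interface:

```c
#include <stdint.h>

struct msgb;	/* libosmocore message buffer (opaque here) */
struct nsvc;	/* one NS virtual connection (opaque here) */
struct nse;	/* one NS entity (opaque here) */

/* Hypothetical selection callback: given an outgoing message, return the
 * NS-VC it must be sent on, so gbproxy can enforce the 1:1 BSS<->SGSN
 * NS-VC mapping instead of the default per-NSE load sharing. */
typedef struct nsvc *(*nsvc_select_cb_t)(struct nse *nse, struct msgb *msg,
					 uint16_t bvci, void *priv);

/* Hypothetical registration hook in the NS layer. */
void ns_set_nsvc_select_cb(struct nse *nse, nsvc_select_cb_t cb, void *priv);
```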
State that needs to be replicated:
  • gbproxy_bvc - nse->nsei, sgsn_facing, bvci
  • gbproxy_cell - bvci, raid, cid
  • Can we just get away with ignoring the tlli/imsi cache? The worst outcome would probably be missing a RESUME-ACK, but it could also happen that our gbproxy goes down after receiving it and before replicating or forwarding the state. In that case the RESUME-ACK would be resent after a timeout and the other gbproxy can route it.
  • Not sure if it's necessary to replay all BSSGP messages (block/unblock/reset and *ack) or if it would be enough to simply set the state of the replicating gbproxy (features, locally_block, block_cause, blocked or unblocked). We should be able to ignore the "transient states" like wait_reset_ack: gbproxy would simply repeat the reset procedure.
    On the other hand this probably opens the door wide to race conditions. So maybe we simply forward all BSSGP signalling messages (BVCI 0) and await an ack from our gbproxy peer before we continue (see the sketch below)
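A minimal sketch of a replication record carrying exactly the per-BVC state listed above; the wire format and the transport between the two gbproxy processes are assumptions here, not an existing design:

```c
#include <stdint.h>
#include <stdbool.h>

/* gbproxy_bvc: nse->nsei, sgsn_facing, bvci */
struct repl_bvc {
	uint16_t nsei;
	bool sgsn_facing;
	uint16_t bvci;
};

/* gbproxy_cell: bvci, raid, cid */
struct repl_cell {
	uint16_t bvci;
	uint8_t raid[6];	/* Routing Area Identification, 6 octets */
	uint16_t cid;
};

/* Per-BVC run-time state to mirror so the standby gbproxy can take over
 * without replaying the whole BLOCK/UNBLOCK/RESET history; field names
 * follow the list above. */
struct repl_bvc_state {
	uint32_t features;
	bool locally_block;
	uint8_t block_cause;
	bool blocked;		/* blocked or unblocked */
};
```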
Files: gbproxy-redundancy.svg (14.2 KB, daniel, 02/24/2021)
