
Overview

Audio Transcoding and Video Relay

The SBC SWe Cloud inter-operates with a third-party transcoding platform called Media Resource Function (MRF) to transcode audio and relay video/T140.

Note

Only the SBC SWe Cloud on OpenStack (D-SBC) supports this feature. 


The SBC SWe Cloud supports the following functionality:

  • Relaying both audio and video streams
  • Relaying audio, video and T140 streams
  • Audio transcode through MRF and video relay
  • Audio transcode through MRF and video/T140 relay
  • Audio transcode through MRF and T140 relay
  • Audio and T.140 transcode through MRF (see T.140 and TTY Interworking Support below)

Note

The SBC SWe Cloud supports this functionality only for MRF-transcoded calls on D-SBC platforms.

T.140 and TTY Interworking Support

Prior to release 7.1.0, the SBC SWe Cloud invoked MRF only for audio streams to achieve transcoding; non-audio streams were relayed end-to-end even when the audio was sent to the MRF. Teletype (TTY) is a legacy service that encodes text characters as tones embedded in a carrier (PCMU, PCMA, or EVRC) media stream, whereas T.140 streams carry text as a separate payload.

Beginning with release 7.1.0, the SBC SWe Cloud invokes MRF for T.140 and TTY interworking to achieve transcoding. When T.140 and TTY interwork, text characters are exchanged between the T.140 stream and the tones carried in-band with the audio. T.140 pass-through scenarios are supported without any MRF interaction.

Note

This feature does not support sessions that only have a T.140 stream.

The SBC SWe Cloud does not invoke T.140 and TTY interworking when T.140 is present on both legs and has different transmission rates, or a difference in redundancy packet support.

Note

Only the SBC SWe Cloud on OpenStack (D-SBC) supports this feature. 

Figure : Audio and T.140 Transcoded

 

For T.140 and TTY interworking to succeed:

  • the offer received by the SBC must have a text stream with a valid IP and port, and the answer received by the SBC must have a text stream with port=0;
  • the audio must be transcoded;
  • the audio codec on the TTY leg must be Baudot capable (G711U, G711A, or EVRC).

Note

  • If the transcode-only flag is enabled, it applies only to the audio stream, like other packet-to-packet control configurations.
  • The T.140 stream can be passed through or transcoded based on the conditions above.

The existing t140Call flag in the PSP enables T.140 and TTY interworking for MRF transcoding. The following table outlines when T.140 and TTY can and cannot interwork.

Table : PSP Configuration

Offer Leg Route PSP (T.140 Call) | Answer Leg Route PSP (T.140 Call) | Result
Disabled                         | Disabled                          | T.140 disabled on both legs
Disabled                         | Enabled                           | T.140 disabled on both legs
Enabled                          | Disabled                          | T.140-TTY can interwork
Enabled                          | Enabled                           | T.140-TTY can interwork

Note

Interworking is needed when the leg that sends T.140 has an m=text line below its m=audio line, and the offer sent toward MRF for the other leg contains no m=text line.
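As an illustration, this detection condition can be sketched as a check over the two legs' SDP bodies (a hypothetical sketch; the function name and SDP handling are assumptions, not SBC code):

```python
# Hypothetical sketch of the condition in the note above: interworking is
# needed when the T.140 leg's SDP has an m=text line below its m=audio
# line, and the offer toward MRF for the other leg has no m=text line.

def needs_t140_tty_interworking(t140_leg_sdp, other_leg_sdp):
    lines = t140_leg_sdp.splitlines()
    audio_idx = next((i for i, l in enumerate(lines) if l.startswith("m=audio")), None)
    text_idx = next((i for i, l in enumerate(lines) if l.startswith("m=text")), None)
    text_below_audio = (audio_idx is not None and text_idx is not None
                        and text_idx > audio_idx)
    other_has_text = any(l.startswith("m=text") for l in other_leg_sdp.splitlines())
    return text_below_audio and not other_has_text

t140_leg = "v=0\nm=audio 6000 RTP/AVP 0\nm=text 6002 RTP/AVP 98"
tty_leg = "v=0\nm=audio 7000 RTP/AVP 0"
print(needs_t140_tty_interworking(t140_leg, tty_leg))  # True
```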

If T.140 and TTY interworking is not required but audio transcoding is required, the audio streams go through MRF and the T.140 streams do not go through MRF.

If the audio is pass-through and T.140 requires transcoding, the SBC does not invoke MRF and instead rejects the text stream on the offer leg (see the following call flow).

Figure : Audio pass-through

 

If T.140 and TTY interworking is required but MRF does not support interworking, the SBC rejects the T.140 stream on the leg that offers T.140.

SBC SWe Cloud Limitations

  • Audio-less calls are not supported.
  • Among non-audio streams, only video and T.140 streams are supported.

For configuration details, refer to the sections below.

To view media stream statistics, refer to Show Status Address Context - Call Status Details

Prerequisites to Invoking MRF

Note

The first three activities below cause the D-SBC to invoke MRF.

 

  1. Configure MRF Profile in S-SBC
  2. Configure Private LIF Groups in M-SBC
  3. Enable transcoding at the Packet Service Profile (refer to Packet Service Profile - CLI).
  4. Create a Path Check Profile, ARS profile, and CAC profile during the initial configuration

Note

Sonus recommends not configuring a Path Check Profile and a SIP ARS Profile on the same peer, to avoid unexpected results. As a general rule, configure the Path Check Profile on the access leg, where there is less traffic, and the ARS Profile on the peer leg, where there is continuous traffic.

Configure MRF Profile in S-SBC

This configuration example explains how to configure the MRF cluster profile in the S-SBC.

 


Whether the MRF servers are configured as an FQDN or as IP addresses is determined by the Routing Type configured in the MRF Profile.

 

Configure Private LIF Groups in M-SBC

This configuration example explains the CLI command required to configure the MRF cluster profile in M-SBC.


Create a Path Check Profile, ARS profile, and CAC Profile During Initial Configuration

 

Path Check Profile

The Path Check Profile specifies the conditions that constitute a connectivity failure, and in the event of such a failure, the conditions that constitute a connectivity recovery.

  • For more information on path check, refer to Service Profiles - Path Check Profile.
  • For more information on creating an IP Peer, refer to System Provisioning - IP Peer for GUI or Zone - IP Peer - CLI.

    Note

If using IP addresses, create a separate IP Peer for each IP address configured as an MRF IP address in the MRF cluster profile, and attach the Path Check Profile to each.

    If using an FQDN, create the IP Peer with the FQDN and attach the Path Check Profile.

ARS Profile

The Address Reachability Service (ARS) determines whether a server is reachable, blacklists a server IP address when it is unreachable, and removes the server from the blacklist state when it recovers. ARS profiles can be created to configure blacklisting and recovery algorithm variants. For more information, refer to Service Profiles - SIP ARS Profile (EMA) or SIP ARS Profile - CLI.

Create an ARS profile and attach it to the MRF trunk group configured in the cluster profile. The ARS feature handles congestion control for 503 responses.

CAC Profile

 

Invoking MRF Server

In a cluster profile, you can configure the routing type as an FQDN or a list of IP addresses.

MRF Server configured as FQDN

When the FQDN is chosen, the FQDN resolves into a list of IP addresses.

If the MRF profile is configured with an FQDN, a call is routed to the MRF server(s) as follows:

  • If mrfPort is configured as '0', the SBC performs an SRV query to fetch the port number based on priority and weight. After the SRV query, it performs an A or AAAA query to fetch the corresponding IP addresses.
  • If mrfPort is configured with a valid port number, the SBC performs only an A or AAAA query.
  • If 'No Response' is received from the MRF server, the SBC retransmits the INVITE up to six times over approximately 32 seconds. The number of retransmissions is configurable under the Trunk Group as follows. After the configured number of retransmissions, the SBC tries the next MRF server IP address available in the list.
 % set addressContext <address_context> zone <zone name> sipTrunkGroup <TG Name> signaling retryCounters invite <0-6>
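As a rough illustration of why six retransmissions span roughly 32 seconds, assuming standard RFC 3261 UDP INVITE retransmission timers (Timer A starting at T1 = 500 ms and doubling, Timer B firing at 64 × T1):

```python
# Sketch of RFC 3261 UDP INVITE retransmission timing: Timer A starts at
# T1 (500 ms) and doubles after each retransmission; Timer B (transaction
# timeout) fires at 64 * T1 = 32 seconds.
T1 = 0.5  # seconds

def retransmit_offsets(retries):
    """Offsets (in seconds) at which each INVITE retransmission is sent."""
    offsets, interval, elapsed = [], T1, 0.0
    for _ in range(retries):
        elapsed += interval
        offsets.append(elapsed)
        interval *= 2
    return offsets

print(retransmit_offsets(6))  # [0.5, 1.5, 3.5, 7.5, 15.5, 31.5]
print(64 * T1)                # 32.0
```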

The DNS crankback profile is configured so that the SBC retries other records for specified error responses received from the MRF server. If the error code matches an entry in the DNS crankback profile, the SBC retries an alternative MRF server; otherwise, the call is rejected.

Selecting an SRV RR based on priority and weight

The SRV record look-up response is as follows:

# _service._proto.name. | TTL   | class | SRV | priority | weight | port | target host
_sip._tcp.ribbon.com.   | 86400 | IN    | SRV | 10       | 60     | 5060 | bigserver.ribbon.com.
_sip._tcp.ribbon.com.   | 86400 | IN    | SRV | 10       | 20     | 5060 | mediumserver.ribbon.com.
_sip._tcp.ribbon.com.   | 86400 | IN    | SRV | 10       | 10     | 5060 | smallserver.ribbon.com.
_sip._tcp.ribbon.com.   | 86400 | IN    | SRV | 10       | 10     | 5070 | smallserver.ribbon.com.
_sip._tcp.ribbon.com.   | 86400 | IN    | SRV | 20       | 0      | 5060 | backupserver.ribbon.com.
 

Priority: Determines the precedence of use of the record's data. The SRV record with the lowest-numbered priority value is used first. If the connection to that host fails, the client falls back to other records of equal or higher priority value.

Weight: If a service has multiple SRV records with the same priority value, clients use the weight field to determine which host to use. The weight value is relevant only in relation to other weight values for the service, and only among records with the same priority value.

In the table above, the first four records share a priority of 10, so the weight field determines which host and port combination to contact. The weights sum to 100, so bigserver.ribbon.com is used for 60% of requests. The hosts mediumserver and smallserver are each used for 20% of requests, with half of the requests sent to smallserver (that is, 10% of the total) going to port 5060 and the other half to port 5070. If bigserver is unavailable, the two remaining hosts share the load equally, because each is then selected 50% of the time. If all the priority-10 servers are unavailable, the record with the next highest priority value, backupserver.ribbon.com, is chosen.

  • If the target host is ".", the SBC discards the record.
  • The SBC uses the priority and weight fields in SRV record selection:
    • The SBC attempts to contact the target host with the lowest-numbered priority it can reach.
    • If multiple records have the same priority, the SBC uses the weight field to select the target host; records with a larger weight are given a proportionately higher probability of being selected.
      • The SBC uses the "running sum" mechanism described in RFC 2782 for load balancing across SRV RRs of the same priority.
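A minimal sketch of the RFC 2782 running-sum selection, using the priority-10 records from the look-up example above (illustrative only, not SBC code):

```python
import random

# Sketch of the RFC 2782 "running sum" weighted selection among SRV
# records sharing one priority. Each record is (weight, target, port).

def select_srv(records, rng=random):
    # Per RFC 2782: order records with weight 0 first, pick a random
    # number between 0 and the sum of the weights, then take the first
    # record whose running sum is greater than or equal to it.
    ordered = sorted(records, key=lambda r: r[0] != 0)
    pick = rng.randint(0, sum(r[0] for r in ordered))
    running = 0
    for rec in ordered:
        running += rec[0]
        if running >= pick:
            return rec

# Priority-10 records from the look-up example above:
records = [(60, "bigserver.ribbon.com.", 5060),
           (20, "mediumserver.ribbon.com.", 5060),
           (10, "smallserver.ribbon.com.", 5060),
           (10, "smallserver.ribbon.com.", 5070)]
print(select_srv(records))
```

Over many selections, bigserver is chosen roughly 60% of the time, matching the weight proportions described above.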

Once a given SRV record is selected, the SBC performs an A-record lookup. If the A-record lookup fails, an AAAA-record lookup is performed.

Selecting an A/AAAA RR based on configuration

The SBC performs the A-record lookup and, if that fails, an AAAA-record lookup. The SBC handles the IPv4 and/or IPv6 addresses returned in the AAAA-record look-up response.

An example of A record look-up response is as follows:

bigserver.ribbon.com 86400 IN A 192.168.1.10

bigserver.ribbon.com 86400 IN A 192.168.1.11

An example of AAAA record look-up response is as follows:

bigserver.ribbon.com 86400 IN AAAA fe80:0:0:0:214:4fff:fe56:848d

bigserver.ribbon.com 86400 IN AAAA fd00:10:6b50:110::28

The SBC distributes the A/AAAA records based on the recordOrder configuration in the DNS Group.

% set addressContext <addressContext name> dnsGroup <dnsGroup name> server <DNS server name> recordOrder <centralized-roundrobin | priority | roundrobin>

Where:

  • recordOrder – Indicates the lookup order of local name service records associated with the specified DNS server.
  • centralized-roundrobin – (recommended) Uses the round-robin technique with respect to the whole system.
  • priority (default) – The lookup order follows the order of entries returned in the DNS response.
  • roundrobin – Shares and distributes local records among internal SBC processes in a round-robin fashion. Over a large number of calls, a fair distribution occurs across all DNS records.
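As a simple illustration of the round-robin option (a hypothetical sketch, not SBC code), successive selections cycle through the A-record addresses returned in the example above:

```python
from itertools import cycle

# Hypothetical sketch: a roundrobin record order cycles successive
# selections through the A-record addresses from the look-up response.
a_records = ["192.168.1.10", "192.168.1.11"]
next_address = cycle(a_records).__next__

picks = [next_address() for _ in range(4)]
print(picks)  # ['192.168.1.10', '192.168.1.11', '192.168.1.10', '192.168.1.11']
```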

In the case of multiple SRV RRs and multiple A/AAAA RRs, if all the IP addresses for a given A/AAAA record have been tried and are unreachable, the SBC selects the next available SRV address and retries the MRF server, using the procedure for selecting an SRV record based on priority and weight specified above.

MRF Server configured as IP

In this profile:

  1. A maximum of four IPv4/IPv6 addresses can be configured.
  2. If 'No Response' or a 504 response is received from the MRF, the SBC does not try an alternative MRF server and the call is rejected.
  3. If a 488/500/503 response is received from any MRF server, the SBC tries an alternative MRF server before rejecting the call.

If the MRF profile is configured with a list of MRF server IP addresses, a call is routed to the MRF server(s) as follows:

  • The S-SBC tries to connect to the configured MRF server IP addresses in a round-robin fashion.
  • If a failure or no response is received from an MRF server for a specific IP address, that IP address is blacklisted. While it is blacklisted, the S-SBC continuously sends OPTIONS messages to the MRF server to check whether the IP is active or inactive. Once the IP is active, the S-SBC removes the IP address from the blacklist state and tries to connect to the same IP when the next call is routed to the MRF server.
  • The S-SBC then tries the next available MRF server IP address configured in the list.
  • This process is repeated until the S-SBC either receives a SUCCESS response from one of the MRF servers or all the MRF server IP addresses in the list are exhausted.

Example: The MRF profile is configured with a list of MRF server IP addresses IP1, IP2, IP3, and IP4. For the first call, the S-SBC tries to connect to MRF server IP1. Meanwhile, the S-SBC receives the second, third, and fourth calls and connects them to the MRF servers IP2, IP3, and IP4, respectively. For the first call, the S-SBC receives a failure/no response from MRF server IP1, so it tries IP2 and connects successfully.
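The round-robin-with-blacklist behavior described above can be sketched as follows (a hypothetical illustration; the class and method names are assumptions, and the OPTIONS probing that clears the blacklist is out of scope):

```python
# Hypothetical sketch of the S-SBC selection logic described above:
# round-robin over the configured MRF IP addresses, skipping entries
# that are currently blacklisted.

class MrfSelector:
    def __init__(self, addresses):
        self.addresses = addresses
        self.blacklist = set()
        self._next = 0

    def select(self):
        """Return the next non-blacklisted address, or None if all are down."""
        for _ in range(len(self.addresses)):
            addr = self.addresses[self._next]
            self._next = (self._next + 1) % len(self.addresses)
            if addr not in self.blacklist:
                return addr
        return None

    def report_failure(self, addr):
        self.blacklist.add(addr)      # e.g. failure/no response from this server

    def report_recovered(self, addr):
        self.blacklist.discard(addr)  # e.g. OPTIONS probing succeeded again

sel = MrfSelector(["IP1", "IP2", "IP3", "IP4"])
print(sel.select())        # IP1
sel.report_failure("IP2")
print(sel.select())        # IP3 (IP2 is skipped while blacklisted)
```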

Signaling and Media Flow

Signaling and media flow for a transcoded call using the S-SBC, M-SBC, and MRF:

  • S-SBC: Provides signaling services and is responsible for allocating, activating, and managing various resources (including MRF). Configures the media flow through the M-SBC and MRF.
  • M-SBC: Provides media services. The public interface is used to communicate with peers, and the private interface is used to communicate with MRF.
  • MRF: Provides transcoding services. Configured in the private network of the SBC and uses the RFC 4117 interface to communicate with the S-SBC.

 

Figure : Signaling and Media Flow

 

