This page includes SBC SWe hardware and software requirements and recommendations.

To install and configure SBC SWe, make sure the virtual machine (VM) host meets the following recommended hardware, server platform, and software requirements.

The recommended hardware and software settings are intended to ensure optimum SBC SWe stability and performance. If the recommended settings are not used, the SBC SWe system may not behave as expected.

The SBC SWe software only runs on platforms using Intel processors. Platforms using AMD processors are not supported.

Server Hardware Requirements

Configuration | Requirement
Processor | Intel Xeon E3-1220v2 or a better Intel Xeon processor
RAM | Minimum 24 GB
Hard Disk | Minimum 500 GB
Network Interface Cards (NICs) | Minimum 4 NICs if physical NIC redundancy is not required; otherwise, 8 NICs (preferably with SR-IOV capability to support future SWe optimizations)
Ports | 1 Management port, 1 HA port, 2 Media ports

Make sure the NICs have multi-queue support, which enhances network performance by allowing the RX and TX queues to scale with the number of CPUs on multi-processor systems (a quick check is sketched below).

Only the Intel I350 Ethernet adapter is supported for configuration as a VMDirectPath I/O pass-through device.

See SBC SWe Network Listener Ports for port details.
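Multi-queue support can be confirmed from a Linux shell (for example, with ethtool -l <interface>). As a minimal illustration, the Python sketch below counts the RX/TX queue directories the Linux kernel exposes under sysfs; the interface name eth0 is an assumption, so substitute your own.

    # Count the RX/TX queues the Linux kernel exposes for a NIC.
    # A multi-queue NIC shows several rx-N/tx-N entries; a
    # single-queue NIC shows only rx-0 and tx-0.
    import os

    def count_queues(interface="eth0"):  # interface name is an assumption
        qdir = "/sys/class/net/%s/queues" % interface
        entries = os.listdir(qdir)
        return {"rx": sum(e.startswith("rx-") for e in entries),
                "tx": sum(e.startswith("tx-") for e in entries)}

    print(count_queues())  # e.g. {'rx': 8, 'tx': 8} on a multi-queue NIC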

BIOS Setting Recommendations

Sonus recommends the following BIOS settings for optimum performance:

Table: Recommended BIOS Settings for Optimum Performance

BIOS Parameter | Recommended Setting | Details
Intel VT-x (Virtualization Technology) | Enabled | Required for hardware virtualization
Intel VT-d (Directed I/O) | Enabled | If available
Intel Hyper-Threading | Enabled |
Intel Turbo Boost | Enabled |
CPU power management | Maximum Performance |
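BIOS settings are not directly readable from the operating system, but the effect of the VT-x setting is: on a Linux system, the 'vmx' flag appears in /proc/cpuinfo only when the CPU supports Intel VT-x and it is enabled in the BIOS. A minimal sketch, assuming a Linux environment:

    # Report whether Intel VT-x is exposed to the operating system.
    # The 'vmx' CPU flag is present only when VT-x is supported by
    # the processor and enabled in the BIOS.
    def vtx_enabled(path="/proc/cpuinfo"):
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "vmx" in line.split()
        return False

    print("VT-x enabled:", vtx_enabled())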

Sonus recommends the following BIOS settings for Data Direct I/O pass-through:

Table: BIOS Setting Recommendations for Data Direct I/O

BIOS Parameter | Recommended Setting | Default Value
HP Power Profile | Maximum Performance | Balanced Power and Performance
Thermal Configuration | Maximum Cooling | Optimal Cooling
HW Prefetchers | Disabled | Enabled
Adjacent Sector Prefetcher | Disabled | Enabled
Processor Power and Utilization Monitoring | Disabled | Enabled
Memory Pre-Failure Notification | Disabled | Enabled
Memory Refresh Rate | 1x Refresh | 2x Refresh
Data Direct I/O | Enabled | Disabled
SR-IOV | Enabled | Disabled
Intel® VT-d | Enabled | Disabled

Software Requirements

The following are the VMware ESXi and SBC SWe software requirements:

VMware ESXi Requirements

Software | Version | For More Information
vSphere ESXi | 5.1 or above | See the notes on customized ESXi images below
vSphere Client | 5.x or above | VMware Knowledge Base
vCenter Server | 5.1 or above | vCenter Server

Customized ESXi images for various server platforms are available from VMware and from hardware platform vendor sites:

  • A customized image ensures that all the drivers required for the network and storage controllers are available to run the ESXi server.
  • Most customized ESXi images come with customized management software for managing the server running ESXi.
  • Customized ESXi images for HP ProLiant and IBM servers are available at:

SBC SWe Requirements

For more information, see Downloading SBC SWe Software Package From SalesForce.

Sonus Recommendations for Optimum Performance

The following are the recommended VMware ESXi and SBC SWe virtual machine (VM) configurations.

General ESXi Recommendations

The following are the recommended VMware ESXi configurations:

  • Plan enough resources (RAM, CPU, NIC ports, hard disk, etc.) for all the virtual machines (VMs) that will run on the server platform, including the resources needed by ESXi itself.
  • Allocate each VM only as much virtual hardware as that VM requires. Provisioning a VM with more resources than it needs can, in some cases, reduce the performance of that VM as well as of other VMs sharing the same host.
  • Disconnect or disable in the BIOS settings any physical hardware devices (floppy devices, network interfaces, storage controllers, optical drives, USB controllers, etc.) that you will not be using. This can free interrupt and CPU resources.
  • Use virtual hardware version 8 when creating new VMs (available only on ESXi 5.x). This provides additional capabilities to VMs, such as support for up to 1 TB of RAM and up to 32 vCPUs.

ESXi Host Configuration Parameters

Use the VMware vSphere Client to configure the following ESXi host parameters on the Advanced Settings page (see the figure below) before installing the SBC SWe:

Table: ESXi Advanced Settings

ESXi Parameter | Recommended Setting | Default Value
Cpu.CoschedCrossCall | 0 | 1
Cpu.CreditAgePeriod | 500 | 3000
Cpu.HaltingIdleMsecPenalty | 0 | 100
DataMover.HardwareAcceleratedInit | 0 | 1
DataMover.HardwareAcceleratedMove | 0 | 1
Disk.SchedNumReqOutstanding | 256 | 32
Irq.BestVcpuRouting | 1 | 0
Irq.RoutingPolicy | 0 | 2
Mem.BalancePeriod | 0 | 15
Mem.SamplePeriod | 0 | 60
Mem.ShareScanGHz | 0 | 4
Mem.VMOverheadGrowthLimit | 0 | 4294967295
Misc.TimerMaxHardPeriod | 4000 | 100000
Misc.TimerMinHardPeriod | 2000 | 100
Net.AllowPT | 1 | 0
Net.MaxNetifRxQueueLen | 500 | 100
Net.MaxNetifTxQueueLen | 1000 | 500
Net.NetTxCompletionWorldlet | 0 | 1
Net.NetTxWorldlet | 0 | 2
Numa.AutoSplitVM | 0 | 1
Numa.LTermFairnessInterval | 0 | 5
Numa.MonMigEnable | 0 | 1
Numa.PageMigEnable | 0 | 1
Numa.PreferHT | 1 | 0
Numa.RebalancePeriod | 60000 | 2000
Numa.SwapInterval | 1 | 3
Numa.SwapLoadEnable | 0 | 1

Figure: ESXi Parameters
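These values can also be applied from the ESXi shell with esxcli system settings advanced set, where a vSphere parameter such as Cpu.CoschedCrossCall corresponds to the option path /Cpu/CoschedCrossCall. The Python sketch below simply prints the matching esxcli command for a handful of the parameters above; the subset is illustrative, and the dot-to-slash mapping is the only transformation assumed.

    # Emit esxcli commands for some of the recommended advanced settings.
    # A vSphere parameter "Section.Option" maps to the esxcli option
    # path "/Section/Option"; only an illustrative subset is listed.
    recommended = {
        "Cpu.CoschedCrossCall": 0,
        "Cpu.CreditAgePeriod": 500,
        "Disk.SchedNumReqOutstanding": 256,
        "Net.MaxNetifRxQueueLen": 500,
        "Numa.PreferHT": 1,
    }

    for name, value in recommended.items():
        path = "/" + name.replace(".", "/")
        print("esxcli system settings advanced set -o %s -i %d" % (path, value))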

VM Configuration Recommendations

vCPU

4-vCPU VM – use this configuration for all vNIC configurations.

8-vCPU VM – use this configuration when the NICs (PKT0/PKT1) are configured in VMDirectPath mode.

  • Configure all vCPUs on a single virtual socket.
  • Keep the Resource Allocation setting marked as 'limited', with the CPU limit set to the physical processor speed multiplied by the number of vCPUs assigned to the VM (for example, 4 vCPUs on a 3.1 GHz E3-1220v2 give a 12,400 MHz limit).

An 8-vCPU VM configuration is supported only on the E5-2690v2 processor.

Set CPU affinity using the following guidelines:

  1. HT enabled:
    1. Use '0,2,4,6' core affinity for a 4-vCPU VM.
    2. Use '0,2,4,6,8,10,12,14' core affinity for an 8-vCPU VM.
  2. Set HT sharing to 'none'.
  3. HT disabled – use '0-3' core affinity for a 4-vCPU VM (supported only on the E3-1220v2 processor).

vRAM

Keep the Resource Allocation setting marked as 'limited' (16 GB).

Virtual Hard Disk

Set the virtual hard disk size to 300 GB or more (based on how many days of CDRs, logs, and so on must be retained).

  • Use thick provisioning (eager zeroed).
  • The hard disk size cannot be changed once the SBC SWe software is installed.

vNICs

Set the number of virtual NICs to 4 (1 MGMT, 1 HA, 1 PKT0, and 1 PKT1).

  • Use only the VMXNET3 driver.
  • The RSS (Receive Side Scaling) feature of the VMXNET3 driver is essential for spraying network packets across multiple vCPUs. RSS calculates a hash value from the configured 4-tuple (source and destination IP addresses and port numbers) to determine which CPU handles each incoming packet, while maintaining packet order within a given flow.
    • For UDP packets, RSS uses only the source and destination IP addresses to calculate the hash value; the distribution of RTP packets across multiple vCPUs is therefore more uniform when media packets come from and go to different IP addresses. This is one of the important factors in overall SBC SWe performance (a toy model of this behavior is sketched at the end of this section).
  • Always use the automatic MAC address assignment option when creating vNICs.
  • Associate each vNIC with a separate vSwitch.
  • Use the ESXi NIC teaming feature to achieve redundancy at the physical NIC level.

vSwitch settings

  • Use four different vSwitches, one for each vNIC on the SBC VM. This keeps the different traffic types physically separated on the SBC.
    • Assign 1 physical NIC port (1 Gbps) to each vSwitch if physical NIC redundancy is not needed; otherwise, assign 2 physical NIC ports (in active-standby mode using the NIC teaming feature) to each vSwitch.

      The same physical NIC port cannot be associated with different vSwitches.

  • Use four different virtual network labels, each with a different VLAN or subnet.
  • Always run the active and standby SBC VMs on different physical servers.
  • Disable VM logging.
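To make the RSS behavior described under vNICs concrete, the toy sketch below models queue selection: TCP-style flows hash the full 4-tuple, while UDP flows hash only the IP address pair, so all RTP streams between the same two addresses land on the same vCPU. The crc32 hash and the queue count are stand-ins for illustration; real RSS implementations use a Toeplitz hash.

    # Toy model of RSS queue selection. For UDP, only the IP address
    # pair is hashed, so RTP streams between the same two hosts always
    # map to the same vCPU; crc32 stands in for the real Toeplitz hash.
    from zlib import crc32

    NUM_VCPUS = 4  # assumption: a 4-vCPU VM

    def rss_queue(src_ip, dst_ip, src_port, dst_port, udp=True):
        key = (src_ip, dst_ip) if udp else (src_ip, dst_ip, src_port, dst_port)
        return crc32(repr(key).encode()) % NUM_VCPUS

    # Two RTP streams between the same hosts land on the same vCPU:
    print(rss_queue("10.0.0.1", "10.0.0.2", 6000, 7000))
    print(rss_queue("10.0.0.1", "10.0.0.2", 6002, 7002))
    # A stream from a different endpoint can land elsewhere:
    print(rss_queue("10.0.0.3", "10.0.0.2", 6000, 7000))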