
SPECIAL FEATURE

10/40 Gbit Ethernet Switching Poses Unique Board Design Hurdles

High-speed board-level protocols bring some tricky design challenges. Understanding the details of the associated PCB design issues is the key to crafting solutions.

THIERRY WASTIAUX, SENIOR VICE PRESIDENT, INTERFACE CONCEPT


Defense budgets in the key military fields of high-end SIGINT, radar, EW, and search-and-track applications are constantly increasing, but the computing architectures for such systems are particularly demanding. To get the maximum benefit from the processing power of the latest generation of processors and FPGAs, system developers have to take into consideration the speed and flexibility of the interconnect technologies that link them together. In High Performance Embedded Computing (HPEC) systems, the VITA 65 OpenVPX standard has emerged as the best standard for very high speed data transmission between the different boards of a system. This data transmission is carried out over the Data Plane of the interconnect, as specified in the VITA 65 standard.

Two Protocols Dominate

The PCI Express protocol has seen constant improvement over the last couple of years, reaching the current Gen3 version thanks to the steady investments of Intel, which uses it natively on all of its processors. This protocol is particularly robust and power efficient. In its Gen3 implementation, the 128b/130b encoding limits the coding overhead, allowing a throughput of 7.88 Gbits/s per lane, close to the 8 Gbits/s raw Gen3 signaling rate. A Fat Pipe (PCIe x4) therefore reaches 31.5 Gbits/s. Today that is sufficient in most cases and can be supported by the current VPX backplane technology.
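As a quick check on these figures, here is a minimal sketch in Python (not from the original article) that reproduces the per-lane and Fat Pipe numbers from the 8 GT/s Gen3 signaling rate and the 128b/130b overhead.

```python
# Effective PCIe Gen3 throughput after 128b/130b encoding overhead.
GEN3_LINE_RATE_GBPS = 8.0          # raw signaling rate per lane (8 GT/s)
ENCODING_EFFICIENCY = 128 / 130    # 128b/130b: 128 payload bits per 130 line bits

per_lane = GEN3_LINE_RATE_GBPS * ENCODING_EFFICIENCY
fat_pipe = 4 * per_lane            # OpenVPX Fat Pipe = 4 lanes

print(f"Gen3 x1: {per_lane:.2f} Gbits/s")              # ~7.88 Gbits/s
print(f"Gen3 x4 (Fat Pipe): {fat_pipe:.2f} Gbits/s")   # ~31.5 Gbits/s
```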

All that said, there are some drawbacks. The protocol is based on a Root Complex/Endpoint architecture in which the Root Complex is the master of its PCIe domain, using software to find, identify and configure all the Endpoints. This means that communication between several processors requires a dedicated software package; Interface Concept has developed its Multiware package for this purpose. The architecture of PCIe Data Plane HPEC systems may use a central switch in 3U VPX, with the Cometh4410a for instance, whereas in 6U VPX HPEC systems a decentralized switching approach is preferred.

By design, PCIe is a point-to-point link protocol. When significant processing power is required with a high-speed Data Plane above 10 GbE, the PCIe protocol becomes more cumbersome to use, as many connections are needed between the different Root Complexes. In addition, reaching 10 Gbits/s on a single differential pair will only be possible with the PCIe Gen4 standard currently under specification. The PCIe Gen4 standard is expected to be released in 2017, and its silicon implementation will take some time. Today, 10 Gbits/s cannot be achieved on an Ultra-Thin Pipe in PCIe Gen3, and a PCIe Gen3 Fat Pipe cannot reach 40 Gbits/s.
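To make that pipe-width limitation concrete, the following sketch assumes the PCIe Gen4 signaling rate of 16 GT/s with the same 128b/130b encoding, and the OpenVPX lane counts (Ultra-Thin Pipe = 1 lane, Fat Pipe = 4 lanes); it is an illustration, not a statement about any specific product.

```python
# Compare PCIe Gen3/Gen4 effective pipe rates against 10 and 40 Gbits/s targets.
ENCODING = 128 / 130                        # 128b/130b encoding efficiency

line_rates = {"Gen3": 8.0, "Gen4": 16.0}    # raw signaling rate per lane, GT/s
pipes = {"Ultra-Thin Pipe (x1)": 1, "Fat Pipe (x4)": 4}

for gen, rate in line_rates.items():
    for pipe, lanes in pipes.items():
        effective = lanes * rate * ENCODING
        print(f"{gen} {pipe}: {effective:.2f} Gbits/s"
              f" | >=10G: {effective >= 10.0} | >=40G: {effective >= 40.0}")
```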

The IEEE 802.3-2012 standard for Ethernet, Sections 4 and 6, specifies new standards including 10GBASE-R and 40GBASE-R, along with their Physical Layer implementations for backplane communication based on the 64B/66B code: 10GBASE-KR and 40GBASE-KR4. The 64B/66B code of the Physical Coding Sublayer (PCS) allows robust error detection, and its encoding ensures that sufficient transitions are present in the PHY bit stream to make clock recovery possible at the receiver. The Physical Medium Dependent (PMD) sublayer of 10GBASE-KR allows transmission on one lane at 10.3125 Gbits/s, and the PMD sublayer of 40GBASE-KR4 allows transmission on four lanes at the same rate.
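That lane rate follows directly from the 64B/66B overhead; here is a minimal sketch (not from the article) reproducing it from the 10 Gbits/s per-lane payload rate.

```python
# Derive the 10GBASE-KR / 40GBASE-KR4 lane signaling rate from 64B/66B overhead.
PAYLOAD_RATE_GBPS = 10.0        # PCS payload rate per lane
OVERHEAD = 66 / 64              # 64B/66B: 66 line bits carry 64 payload bits

lane_rate = PAYLOAD_RATE_GBPS * OVERHEAD
print(f"Per-lane signaling rate: {lane_rate:.4f} Gbits/s")              # 10.3125
print(f"40GBASE-KR4 (4 lanes) aggregate: {4 * lane_rate:.3f} Gbits/s")  # 41.25
```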

Benefits of RDMA

So when many digital processing boards are gathered in an HPEC system, a centralized switching approach using 10GBASE-R and 40GBASE-KR4 is preferred, as it brings back simplicity. That is the reason why Interface Concept has developed the Cometh4580a (3U VPX) and Cometh4510a (6U VPX) switches. Behind this central Ethernet switching strategy is the assumption of the forthcoming adoption of Remote Direct Memory Access (RDMA) in HPEC systems. In computing, RDMA is direct memory access from the memory of one processor into that of another without involving either one's operating system. This allows high-throughput, low-latency networking, which is especially useful in massively parallel computer clusters.

New standards such as iWARP enable RDMA over Ethernet using TCP/IP as the transport, combining the performance and latency advantages of RDMA with a low-cost, standards-based solution.

New Signal Integrity and Thermal Design Methodologies for Fast Data Plane Implementation

To reach the required speed on the differential pairs supporting the high data rate protocols, whether PCIe Gen3/Gen4 or 10GBASE-R and 40GBASE-KR4, hardware engineers face a number of signal integrity challenges. The table in Figure 1 lists the most important ones.

Figure 1
High data rate protocols create some challenges in terms of signal integrity. The table shows the most important ones.

To help designers overcome these difficulties, Interface Concept uses the W2211BP version of Keysight ADS dedicated to high-speed signal analysis, ensuring that designs stay within the limits allowed by the VITA 68 standard in terms of attenuation, reflection and crosstalk. The data supplied by component manufacturers are typically provided in IBIS-AMI or HSPICE format.

Signal Integrity Analysis

In a first step, designers carry out a pre-layout signal integrity analysis through simulation in ADS (stackup, tracks and vias), leading to the constraints and specifications of the PCB design. These include the material, the size of the stackup, tracks and vias, anti-pads, stubs, and the spacing between tracks. In a second step, the designers perform post-layout signal integrity verification. This includes electromagnetic simulation of the designed PCB, extraction of the scattering parameters (S-parameters), verification of compliance with VITA 68, and time-domain channel simulation. Special lab equipment and high-performance oscilloscopes are required to complete this process. The backdrilling process eliminates all the unwanted via stubs, as shown in Figure 2 for a connection between layers M1 and M3 of the PCB.

Figure 2
The backdrilling process eliminates all the unwanted via stubs, as shown here for a connection between layers M1 and M3 of the PCB.
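As an illustration of the post-layout compliance step, the sketch below uses the open-source scikit-rf package to compare the insertion loss of an extracted S-parameter file against a limit line; the file name, port mapping and loss mask are hypothetical placeholders, not the actual VITA 68 limits or Interface Concept's flow.

```python
# Post-layout compliance check sketch: compare extracted channel insertion loss
# against a limit line. File name, port map and mask are illustrative only.
import numpy as np
import skrf as rf

channel = rf.Network("data_plane_channel.s4p")   # S-parameters exported from the EM solver
freq_ghz = channel.f / 1e9
insertion_loss_db = channel.s_db[:, 1, 0]        # S21: port 1 -> port 2 (example port map)

# Hypothetical piecewise-linear loss mask (dB), NOT the actual VITA 68 limits.
mask_freq_ghz = np.array([0.1, 5.0, 10.0])
mask_limit_db = np.array([-1.0, -8.0, -16.0])
limit_db = np.interp(freq_ghz, mask_freq_ghz, mask_limit_db)

violations = insertion_loss_db < limit_db        # more negative = more loss than allowed
if violations.any():
    margins = insertion_loss_db[violations] - limit_db[violations]
    worst = freq_ghz[violations][np.argmin(margins)]
    print(f"FAIL: insertion loss exceeds the mask, worst margin near {worst:.2f} GHz")
else:
    print("PASS: insertion loss stays within the mask")
```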

 

In addition to the signal integrity challenges, the mechanical solutions have to fulfil many requirements, including protection against shocks and vibration and, most importantly, operation over a wide temperature range. In particular, this means that the design must provide a low enough thermal resistance (degrees C/W) from the semiconductor junction to the ambient environment, namely the air flow in air-cooled designs and the thermal edge in conduction-cooled designs.
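As a simple illustration of that thermal resistance budget, the following lumped-model sketch estimates a junction temperature from assumed power and resistance values; all numbers are made up for illustration, and real designs rely on the CFD analysis described below.

```python
# Lumped thermal budget sketch: junction temperature from a series thermal
# resistance chain. All numeric values are illustrative, not from a real design.
def junction_temperature(power_w, theta_jc, theta_ca, ambient_c):
    """Tj = Ta + P * (theta_jc + theta_ca), resistances in degC/W."""
    return ambient_c + power_w * (theta_jc + theta_ca)

fpga_power_w = 40.0    # dissipated power (example)
theta_jc = 0.15        # junction-to-case resistance, degC/W (example)
theta_ca = 1.0         # case-to-ambient via heatsink or thermal edge, degC/W (example)
ambient_c = 71.0       # worst-case ambient / card-edge temperature (example)

tj = junction_temperature(fpga_power_w, theta_jc, theta_ca, ambient_c)
print(f"Estimated junction temperature: {tj:.1f} degC")   # 71 + 40 * 1.15 = 117 degC
```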

Simulations are used to compute the physical temperature fields and to predict the thermal path and thermal resistance. The fluid dynamics and heat equations are solved using the 6SigmaET software, leading to the best thermal solution with a low enough component temperature rise and limited weight. The thermal behavior of each design is then experimentally investigated and validated.

Board-Level Example

An example board that employed all of these design processes is the Cometh4510a, a 10/40 GbE switch that can be used in central switched, low latency RDMA 6U VPX architectures (Figure 3). The Data Plane of this OpenVPX switch is compliant with the switch profile MOD6-SWH-16U16F-12.4.5-4. It uses the latest generation of Marvell Prestera CX platforms, while the Control Plane uses the well-proven Ethernet packet processors of the Cometh434xa switch family.

Figure 3
The Cometh4510a is a 10/40 GbE switch that can be used in central switched low latency RDMA VPX 6U architectures.

It features 16 1000BASE-KX ports on the Control Plane and either 16 40GBASE-KR4 ports or 48 10GBASE-KR ports on the Data Plane, thus offering a huge switching bandwidth. A multicore PowerPC management processor running the proven Interface Concept Switchware package offers two out-of-band 1000BASE-T ports and allows traffic logs to be recorded on NAND flash. In addition, a custom mezzanine can bring either two 10GBASE-T ports at the front and two 10GBASE-KX4 ports on P6, or four 10GBASE-T ports at the front. This switch is the keystone for building multiprocessing 6U VPX HPEC systems, supporting the huge processing capacity of up to 48 DSP boards in 10 GbE or up to 16 DSP boards in 40 GbE.
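A rough aggregate of the Data Plane capacity quoted above can be computed as follows (one traffic direction only; the actual fabric capacity depends on the switch silicon).

```python
# Rough aggregate Data Plane bandwidth for the two port configurations quoted
# above (one traffic direction; actual fabric capacity depends on the silicon).
configs = {
    "16 x 40GBASE-KR4": 16 * 40.0,   # Gbits/s
    "48 x 10GBASE-KR":  48 * 10.0,   # Gbits/s
}
for name, bandwidth in configs.items():
    print(f"{name}: {bandwidth:.0f} Gbits/s aggregate")
```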

Thanks to new design methodologies and tools, designers can bring to HPEC systems the best technologies deployed in commercial High Performance Computing environments, allowing high-speed Data Plane transmission between the computing nodes.

Interface Concept
Quimper, France.
+33 (0)2 98 57 30 30.
www.interfaceconcept.com

 
