

Internet Transport Protocols Contend for Military Interconnect Role

Internet technologies face tradeoffs when interfacing with real-time systems like radar. But there's a strong argument to be made for a UDP-based architectural solution.




Large ground-based systems such as phased-array radars, software-defined radio (SDR), and satellite ground stations generate massive amounts of data processed in real time using multicomputers. With hundreds or thousands of potentially heterogeneous compute nodes intercommunicating, military systems engineers and engineering managers are accustomed to the challenge of finding the correct balance between potentially conflicting objectives. Throughput, latency, energy, standards, weight of implementation, ability to tech-refresh, and cost are just some of their concerns.

On a SWaP-constrained platform, the less-significant communication concerns thin out very quickly. In such systems a specialty interconnection network, task-suited to the platform, may be the only way to achieve demanding mission objectives. However, for ground-based, non-SWaP applications, technology evolution is well on its way to disrupting the assumption that internet transport-layer protocols cannot be used for messaging.

Outside of the particular case of SWaP-constrained systems, there are plenty of disruptive interconnect changes afoot. With that in mind, this article showcases a real-life phased-array radar implementation as an example of what the future of military interconnect fabric might look like. What is infeasible today may become practical tomorrow through iterative refinements.

Costs of Comms under Constraint

Historically, to achieve higher system performance, emphasis has been placed on the individual computational blocks that comprise a system. For example, a typical radar signal processing chain might consist of these six sequential steps: (a) Pre-Processing; (b) Coherent Processing / Slow-Time Filter; (c) Beamforming; (d) Doppler Processing; (e) Pulse Compression; and (f) CFAR Detection (Figure 1).

Figure 1
Shown here are the six sequential steps of a typical radar signal processing chain.
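The chain above can be thought of as a composable pipeline, which the sketch below makes concrete. The function names, stage bodies, and data-cube layout are illustrative assumptions, not real radar algorithms; the point is the sequential consume/produce structure.

```python
import numpy as np

# Hypothetical sketch of the six-stage chain as a composable pipeline.
# Stage bodies are placeholders standing in for the real kernels.

def pre_process(cube):      return cube                        # (a) calibration, channelization
def slow_time_filter(cube): return cube                        # (b) coherent processing
def beamform(cube):         return cube.sum(axis=0, keepdims=True)      # (c) combine channels
def doppler(cube):          return np.fft.fft(cube, axis=2)             # (d) FFT across pulses
def pulse_compress(cube):   return cube                        # (e) matched filtering
def cfar_detect(cube):      return np.abs(cube) > np.abs(cube).mean()   # (f) threshold

STAGES = [pre_process, slow_time_filter, beamform,
          doppler, pulse_compress, cfar_detect]

def run_chain(cube):
    for stage in STAGES:
        cube = stage(cube)   # each stage consumes what the prior one produced
    return cube

# Assumed cube axes: (channel, range_gate, pulse)
detections = run_chain(np.random.randn(8, 64, 32).astype(np.complex64))
```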

Great emphasis has been placed on optimizing the computational efficiency of these kernels, as the weakest link can ultimately limit system performance. This is obvious. What is more subtle is that the communication, which includes data reorganization, has often been a second-class citizen "dictated-to" by the way in which each computational step consumes and produces data.

Systems engineers in a non-SWaP-constrained space live this problem every day: not just balancing computation and communication, but also effectively managing the impact of memory-access patterns. Steps (b), (c), (d), and (e) above don't simply need to communicate; they require a transpose of the 3D data set moved between them. Meeting these challenges under tight SWaP constraints all but requires the use of lightweight, application-specific communication patterns.
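The transpose between stages, often called a "corner turn," can be illustrated with a toy NumPy sketch. The axis names and cube dimensions here are assumed purely for illustration:

```python
import numpy as np

# A radar data cube with assumed axes (channel, pulse, range_gate).
cube = np.arange(4 * 8 * 16, dtype=np.float32).reshape(4, 8, 16)

# One stage reads contiguously along one axis; the next stage reads
# contiguously along another. Between them the cube must be "corner
# turned" so the needed axis becomes the fastest-varying in memory.
turned = np.ascontiguousarray(cube.transpose(0, 2, 1))  # (channel, range_gate, pulse)

# The turn touches every element: on a multicomputer this is an
# all-to-all communication pattern, not a local pointer swap.
assert turned[1, 5, 3] == cube[1, 3, 5]
```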

Expense and Limited Reuse

The result is that significant engineering expense is invested, often with limited reuse, in these application-specific communication patterns. This situation is not unique to radar; non-SWaP examples such as SDR systems also use custom protocols to transport data between servers. As a consequence, these high-performance specialized interconnects drive manufacturers to reengineer their multicomputer interconnect fabrics at each tech refresh. This increases R&D expenses and system upgrade costs, and it slows the pace at which new products and features can be brought to market.

That's not to say you can't have a standards-based interconnect under SWaP; it's just more difficult. Our industry has watched circuit-switched standards become displaced by packet-switched standards such as RapidIO, InfiniBand, and Ethernet. At the end of the day, even well-vetted protocols like JESD204 and VITA 49 raise eyebrows if they adversely impact SWaP. Besides SWaP, there are other factors that favor a specialized, as opposed to a commoditized, interconnect. Latency is perhaps chief among them. In application domains such as EW, getting a correct answer clock cycles ahead of an adversary is of obvious value.

Internet Protocol: A Path Forward

Despite the aforementioned caveats on SWaP, and in some cases latency, there is terrific pressure to standardize on communication that can both scale up and scale out. COTS vendors are eager to translate the economics of convergence, seen across the broader Internet and in ubiquitous datacenter use cases, to domain- and mission-specialized applications. For these reasons, developers have had the Internet's Transport Layer (Layer 4, or L4) in their sights for some time. Migrating toward L4 would provide access to hierarchical scalability and redundancy (via the internet), robust multicast capabilities (via IGMP), and rich COTS component availability in silicon, switches, and software.

However, migrating the interconnect architecture toward L4 has not been a practical solution. On one hand, the Transmission Control Protocol (TCP) did not have the efficiency and throughput. On the other hand, the User Datagram Protocol (UDP) did, but was simply not reliable enough. Fast-forward to today: TCP is still fundamentally a low-throughput protocol. However, the Bit Error Rates (BER) of modern transceivers have fallen dramatically. Forward Error Correction (FEC) can reduce the BER from one error every 100 seconds to one error every three years (Figure 2). This technology evolution disrupts the legacy assumption that UDP can be summarily dismissed for multicomputer messaging. Two factors, falling point-to-point BER and Software-Defined Networking (SDN), lead the charge to make UDP stand up to the task.

Figure 2
In modern transceivers Forward Error Correction (FEC) can reduce BER from 1 error every 100 seconds to 1 error every 3 years.
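The Figure 2 claim can be sanity-checked with back-of-the-envelope arithmetic. Assuming a 25 Gbps per-lane line rate (the rate discussed later in this article), the error intervals imply the following BERs:

```python
# Infer the bit error rates implied by "1 error every 100 seconds"
# versus "1 error every 3 years," assuming a 25 Gbps line rate.
line_rate = 25e9                          # bits per second (assumed)
seconds_per_year = 365.25 * 24 * 3600

def implied_ber(seconds_between_errors):
    """One bit error per interval at line_rate implies this BER."""
    return 1.0 / (line_rate * seconds_between_errors)

pre_fec  = implied_ber(100)                   # 1 error every 100 s
post_fec = implied_ber(3 * seconds_per_year)  # 1 error every 3 years

print(f"pre-FEC BER  ~ {pre_fec:.1e}")   # ~4e-13
print(f"post-FEC BER ~ {post_fec:.1e}")  # ~4e-19
```

Roughly six orders of magnitude of improvement, which is what moves UDP from "summarily dismissed" to worth a serious look.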

Fueled in no small part by industry standards like IEEE 100 GbE and PCI Express Gen 4, chip manufacturers, especially FPGA vendors, have been pressed to make exceptional SERDES transceivers. While PCIe Gen 4 "only" presses the SERDES to 16 Gbps, the CAUI-4 flavor of 100 GbE demands 25 Gbps across each of four physical lanes. These facts on the ground have essentially made 25 Gb/lane signaling a commodity on many devices.

For chip-to-chip applications, power can be reduced while maintaining a target BER. For backplane and chassis applications, direct-attached copper remains a low-cost alternative to optics. Few things come "for free," however, and with the extended reach the SERDES require increased drive power (significant when you may have hundreds or thousands of nodes). Designers of engineered Ethernet interconnects also have the option of incorporating two different styles of Forward Error Correction (FEC) to reduce BER.

The Case for UDP

The simplicity of UDP and its stateless transmission of datagrams comes at a cost beyond the aforementioned discussion of bit error rate. UDP datagrams, unlike TCP segments, involve no inherent handshaking and therefore carry no guarantee of delivery. A datagram may arrive out of order, late, replicated, or not at all. What you can count on, owing to frame-level Frame Check Sequence (FCS) protection and IP-level checksum protection, is that when a datagram does arrive, it arrives intact; that is to say, its contents are not corrupt.
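A minimal sketch of this fire-and-forget behavior, using standard sockets over loopback (the payload is arbitrary):

```python
import socket

# A UDP datagram exchange over loopback. There is no connection, no
# handshake, and no acknowledgement; the sender has no way of knowing
# whether the datagram arrived.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))        # let the OS pick a free port
rx.settimeout(2.0)
addr = rx.getsockname()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"pulse-descriptor-words", addr)   # fire and forget

payload, sender = rx.recvfrom(2048)
# If the datagram arrives at all, FCS and checksum protection have
# ensured the payload is intact.
print(payload)
tx.close(); rx.close()
```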

It is no accident, then, that a wide range of networking services and applications are built on top of UDP. However, some of these services and applications are not satisfied by the "good enough" of a particular BER or a guaranteed Service-Level Agreement (SLA) for packet loss. This is an important point of divergence for system architects. Without UDP, for example with a lower-level Xilinx Aurora or Altera/Intel SerialLite interface, you are pretty much on your own to invent whatever protocol you need. This is a double-edged sword: a bespoke protocol can be tailored to mission requirements, but it forfeits the leverage of COTS infrastructure for switches, routers, and software.

An alternative is to use UDP, optionally with a supplemental scheme to deal with an imperfect channel. The old adage here is that once you begin this task, you are doomed to re-invent TCP. But this isn't true. There exists a universe of protocols that can be run on top of UDP to build a reliable service on top of one that is, by definition, without a delivery guarantee. The trade-space needs to be watched closely as the added gates to realize this capability will likely add latency, area, and power. But in some protocols, for example the Paxos family of consensus protocols, this is entirely the point: to find consensus across potentially unreliable nodes connected by imperfect edges (channels).
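As a sketch of how light such a supplemental scheme can be, the fragment below prefixes each datagram with a sequence number so the receiver can classify gaps, duplicates, and reordering. The framing and function names are illustrative, not any particular protocol, and it deliberately stops short of TCP-style retransmission:

```python
import struct

# Each datagram is prefixed with a 32-bit sequence number in network
# byte order. The receiver can then detect loss, replication, and
# reordering without any per-connection state on the sender.
HEADER = struct.Struct("!I")

def frame(seq, payload):
    return HEADER.pack(seq) + payload

def scan(datagrams):
    """Classify incoming datagrams by sequence number."""
    expected, events = 0, []
    for dgram in datagrams:
        (seq,) = HEADER.unpack_from(dgram)
        if seq == expected:
            events.append(("ok", seq))
            expected = seq + 1
        elif seq > expected:
            events.append(("gap", expected, seq))   # something was lost
            expected = seq + 1
        else:
            events.append(("dup_or_late", seq))     # replay or reorder
    return events

# Datagram 1 is lost in flight; datagram 0 is replayed late.
rx = [frame(0, b"a"), frame(2, b"c"), frame(0, b"a")]
print(scan(rx))  # [('ok', 0), ('gap', 1, 2), ('dup_or_late', 0)]
```

What the application does with a "gap" event (ignore, interpolate, or request retransmission) is exactly the trade-space the paragraph above describes.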

A Perfect Storm

Since 2014, Atomic Rules has been working on a UDP Offload Engine (UOE) core for FPGA and ASIC. Before discussing a specific use case in the next section, it may be insightful to understand the facts on the ground leading up to the development of this product. Back in 2010, a significant FPGA vendor was holding technical planning conference calls for its 20 nm product line and asked Atomic Rules, "What would you do with a 28 Gbps SERDES?" Our from-the-hip, on-the-phone response was, "We don't know; is there a 28 Gb Ethernet?" A few years later leading FPGA vendors announced 28 Gbit SERDES, and the 25/50 GbE Consortium was formed to fill in the blanks of the as-yet-unstandardized IEEE 802 details.

In 2014, Atomic Rules stood shoulder to shoulder with the 25/50 GbE Consortium members and knew that 25/50 GbE was on the way. Atomic Rules heard that a product from BittWare, the XUSPS3, was coming, and that it would have sixteen of these 28 Gbit SERDES on four QSFP28s facing the panel (Figure 3).

Figure 3
The XUSPS3 board serves up sixteen 28 Gbit SERDES on four QSFP28s facing the panel.

It was clear at that point that the future would include a world with up to 16 parallel, independent, full-duplex 25 GbE UDP lanes occupying a modest area on an FPGA. Even early in 2015, however, 25 GbE was considered too "early," "risky," or "aggressive." We devised a plan to under-clock the 400 MHz 25 GbE core to 156.25 MHz for operation at 10 GbE. The result is detailed in a presentation at a San Jose Xilinx event in 2015 (see online version for link). The bottom line on AR-UOE development is that Atomic Rules brought together two key concepts to address a military market need: (1) the standards-based ubiquity of UDP as a transport protocol; and (2) the 10/25/50/100 GbE capabilities of contemporary FPGA SERDES.

UDP Implementation Use Case

To illustrate the potential impact of what we have discussed so far, we can talk in broad terms about where UDP may or may not make sense. It is helpful now to return to the phased-array radar example. It is common practice to aggregate multiple channels of I/Q onto a single card. Assuming contemporary JESD204 converters, there would be little utility for UDP in that role. But just a level or two up the processing chain, where the baseband becomes Pulse Descriptor Words (PDWs) or another relatively high-throughput construct, UDP has a place for several reasons (Figure 4).

Figure 4
This use case shows a deployed UDP-based radar architecture.
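As a rough illustration of why PDWs map naturally onto datagrams, consider a hypothetical PDW layout. The field names and widths below are invented for this sketch, not any published standard:

```python
import struct

# Hypothetical Pulse Descriptor Word: a 64-bit microsecond timestamp,
# 32-bit frequency (Hz), 32-bit pulse width (ns), and a 32-bit
# amplitude, packed big-endian for the wire. Layout is illustrative.
PDW = struct.Struct("!QIII")

def pack_pdw(ts_us, freq_hz, width_ns, amplitude):
    return PDW.pack(ts_us, freq_hz, width_ns, amplitude)

payload = pack_pdw(1_700_000_000_123_456, 3_200_000_000, 1_500, 42)

# Many PDWs fit in one standard-MTU datagram:
# 1500 bytes - 20 (IP header) - 8 (UDP header) = 1472 bytes of payload.
per_datagram = 1472 // PDW.size
print(PDW.size, per_datagram)   # 20 73
```

A fixed-size, self-describing record like this needs none of TCP's stream reassembly; each datagram is independently useful, which is precisely the UDP sweet spot.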

The isochrony of the time-aligned I/Q domain can now be represented asynchronously. UDP is an excellent transport: although packets can have latency jitter, they may still be timestamped to microsecond precision. UDP datagrams can easily be multicast. That is to say, a "one-to-many" data distribution plan can easily and efficiently be created using the IGMP "join" and "leave" commands. For example, imagine that any number of nodes wish to subscribe to a particular multicast channel. The datagram-consuming UDP core simply "joins" that channel, and the connected switch or router does the replication work. UDP multicast is the backbone of how trading markets disseminate the stream of asset prices; it is reasonable to consider reusing this well-traveled technology in a radar or SIGINT domain.
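A minimal subscriber-side sketch of the join/leave mechanism, using an illustrative group address and port:

```python
import socket
import struct

# Joining a multicast channel: the IP_ADD_MEMBERSHIP setsockopt is
# what emits the IGMP "join" on the wire; the attached switch or
# router then handles replication to every subscriber. The group
# 239.1.1.1 (administratively-scoped range) and port 5000 are
# illustrative values.
GROUP, PORT = "239.1.1.1", 5000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Membership request: group address + local interface (any).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                   socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    # ...sock.recvfrom(2048) now yields datagrams sent to the group...
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)  # IGMP "leave"
except OSError:
    pass  # no multicast-capable interface in this environment
sock.close()
```

Note that the publisher needs no knowledge of the subscriber set; replication is entirely the network's job.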

UDP Here to Stay

In the end, each system architect will need to judge for themselves whether UDP is ready for their application. At the performance extremes, the case can almost always be made for a proprietary protocol. But when cost and time-to-deployment are key concerns, it is difficult to overlook how UDP, a thirty-year-old protocol, continues to thrive at 100 GbE and beyond.

Atomic Rules is currently working on a free, one-page checklist to help military systems engineers determine whether UDP would be a good fit for their application. Please send an email to if you would like to access the checklist.

Atomic Rules
Auburn, NH
(603) 483-0994

Concord, NH
(603) 226-0404