Networking Report (NWR)

Netronome 100GbE NIC Targets SDN

New FlowNIC-6xxx Offloads Open vSwitch at 200Gbps

April 28, 2014

By Bob Wheeler


Has the time come for programmable Ethernet adapters? Attendees at the Linley Tech Data Center Conference in February heard not one but three talks on intelligent PCIe NICs. All three vendors—Cavium, Netronome, and Tilera—discussed Open vSwitch (OVS) offload as an application of their respective offerings. Demand for these products is coming from both cloud-service providers and carriers as they search for ways to improve the performance of SDN and NFV implementations.

Now, Netronome has announced a new family of NICs that scale to two 100G Ethernet (100GbE) ports, extending the spectrum of intelligent adapters. When they become available in the middle of this year, the company’s FlowNIC-6xxx adapters may also be the industry’s first 100GbE NICs of any kind.

The FlowNIC-6xxx design uses the NFP-6xxx network processor, which Netronome announced nearly two years ago (see MPR 6/18/12, “Netronome Goes With the Flow”). Following the inevitable schedule slips for a device of this complexity, NFP-6xxx silicon is finally exiting Intel’s fab and making its way to the startup. The FlowNIC-6xxx family includes cards with 2x100GbE ports and 4x40GbE ports, as well as SKUs with fewer ports and significantly lower power dissipation.

In conjunction with its NIC announcement, Netronome also respun its software to embrace open-source code such as OVS as well as new SDN protocols including OpenFlow. The new FlowEnvironment software supports the company’s shipping NFE-32xx adapters in addition to the FlowNIC-6xxx family. Although it supplies both source code and production-ready binaries, Netronome says more customers are opting for the latter and programming only through APIs.

As noted above, Netronome is not alone in offering programmable Ethernet adapters. Tilera is shipping NICs based on its Tile-Gx72 processor with up to 8x10GbE ports, and it is targeting the same network-appliance designs as Netronome. By contrast, Cavium’s LiquidIO NIC occupies the low-cost end of the landscape, providing a single-port 10GbE variant aimed at high-volume applications in cloud servers.

Before the advent of OVS, programmable-NIC vendors and their customers often had to implement new features and protocols from scratch. Now, they need only port the latest OVS version to deliver advanced protocols like OpenFlow, VxLAN, and NVGRE. Thus, OVS cuts development costs and time to market for smart-NIC software.

In the new era of SDN and NFV, end customers need system architectures that evolve at the speed of software instead of the turtle’s pace of ASICs. As an ingredient of such designs, intelligent NICs could grow from a niche market to one that proves lucrative for small vendors like Netronome and Tilera.

Smart Hardware Can Be Simple

Thanks to the high level of integration afforded by Intel’s 22nm FinFET process, Netronome’s new NICs are surprisingly simple. Figure 1 shows the 2x100GbE FlowNIC design, which combines the NFP-6xxx NPU, memories, and CXP cages for the 100GbE ports. The most unusual aspect of the design is its four separate PCI Express Gen3 x8 interfaces, each of which delivers a maximum bandwidth of 64Gbps. In aggregate, they provide enough throughput to transfer 200Gbps of traffic to the host processors, although the NIC can also forward packets between its network ports.

Figure 1. Block diagram of Netronome 2x100GbE FlowNIC-6xxx. Transferring 200Gbps of traffic to the host requires an unusual PCIe configuration using cables routed to adjacent slots.

One PCIe interface links with the card’s edge connector, whereas the other three terminate in cable connectors. Three cables connect to three passive PCIe cards residing in adjacent slots. In a four-socket (4S) design such as a Xeon E5-4600 server, this arrangement enables each PCIe interface to connect to a separate processor, assuming the server routes each of those slots to a different socket. Because PCIe traffic then terminates locally at each socket, it need not traverse QPI links between sockets, reducing latency and QPI loading.
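For readers checking the bandwidth math, the estimate below assumes standard PCIe Gen3 signaling (8GT/s per lane with 128b/130b encoding) and ignores DMA and descriptor overhead; these are our back-of-the-envelope figures, not vendor-supplied numbers.

```latex
% Rough PCIe Gen3 estimate: per-link payload bandwidth and the aggregate
% across the card's four x8 interfaces (protocol overheads ignored).
\[ 8 \text{ lanes} \times 8\,\text{GT/s} \times \tfrac{128}{130} \approx 63\,\text{Gbps per direction} \]
\[ 4 \times 63\,\text{Gbps} \approx 252\,\text{Gbps raw} > 200\,\text{Gbps of offered traffic} \]
```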

FlowNIC also provides a connector for an Interlaken cable, which enables a direct connection between the data paths of two cards. For established flows, the NICs can then forward up to 400Gbps of traffic without any host intervention.

The remainder of the NIC design is relatively straightforward. The NFP’s CAUI ports connect directly to the two CXP modules. Flash memory configures the NFP and stores code for the integrated ARM11 management CPU. The only other memory is up to 24GB of commodity DDR3 SDRAM, which primarily stores flow tables. Because the NFP-6xxx integrates more than 30MB of on-chip memory, FlowNIC does not require TCAM for flow classification.

The 4x40GbE FlowNIC uses a similar design but integrates four QSFP+ cages in place of two CXPs. Netronome also offers 1x100GbE and 2x40GbE “low-end” FlowNICs, which use a half-performance NFP-6xxx to reduce power dissipation to 75W and support only two PCIe interfaces instead of four. Both high-end and low-end 40GbE FlowNICs support breakout cables from QSFP+ to 4xSFP+, enabling the cards to handle up to 16x10GbE and 8x10GbE ports, respectively.

It’s All in the Code

For environments that don’t support OVS, such as VMware and Windows Hyper-V, Netronome provides a standard NIC driver along with APIs to manage rules and flows. Because FlowNIC offers SR-IOV with 64 virtual functions per PCIe interface, virtual machines (VMs) can directly access the NIC, bypassing the hypervisor. The NIC handles load balancing and flow affinity across VMs, as well as packet encapsulations for NVGRE and VxLAN.
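To make the host side of this arrangement concrete, the sketch below enables virtual functions through the standard Linux sysfs interface available since kernel 3.8; the PCI address is a placeholder rather than an actual FlowNIC device ID, and the VF count simply reflects the per-interface maximum cited above.

```c
/* Minimal sketch: enable SR-IOV virtual functions for one PCIe function of
 * an adapter via the Linux sysfs interface (kernel 3.8 or later).
 * The PCI address is a placeholder, not an actual FlowNIC device ID. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *path = "/sys/bus/pci/devices/0000:82:00.0/sriov_numvfs";
    FILE *f = fopen(path, "w");

    if (f == NULL) {
        perror("open sriov_numvfs");
        return EXIT_FAILURE;
    }
    /* Request 64 VFs -- the per-interface maximum cited above.  A robust
     * tool would first read sriov_totalvfs to confirm the device's limit. */
    if (fprintf(f, "64\n") < 0 || fclose(f) != 0) {
        perror("write sriov_numvfs");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```

Each VF can then be handed to a guest through PCI passthrough, which is what lets the VM bypass the hypervisor on the data path.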

Netronome’s flow API enables an application to establish cut-through processing for known flows, meaning packets from those flows are not sent to the host. A security appliance, for example, can allow known flows to pass through the NIC while forwarding packets from unknown flows to the host processor.
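Netronome has not published this API, so the sketch below is purely illustrative: the type and function names (flow_key, fnic_add_rule, and so on) are hypothetical stand-ins for whatever the real SDK provides, and the stub simply prints the request it would otherwise program into the NIC.

```c
/* Illustrative only: a hypothetical cut-through flow API in the spirit of
 * the mechanism described above.  No names here come from Netronome's SDK. */
#include <stdint.h>
#include <stdio.h>

struct flow_key {                 /* 5-tuple identifying an established flow */
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

enum flow_action {
    FLOW_CUT_THROUGH,             /* forward on the NIC; never touch the host */
    FLOW_SEND_TO_HOST             /* default: punt packets to the appliance   */
};

/* Stub standing in for the SDK call that would program the NIC's flow table. */
static int fnic_add_rule(int dev, const struct flow_key *key,
                         enum flow_action action)
{
    printf("dev %d: %08x:%u -> %08x:%u proto %u, action %s\n", dev,
           (unsigned)key->src_ip, (unsigned)key->src_port,
           (unsigned)key->dst_ip, (unsigned)key->dst_port,
           (unsigned)key->proto,
           action == FLOW_CUT_THROUGH ? "cut-through" : "send-to-host");
    return 0;
}

int main(void)
{
    /* Once the appliance has inspected and approved a flow, offload it so
     * subsequent packets bypass the x86 host entirely. */
    struct flow_key approved = {
        .src_ip = 0x0a000001, .dst_ip = 0x0a000002,
        .src_port = 49152, .dst_port = 443, .proto = 6 /* TCP */
    };
    return fnic_add_rule(0, &approved, FLOW_CUT_THROUGH);
}
```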

OVS environments are a different animal (for background, see sidebar “Open vSwitch History and Porting”). In Netronome’s design, OVS resides on the x86 host processor and FlowNIC acts more like an Ethernet switch. The company has produced two OVS-offload variants, one more unusual than the other.

In its first option, Netronome ports the ofproto provider, which is the same approach an Ethernet switch vendor would use. In this case, OpenFlow commands program FlowNIC’s wildcard-match table, enabling flow classification in the data path. By handling this function in the FlowNIC, the company estimates an OVS flow-setup rate of 15 million flows per second, compared with only 66,000 flows per second for OVS running solely on an x86 server. The downside of this option is that Netronome’s ofproto provider may lack new features added to OVS versions released after the module was ported.

Netronome’s second option, which it calls “selective offload,” runs the complete OVS kernel module on the x86 host. The company inserts hooks in the standard kernel module that enable offload of selected exact-match flow entries. The OpenFlow controller then decides which flows are offloaded to the NIC and which are handled by the standard OVS data path. Packets associated with new or unknown flows are forwarded across the PCIe interface to the ovs-vswitchd module on the host, which subsequently populates the OVS kernel flow table that is mirrored to the FlowNIC.
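The hooks themselves are proprietary, but the decision they implement can be pictured abstractly. Everything in the sketch below is hypothetical (the flow structure, the offload flag, and the mirroring call); it shows only the selection step, in which entries the controller has marked for offload are mirrored to the NIC while the rest stay in the standard kernel data path.

```c
/* Illustrative only: the selection step behind "selective offload".  All
 * names are invented; the real hooks live in Netronome's modified OVS
 * kernel module and are not public. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct exact_flow {          /* simplified stand-in for a kernel flow entry */
    uint32_t dst_ip;
    bool     offload;        /* set when the OpenFlow controller marks the
                                flow for hardware offload */
};

static void install_flow(const struct exact_flow *f)
{
    if (f->offload)
        printf("mirror flow to %08x into the FlowNIC flow table\n",
               (unsigned)f->dst_ip);
    else
        printf("keep flow to %08x in the kernel data path\n",
               (unsigned)f->dst_ip);
}

int main(void)
{
    struct exact_flow bulk  = { .dst_ip = 0x0a000003, .offload = true  };
    struct exact_flow quick = { .dst_ip = 0x0a000004, .offload = false };
    install_flow(&bulk);
    install_flow(&quick);
    return 0;
}
```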

The simpler selective-offload port enables Netronome to track OVS feature additions more quickly than the full ofproto provider approach, but the flow-setup rate suffers because FlowNIC does not classify packets from unknown flows.

Unlike an Ethernet switch, which forwards packets only between network ports, FlowNIC can also forward packets to VMs. Netronome provides a DPDK poll-mode driver for the VM (for more information on Intel’s DPDK, see NWR 1/6/14, “Highland Forest Pumps More Packets”). Using this driver and SR-IOV, VM traffic bypasses the hypervisor. Efficient VM access to the data path is attractive for network-functions virtualization (NFV), where services running on VMs replace dedicated network equipment.
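For reference, the fragment below is a minimal receive loop modeled on DPDK's basic forwarding skeleton; it assumes a reasonably recent DPDK release, uses only public DPDK calls, and is not Netronome's driver, just a sketch of what a VM-resident poll-mode application looks like.

```c
/* Minimal DPDK poll-mode receive loop, patterned after DPDK's skeleton
 * example.  Assumes a recent DPDK release; not Netronome's VM driver. */
#include <stdint.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define RX_RING_SIZE 1024
#define TX_RING_SIZE 1024
#define NUM_MBUFS    8191
#define MBUF_CACHE   250
#define BURST_SIZE   32

int main(int argc, char **argv)
{
    struct rte_eth_conf port_conf = { 0 };   /* default port configuration */
    struct rte_mempool *pool;
    uint16_t port = 0;                       /* first DPDK-bound port (the VF) */

    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL initialization failed\n");

    pool = rte_pktmbuf_pool_create("mbuf_pool", NUM_MBUFS, MBUF_CACHE, 0,
                                   RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (pool == NULL)
        rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

    /* One RX queue and one TX queue are enough for this sketch. */
    if (rte_eth_dev_configure(port, 1, 1, &port_conf) != 0 ||
        rte_eth_rx_queue_setup(port, 0, RX_RING_SIZE,
                               rte_eth_dev_socket_id(port), NULL, pool) != 0 ||
        rte_eth_tx_queue_setup(port, 0, TX_RING_SIZE,
                               rte_eth_dev_socket_id(port), NULL) != 0 ||
        rte_eth_dev_start(port) != 0)
        rte_exit(EXIT_FAILURE, "port setup failed\n");

    for (;;) {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);

        /* A real network function would inspect or forward the packets
         * here; this sketch simply frees them. */
        for (uint16_t i = 0; i < nb; i++)
            rte_pktmbuf_free(bufs[i]);
    }
    return 0;
}
```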

Exposing Design Tradeoffs

To achieve its extreme performance level, Netronome’s FlowNIC-6xxx uses a very different processor architecture from those of the Cavium and Tilera intelligent NICs shown in Table 1. Those competing cards are built around their respective multicore processors (see NWR 6/10/13, “Cavium 10G Ethernet NICs Get Smart,” and NWR 11/4/13, “Tilera Adapters Scale to 72 CPUs”).

Table 1. Comparison of intelligent NICs with OVS offload. The available products span a wide range of performance and power points. The Netronome and Tilera NICs shown are the highest-performance variants from each vendor. *In addition to OVS-native protocols; †requires third-party stack. (Source: vendors)

The Cavium and Tilera embedded processors run SMP Linux and support GNU tools, enabling OVS to run entirely on the NIC. This approach means these companies can use the simpler dpif provider port type. It also means customers wishing to write or modify code for the NICs can use familiar tools.

By contrast, Netronome’s NFP-6xxx NPU is purpose-built for data-plane processing, does not run an operating system (except for management), and uses a proprietary tool chain. This optimized design yields about 60% greater throughput per watt than Tilera’s NIC, which in turn is more efficient than Cavium’s. Although a 150W NIC may sound radical, servers for high-performance computing integrate Nvidia Tesla and Intel Xeon Phi PCIe coprocessor cards that dissipate 225W.

Although OVS offload has become the common theme for all three vendors, these NICs can also offload protocols not handled by OVS. The processors in the smart NICs include crypto coprocessors, enabling all three to terminate IPSec VPNs—an important function in security appliances.

Netronome is unique in supporting flow-based dynamic load balancing to VMs as well as RDMA over Converged Ethernet (RoCE), which Microsoft supports in its SMB Direct storage protocol. Overall, Netronome’s FlowNIC provides the best integration with virtualized servers, whereas the Cavium and Tilera NICs give end customers a friendlier programming environment.

Servers as Network Equipment

Readers who have followed Netronome will recall that its NPU technology has roots in the Intel IXP line. Rather than addressing traditional NPU applications in carrier equipment, however, the startup developed a unique heterogeneous architecture in which its NPU works alongside Xeon processors. When Netronome’s founder and CEO, Niel Viljoen, started evangelizing this approach, he faced an uphill battle. As is often the case with visionaries, it now appears he was simply ahead of the market.

Today, mega data centers are adopting SDN and buying white-box switches. Through the NFV initiative, service providers are demanding commodity hardware that delivers the same virtualization benefits that public-cloud providers enjoy. Despite Intel’s huge progress, however, Xeon platforms can’t simultaneously deliver 100Gbps+ throughput and the required network services. Thus, intelligent NICs could prove to be an enabling technology for networks built using standard servers.

Netronome is also the first company to announce a 100GbE NIC, although QLogic and Tilera are both promising products this year. As it increasingly supplies production-ready software, Netronome will compete with traditional NIC vendors like Emulex, Mellanox, and QLogic, two of which are already shipping 40GbE NICs. Thus far, however, none of the traditional NIC vendors offers OVS offload or OpenFlow support. If those features become paramount, one of these vendors might acquire Netronome rather than rearchitecting its internal design.

Still, it’s too early to declare the FlowNIC-6xxx the best NIC for 40GbE and 100GbE SDN or NFV. Designs based on multicore processors offer a simpler approach to OVS porting, and customers can more easily customize the NIC’s embedded software. Neither Netronome nor Tilera has sampled the processors that will serve in their respective 40GbE and 100GbE NICs, so the underlying silicon for these designs remains unproven.

Open vSwitch changes the face of intelligent NICs from proprietary oddballs to accelerators for open-source software. This fundamental shift should create broader-based demand than these products have ever experienced. The extent to which demand materializes will determine whether smart NICs become only a profitable niche for startups or a market that attracts larger vendors.

Open vSwitch History and Porting

Open vSwitch (OVS) is an open-source multilayer switch for virtualization environments including Citrix XenServer as well as open-source Xen and Linux KVM. It has roots in the OpenFlow project and was initially developed by Nicira, which was acquired by VMware in 2012. The lead developer of the OVS project is Ben Pfaff, who remains at VMware. The project began in mid-2009, and OVS 1.0 was released in May 2010. In March 2012, OVS was merged into Linux kernel 3.3.

OVS is a strong example of an open-source project, garnering contributions from a broad set of companies spanning silicon vendors like Broadcom and Marvell, OEMs like Cisco and HP, and cloud-service providers like Amazon, Google, and NTT. The result is OVS v2.0, released in October 2013, which supports not only OpenFlow but also a comprehensive list of Layer 2 and Layer 3 protocols. It fully supports OpenFlow v1.0 but includes “experimental” support for v1.1, v1.2, and v1.3. It also handles the newest tunneling protocols for network virtualization, including VxLAN and GRE variants.

Although OVS was initially developed for software implementation, it can also be ported to hardware such as an Ethernet switch. In the latter case, the hardware implements the data path, whereas the OVS control path remains in software. Fortunately for software engineers doing ports, OVS is a modular design with cleanly separated user-space and kernel-space processes.

For established flows, the OVS kernel module handles all packet processing. The first packet of a flow, however, is sent to the ovs-vswitchd module, which resides in user space. OVS v2.0 adds multithreading to this module, improving flow-setup performance. The module implements the OpenFlow client function, which communicates with external OpenFlow controllers.

When the OVS data path runs primarily in software, it communicates with the NIC through a low-level module called a dpif provider. In this case, OVS implements Layer 2 features such as link aggregation and VLANs. To take advantage of hardware-based flow tables, however, vendors must implement a higher-level module called an ofproto provider. This type of port requires them to implement the Layer 2 functions provided by the OVS-native ofproto provider.
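The two porting layers can be pictured as callback tables of very different scope. The sketch below is a drastic simplification for illustration only: the real interfaces (lib/dpif-provider.h and ofproto/ofproto-provider.h in the OVS source tree) are far larger, and the member names shown here are invented.

```c
/* Illustrative only: a simplified picture of the two OVS porting layers.
 * The member names are invented for this sketch. */
#include <stddef.h>
#include <stdint.h>

struct flow_match;        /* opaque stand-ins for OVS flow structures */
struct flow_actions;
struct packet;

/* dpif-provider style: a low-level datapath.  OVS itself still implements
 * Layer 2 features such as VLANs and link aggregation above this layer. */
struct dpif_provider_sketch {
    int (*flow_put)(void *dp, const struct flow_match *exact,
                    const struct flow_actions *acts);
    int (*flow_del)(void *dp, const struct flow_match *exact);
    int (*packet_execute)(void *dp, struct packet *pkt,
                          const struct flow_actions *acts);
};

/* ofproto-provider style: a whole switch implementation.  The vendor gains
 * direct control of hardware wildcard-match tables but must also supply
 * the Layer 2 features the native provider would otherwise handle. */
struct ofproto_provider_sketch {
    int (*rule_insert)(void *sw, const struct flow_match *wildcard,
                       uint32_t priority, const struct flow_actions *acts);
    int (*rule_delete)(void *sw, const struct flow_match *wildcard);
    int (*add_vlan)(void *sw, uint16_t vid);
    int (*bond_ports)(void *sw, const uint16_t *ports, size_t n_ports);
};

int main(void) { return 0; }     /* type declarations only; nothing to run */
```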

Price and Availability

The FlowNIC-6xxx family is scheduled for general availability in 3Q14. Netronome has not announced pricing, but we expect the NICs will offer leading price/performance. For more information, access http://www.netronome.com/product/flownics/.
