
    Fibre Channel over Ethernet (FCoE): Hype vs. Reality

    Posted by Casey Quillin and Sameh Boujelbene

    It’s been a bit of a bumpy ride for FCoE, which started out with more promise than it was able to deliver. In theory, the benefits of a single converged LAN/SAN network are fairly easy to see. The problem was, as is often the case with new technology, that most of the theoretical benefit was not available in the initial product releases. The idea that storage traffic was no longer confined to expensive SANs, but could instead run on more commoditized and easier-to-administer IP equipment, was intriguing. However, the new 10 Gbps Enhanced Ethernet switches were not exactly inexpensive, few products supported FCoE initially, and those that did often did not play nicely with products from other vendors.

    Keeping FCoE “On the Single-Hop”?

    The adoption of FCoE to date has been almost exclusively “single-hop”, meaning that FCoE is deployed only between the server and the top-of-rack switch. At the top of rack, traffic is still broken out separately: one way for IP, another for FC. Even so, the single-hop approach makes sense, because consolidating network adapters and cables adds value on the server access side.
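As a rough illustration of that server-access consolidation, the sketch below counts adapters and cables for one server before and after convergence. The per-server counts (dual redundant NICs and HBAs) are typical assumptions for illustration, not figures from this post.

```python
# Illustrative adapter/cable count for one server, before vs. after FCoE.
# The per-server counts below are typical assumptions, not data from the post.

ethernet_nics = 2   # redundant LAN ports on a non-converged server
fc_hbas = 2         # redundant SAN ports on a non-converged server

before = ethernet_nics + fc_hbas   # separate LAN and SAN adapters/cables
after = 2                          # two converged network adapters (CNAs)

print(f"adapters/cables per server: {before} -> {after}")
```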

    A significant portion of FCoE switch ports come from Cisco’s UCS platform, which runs FCoE inside the chassis. In terms of a complete end-to-end FCoE solution, however, there continues to be very little multi-hop FCoE deployed, and very few FCoE ports shipping on storage arrays.

    In addition, FCoE connections are more prevalent on blade servers than on stand-alone servers for various reasons.

    • First, blades are used more in a virtualized environment where different types of traffic can travel on the same link.
    • Second, the migration to 10 Gbps has been very slow so far on stand-alone servers; about 80% of these servers are actually still connected with 1 Gbps, which cannot support FCoE.

    What portion of FCoE-enabled server ports are actually running storage traffic?

    FCoE-enabled ports make up about a third of total 10 Gbps controller and adapter ports shipped on servers. However, we would like to draw readers’ attention to the wide gap between the portion of 10 Gbps ports that is FCoE-enabled and the portion that is actually running storage traffic. We currently believe less than a third of FCoE-enabled ports are being used to carry storage traffic. That’s because, in many cases, the FCoE port is provided by default with the server. That is the case with HP blade servers as well as Cisco’s UCS servers, which together account for around 80% of FCoE-enabled ports. We believe, however, that when users buy separate adapters they will most likely use those adapters to run storage traffic, though they will need to pay an additional premium of about 50% to 100% for the FCoE license.
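Putting these rough shares together gives a back-of-the-envelope estimate of how few 10 Gbps server ports actually carry FCoE storage traffic. The figures are the approximate analyst estimates quoted above, not measured data.

```python
# Back-of-the-envelope estimate using the approximate shares quoted above.
# All figures are rough analyst estimates, not measured data.

fcoe_enabled_share = 1 / 3     # ~1/3 of 10 Gbps server ports are FCoE-enabled
running_storage_share = 1 / 3  # <1/3 of those actually carry storage traffic

storage_share_of_all_10g = fcoe_enabled_share * running_storage_share

print(f"~{storage_share_of_all_10g:.0%} of 10 Gbps server ports "
      "carry FCoE storage traffic")  # roughly 11%, an upper bound
```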

    The Outlook

    That said, whether FCoE-enabled ports are used to carry storage traffic or not, we believe they are being introduced at the expense of some FC adapters. If users deploy a server with an FCoE-enabled port, they most likely will not buy an FC adapter to carry storage traffic. Additionally, as Ethernet speeds reach 40 Gbps, the speed differential over FC will become too great for FC to keep pace.

    About the Authors

    Casey Quillin is a Senior Analyst, Storage Area Network & Data Center Appliance Market Research with the Dell’Oro Group

    Sameh Boujelbene is a Senior Analyst, Server and Controller & Adapter Market Research with the Dell’Oro Group

    3 responses to “Fibre Channel over Ethernet (FCoE): Hype vs. Reality”

    1. Howard Marks says:

      You left out what seems to me to be the main driver for FCoE in blade environments, and that’s the limited number of mezzanine slots on most blades. In fact, the first-generation Cisco UCS blades had just one mezzanine slot and no LAN on Motherboard (LOM), so that slot had to be used for a 10 Gbps Ethernet card. If the customer had any desire at all to integrate UCS with Fibre Channel storage, they could either use FCoE or the significantly less desirable iSCSI-FC bridge appliance approach.

    2. Casey Quillin and Sameh Boujelbene say:

      Thanks. That’s a good point. We should probably also mention that blade servers are closed systems, which exposes them to fewer interoperability issues, unlike rack servers, where you have to connect servers and switches from different vendors.

    3. Rahul Shah says:

      I am not sure I understand your comment “We believe, however, that in the event that users buy separate adapters they will most likely use that adapter to run storage traffic”…

      Do you mean one NIC-only and one Converged Network Adapter (CNA)?

      That assumes, to Howard’s comment above, that dual mezzanine slots are available (which is likely to be the case with latest generation blades).

      Thank you.