Fibre Channel over Ethernet (FCoE) technology: frequently asked questions
Fibre Channel over Ethernet (FCoE), which encapsulates Fibre Channel storage traffic in Ethernet frames so it can travel over the data center LAN, eliminates the management and cost burden of running a separate storage network. What are the hardware requirements for deploying FCoE? And how should FCoE deployments be optimized?
Q: What are the hardware requirements for deploying FCoE? What are your recommendations for optimizing FCoE deployments?
A: In general, FCoE requires a switch that supports data center bridging (DCB), which extends traditional Ethernet with the lossless delivery that storage traffic demands. Some, but not all, Ethernet vendors support DCB on their 10 GbE switches. On the server side, FCoE uses a converged network adapter (CNA). A CNA replaces the combination of a traditional Ethernet NIC and a Fibre Channel host bus adapter (HBA), carrying Ethernet and Fibre Channel traffic on the same cable at the same time, at 10 Gbps for both.
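One reason ordinary Ethernet gear is not enough is frame size: a full-sized Fibre Channel frame does not fit in a standard 1500-byte Ethernet payload, so DCB/FCoE switches must carry "baby jumbo" frames. A rough byte count, using the field sizes from the FCoE encapsulation (simplified here for illustration):

```python
# Why FCoE needs "baby jumbo" frames: a maximum-size Fibre Channel frame,
# once encapsulated in Ethernet, exceeds the standard 1518-byte Ethernet frame.
FC_HEADER = 24         # bytes, FC-2 frame header
FC_MAX_PAYLOAD = 2112  # bytes, maximum FC data field
FC_CRC = 4             # FC frame CRC
ETH_HEADER = 14        # Ethernet destination/source/EtherType
VLAN_TAG = 4           # 802.1Q tag (carries the FCoE priority)
FCOE_HEADER = 14       # FCoE encapsulation header (version, SOF)
FCOE_TRAILER = 4       # EOF plus padding
ETH_FCS = 4            # Ethernet frame check sequence

fc_frame = FC_HEADER + FC_MAX_PAYLOAD + FC_CRC   # 2140 bytes
fcoe_frame = (ETH_HEADER + VLAN_TAG + FCOE_HEADER
              + fc_frame + FCOE_TRAILER + ETH_FCS)

print(fcoe_frame)  # 2180 bytes -> the switch MTU must be ~2.2 KB or larger
```

The resulting 2180-byte frame is why DCB/FCoE switch ports are typically configured with an MTU of roughly 2.5 KB rather than the Ethernet default.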
Q: What software is needed to deploy FCoE, and how well have operating-system software stacks adapted to it?
A: We have been running FCoE since 2008, first on Windows. Current versions of Linux, Solaris and other operating systems support FCoE, as does VMware. Each CNA ships with a driver for the relevant environment, and many CNA vendors use the same driver suite for both native Fibre Channel and FCoE. Some people are also trying to run FCoE the way software iSCSI runs, as an initiator in the operating system, but whether that catches on depends on how many people buy into it.
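On Linux, the open-source Open-FCoE stack exposes its kernel interfaces under sysfs, so a quick way to see whether a host has the pieces in place is to probe those paths. A minimal sketch, assuming the paths used by the fcoe-utils tooling; on a machine without FCoE modules loaded the checks simply report False:

```python
from pathlib import Path

def fcoe_support_status():
    """Report whether this Linux host exposes Open-FCoE kernel interfaces.

    The paths are those used by the upstream Open-FCoE (fcoe-utils) stack;
    they are absent on kernels without the FCoE modules loaded, in which
    case every check reports False rather than raising an error.
    """
    checks = {
        "fcoe_bus": Path("/sys/bus/fcoe"),            # FCoE transport bus
        "fc_host_class": Path("/sys/class/fc_host"),  # FC host objects (HBA or CNA)
    }
    return {name: path.exists() for name, path in checks.items()}

print(fcoe_support_status())
```

On a production host you would follow this up with the fcoe-utils command-line tools to create and inspect FCoE instances on the DCB-capable interfaces.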
Q: Are there storage subsystems that support FCoE natively? How do they compare with Fibre Channel subsystems?
A: Storage subsystems can support FCoE natively, and some vendors have said so publicly; NetApp, among others, has shipped native FCoE support. The FCoE fabric must interoperate with the native Fibre Channel fabric, and FCoE must support all Fibre Channel features. We tested servers whose FCoE CNAs connect to DCB/FCoE switches, whose native Fibre Channel ports in turn connect to native Fibre Channel storage systems, to confirm that everything works as expected. At the storage-system interface level, FCoE is equivalent to Fibre Channel running at 10 Gbps; the only difference is that it connects to a DCB/FCoE switch instead of a native Fibre Channel switch.
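That "10 Gbps equivalence" is actually slightly in FCoE's favor against 8 Gbps native Fibre Channel, because of line coding: 8GFC uses 8b/10b encoding while 10 GbE uses the more efficient 64b/66b. A back-of-the-envelope comparison:

```python
# Effective (post-line-coding) data rates, in Gbps.
def effective_gbps(line_rate_gbaud, data_bits, coded_bits):
    """Usable bit rate after subtracting line-coding overhead."""
    return line_rate_gbaud * data_bits / coded_bits

fc_8g = effective_gbps(8.5, 8, 10)         # 8G FC: 8.5 Gbaud, 8b/10b coding
eth_10g = effective_gbps(10.3125, 64, 66)  # 10 GbE: 10.3125 Gbaud, 64b/66b coding

print(f"8G FC usable:  {fc_8g:.2f} Gbps")   # 6.80
print(f"10 GbE usable: {eth_10g:.2f} Gbps") # 10.00
```

So an FCoE link on 10 GbE has more usable bandwidth for storage traffic than an 8G native Fibre Channel link, before any protocol overhead above the physical layer is counted.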
Q: What are the right management tools for FCoE storage? How well do third-party data center management tools support FCoE? And are management tools better bundled with the storage subsystem?
A: DCB/FCoE switches have their own zoning interfaces; these vary by vendor but resemble each vendor's corresponding Fibre Channel interface. HBA/CNA vendors and storage vendors that support FCoE use the same management interfaces they previously used on their adapters, so management looks almost identical to Fibre Channel. Although we did not test much third-party storage management software, FCoE and Fibre Channel should look alike to such software; the main difference is that FCoE storage connects to different switches than native Fibre Channel storage does.
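The reason the interfaces look so similar is that FCoE end points are addressed by worldwide port names (WWPNs) exactly as in native Fibre Channel, so the zoning model carries over unchanged. An illustrative sketch of that model; the WWPNs and zone name below are made up for the example:

```python
# Illustrative only: a zone is a named set of WWPNs, and two ports may
# communicate if at least one zone contains both of them. FCoE CNAs and
# native FC targets participate on equal terms. All identifiers are fictional.
zone_db = {
    "zone_db_server_a": {
        "10:00:00:00:c9:aa:bb:01",  # hypothetical FCoE CNA initiator
        "50:06:01:60:3c:e0:11:22",  # hypothetical native FC storage target
    },
}

def can_communicate(wwpn_a, wwpn_b, zones=zone_db):
    """True if some zone contains both WWPNs (the standard FC zoning rule)."""
    return any(wwpn_a in members and wwpn_b in members
               for members in zones.values())

print(can_communicate("10:00:00:00:c9:aa:bb:01",
                      "50:06:01:60:3c:e0:11:22"))  # True
```

Because the model is identical, a converged fabric's zone set can mix FCoE initiators and native Fibre Channel targets without any change to the administrator's workflow.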
Q: Is there any suggestions for the practice of FCoE deployment?
A: FCoE, which I call “Slow Technology,” should be considered when planning a new data center, new server, or storage expansion. The big problem is that during the FCoE deployment, Ethernet workers are required to know about the storage network and Ethernet because until now the rules for both are different. The fiber cabling also needs to be considered. For example, OM3 and OM4 cabling are suitable for FCoE and 10 GbE, while Fiber Channel is suitable for faster rate.
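As a rough guide to the cabling point, the table below sketches commonly published maximum link lengths for laser-optimised multimode fibre; the figures are approximations and should be confirmed against the transceiver vendor's data sheets before any cable plant is designed:

```python
# Approximate maximum link lengths (metres) on multimode fibre, drawn from
# commonly published optics tables -- verify against vendor data sheets.
MAX_REACH_M = {
    ("10GbE SR / FCoE", "OM3"): 300,
    ("10GbE SR / FCoE", "OM4"): 400,
    ("8G Fibre Channel", "OM3"): 150,
    ("8G Fibre Channel", "OM4"): 190,
    ("16G Fibre Channel", "OM3"): 100,
    ("16G Fibre Channel", "OM4"): 125,
}

def max_reach(protocol, fibre_grade):
    """Look up the approximate supported distance for a protocol/fibre pair."""
    return MAX_REACH_M[(protocol, fibre_grade)]

print(max_reach("10GbE SR / FCoE", "OM3"))  # 300
```

The pattern worth noting is that distance budgets shrink as the signalling rate climbs, so cabling laid for 10 GbE FCoE today should be checked against the reach of whatever faster rate the data center expects to adopt next.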