
    Deploying SQL Server with iSCSI – Answers to your questions

    Posted by Gary Gumanow


    Last Thursday (2/24/11), I hosted an Ethernet Storage Forum iSCSI SIG webinar with representatives from Emulex and NetApp to discuss the benefits of iSCSI storage networks in SQL application environments. You can catch a recording of the webcast on BrightTalk.

    The webinar was well attended, and we received so many great questions that we just didn’t have time to answer all of them, which brings us to this blog post. We have included answers to the unanswered questions below.
    We’ll be hosting another webinar soon, so please check back for upcoming ESF iSCSI SIG topics. You’ll be able to register for the event shortly on BrightTalk.com.

    Let’s get to the questions. We took the liberty of editing the questions for clarity. Please feel free to comment if we misinterpreted the question.

    Question: Is TRILL needed in the data center to avoid pausing of traffic while extending the number of links that can be used?

    Answer: The Internet Engineering Task Force (IETF) has developed a new shortest-path frame-routing protocol for multi-hop Layer 2 (L2) environments, called Transparent Interconnection of Lots of Links, or TRILL. TRILL will enable multipathing for L2 networks and remove the restrictions that single-path Spanning Tree Protocol (STP) networks place on data center environments.

    Although TRILL may serve as an alternative to STP, it doesn’t require that STP be removed from an Ethernet infrastructure. Hybrid solutions that use both STP and TRILL are not only possible but also will be the norm for at least the near-term future. TRILL will also not automatically eliminate the risk of a single point of failure, especially in hybrid environments.

    Another area where TRILL is not expected to play a role is the routing of traffic across L3 routers. TRILL is expected to operate within a single subnet. While the IETF draft standard document mentions the potential for tunneling data, it is unlikely that TRILL will evolve in a way that will expand its role to cover cross-L3 router traffic. Existing and well-established protocols such as Multiprotocol Label Switching (MPLS) and Virtual Private LAN Service (VPLS) cover these areas and are expected to continue to do so.

    In summary, TRILL will enable multipathing for L2 networks.

    Question: How do you calculate bandwidth when you only have IOPS?

    Answer: Bandwidth is a function of IOPS and I/O size, and the formula is simply IOPS x I/O size. Example: 10,000 IOPS x a 4 KB block size (4,096 bytes) = 40.96 MB/sec.
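As a quick sanity check, the formula above can be sketched in Python (the function name is ours for illustration, not something from the webinar; 1 MB is taken as 10^6 bytes, matching the decimal convention in the example):

```python
def bandwidth_mb_per_sec(iops, io_size_bytes):
    """Bandwidth = IOPS x I/O size; returned in MB/s (1 MB = 10**6 bytes)."""
    return iops * io_size_bytes / 1_000_000

# The worked example from the answer: 10,000 IOPS at a 4 KB (4,096-byte) block size.
print(bandwidth_mb_per_sec(10_000, 4096))  # 40.96 MB/s
```

Note that the result scales linearly with I/O size: the same 10,000 IOPS at 64 KB blocks would be roughly 655 MB/s, so always confirm the I/O size your workload actually issues.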

    Question: When deploying FCoE, must all 10GbE switches support Data Center Bridging (DCB) and FCoE, or can some simply pass FCoE traffic through?

    Answer: Today, in order to deploy FCoE, all switches in the data path must support both FCoE forwarding and DCB. Future standards include proposals to allow pass-through of FCoE traffic without having to support full Fibre Channel services. This will allow for more cost-effective networks where not every switch layer needs to support the FCoE storage protocol.

    Question: iSCSI performance is comparable to FC and FCoE. Do you expect iSCSI to overtake FC in the near future?

    Answer: FCoE deployments are still very small compared to traditional Fibre Channel and iSCSI. However, projections from several analyst firms indicate that Ethernet storage protocols, such as iSCSI and FCoE, will overtake traditional Fibre Channel, driven by the increased focus on shared data center infrastructures for applications such as private and public clouds. But even the most aggressive forecasts don’t show this crossover happening for several years.

    Customers looking to deploy new data centers are more likely today to consider iSCSI than in the past. Customers with existing Fibre Channel investments are likely to transition to FCoE in order to extend the value of their existing FC storage assets. In either case, transitioning to 10Gb Ethernet with DCB capability offers the flexibility to do both.

    Question: With 16Gb/s FC ratified, what product considerations should disk manufacturers weigh?

    Answer: We can’t speak to what disk manufacturers will or won’t do regarding 16Gb/s disks. But the current trend is to move away from Fibre Channel disk drives in favor of Serial Attached SCSI (SAS) and SATA disks, as well as SSDs. 16Gb Fibre Channel will become a reality and will have a place in the data center, but some vendors predict that its adoption rate will be much slower than that of previous generations.

    Question: Why move to 10GbE if you have 8Gb Fibre Channel? The price is about the same, right?

    Answer: If your only network requirement is block storage, then Fibre Channel provides a high-performance network to address that requirement. However, if you have a mixture of networking needs, such as NAS, block storage, and LAN traffic, then moving to 10GbE provides sufficient bandwidth and the flexibility to support multiple traffic types with fewer resources and lower overall cost.

    Question: Is the representation of the number of links accurate when comparing Ethernet to Fibre Channel? The overall wire bandwidth may be close, but once protocol overhead is included, the real-bandwidth comparison isn’t accurate. For example, FC protocol overhead is only about 5%, versus roughly 25% for TCP, and iSCSI framing adds another 4%. So the math on how many FC cables equal one 10 Gbps cable isn’t a fair comparison.

    Answer: As the question points out, comparing protocol performance requires more than a comparison of the wire rates of the physical transports. Based on protocol efficiency alone, one could conclude that the comparison between FC and TCP/IP is unfair as designed, because Fibre Channel should have produced greater data throughput at a comparable wire rate. However, the data in this case shows that iSCSI offers comparable performance in a real-world application environment, rather than just in a benchmark test. The focus of the presentation was iSCSI; FCoE and FC were only meant to provide reference points, and the comparisons were not intended to be exact or precise. 10GbE and iSCSI offer the performance to satisfy business-critical requirements. Customers looking to deploy a storage network should consider a proof of concept to ensure that a new solution can satisfy their specific application requirements.
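To make the overhead argument concrete, here is a small sketch using the figures cited in the question purely for illustration (they are the questioner's estimates, not measured values, and real efficiency varies with frame size and workload):

```python
def effective_gbps(line_rate_gbps, overhead_fraction):
    """Payload rate left after subtracting the stated protocol overhead."""
    return line_rate_gbps * (1 - overhead_fraction)

# 8Gb FC with the question's ~5% protocol overhead.
fc_payload = effective_gbps(8.0, 0.05)            # ~7.6 Gb/s
# 10GbE iSCSI with the question's ~25% TCP + ~4% iSCSI framing overhead.
iscsi_payload = effective_gbps(10.0, 0.25 + 0.04) # ~7.1 Gb/s
print(fc_payload, iscsi_payload)
```

Even taking those overhead estimates at face value, the effective payload rates land in the same neighborhood, which is consistent with the real-world results discussed above; a proof of concept remains the right way to validate the numbers for a specific application.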

    Question: Two FC switches were used during this testing. Was that to address the operational risk of a single point of failure?

    Answer: The use of two switches was due to a hardware limitation: each switch had eight ports, and the test required eight ports at the target and the host. Since this was a lab setup, we weren’t configuring for HA. However, the recommendation for any production environment would be to use redundant switches; this applies to iSCSI storage networks as well.

    Question: How can iSCSI match all the distributed management and security capabilities of Fibre Channel / FCoE, such as FLOGI, the integrated name server, zoning, etc.?

    Answer: The feature lists of the two protocols don’t match exactly. The point of the presentation was that iSCSI is closing the performance gap and has enough high-end features to make it enterprise-ready.

    Question: How strong is the possibility that 40G Ethernet will be bypassed, with a move directly from 10G to 100G?

    Answer: Vendors are shipping products today that support 40Gb Ethernet, so it seems clear that 40GbE will happen. Time will tell whether customers bypass 40GbE and wait for 100GbE.

    Thanks again for checking out our blog. We hope to have you on our next webinar live, but if not, we’ll be updating this blog frequently.

    Gary Gumanow – iSCSI SIG Co-chairman, ESF Marketing Chair
