Virtualizing I/O


Today we're speaking to Jon Toor, VP Marketing at Xsigo Systems, which recently unveiled the I/O Director, a software and hardware combination that virtualizes I/O connectivity to network and storage resources.

DDJ: Jon, what's the importance of the I/O Director?

JT: It basically redefines how servers are connected to storage and networks. What we're really about is consolidating server connectivity. Today's servers have a lot of connections going out to storage and to Ethernet networks. The first problem with that is complexity--you have so many connections to manage. The second is the fixed nature of all those connections: with fixed cables and fixed cards, it's very difficult to repurpose a server from one role to another. It's also very difficult to take full advantage of virtualization. With all those different cables coming out, you've got a fixed asset linking up to a virtual asset, and that creates a mismatch in terms of capabilities.

So Xsigo is about virtualizing I/O: consolidating the multiple cables coming out of every server down to a single cable per server that carries both networking and storage traffic. And secondly, it's about replacing the fixed assets within the server--the fixed NICs and HBAs that provide storage and network connectivity--with virtual NICs and HBAs. Just as you virtualize processors so that many apps can run on a single processor, we provide virtual I/O: multiple types of I/O running on one card, and you can launch different kinds of I/O within that environment in real time.
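To make the idea concrete, here is a minimal, purely illustrative sketch in Python of what it means to treat NICs and HBAs as software-defined objects. This is not Xsigo's actual API; every name in it is hypothetical. The point is simply that virtual adapters are created, attached to a server over its single fabric link, and retargeted without touching hardware.

# Hypothetical model of virtual I/O: NICs/HBAs become software objects that
# can be created, attached over a server's single fabric link, and remapped
# at runtime. Conceptual sketch only, not Xsigo's real interface.
from dataclasses import dataclass, field

@dataclass
class VirtualAdapter:
    name: str
    kind: str      # "vNIC" (Ethernet) or "vHBA" (Fibre Channel)
    uplink: str    # which external network or SAN it maps to

@dataclass
class Server:
    hostname: str
    adapters: list = field(default_factory=list)

    def attach(self, adapter: VirtualAdapter) -> None:
        # All adapters share the one physical cable to the I/O Director.
        self.adapters.append(adapter)

db_server = Server("db01")
db_server.attach(VirtualAdapter("eth0", "vNIC", uplink="prod-lan"))
db_server.attach(VirtualAdapter("fc0", "vHBA", uplink="san-a"))

# Repurposing the server later is a mapping change, not a recabling job:
db_server.adapters[0].uplink = "backup-lan"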

DDJ: And the I/O Director is the traffic cop for all that connectivity?

JT: Exactly. The I/O Director becomes that shared resource: all the servers connect into it over a single cable. The I/O Director takes that traffic and puts it out onto the SAN or onto the LAN, whichever network you require. You can have a dozen different networks coming into the I/O Director. You have virtual connectivity in each of your servers, and you can direct that connectivity into any of those networks.

DDJ: Are there bottleneck issues that arise, or are the networks typically underused?

JT: Well, both statements are on point. Certainly network connections tend to be underutilized, just as processors do. But we designed this solution to avoid bottlenecks. You've got 10 gigabits out of every server, which is more bandwidth than most servers can saturate today. And that 10 gigabits is dynamically shared between network and storage traffic, so you can have the full 10 gigabits for storage when that's needed and the full 10 for the network when that's needed, which you can't do with traditional connectivity. That 10-gigabit link goes into a device with a high-speed fabric that provides 780 gigabits of total bandwidth, so it's a completely non-blocking fabric that then connects into the respective I/O modules that connect to the outside world. And each of those modules is designed with custom silicon that ensures it's a line-rate, non-blocking environment. So from start to finish--from the connectivity coming out of the server, to the fabric within the box, to the card going to the outside world--everything runs at line rate or better to ensure that we don't become a bottleneck.
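As a back-of-the-envelope illustration of the dynamic-sharing point (the adapter counts and rates below are hypothetical, chosen only to show the arithmetic, not a configuration claimed in the interview): with fixed adapters, each traffic class is capped at its own links' rate even when the other class is idle, whereas a shared 10-gigabit link lets whichever class needs it burst to the full pipe.

# Illustrative comparison of fixed vs. dynamically shared server I/O.
# All figures are example values, not measurements.
FIXED_ETHERNET_GBPS = 2 * 1.0   # e.g., two 1 GbE NICs
FIXED_STORAGE_GBPS  = 2 * 4.0   # e.g., two 4 Gb FC HBAs
SHARED_LINK_GBPS    = 10.0      # one consolidated 10 Gb link

# Peak a storage burst can use while the network side is idle:
fixed_storage_peak  = FIXED_STORAGE_GBPS   # 8.0  (the idle Ethernet links can't help)
shared_storage_peak = SHARED_LINK_GBPS     # 10.0 (the whole pipe is available)

# Peak a network burst can use while storage is idle:
fixed_network_peak  = FIXED_ETHERNET_GBPS  # 2.0
shared_network_peak = SHARED_LINK_GBPS     # 10.0

print(fixed_storage_peak, shared_storage_peak)
print(fixed_network_peak, shared_network_peak)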

A lot of users find that they actually enhance performance because of the server-to-server connectivity--say you have a server that's backing up other servers connected in the Xsigo environment. For that traffic from one server to another, you've actually got far better bandwidth and lower latency than you normally would. You get better performance for backup and for interprocess communication in Oracle environments; anytime you have to move data from one server to another, performance is enhanced.

DDJ: Won't companies with an existing investment in hardware be reluctant to adopt this new system?

JT: Probably the most common deployment scenario is one where the user is moving to a more consolidated environment with virtual machines, or to a blade environment, or to a utility computing model. That's when they really encounter the limitations of fixed I/O. As long as you're running one application per server and the app stays on that server forever, fixed I/O works okay. But when you start thinking about running more applications on that server, or repurposing that server on a time-of-day or seasonal basis, you've got a new set of requirements that change the game.

At VMworld, one thing we heard very often from people was, "We're moving to blades, but I can't accommodate the limited I/O capabilities that come with blades." Traditionally you've got a fixed number of I/O slots available on each blade; once those slots are used up, you're done--you can't add more I/O to a blade. Particularly as people add virtualization, they find that they need more I/O. With virtual I/O you no longer have a fixed limit on your I/O per blade. You can deploy as many NICs and HBAs as you need, and you can move them from blade to blade, as the sketch below illustrates. The limitations are gone.
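Continuing the earlier hypothetical model (the names are again invented for illustration, not a real interface), the blade scenario amounts to a remapping: the inventory of virtual adapters is independent of any blade's physical slot count, and reassigning an adapter is a table update rather than a card swap.

# Hypothetical sketch: virtual adapters tracked centrally and assigned to
# blades without regard to physical slot counts. Purely illustrative.
assignments = {
    "vnic-web-1": "blade-03",
    "vhba-ora-1": "blade-03",
    "vnic-web-2": "blade-07",
}

def move_adapter(adapter: str, new_blade: str) -> None:
    # "Moving" I/O is a mapping change in the I/O Director, not recabling.
    assignments[adapter] = new_blade

move_adapter("vhba-ora-1", "blade-07")   # repurpose blade-07 for the database
print(assignments)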

DDJ: Are there potential benefits on energy consumption?

JT: This really provides the last stepping stone to utility computing. To dynamically bring servers online and offline as needed, the server has to be a completely stateless resource, and for that you need I/O that can be flexibly deployed and redeployed. The whole vision of virtualization is consolidating the number of servers, but only 2% of servers right now are running virtualization. One of the impediments to getting to a much higher level of virtualization is the question of how to bring more virtual apps into the environment. Two things you need are guaranteed security and performance, and virtual I/O provides both. You end up saving cost, energy, and management resources, because you're employing fewer servers to do more work.

DDJ: Where can readers go for more information?

JT: Our web site is at www.xsigo.com.

