Blade server
Blade servers are computer servers arranged for high-density installations, such as data centers. They are one of several form factor alternatives for increasing density over the AT-style PC tower. Blade servers, which themselves have form factor alternatives, put several server processors into a chassis mounted in an industry-standard 19" wide rack. The standard rack is 42 Rack Units (RU) high, each RU being 1.75"; shorter racks are available.[1]
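As a back-of-the-envelope illustration of the rack-unit arithmetic above, the following Python sketch converts rack units to physical height and counts how many chassis of a given height fit in one rack; the function names are illustrative, not from any standard.

    # Rack-unit arithmetic: 1 RU = 1.75 inches; a full-height rack is 42 RU.
    RU_INCHES = 1.75
    FULL_RACK_RU = 42

    def rack_height_inches(rack_units: int) -> float:
        """Physical height of a given number of rack units, in inches."""
        return rack_units * RU_INCHES

    def chassis_per_rack(chassis_ru: int, rack_ru: int = FULL_RACK_RU) -> int:
        """How many chassis of a given RU height fit in one rack."""
        return rack_ru // chassis_ru

    print(rack_height_inches(FULL_RACK_RU))  # 73.5 inches of mounting space
    print(chassis_per_rack(1))               # 42 one-RU servers per full rack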
When more than one server is required at a location, and there is no pressing reason for individual cases, such as separate ownership, the "tower" case descended from AT-style PCs is considered a very inefficient use of space. One of the first high-density approaches to server installation was to mount the processor horizontally in a 1 RU chassis commonly called a "pizza box". Another approach, often desirable when the server needs many internal disks or interface cards, is to put mounting hardware on a tower case so that it bolts horizontally or vertically into a rack, still getting multiple servers into the floor footprint of a single rack. Several free-standing towers quickly use up the floor space of a nominal 25" wide, 24" deep rack.
Blade server concept
The first blade servers put up to eight processors in a 6 RU rack-mounted chassis. A blade server chassis has integral network interfaces and management tools, although it tends not to have much local disk capacity, assuming network attached storage or a storage area network. To deal with the concentrated heat load, the chassis is designed for efficient cooling, which can use forced air or chilled water pumped through pipes.
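Taking the figures in this paragraph at face value (eight servers per 6 RU chassis), a rough density comparison with 1 RU "pizza box" servers might look like the sketch below; the numbers are those cited above, not any vendor's specifications.

    # Density comparison, using the figures cited in the text:
    # a 42 RU rack of 1 RU "pizza boxes" versus 6 RU blade chassis
    # holding eight servers each.
    RACK_RU = 42

    pizza_boxes = RACK_RU // 1            # 42 servers per rack
    blade_chassis = RACK_RU // 6          # 7 chassis per rack
    blade_servers = blade_chassis * 8     # 56 servers per rack

    print(pizza_boxes, blade_servers)     # 42 vs. 56 servers in one rack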
Also improving density, the chassis may contain integrated routing and switching, interconnecting the blades through their board sockets rather than through external network cabling. It may also be equipped with redundant, fault-tolerant shared power supplies rather than a power supply per server.
Form factor alternatives
Using the full-sized form factor for blade servers, the break-even point compared to standalone servers is often considered to be in the range of 5 to 6 servers per blade chassis. Mini blade servers, such as products from MDS Micro, put two or four servers into 1 or 2 RU respectively. Compared to a full-size solution, the mini alternative can put 12 servers into the same rack space a full-sized chassis occupies, at a lower initial cost but with more overhead for per-chassis power and management.
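To see where the 12-server figure comes from, compare equal rack space under the counts given above: a full-sized chassis occupies 6 RU and holds eight servers, while six 1 RU mini chassis of two servers each fill the same 6 RU with twelve. A sketch, assuming those figures:

    # Same 6 RU of rack space, two packaging strategies
    # (figures from the text: 8 servers per 6 RU full-sized chassis;
    # 2 servers per 1 RU mini chassis).
    SPACE_RU = 6

    full_size_servers = (SPACE_RU // 6) * 8   # 8 servers, 1 chassis to manage
    mini_servers = (SPACE_RU // 1) * 2        # 12 servers, but 6 chassis to
                                              # power and manage separately
    print(full_size_servers, mini_servers)    # 8 vs. 12 servers in 6 RU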
The MDS products can also have 40 Gbps InfiniBand storage area network connectivity; if that is not needed, they have two Gigabit Ethernet interfaces.[2] Other blade servers may use the evolving 40 Gigabit or 100 Gigabit Ethernet interfaces rather than InfiniBand or Fibre Channel "fabric" interfaces. They lack the network switching and some of the management that has been standard on full-sized blade servers, but are intended to connect to networked storage via a device that "mixes and matches" Gigabit Ethernet, 10 Gigabit Ethernet and Fibre Channel speeds, such as the Xsigo I/O Director[3] or Cisco FabricPath.[4]
References
- ↑ "Choosing the Appropriate x86 Server Form Factor", MCPc Blog, 21 January 2010
- ↑ Rick Vanover (27 July 2010), "Pros and cons of mini blades vs. full blades", Tech Republic
- ↑ Virtual I/O overview, Xsigo Systems
- ↑ Scaling Data Centers with FabricPath and the Cisco FabricPath Switching System, Cisco Systems