
Technical overview: an analysis of storage link transmission speeds

Depending on the type of storage required, converged storage may combine several technologies. The most common is the Fibre Channel storage area network (SAN). Most large enterprises have deployed this well-proven, reliable, high-performance block-level storage network. Over the years, Fibre Channel has gone through several generations, with link speeds rising from 1 Gbps to 2 Gbps; now the first beta products supporting 4 Gbps links are appearing. Besides connecting storage devices, switches, and servers, Fibre Channel links can also tie the blades of a blade server system together and to storage devices. These blade systems carry a second set of links as well: Ethernet, used for non-block traffic such as file-level transfers and system management. Two connection technologies plus redundancy requirements mean that a blade system contains two redundant switching fabrics. At link speeds from 1G to 4G, the cost of HBA or controller chips and optical switch ports makes this architecture economically viable. Of course, compared with a single connection technology, it does nothing to simplify the management of a multi-fabric system.
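To put those generations in perspective, the arithmetic below converts Fibre Channel line rates into approximate usable bandwidth. This is a sketch: the line rates and the 8b/10b coding overhead are standard FC figures, but the names and structure are illustrative, and real throughput is slightly lower after framing overhead.

```python
# Approximate usable bandwidth per Fibre Channel generation.
# Line rates (Gbaud) and the 8b/10b coding overhead are standard FC
# figures; the dictionary and function names are illustrative.
FC_LINE_RATE_GBAUD = {"1GFC": 1.0625, "2GFC": 2.125, "4GFC": 4.25}

def fc_payload_mb_per_s(generation: str) -> float:
    """Raw payload rate in MB/s per direction, before framing overhead."""
    gbaud = FC_LINE_RATE_GBAUD[generation]
    bits_per_s = gbaud * 1e9 * 8 / 10   # 8b/10b: 8 data bits per 10 line bits
    return bits_per_s / 8 / 1e6         # bits -> bytes -> MB

for gen in FC_LINE_RATE_GBAUD:
    print(f"{gen}: ~{fc_payload_mb_per_s(gen):.1f} MB/s")
```

Each doubling of the line rate doubles the payload rate, which is why the 1G/2G/4G family could share one optical infrastructure.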

Another multi-technology storage scheme uses network attached storage (NAS) devices, also known as filers. These devices are better suited to file-level than block-level storage transfers. Until about a year ago, all filers offered only Ethernet connections on the front end, while a filer's internal disk storage could be expanded through back-end Fibre Channel ports. As with the filer's internal storage, all such expansion is block-level (the filer presents the stored data as a file-level view). Filer vendors have therefore added front-end block-level access through front-end Fibre Channel ports, or iSCSI access through iSCSI target-mode software running over the filer's front-end Ethernet ports. Like a blade server, at link speeds of 1G to 4G a filer can support converged block/file storage through multiple HBAs.

How fast will the next generation of links be: 10G or 8G?

Fibre Channel 10G ports first appeared in host bus adapter prototype demonstrations in 2002; in September 2003, the first Fibre Channel 10G switch ports were demonstrated. In those demonstrations the switch ports operated in two modes: as ISLs (inter-switch links) between large Fibre Channel switches, or in host-connect mode for interoperability testing with FC HBAs. The assumption at the time was that the next migration after 2G FC would go straight to 10G. Several issues, however, cast doubt on that assumption and favored 4G instead. First, the drive side of Fibre Channel changed: drive interfaces migrated to 4G, and 4G SerDes chip cores were developed, enabling the other functional components of the 4G infrastructure in switches and HBAs to reach 4G. Second, 10G is not backward compatible with 1G and 2G. In the migration from 1G to 2G and 4G, the optical infrastructure stays unchanged: existing 50/125 and 62.5/125 micron multimode cables handle short-reach applications (up to 550 meters). At 10G, however, reach over this cable cannot exceed 100 meters, and in the worst case is only 33 meters. To get the best 10G multimode performance, a new cable specification was defined: 850 nm, laser-optimized, 50/125 micron fiber with an effective modal bandwidth of 2000 MHz·km. This fiber reaches 300 meters with a 10GBASE-S interface. For a more detailed discussion, see the 10G Ethernet Alliance site (), which covers some very important cabling-infrastructure considerations when deploying 10G alongside legacy 1G and 100M links.
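The reach figures quoted above can be captured in a small lookup, so a planner can check whether an existing cable run survives the jump to 10G. This is a sketch: the distances are the figures cited in the text, while the dictionary keys and helper name are made up for illustration.

```python
# 10G multimode reach limits as cited in the article (meters).
# Distances come from the text for 10GBASE-S; names are illustrative.
REACH_AT_10G_M = {
    "legacy 62.5/125 MMF": 33,                        # worst case cited
    "legacy 50/125 MMF": 100,                         # upper bound cited
    "laser-optimized 50/125 (2000 MHz*km)": 300,      # new cable spec
}
REACH_AT_1G_M = 550  # legacy multimode reach at 1G/2G/4G, per the article

def run_survives_10g(fiber: str, run_length_m: float) -> bool:
    """True if an existing cable run stays within the 10G reach limit."""
    return run_length_m <= REACH_AT_10G_M[fiber]

# A 200 m run that was fine at 1G fails at 10G on legacy 50/125 fiber,
# but works on the laser-optimized fiber.
print(run_survives_10g("legacy 50/125 MMF", 200))
print(run_survives_10g("laser-optimized 50/125 (2000 MHz*km)", 200))
```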

Therefore, although 10G Fibre Channel technology works, the high cost of 10G links (10G optical transceivers currently run about $1000) and the fact that few users need that bandwidth will keep FC SAN 10G applications limited to ISLs for the near future, and perhaps for years.

10G Ethernet? On the Ethernet side, 10GbE ports have been on the market for two years, mostly used as ISLs. If FC SANs will not need 10G bandwidth soon, will the momentum behind Ethernet/IP storage drive 10G host connectivity instead? Yes. As discussed, "converged storage" has seen initial deployment over the past few years, chiefly in devices that combine block-level and file-level storage. Given the cost of a 10G optical port, it is hard to justify a 10G link for a single traffic type (block or file). But if multiple traffic types can share a single HBA, a single 10G link to a converged storage device makes the cost workable. We can call this single multi-traffic pipe a "unified wire." The first device likely to use it is a NAS filer with added block-storage support. An obvious advantage of this approach is that it eliminates separate file and block storage networks; with iSCSI block connectivity, it also eliminates separate Fibre Channel and Ethernet infrastructures. Why does converged storage use Ethernet rather than Fibre Channel? Because Fibre Channel supports only block traffic: it can tunnel TCP/IP traffic, but it cannot accelerate the IP stack in the HBA.
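One way to see the unified-wire economics is simply to count the redundant ports and fabrics each server needs. The per-server counts below are illustrative assumptions, not figures from the article; they sketch the consolidation argument, not a real bill of materials.

```python
# Hypothetical per-server connectivity, with and without a unified wire.
# Counts are illustrative: a redundant pair of each link type.
separate_fabrics = {"FC HBA ports": 2, "Ethernet ports": 2}   # two networks
unified_wire = {"10G unified-wire ports": 2}                   # one network

def total_ports(config: dict) -> int:
    """Number of physical host ports a server must dedicate to storage I/O."""
    return sum(config.values())

print("separate block/file networks:", total_ports(separate_fabrics), "ports")
print("unified wire:", total_ports(unified_wire), "ports")
```

The point is not the raw port count but that the unified wire removes an entire parallel switching fabric, which is where the 10G optic's cost gets recovered.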

Why must TCP/IP traffic be handled in hardware at 10G link speeds? Consider current iSCSI implementations: software iSCSI stacks are available for most operating systems, and hardware HBAs implement all iSCSI functions on the card. At 1G, software and hardware perform very similarly; the difference is the host CPU consumed by running the stack. A hardware implementation needs only about 5% of the CPU, while software may consume 40% or even more, depending on block size. At 1G, in most cases, software is enough.
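A back-of-envelope extrapolation shows why software alone stops being "enough" at 10G. This sketch assumes the stack's CPU cost scales roughly linearly with link speed; the 40% and 5% baselines at 1G come from the text, and the function name is made up.

```python
# Assumption: stack CPU cost scales roughly linearly with link speed.
# The 1G baselines (40% software, 5% hardware offload) come from the article.
def stack_cpu_share(link_gbps: float, share_at_1g: float) -> float:
    """Estimated fraction of one host CPU consumed by the storage stack."""
    return share_at_1g * link_gbps

software_at_10g = stack_cpu_share(10, 0.40)
print(f"software stack at 10G: {software_at_10g:.1f} CPUs' worth of cycles")
# A full-offload HBA does not scale this way on the host: the protocol work
# runs on the card, so the host's ~5% share stays roughly flat.
```

A pure software stack would need about four CPUs' worth of cycles at 10G, which is the argument for moving TCP/IP into HBA hardware at that speed.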

At present it is not easy to develop a driver stack that handles both accelerated block and file traffic; drivers for multiple traffic types are very complex. Some startups, however, are already developing chips and designing drivers for such an HBA, and standards bodies are exploring several models to find the best way to achieve this. The following diagram sketches the various approaches:

Ethernet unified-wire HBA and switch architecture

Today, several vendors offer iSCSI HBAs. In early 2005, the first combined unified-wire HBAs and ASICs will reach the market. As the different approaches are debated and tested, the flexibility of a solution becomes crucial. The architecture diagram above introduces RDMA (Remote Direct Memory Access), which places data directly into the application's buffers, eliminating the multiple data copies that non-RDMA transfers require. Will the approach being discussed in the IETF of carrying all traffic over RDMA win out? Or will RDMA win only for file (non-block) traffic? Only time will tell. There is also a contrarian view that traditional large Fibre Channel SANs will deploy 10G Fibre Channel only for ISLs: 8G Fibre Channel may be the last Fibre Channel high-speed host connection, and 10G host connections may appear only in the Ethernet domain.

Whatever the outcome, the next few years will be exciting as all these new technologies are introduced and adopted. The storage field will not be dull.

Sidebar: 10G Ethernet over copper

Several technologies can carry a 10G signal over copper cabling. One, derived from InfiniBand, is called CX4.

· How it works: 10GBASE-CX4, the economical first step

Known as 802.3ak, 10GBASE-CX4 extends the XAUI interface, originally designed for chip-to-chip communication, over pre-emphasized, equalized twinax cable to distances of up to 50 feet.

1. The technology uses the same 10-gigabit MAC, XGMII interface, and XAUI encoder/decoder specified in 802.3, splitting the signal into four separate lanes at 3.125 Gbaud each.

2. Transmit pre-emphasis boosts the high-frequency components to compensate for losses in the PCB traces, connectors, and cable assembly.

3. Although the connector and cable assembly were designed for InfiniBand, 802.3ak specifies them to meet its own link requirements.

4. The signal, attenuated by the cable assembly, gets a final boost from the receive equalizer.
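The lane arithmetic in step 1 checks out: four lanes of 3.125 Gbaud with 8b/10b coding carry exactly 10 Gb/s of data. A minimal sketch (variable names are illustrative):

```python
# Four XAUI/CX4 lanes at 3.125 Gbaud with 8b/10b coding carry 10 Gb/s.
lanes = 4
gbaud_per_lane = 3.125                 # signaling rate per lane
payload_gbps = lanes * gbaud_per_lane * 8 / 10  # 8 data bits per 10 line bits
print(payload_gbps, "Gb/s")            # 10.0 Gb/s
```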

CX4 may suit rack-to-rack systems, but the distances it supports are too short for broader data-center use. Another drawback is that the bulky shielded CX4 cable makes cabling harder. With few switch vendors firmly committed, CX4 support in 10G Ethernet switches is still under discussion.

What are the prospects for 10G over unshielded twisted pair (UTP) cable? So far, every 10x jump in Ethernet speed has eventually reached UTP, and UTP support is the key to wide adoption of this technology. Some startups have begun developing 10G-over-UTP technology and have demonstrated prototypes. Supporting 10G over Category 5 UTP currently looks impossible, but it is generally believed that 10G can be carried up to 35 meters over Category 6e or Category 7 cable.

Sidebar: 10G optics

Current 10G solutions use XPAK or X2 optical modules. These devices support the four-lane XAUI interface and keep the 10G signals well shielded inside the module, but they are very expensive. In the future we will see 10G optical ports in the smaller XFP module, which uses LC-size connectors similar to today's 2G and 4G optics. The input to these modules is a single 10G serial signal carried over the XFI interface on the PC board. Yes, this is feasible, and it has already been demonstrated. XFP optics are expected to bring costs down sharply. (End)
