ADAPTER CARDS

BENEFITS
– One adapter for InfiniBand, 10 Gigabit Ethernet or Data Center Bridging fabrics
– World-class cluster performance
– High-performance networking and storage access
– Guaranteed bandwidth and low-latency services
– Reliable transport
– I/O consolidation
– Virtualization acceleration
– Scales to tens-of-thousands of nodes
ConnectX-2 adapter cards with Virtual Protocol Interconnect (VPI), supporting InfiniBand and Ethernet connectivity, provide the highest performing and most flexible interconnect solution for Enterprise Data Centers, High-Performance Computing, and Embedded environments. Clustered databases, parallel processing, transactional services and high-performance embedded I/O applications will achieve significant performance improvements, resulting in reduced completion time and lower cost per operation. ConnectX-2 with VPI also simplifies network deployment by consolidating cables and enhancing performance in virtualized server environments.
Virtual Protocol Interconnect
VPI-enabled adapters make it possible for any standard networking, clustering, storage, and management protocol to seamlessly operate over any converged network leveraging a consolidated software stack. With auto-sense capability, each ConnectX-2 port can identify and operate on InfiniBand, Ethernet, or Data Center Bridging (DCB) fabrics. FlexBoot™ provides additional flexibility by enabling servers to boot from remote InfiniBand or LAN storage targets. ConnectX-2 with VPI and FlexBoot simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.
World-Class Performance
InfiniBand — ConnectX-2 delivers low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications. Efficient computing is achieved by offloading routine activities from the CPU, which leaves more processor power for the application. Network protocol processing and data movement overhead such as InfiniBand RDMA and Send/Receive semantics are completed in the adapter without CPU intervention. CORE-Direct™ brings the next level of performance improvement by offloading application overhead (e.g. MPI collective operations), such as data broadcasting and gathering as well as global synchronization communication routines. GPU communication acceleration provides additional efficiencies by eliminating unnecessary internal data copies, significantly reducing application run time. ConnectX-2 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
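As a back-of-the-envelope illustration of the 10/20/40Gb/s link rates quoted in this brief, the sketch below applies the standard 8b/10b line encoding used by SDR/DDR/QDR InfiniBand (the encoding factor is general InfiniBand background, not a figure from this datasheet):

```python
# Effective data rate of a 4X InfiniBand link after 8b/10b encoding.
# The 10/20/40 Gb/s figures are raw signaling rates; SDR/DDR/QDR links
# use 8b/10b encoding, so 8 of every 10 transferred bits carry data.

def effective_rate_gbps(raw_gbps: float, encoding_efficiency: float = 8 / 10) -> float:
    """Return the usable data rate in Gb/s for a given raw signaling rate."""
    return raw_gbps * encoding_efficiency

for name, raw in [("SDR 4X", 10), ("DDR 4X", 20), ("QDR 4X", 40)]:
    print(f"{name}: {raw} Gb/s raw -> {effective_rate_gbps(raw):.0f} Gb/s data")
```

At QDR, a 40Gb/s 4X port therefore carries 32Gb/s (4GB/s) of application data per direction.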
RDMA over Converged Ethernet — ConnectX-2, utilizing IBTA RoCE technology, delivers similar low latency and high performance over Ethernet networks. Leveraging Data Center Bridging capabilities, RoCE provides efficient low-latency RDMA services over Layer 2 Ethernet. The RoCE software stack maintains existing and future compatibility with bandwidth- and latency-sensitive applications. With link-level interoperability in existing Ethernet infrastructure, network administrators can leverage existing data center fabric management solutions.
TCP/UDP/IP Acceleration — Applications utilizing TCP/UDP/IP transport can achieve industry-leading throughput over InfiniBand or 10GigE. The hardware-based stateless offload engines in ConnectX-2 reduce the CPU overhead of IP packet transport, freeing more processor cycles to work on the application.
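To see why per-packet CPU overhead matters at these line rates, a rough packet-rate calculation helps (the frame sizes below are illustrative; both the stateless offload engines and the adapter's 10KB jumbo-frame support reduce the per-packet work left to the host):

```python
# Approximate packets per second at 10 Gb/s line rate for different frame sizes.
# Every packet costs the host CPU a roughly fixed amount of work (interrupts,
# header processing), so higher packet rates mean higher CPU overhead unless
# the NIC offloads or coalesces that work.

def packets_per_second(line_rate_bps: float, frame_bytes: int) -> float:
    return line_rate_bps / (frame_bytes * 8)

for frame in (1500, 9000):  # standard vs. jumbo Ethernet frame payloads
    print(f"{frame} B frames: ~{packets_per_second(10e9, frame):,.0f} packets/s")
```

At 1500-byte frames a 10GigE link saturates at roughly 833,000 packets per second; jumbo frames cut that by about 6x, and the hardware offloads remove most of the remaining per-packet cost.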
ConnectX®-2 VPI with CORE-Direct™ Technology
Single/Dual-Port Adapters with Virtual Protocol Interconnect
KEY FEATURES
– Virtual Protocol Interconnect
– 1us MPI ping latency
– Selectable 10, 20, or 40Gb/s InfiniBand or 10 Gigabit Ethernet per port
– Single- and Dual-Port options available
– PCI Express 2.0 (up to 5GT/s)
– CPU offload of transport operations
– CORE-Direct application offload
– GPU communication acceleration
– End-to-end QoS and congestion control
– Hardware-based I/O virtualization
– Fibre Channel encapsulation (FCoIB or FCoE)
– RoHS-R6
ConnectX-2 VPI Adapter Cards
© Copyright 2010. Mellanox Technologies. All rights reserved. Mellanox, BridgeX, ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, InfiniPCI, and Virtual Protocol Interconnect are registered trademarks of Mellanox Technologies, Ltd. CORE-Direct, FabricIT, and PhyX are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400 • Fax: 408-970-3403
www.mellanox.com
I/O Virtualization — ConnectX-2 with Virtual Intelligent Queuing (Virtual-IQ) technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VM) within the server. I/O virtualization with ConnectX-2 gives data center managers better server utilization and LAN and SAN unification while reducing cost, power, and cable complexity.
Storage Accelerated — A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage InfiniBand RDMA for high-performance storage access. T11-compliant encapsulation (FCoIB or FCoE) with full hardware offloads simplifies the storage network while keeping existing Fibre Channel targets.
Software Support
All Mellanox adapter cards are supported by a full suite of drivers for Microsoft Windows, Linux distributions, VMware, and Citrix XenServer. ConnectX-2 VPI adapters support OpenFabrics-based RDMA protocols and software. Stateless offloads are fully interoperable with standard TCP/UDP/IP stacks. ConnectX-2 VPI adapters are compatible with configuration and management tools from OEMs and operating system vendors.
FEATURE SUMMARY
INFINIBAND
– IBTA Specification 1.2.1 compliant
– RDMA, Send/Receive semantics
– Hardware-based congestion control
– Atomic operations
– 16 million I/O channels
– 256 to 4Kbyte MTU, 1Gbyte messages
– 9 virtual lanes: 8 data + 1 management
ENHANCED INFINIBAND
– Hardware-based reliable transport
– Collective operations offloads
– GPU communication acceleration
– Hardware-based reliable multicast
– Extended Reliable Connected transport
– Enhanced Atomic operations
ETHERNET
– IEEE 802.3ae 10 Gigabit Ethernet
– IEEE 802.3ad Link Aggregation and Failover
– IEEE 802.1Q, .1p VLAN tags and priority
– IEEE P802.1au D2.0 Congestion Notification
– IEEE P802.1az D0.2 ETS
– IEEE P802.1bb D1.0 Priority-based Flow Control
– Jumbo frame support (10KB)
– 128 MAC/VLAN addresses per port
HARDWARE-BASED I/O VIRTUALIZATION
– Single Root IOV
– Address translation and protection
– Dedicated adapter resources
– Multiple queues per virtual machine
– Enhanced QoS for vNICs
– VMware NetQueue support
ADDITIONAL CPU OFFLOADS
– RDMA over Converged Ethernet
– TCP/UDP/IP stateless offload
– Intelligent interrupt coalescence
STORAGE SUPPORT
– T11.3 FC-BB-5 FCoE
FLEXBOOT™ TECHNOLOGY
– Remote boot over InfiniBand
– Remote boot over Ethernet
– Remote boot over iSCSI
COMPATIBILITY

PCI EXPRESS INTERFACE
– PCIe Base 2.0 compliant, 1.1 compatible
– 2.5GT/s or 5.0GT/s link rate x8 (20+20Gb/s or 40+40Gb/s bidirectional bandwidth)
– Auto-negotiates to x8, x4, x2, or x1
– Support for MSI/MSI-X mechanisms
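The 40+40Gb/s figure for a 5.0GT/s x8 link is the raw signaling rate; PCIe 2.0 also uses 8b/10b line encoding, so the usable bandwidth per direction is lower. A minimal sketch, with the standard Gen2 encoding factor assumed:

```python
# PCIe 2.0 x8 bandwidth: raw signaling vs. usable data rate.
# Gen2 runs at 5.0 GT/s per lane with 8b/10b encoding
# (8 data bits for every 10 bits transferred on the wire).

LANES = 8
GT_PER_S = 5.0        # PCIe 2.0 transfer rate per lane
ENCODING = 8 / 10     # 8b/10b line encoding

raw_gbps = GT_PER_S * LANES       # 40 Gb/s per direction, as quoted
data_gbps = raw_gbps * ENCODING   # 32 Gb/s usable per direction
print(f"raw: {raw_gbps:.0f} Gb/s/direction, "
      f"usable: {data_gbps:.0f} Gb/s/direction ({data_gbps / 8:.0f} GB/s)")
```

The resulting 32Gb/s per direction roughly matches the 32Gb/s data rate of a 40Gb/s QDR InfiniBand port, which is why a full x8 PCIe 2.0 slot is needed to keep the fastest port busy.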
CONNECTIVITY
– Interoperable with InfiniBand or 10GigE switches
– microGiGaCN or QSFP connectors
– 20m+ (10Gb/s), 10m+ (20Gb/s) or 7m+ (40Gb/s InfiniBand or 10GigE) of passive copper cable
– External optical media adapter and active cable support
– QSFP to SFP+ connectivity through QSA module
MANAGEMENT AND TOOLS
InfiniBand
– OpenSM
– Interoperable with third-party subnet managers
– Firmware and debug tools (MFT, IBDIAG)
Ethernet
– MIB, MIB-II, MIB-II Extensions, RMON, RMON2
– Configuration and diagnostic tools
OPERATING SYSTEMS/DISTRIBUTIONS
– Novell SLES, Red Hat Enterprise Linux (RHEL), Fedora, and other Linux distributions
– Microsoft Windows Server 2003/2008/CCS 2003
– OpenFabrics Enterprise Distribution (OFED)
– OpenFabrics Windows Distribution (WinOF)
– VMware ESX Server 3.5/vSphere 4.0
PROTOCOL SUPPORT
– Open MPI, OSU MVAPICH, Intel MPI, MS MPI, Platform MPI
– TCP/UDP, EoIB, IPoIB, SDP, RDS
– SRP, iSER, NFS RDMA, FCoIB, FCoE
– uDAPL
2770PB Rev 3.5
Ordering Part Number   Ports                                    Power (Typ)
MHGH19B-XTR            Single 4X 20Gb/s InfiniBand or 10GigE    6.7W
MHRH19B-XTR            Single 4X QSFP 20Gb/s InfiniBand         6.7W
MHQH19B-XTR            Single 4X QSFP 40Gb/s InfiniBand         7.0W
Ordering Part Number   Ports                                    Power (Typ)
MHGH29B-XTR            Dual 4X 20Gb/s InfiniBand or 10GigE      8.1W (both ports)
MHRH29B-XTR            Dual 4X QSFP 20Gb/s InfiniBand           8.1W (both ports)
MHQH29B-XTR            Dual 4X QSFP 40Gb/s InfiniBand           8.8W (both ports)
MHZH29-XTR             4X QSFP 40Gb/s InfiniBand, SFP+ 10GigE   8.0W (both ports)