InfiniBand

  • Name: InfiniBand
  • Basic Features: Low latency (sub-microsecond); high bandwidth (HDR: 200 Gbps, NDR: 400 Gbps, XDR: 800 Gbps); RDMA (Remote Direct Memory Access) support; QoS (Quality of Service) and error correction mechanisms
  • Advantages: Low CPU usage; high data transfer rate
  • Disadvantages: High setup cost; specialized hardware requirement

InfiniBand is a system interconnect architecture that provides high bandwidth, low latency, and high reliability. It was initially developed in the early 2000s as a merger of the Future I/O and Next Generation I/O initiatives, designed to overcome the limitations of traditional bus technologies such as PCI, PCI-X, and AGP. It is widely used in data centers, high-performance computing (HPC), supercomputers, and artificial intelligence infrastructure to enable high-speed data transfer between servers, storage systems, and network infrastructure.

Architecture

The InfiniBand architecture has a layered structure similar to the OSI model. This structure consists of five fundamental layers:

  1. Physical Layer: Provides transmission over copper or optical cables. Data can be transmitted in parallel over multiple paths known as lanes.
  2. Link Layer: Responsible for the transmission of data frames (packets). It includes mechanisms for error detection, flow control, and security.
  3. Network Layer: Handles packet routing and network topology management.
  4. Transport Layer: Supports high-performance data transfer methods such as RDMA and send/receive operations.
  5. Upper Layer Protocols: Enables communication with high-level applications such as MPI (Message Passing Interface).

InfiniBand can operate not only in point-to-point connections but also in switch-based topologies such as fat-tree, mesh, and torus. Fabric configuration, including address (LID) assignment and routing, is managed by the Subnet Manager (SM).
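On Linux, these layers are exposed to applications through the Verbs API (libibverbs). As a minimal sketch, assuming a host with an InfiniBand adapter and the libibverbs development headers installed, the following program lists the local devices and prints each port's state, its LID (assigned by the Subnet Manager), and its active link width and speed:

```c
/* Minimal device/port query using libibverbs.
 * Build with: cc ibquery.c -o ibquery -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        struct ibv_port_attr port_attr;

        if (ibv_query_device(ctx, &dev_attr) == 0 &&
            ibv_query_port(ctx, 1, &port_attr) == 0) {  /* port numbers start at 1 */
            printf("%s: ports=%u state=%d lid=%u width=%u speed=%u\n",
                   ibv_get_device_name(devices[i]),
                   (unsigned)dev_attr.phys_port_cnt,
                   (int)port_attr.state,            /* IBV_PORT_ACTIVE (4) once the SM has configured the port */
                   (unsigned)port_attr.lid,         /* local identifier assigned by the Subnet Manager */
                   (unsigned)port_attr.active_width,  /* encoded enumerations, not Gbps */
                   (unsigned)port_attr.active_speed);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}
```

A full RDMA transfer would additionally register memory regions (ibv_reg_mr), create queue pairs (ibv_create_qp), and exchange connection details out of band; the query above is only meant to show how the fabric state set up by the Subnet Manager becomes visible to an application.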

Performance and Technical Specifications

InfiniBand stands out due to its high-speed data transfer and low latency. Different speeds are achieved through lane configurations of 1X, 4X, 8X, and 12X. Starting with SDR (Single Data Rate – 2.5 Gbps), InfiniBand technology now supports advanced standards such as HDR (High Data Rate – 200 Gbps), NDR (Next Data Rate – 400 Gbps), and the planned XDR (eXtreme Data Rate – 800 Gbps).

  • Latency: Measured at sub-microsecond levels.
  • Bandwidth: Can reach hundreds of Gbps.
  • RDMA: Enables direct memory-to-memory data transfer without CPU involvement.
  • QoS: Provides quality of service and prioritization.
  • Data Integrity: Includes CRC, error detection, and correction mechanisms.

Due to these features, InfiniBand is particularly preferred in HPC and data-intensive applications.
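As a rough arithmetic sketch of how the lane configurations above translate into link bandwidth: each generation defines a per-lane signaling rate, and a port aggregates 1, 4, 8, or 12 lanes. The per-lane figures below are commonly cited nominal rates (encoding overhead is ignored), so the results are approximations rather than exact wire throughput:

```c
/* Back-of-the-envelope InfiniBand link bandwidth: per-lane rate x lane count.
 * Per-lane figures are nominal values; encoding overhead is ignored. */
#include <stdio.h>

int main(void)
{
    struct { const char *gen; double per_lane_gbps; } gens[] = {
        {"SDR", 2.5}, {"DDR", 5.0}, {"QDR", 10.0}, {"FDR", 14.0},
        {"EDR", 25.0}, {"HDR", 50.0}, {"NDR", 100.0}, {"XDR", 200.0},
    };
    int lanes[] = {1, 4, 8, 12};

    for (size_t g = 0; g < sizeof gens / sizeof gens[0]; g++) {
        printf("%-4s", gens[g].gen);
        for (size_t l = 0; l < sizeof lanes / sizeof lanes[0]; l++)
            printf("  %dX: %6.1f Gbps", lanes[l], gens[g].per_lane_gbps * lanes[l]);
        printf("\n");
    }
    return 0;
}
```

The HDR 200 Gbps, NDR 400 Gbps, and XDR 800 Gbps figures quoted in this article correspond to the common 4X port width (4 lanes at 50, 100, and 200 Gbps per lane, respectively).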

InfiniBand vs Ethernet

InfiniBand is preferred in systems that require very low latency and high throughput, while Ethernet remains the more cost-effective choice for general-purpose networking because of its broad, commodity ecosystem.

Applications

InfiniBand technology is widely used in the following areas:

  • High-Performance Computing (HPC): Serves as the primary interconnect in supercomputer clusters, typically underneath MPI (see the sketch after this list).
  • Data Centers: Enables high-speed data transfer in storage networks (SAN/NAS) and blade server systems.
  • Machine Learning and Artificial Intelligence: Provides high bandwidth between GPU clusters for training on large datasets.
  • Financial Systems: Preferred in high-frequency trading applications due to its low latency.
  • Universities and Research Institutions: Used for scientific simulations and parallel computing applications.
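Because MPI is the dominant programming model in HPC clusters, a minimal MPI ping-pong between two ranks illustrates the kind of traffic that typically rides on InfiniBand. This is a generic MPI sketch; whether it actually runs over InfiniBand rather than TCP depends on how the MPI library was built and configured (for example with verbs or UCX support), which is an assumption about the deployment rather than something the code controls:

```c
/* Minimal MPI ping-pong between rank 0 and rank 1.
 * Build with an MPI compiler wrapper, e.g.: mpicc pingpong.c -o pingpong
 * Run with two processes,             e.g.: mpirun -np 2 ./pingpong
 * Transport selection (InfiniBand vs. TCP) is decided by the MPI library. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        int payload = 0;
        if (rank == 0) {
            payload = 42;
            MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* ping */
            MPI_Recv(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);                            /* pong */
            printf("rank 0 received %d back\n", payload);
        } else if (rank == 1) {
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            payload += 1;
            MPI_Send(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}
```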

Author Information

Cihat Demirel, December 8, 2025, 1:25 PM

