InfiniBand is a system interconnect architecture that provides high bandwidth, low latency, and high reliability. It was initially developed in the early 2000s as a merger of the Future I/O and Next Generation I/O initiatives, designed to overcome the limitations of traditional bus technologies such as PCI, PCI-X, and AGP. It is widely used in data centers, high-performance computing (HPC), supercomputers, and artificial intelligence infrastructure to enable high-speed data transfer between servers, storage systems, and network infrastructure.

Architecture

The InfiniBand architecture has a layered structure similar to the OSI model. This structure consists of five fundamental layers: the physical layer, link layer, network layer, transport layer, and upper layers.
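To make the layering concrete, here is a minimal sketch that touches the top of this stack from an application's point of view. It uses the OpenFabrics libibverbs library, a common user-space interface to InfiniBand adapters rather than anything mandated by this article, to list adapters and query a port; it assumes a Linux system with libibverbs installed and is linked with -libverbs.

/* Minimal sketch: enumerate InfiniBand devices and query port 1.
 * Assumes libibverbs (OpenFabrics) is installed; link with -libverbs. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "No RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {
            printf("%s: state=%d, active_mtu=%d, lid=%u\n",
                   ibv_get_device_name(devices[i]),
                   port.state, port.active_mtu, (unsigned) port.lid);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}

Everything below these calls (transport, network, link, and physical handling) is carried out by the adapter and its driver rather than by application code.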
Beyond point-to-point connections, InfiniBand also supports switch-based topologies such as fat-tree, mesh, and torus. The fabric configuration is managed through the Subnet Manager (SM).
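As a rough illustration of how switch-based topologies scale, the sketch below computes the size of a non-blocking two-level fat-tree built from identical switches: each leaf switch devotes half of its ports to hosts and half to uplinks, giving at most radix squared over two end nodes. The radix values are illustrative examples, not figures from this article.

/* Sketch: capacity of a non-blocking two-level fat-tree built from
 * identical switches with 'radix' ports. Each leaf switch uses half its
 * ports for hosts and half for uplinks to spine switches.
 * The radix values below are illustrative, not taken from the article. */
#include <stdio.h>

static void fat_tree_capacity(int radix)
{
    int hosts_per_leaf = radix / 2;
    int spine_switches = radix / 2;    /* one uplink per spine from each leaf */
    int leaf_switches  = radix;        /* each spine port reaches one leaf */
    int max_hosts      = leaf_switches * hosts_per_leaf;  /* = radix^2 / 2 */

    printf("radix %2d: %d leaves, %d spines, up to %d end nodes\n",
           radix, leaf_switches, spine_switches, max_hosts);
}

int main(void)
{
    int radixes[] = {36, 40, 64};      /* example switch port counts */
    for (unsigned i = 0; i < sizeof radixes / sizeof radixes[0]; i++)
        fat_tree_capacity(radixes[i]);
    return 0;
}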

Performance and Technical Specifications

InfiniBand stands out for its high-speed data transfer and low latency. Different speeds are achieved through lane configurations of 1X, 4X, 8X, and 12X. Starting with SDR (Single Data Rate – 2.5 Gbps per lane), InfiniBand technology now supports advanced standards such as HDR (High Data Rate – 200 Gbps), NDR (Next Data Rate – 400 Gbps), and the planned XDR (eXtreme Data Rate – 800 Gbps); these headline figures refer to 4X links.
Due to these features, InfiniBand is particularly preferred in HPC and data-intensive applications.
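As a back-of-the-envelope check on the rates quoted above, the sketch below multiplies a per-lane signaling rate by the lane count (1X, 4X, 8X, 12X). The per-lane figures are the commonly published nominal rates and are an assumption of this example; effective throughput is somewhat lower because of encoding overhead.

/* Sketch: nominal aggregate signaling rate = per-lane rate x lane count.
 * Per-lane rates (Gbps) are commonly published nominal figures (assumed
 * here); effective data rates are lower due to line-encoding overhead. */
#include <stdio.h>

struct generation { const char *name; double gbps_per_lane; };

int main(void)
{
    struct generation gens[] = {
        {"SDR", 2.5}, {"DDR", 5.0}, {"QDR", 10.0}, {"FDR", 14.0},
        {"EDR", 25.0}, {"HDR", 50.0}, {"NDR", 100.0}, {"XDR", 200.0},
    };
    int lanes[] = {1, 4, 8, 12};

    for (unsigned g = 0; g < sizeof gens / sizeof gens[0]; g++) {
        printf("%-4s", gens[g].name);
        for (unsigned l = 0; l < sizeof lanes / sizeof lanes[0]; l++)
            printf("  %2dX: %6.1f Gbps", lanes[l],
                   gens[g].gbps_per_lane * lanes[l]);
        printf("\n");
    }
    return 0;
}

With these per-lane assumptions, a 4X HDR link works out to 200 Gbps and a 4X NDR link to 400 Gbps, matching the headline figures quoted earlier.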

InfiniBand vs Ethernet

InfiniBand is preferred in systems requiring low latency and high throughput, while Ethernet is used for broader, more cost-effective solutions.

Applications

InfiniBand technology is widely used in the following areas:
- High-performance computing (HPC) clusters and supercomputers
- Data centers and server-to-server interconnects
- High-performance storage systems
- Artificial intelligence and machine learning infrastructure
