Design and Implementation of an InfiniBand System Interconnect for High-Performance Cluster Systems


The KIPS Transactions:PartA, Vol. 10, No. 4, pp. 389-396, Oct. 2003
DOI: 10.3745/KIPSTA.2003.10.4.389

Abstract

InfiniBand technology is being accepted as the future system interconnect to serve as the high-end enterprise fabric for cluster computing. This paper presents the design and implementation of the InfiniBand system interconnect, focusing on an InfiniBand host channel adapter (HCA) based on dual ARM9 processor cores. The HCA is an SoC called KinCA which connects a host node onto the InfiniBand network both in hardware and in software. Since the ARM9 processor core does not provide the features necessary for a multiprocessor configuration, novel inter-processor communication and interrupt mechanisms between the two processors were designed and embedded within the KinCA chip. KinCA was fabricated as a 564-pin enhanced BGA (Ball Grid Array) device using 0.18 ㎛ CMOS technology. Mounted on host nodes, it provides 10 Gbps outbound and inbound channels for transmit and receive, respectively, resulting in a high-performance cluster system.



Cite this article
[IEEE Style]
M. S. Man, P. Gyeong, K. S. Nam, K. M. Jun, I. G. Ug, "Design and Implementation of an InfiniBand System Interconnect for High-Performance Cluster Systems," The KIPS Transactions:PartA, vol. 10, no. 4, pp. 389-396, 2003. DOI: 10.3745/KIPSTA.2003.10.4.389.

[ACM Style]
Mo Sang Man, Park Gyeong, Kim Seong Nam, Kim Myeong Jun, and Im Gi Ug. 2003. Design and Implementation of an InfiniBand System Interconnect for High-Performance Cluster Systems. The KIPS Transactions:PartA, 10, 4, (2003), 389-396. DOI: 10.3745/KIPSTA.2003.10.4.389.