Eliminating the NFS Hotspot Bottlenecks in your Cluster
Over the years, technical computing workloads have seen an unceasing demand for both compute and data, increasing complexity and scalability requirements. Add to that the rise of shared services and the convergence of compute and data, and the result is that traditional HPC cluster designs are no longer capable of delivering against these escalating needs.
We must think beyond compute: technical computing also requires the storage and software that enable data-centric architectures.
The challenge in the evolution of HPC is the broader consideration of the other components that enable workloads, and of the delivery system that makes them available to the right users. Everyone faces this challenge in moving toward a future of data centricity and cloud delivery.
This presentation compares the Network File System (NFS) and the IBM General Parallel File System (GPFS) in High Performance Computing (HPC) markets and applications. We begin with a brief introduction to GPFS and its clustered architecture, and survey the HPC vertical markets that commonly use NFS. We then contrast the filer architecture and the stateless NFS protocol with GPFS, discussing the pros and cons of each technology and focusing on ways to scale NFS for an HPC cluster.
We will also discuss how GPFS can be used as a caching solution to share data across sites in an expedient manner.
April Neoh's Biography