Introduction to GPFS Distributed File Storage


What exactly is GPFS?

IBM Spectrum Scale, formerly known as the General Parallel File System (GPFS), is high-performance clustered file system software that IBM first released in 1998. It was initially intended to support high-throughput streaming media and entertainment applications. It can be deployed in a shared-disk architecture, a shared-nothing architecture, or a combination of the two. The most common deployment is a shared-disk configuration, with SAN block storage used for persistent storage.

Data accessibility

Much of the design described below comes from the paper "GPFS: A Shared Disk File System for Large Computing Clusters" (Schmuck and Haskin, FAST 2002).

GPFS is fault-tolerant and can be configured to allow data access even if cluster nodes or storage systems fail. This is achieved through robust clustering features and data replication support.

GPFS continually monitors the health of file system components. When failures are detected, the appropriate recovery action is taken automatically. Extensive logging and recovery capabilities maintain metadata consistency when application nodes holding locks or performing services fail. Journal logs, metadata, and data can all be replicated. Replication enables continuous operation even if a disk path or the disk itself fails.

Combining these features with a high-availability infrastructure ensures a dependable enterprise storage solution.

Locking and distributed management in GPFS


Like many others, the GPFS distributed lock manager employs a centralized global lock manager, running on one of the cluster nodes, in collaboration with a local lock manager on every file system node. The global lock manager coordinates locks among the local lock managers by distributing lock tokens, which convey the right to grant distributed locks without a separate message exchange each time a lock is acquired or released. Repeated accesses from the same node to the same disk object therefore require only a single message to obtain the right to acquire locks on that object (the lock token).

After a node obtains the token from the global lock manager (also known as the token manager or token server), subsequent operations issued by the same node can acquire a lock on the same object without additional messages. Only when an operation on another node requires a conflicting lock on the same object are additional messages needed to revoke the lock token from the first node and grant it to the other node. Lock tokens are also used to maintain cache consistency between nodes. A token allows a node to cache data read from disk, because that data cannot be modified elsewhere unless the token is first revoked.
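To make the token idea concrete, here is a minimal toy sketch of token caching, not GPFS's actual internals: a node only messages the token server when it does not already hold a token for the object, and all later lock requests are granted locally. The names `token_server_request` and `acquire_lock` are hypothetical.

```c
/* Toy illustration of lock-token caching (not GPFS's real code).
 * A node asks the central token server only when it lacks a token;
 * once the token is held, further local lock calls need no messages. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int  object_id;   /* which disk object the token covers       */
    bool held;        /* does this node currently hold the token? */
} token_t;

#define MAX_TOKENS 128
static token_t token_cache[MAX_TOKENS];   /* per-node token cache (hypothetical) */

/* Hypothetical RPC to the global token server. */
static void token_server_request(int object_id) {
    printf("message to token server: grant token for object %d\n", object_id);
}

/* Acquire a local lock on an object; sends at most one message. */
void acquire_lock(int object_id) {
    token_t *t = &token_cache[object_id % MAX_TOKENS];
    if (!t->held || t->object_id != object_id) {
        token_server_request(object_id);   /* first access: fetch the token */
        t->object_id = object_id;
        t->held = true;
    }
    /* Token already cached: grant the lock locally, no network traffic. */
}

int main(void) {
    acquire_lock(42);   /* one message to the token server */
    acquire_lock(42);   /* served entirely from the cache  */
    return 0;
}
```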

  • GPFS supports flock() from multiple nodes on the same file in parallel. You can also use maxFcntlRangePerFile to specify the maximum number of fcntl() locks allowed per file. (See the byte-range locking sketch after this list.)
  • It supports byte-range locking, so the entire file is not locked.

Distributed management

  • Clients share data and metadata using POSIX semantics.
  • Sharing and synchronization are managed by distributed locking, which allows byte-range granularity.
  • The management interface is the GPFS daemon on every node.
  • The file system manager controls token and metadata management.
  • Nodes need tokens for read and write operations.
  • The token manager coordinates token requests, including conflict resolution.
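The byte-range locking mentioned above is exposed to applications through the standard POSIX fcntl() interface. The sketch below locks a single 4 KiB range of a shared file; run it on two nodes with different offsets and the locks do not conflict. The mount path /gpfs/fs1/shared.dat is an assumption for illustration.

```c
/* Lock a 4 KiB byte range of a shared file with fcntl().
 * Two nodes locking different ranges of the same file do not block
 * each other, because only the requested range is locked. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv) {
    off_t offset = (argc > 1) ? atoll(argv[1]) : 0;   /* per-node offset */

    /* Hypothetical GPFS mount point and file name. */
    int fd = open("/gpfs/fs1/shared.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct flock fl = {
        .l_type   = F_WRLCK,     /* exclusive write lock          */
        .l_whence = SEEK_SET,
        .l_start  = offset,      /* start of the byte range       */
        .l_len    = 4096,        /* lock only 4 KiB, not the file */
    };

    if (fcntl(fd, F_SETLKW, &fl) < 0) { perror("fcntl lock"); return 1; }
    printf("holding write lock on bytes %lld..%lld\n",
           (long long)offset, (long long)(offset + 4095));

    /* ... write within the locked range here ... */

    fl.l_type = F_UNLCK;         /* release the range */
    fcntl(fd, F_SETLK, &fl);
    close(fd);
    return 0;
}
```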

Keeping parallelism and consistency in check


There are two classic approaches. The first is distributed locking, in which a node coordinates with all other nodes before accessing shared data; the second is centralized management, in which every request goes through a single node. GPFS is a hybrid of the two: every node has a local lock manager, and a global lock manager coordinates them by handing out tokens (locks).

To synchronize reads and writes to file data, GPFS employs byte-range locking/tokens. This enables parallel applications to write simultaneously to different parts of the same file. (If a node is the only writer, it obtains a lock on the entire file for efficiency; finer-grained byte-range locks are used only when other nodes are also interested in the file.)
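Here is a sketch of the parallel-write pattern that byte-range tokens make efficient: each process writes its own disjoint region of one shared file using plain POSIX I/O, and GPFS hands each node a token covering only its range. The "rank" argument and the path /gpfs/fs1/results.bin are illustrative assumptions.

```c
/* Each writer fills its own 1 MiB region of one shared file.
 * Because the regions are disjoint, GPFS can grant each node its own
 * byte-range token and the writes proceed in parallel. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLOCK_SIZE (1 << 20)   /* 1 MiB region per writer */

int main(int argc, char **argv) {
    int rank = (argc > 1) ? atoi(argv[1]) : 0;   /* which writer am I? */

    int fd = open("/gpfs/fs1/results.bin", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    char *buf = malloc(BLOCK_SIZE);
    memset(buf, 'A' + (rank % 26), BLOCK_SIZE);

    off_t offset = (off_t)rank * BLOCK_SIZE;     /* disjoint per-rank offset */
    if (pwrite(fd, buf, BLOCK_SIZE, offset) != BLOCK_SIZE) {
        perror("pwrite");
        return 1;
    }

    free(buf);
    close(fd);
    return 0;
}
```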

Access to metadata must be synchronized.


There is no centralized metadata server. Each node stores metadata for the files over which it has authority; the node responsible for a file is known as its metadata coordinator, or metanode. The metanode for a file is selected dynamically (the paper does not go into detail), and only the metanode reads and writes the file's inode from and to disk. The metanode synchronizes access to the file's metadata by granting other nodes a shared-write lock.
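A toy sketch of the division of labor just described, purely illustrative and not GPFS's real protocol or API: only the metanode flushes the inode to disk, while other nodes forward their metadata updates to it.

```c
/* Toy model of the metanode idea: any node may update a file's cached
 * metadata, but only the metanode writes the inode to shared disk;
 * other nodes ship their updates to the metanode instead.
 * All names here are illustrative, not GPFS APIs. */
#include <stdio.h>

typedef struct {
    int  inode_num;
    long size;          /* example metadata field                  */
    int  metanode_id;   /* node currently coordinating this file   */
} inode_t;

static int this_node_id = 3;   /* hypothetical id of the local node */

static void write_inode_to_disk(const inode_t *ino) {
    printf("node %d: writing inode %d to shared disk\n",
           this_node_id, ino->inode_num);
}

static void send_update_to_metanode(const inode_t *ino) {
    printf("node %d: sending inode %d update to metanode %d\n",
           this_node_id, ino->inode_num, ino->metanode_id);
}

/* Apply a metadata update, routing it the way the text describes. */
void update_file_size(inode_t *ino, long new_size) {
    ino->size = new_size;
    if (ino->metanode_id == this_node_id)
        write_inode_to_disk(ino);      /* we coordinate this file     */
    else
        send_update_to_metanode(ino);  /* let the metanode merge it   */
}

int main(void) {
    inode_t ino = { .inode_num = 1001, .size = 0, .metanode_id = 3 };
    update_file_size(&ino, 4096);      /* local node is the metanode  */
    ino.metanode_id = 7;
    update_file_size(&ino, 8192);      /* forwarded to another node   */
    return 0;
}
```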

Fault-tolerance

When a node fails, GPFS restores the metadata being updated by the failed node to a consistent state, releases any tokens held by the failed node, and appoints replacements for any metanode and allocation manager roles the failed node held. Because GPFS stores its logs on shared disks, any node can perform log recovery on the failed node's behalf.

Information Management

Storage pools allow several disks to be grouped within the file system. The administrator can create storage tiers by grouping disks based on performance, reliability characteristics, and locality. For instance, one pool can contain high-performance Fibre Channel disks and another less expensive SATA storage.
