Spatial Technologies: Cluster Computing and Storage
31 Jul, 2005 By: James L. Sipes
Clustering technologies offer significant gains in performance.
ONE OF THE REASONS that GIS has been integrated into so many professions is the availability of massive amounts of geospatial data. One concern, though, is how to store and organize all of this data in a meaningful way to allow quick and easy access. The demands of data-intensive applications such as GIS have been the impetus for the development of new computing architectures and new ways to store data. At the forefront of these new technologies are Linux computing clusters and OSA (object storage architecture).
What is Cluster Computing?
The emergence of Linux clusters has had a significant impact on large-scale computing systems. Larry Jones, vice-president of product marketing at Panasas, credits researchers at NASA with providing the initial impetus toward object storage. One goal of NASA's ESS (Earth and Space Sciences) project was to determine how parallel computers could be applied to scientific problems. The result was the development of Beowulf clusters, collections of computers running in parallel. A cluster is a group of individual servers, or nodes, that work together to function as a single system. A typical cluster comprises processor nodes, a cluster interconnect and shared storage. Collectively, they provide a level of computing power typically found only in high-end minicomputers.
Figure 1. The scalability and parallel processing of Linux clusters can minimize potential bottlenecks and result in a higher level of performance.
The movement to computing clusters is due in large part to the increased power of microprocessors, as well as improvements in high-speed network technology and networking and database applications that make it easier to manage clusters. Most of the original Beowulf clusters were constructed using commodity hardware components with Linux as the operating system. "They chose Linux as their operating system because it was cheap," says Jones, "and cluster computing still evolves around Linux."
Linux computing clusters are the dominant architecture for technical computing, and they are one of the fastest-growing areas in the world of digital technology. According to Cheryl Hall, an associate with McGrath/Power Public Relations, the number of users making the transition to Linux clusters has grown by 44% annually. In the good old days, the only way to obtain the kind of processing power needed for data-intensive applications was to use a supercomputer or minicomputer.
"Anywhere there is a need for high performance computing, traditional minicomputers such as those from Sun, IBM and HP are being replaced with Linux clusters," says Jones. Cluster computing systems can reduce the cost of shared memory systems by combining the capabilities of numerous servers to replace the supercomputers. Hall said this trend is driven by the ten-to-one increase in performance in relationship to price that Linux computing clusters offer.
The Need for Storage
Many storage systems use a traditional configuration with a master server that controls multiple connected slave servers. "One way to think about it is that there is a master in the cluster, and it has the task of handing out pieces of the work to the clusters," says Jones. "It lets all of the clusters work in parallel on a problem and solve that problem much more quickly than before." All slave servers can be turned on and off from the master server. Additional slave servers can be easily added to the cluster.
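To make the master/worker pattern concrete, here is a minimal sketch in Python. A process pool on one machine stands in for the slave nodes of a real cluster; the work_on_tile task and the tile count are hypothetical placeholders, not part of any product described in this article.

```python
# Minimal master/worker sketch: the "master" hands out pieces of work,
# and a pool of workers (stand-ins for slave nodes) processes them in parallel.
from multiprocessing import Pool

def work_on_tile(tile_id):
    # Stand-in for real work, e.g. reprojecting one tile of a GIS raster.
    return tile_id, sum(i * i for i in range(100_000))

if __name__ == "__main__":
    tiles = list(range(32))            # 32 pieces of work for the master to hand out
    with Pool(processes=8) as pool:    # 8 worker processes stand in for slave nodes
        for tile_id, result in pool.imap_unordered(work_on_tile, tiles):
            print(f"tile {tile_id} finished with checksum {result}")
```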
DAS (direct attached storage), SAN (storage area network) and NAS (network attached storage) are the technologies traditionally used to address storage issues. DAS is a storage device that connects directly to a single server; it is a simple, inexpensive alternative that is best implemented on a dedicated file server. A SAN is a high-speed network that uses Fibre Channel switches to connect data storage devices and servers. A SAN is more complicated and more expensive than DAS, but provides a much greater level of flexibility. NAS is similar to a SAN, except that the disk arrays and servers typically communicate over gigabit Ethernet rather than Fibre Channel, which helps lower the cost while maintaining performance. NAS is a popular storage solution for SQL clusters.
iSCSI is Internet SCSI (Small Computer System Interface), an IP (internet protocol)-based storage networking standard used to transfer and manage data. A number of vendors have introduced iSCSI-based products that can be used to increase functionality and performance of SAN and NAS.
Figure 2. Unlike conventional storage systems, which store data in files or blocks, object-based disks use objects as the fundamental unit of data storage.
NFS (Network File System) is an established protocol for enabling all of the nodes in a Linux cluster to work with the same data set. A node in the cluster requests information from an NFS server, which finds the information and sends it back to the node. This works well if you have one compute server and one file server, but it can't keep up with the demands of cluster computing. "NFS has become a bottleneck that prevents [top performance] because the server requests the information serially," says Jones. "Think of it as a two-lane road, with one lane going in and one lane going out. It works well with two cars or four cars, but not 32 cars."
Metadata management capacity is the typical bottleneck for any shared storage system, because it is the one part of the system that all nodes must share. "Approximately 90% of the work being done by an NFS server involves keeping track of blocks and files. If you can distribute that work, you can greatly increase speed and performance," says Jones. One big advantage of OSA is that it distributes the system's metadata, which helps eliminate that bottleneck.
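The idea of distributing metadata work can be sketched in a few lines: hash each file path to one of several metadata servers so lookups no longer funnel through a single machine. The server names and the simple hashing scheme below are illustrative assumptions, not Panasas's actual design.

```python
# Sketch: spread metadata lookups across several servers instead of one.
import hashlib

METADATA_SERVERS = ["mds0", "mds1", "mds2", "mds3"]   # hypothetical server names

def metadata_server_for(path):
    # Hash the file path and map it to one of the metadata servers, so that
    # no single server has to track every block and file in the system.
    digest = hashlib.md5(path.encode("utf-8")).digest()
    return METADATA_SERVERS[digest[0] % len(METADATA_SERVERS)]

for path in ["/gis/parcels.shp", "/gis/dem/tile_042.tif", "/gis/roads.gdb"]:
    print(path, "->", metadata_server_for(path))
```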
Object Storage Architecture
Hardware and software vendors offer a variety of systems that address storage issues, including fully hardware-based solutions, software-only solutions and hybrid solutions that combine the two. Hardware-based solutions offer the greatest level of reliability, but they are too expensive for many users. Software-only clusters are less expensive, but not as reliable or as robust as hardware-based systems. OSA (object storage architecture) is one of the hybrid approaches. It was developed to provide scalable, easy-to-manage and cost-effective storage that can keep pace with Linux clusters. "Object storage architecture basically mirrors the architecture of the Linux cluster," says Jones. OSA systems are gaining in popularity because they combine reliability closer to that of hardware-based solutions with the flexibility and lower cost of software solutions.
Figure 3. With traditional storage devices, accessing and managing metadata can frequently be a bottleneck in the system.
Digital storage capacity has expanded to keep up with the most demanding computer applications. The capacity of hard drives has doubled every year since the mid-1990s, and with the advent of new technologies such as ballistic magnetoresistance and nanoscale devices, one-petabyte (1 million gigabytes) disks are expected to be available within the next four or five years. This is pretty amazing, considering that the entire hard drive production of the mid-1990s was around 20 petabytes, according to a ZDNet article published last year. Just imagine: soon you'll be able to purchase a single hard drive with storage capacity equal to that of the entire world ten years ago.
OSDs (object storage devices) use object-based technology to provide the performance, scalability, security and file-sharing capability needed for data-intensive applications. An OSD contains a disk, a processor, RAM and a network interface. It stores and manages data, manages the metadata associated with the objects being stored and provides security to protect that data.
Unlike conventional storage systems, in which all data is stored in files or blocks, OSDs use objects as the fundamental unit of data storage. Each object combines file data with attributes that describe that data. The OSD keeps track of how objects and blocks are laid out on the disk, so when a server asks for a piece of information, it asks for an object ID, not a block. This approach makes it faster and easier to sort through massive amounts of information.
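A rough sketch of the object abstraction, assuming hypothetical class and method names: each object pairs its data with descriptive attributes, and callers retrieve it by object ID rather than by block address.

```python
# Toy object storage device: objects, not blocks, are the unit of storage.
import uuid
from dataclasses import dataclass

@dataclass
class StorageObject:
    data: bytes        # the file contents themselves
    attributes: dict   # descriptive metadata: owner, timestamps, layer name...

class ObjectStorageDevice:
    """The device itself, not the file server, maps object IDs to storage."""

    def __init__(self):
        self._objects = {}

    def put(self, data, **attributes):
        object_id = str(uuid.uuid4())   # callers refer to this ID, never to blocks
        self._objects[object_id] = StorageObject(data, attributes)
        return object_id

    def get(self, object_id):
        return self._objects[object_id]

osd = ObjectStorageDevice()
oid = osd.put(b"raster tile bytes", layer="elevation", srid=4326)
print(oid, osd.get(oid).attributes)
```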
Typical SCSI drives store information as blocks, and it is hard to keep track of which blocks are associated with which files, especially at the scale of an enterprise GIS. "What we have done is make the disk drive smarter," says Jones of the Panasas ActiveScale Storage Cluster, the first product to support object storage architecture. The Panasas Storage Cluster is an integrated hardware/software product that combines a parallel file system with object-based storage. It uses a single integrated architecture that can be scaled to meet demands for storage capacity, network bandwidth and input/output performance, and its single storage cluster eliminates the need for the staging and de-staging that can affect performance. Panasas shipped its first object storage devices in late 2003 and has now shipped more than a petabyte of storage to about 60 or 70 customers. Costs range from about $15 to $18 per gigabyte.
Disk Failure
One of the biggest fears for computer users is data loss. Data has to be reliable and recoverable. As with most equipment, moving parts are the most likely to fail, and a spinning disk is one of the least reliable components in a disk storage system.
The biggest strength of cluster storage is that the system continues to function at a high level of performance even when some of its components fail. This ability to isolate faults helps make sure your system stays operational. The failure of one node does not affect other nodes. Using an Oracle cluster, for example, you can add or delete a node without shutting down the system. In a worst-case scenario, you lose only the data associated with one particular node; the rest of the cluster functions normally.
In cluster storage, redundancy is a good thing. The idea is that whatever problem occurs within your storage system, large or small, another hardware component covers for it and ensures your data remains safe and accessible. An active failover system keeps a second master node running at all times, whereas a passive failover system allows one of the slave nodes to take over if the master node fails. The benefit of an active system is that the secondary master node takes over immediately, so there is no downtime. Oracle Data Guard is an example of a passive failover environment: it uses a single system to run applications, and when a failure occurs, the backup system takes over for the primary system.
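The failover logic can be sketched as a heartbeat check, in which a standby node promotes itself once the primary's heartbeat goes stale. The timeout and class names below are illustrative assumptions; in a real active configuration the standby is kept fully synchronized so the takeover is immediate.

```python
# Toy failover sketch: a standby promotes itself when heartbeats stop.
import time

HEARTBEAT_TIMEOUT = 5.0   # seconds of silence before the standby takes over

class StandbyMaster:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.is_primary = False

    def on_heartbeat(self):
        # Called whenever the primary master checks in.
        self.last_heartbeat = time.monotonic()

    def check(self):
        # Promote this node if the primary has gone quiet for too long.
        stale = time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT
        if not self.is_primary and stale:
            self.is_primary = True
            print("primary heartbeat lost; standby promoted to master")

standby = StandbyMaster()
standby.last_heartbeat -= 10.0    # simulate a primary that stopped responding
standby.check()
```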
Using separate power sources for separate nodes is one way to prevent a single power source from crashing your entire system. A backup system physically located in a different site can help minimize the possibility that earthquake or fire damage will bring down a system. Redundant power supplies for each component can also help minimize the possibility of system failure.
Scalability
There are several advantages to using a cluster of smaller nodes vs. one large node. A cluster may include hundreds or thousands of servers, and the ability of these servers to share data is critical. Some applications require access to many individual files, while others work with massive databases that can create input and output bottlenecks. The system you use should be able to match the scalability of a particular computer cluster in terms of both performance and capacity. Scalability in computing clusters is achieved by distributing the workload to other intelligent subsystems.
Any storage system needs to give users the flexibility to add nodes to the cluster as demand for capacity increases. Being able to incrementally add nodes helps reduce capital investment costs and eliminates the need to replace smaller drives with larger ones. As capacity expands, performance should not lag behind.
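A toy placement function shows the principle behind incremental scaling: objects are spread across whatever nodes exist, so adding a node immediately adds capacity and aggregate bandwidth. The modulo scheme below is deliberately simple and purely illustrative; production systems use placement algorithms that move far less data when a node joins.

```python
# Sketch of incremental scaling: add a node, and placement spreads over it.
import hashlib

def place(object_id, nodes):
    # Map an object to a node; every node added increases total capacity
    # and the number of devices that can serve requests in parallel.
    h = int(hashlib.sha1(object_id.encode("utf-8")).hexdigest(), 16)
    return nodes[h % len(nodes)]

nodes = ["osd0", "osd1", "osd2"]
objects = ["obj-%d" % i for i in range(6)]
print({o: place(o, nodes) for o in objects})

nodes.append("osd3")    # add a node without taking the system down
print({o: place(o, nodes) for o in objects})
```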
Performance
OSA allows computer nodes to access storage devices directly and in parallel. This combination allows a very high level of performance as well as data sharing. The CPU, motherboard chipset, memory and network interconnect significantly affect a cluster's performance. To improve the performance of a storage system, you can add more storage nodes per cluster, more memory per storage node, or more CPUs.
Object-based architecture enables information to be organized into smart data objects instead of being saved in standard data blocks or files. An object-based design eliminates the need to manually move data between discrete volumes either within a system or between systems.
With most cluster systems, connections to the outside world are limited. The master node controls all slave nodes and provides one central administration point for the entire system. Some systems use more than one master node to provide redundancy, but many use slave nodes for this function.
Cluster storage frequently is housed on warehouse-style racks or rack-mount solutions that use a special server chassis and server racks. Power switches, Ethernet switches and other equipment are mounted onto the racks, and cables are hidden so they are not as obtrusive.
Future Directions
What can you expect from OSA in the next few years? Scalable networked storage architectures are predicted to continue to make cluster computing more powerful and accessible. "One of the things that we are going to see is that object storage devices will start to come prepackaged from vendors," says Jones. Seagate has already built an integrated object-based system, and you can expect others to follow suit. Efforts are currently underway to standardize a parallel version of NFS for the next generation of the protocol. You can also expect to see the continued development of OSA standards from organizations such as SNIA (Storage Networking Industry Association) and the ANSI X3 T10 (SCSI) standards body.
As storage capacity and reliability increase, so too does the cost of building and maintaining such a system. Anyone working with large amounts of data, or who needs a lot of computing power, can benefit from using object storage architecture, and these users will continue to demand the most cost-effective computing cluster and storage system. Larry Jones notes that as we start using Linux clusters more and more, "we will continue to try to eliminate any bottleneck so that we can run even the most demanding application."
James L. Sipes is the founding principal of Sand County Studios in Seattle, Washington. Reach him at jsipes@sandcountystudios.com.