Storage Management in Data Centers: Understanding, Exploiting, Tuning, and Troubleshooting Veritas Storage Foundation
Volker Herminghaus, Albrecht Scriba
Storage Management in Data Centers helps administrators tackle the complexity of data center mass storage. It shows how to exploit the potential of Veritas Storage Foundation by conveying information about the design concepts of the software as well as its architectural background. Rather than merely showing how to use Storage Foundation, it explains why to use it in a particular way, along with what goes on inside. Chapters are split into three sections: an introductory part for the novice user, a full-featured part for the experienced, and a technical deep dive for the seasoned expert. An extensive troubleshooting section shows how to fix problems with volumes, plexes, disks, and disk groups. A snapshot chapter gives detailed instructions on how to use the most advanced point-in-time copies. A tuning chapter will help you speed up and benchmark your volumes. And a special chapter on split data centers discusses latency issues as well as remote mirroring mechanisms and cross-site volume maintenance. All topics are covered with the technical know-how gathered from an aggregate thirty years of experience in consulting and training in data centers all over the world.
Damaged

Let's start with a completely and irreversibly damaged disk. If you want to simulate that, follow the procedure and commands mentioned above, but do not forget to stop vxrelocd or vxsparecheck first! Be sure to "destroy" the disk that is common to both volumes! Check the results by looking at the output of the vxdisk and vxprint commands.

1. The damaged disk must be replaced as the first step (in our virtual disk troubleshooting scenario, we do not need to do anything).
2. Since…
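A sketch of the replacement step, assuming a disk group called datadg, a disk media name datadg01, and replacement device c1t2d0 (all placeholders; your device names and access paths will differ):

```sh
# Inspect the disks; a dead disk typically shows "failed" in the STATUS column:
vxdisk list

# Remove the failed disk from its disk group, keeping the media record ("-k"):
vxdg -g datadg -k rmdisk datadg01

# Initialize the replacement device and add it back under the same media name:
vxdisksetup -i c1t2d0
vxdg -g datadg -k adddisk datadg01=c1t2d0

# Start recovery of the affected volumes (resynchronizes stale plexes):
vxrecover -g datadg -s
```

Keeping the media name via -k lets the existing subdisk records reattach to the new physical disk instead of being rebuilt from scratch.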
… the minus sign as its argument, now reading from STDIN):

# vxprint -rmg
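As a sketch of this record-saving workflow (disk group and volume names are placeholders): vxprint -rm emits the configuration records of a volume in a format that vxmake accepts as a description file, and vxmake reads that file from STDIN when given a minus sign:

```sh
# Save the configuration records of volume "datavol" to a file:
vxprint -g datadg -rm datavol > /var/tmp/datavol.cfg

# Later, rebuild the volume's records from the saved description file:
vxmake -g datadg -d /var/tmp/datavol.cfg

# Or pipe the records directly, letting vxmake read from STDIN ("-"):
vxprint -g datadg -rm datavol | vxmake -g datadg -d -
```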
… would remove, destroy, or invalidate data on all three mirrors. We need a frozen copy of the current data that is nevertheless capable of booting and of recovering the volume to the frozen data set. We already discussed several techniques provided by VxVM for establishing a snapshot of a volume thoroughly in chapter 9. The current chapter demonstrates another way to handle a bootable snapshot, based on a procedure specific to OS volumes: they provide two independent drivers accessing the same data set, the…
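For comparison, one generic VxVM way to freeze a copy of a mirrored volume (this is not the OS-volume-specific two-driver procedure the chapter develops; names below are placeholders) is to dissociate one plex and turn it into a volume of its own:

```sh
# Dissociate one mirror from the volume; its contents are frozen at the
# moment of dissociation:
vxplex -g rootdg dis rootvol-03

# Wrap the dissociated plex in a new volume so the frozen data is accessible:
vxmake -g rootdg -U gen vol rootvol_frozen plex=rootvol-03
vxvol -g rootdg start rootvol_frozen
```

The price is reduced redundancy on the original volume until the plex is reattached and resynchronized.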
… on two levels. One is making sure the extents allocated for the files are as contiguous as possible, i.e. files do not consist of hundreds of little, non-sequential snippets, but rather of a single, large block. The VxFS file system is very good at allocating contiguously when a file is written, as discussed in the appropriate section on page 436 of the file system chapter. But during the lifetime of a file system, extents are constantly being rewritten, new extents allocated, …
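With VxFS, extent fragmentation can be both reported and repaired online using fsadm; the mount point below is a placeholder:

```sh
# Report the extent fragmentation of the file system mounted on /data:
fsadm -F vxfs -E /data

# Reorganize (defragment) the extents online:
fsadm -F vxfs -e /data
```

Running the report first lets you decide whether a reorganization is worth the I/O load it generates.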
Reducing VxVM's Footprint

693 /opt/VRTSobc/pal33/bin/vxpal -a VAILAgent -x
797 /opt/VRTSobc/pal33/bin/vxpal -a StorageAgent -x
877 /opt/VRTSsmf/bin/vxsmf.bin -p ICS -c /etc/vx/VxSMF/VxSMF.cfg --parentversion 1.
1000 /sbin/sh - /usr/lib/vxvm/bin/vxcached root
1001 vxnotify -C -w 15
916 /sbin/sh - /usr/lib/vxvm/bin/vxcached root
1650 vxnotify
1022 vxnotify -f -w 15

Here's the more structured output from a ptree command:

# ptree | grep vx
52 vxconfigd -x syslog -m boot
204 /sbin/vxesd
693 …