Using the Oracle ASM Cluster File System (Oracle ACFS) on Linux, Part Three
This article continues with an exploration of ACFS snapshots and the management of ACFS, and then concludes this three-part series on using ACFS on Linux.
ACFS Snapshots
Oracle ASM Cluster File System includes a feature called snapshots. An Oracle ACFS snapshot is an online, read-only, point-in-time copy of an Oracle ACFS file system. The snapshot process uses Copy-On-Write functionality, which makes efficient use of disk space. Note that snapshots work at the block level instead of the file level. Before an Oracle ACFS file extent is modified or deleted, its current value is copied to the snapshot to maintain the point-in-time view of the file system. (Note: When a file is modified, only the changed blocks are copied to the snapshot location, which helps conserve disk space.)
Once an Oracle ACFS snapshot is created, all snapshot files are immediately available for use. Snapshots remain available as long as the file system is mounted. This provides support for online recovery of files inadvertently modified or deleted from a file system. Up to 63 snapshot views are supported for each file system, providing a flexible online file recovery solution that can span multiple views. You can also use an Oracle ACFS snapshot as the source of a file system backup, as it can be created on demand to deliver a current, consistent, online view of an active file system. Once the Oracle ACFS snapshot is created, simply back up the snapshot to another disk or tape location to create a consistent backup set of the files. (Note: Oracle ACFS snapshots can be created and deleted on demand without taking the file system offline. ACFS snapshots provide a point-in-time consistent view of the entire file system, which can be used to restore deleted or modified files and to perform backups.)
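For example, a consistent backup could be taken directly from a snapshot with a standard tool such as tar. The sketch below assumes a snapshot named snap1 already exists and that a /backup directory (hypothetical) is available on local storage:

# Back up the point-in-time view presented by the snap1 snapshot
# (assumes /backup exists and snap1 has already been created).
[oracle@racnode1 ~]$ tar czf /backup/documents3_snap1.tar.gz \
      -C /documents3/.ACFS/snaps/snap1 .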
All storage for Oracle ACFS snapshots is maintained within the file system, which eliminates the need for separate storage pools for file systems and snapshots. As shown in the next section, Oracle ACFS file systems can be dynamically re-sized to accommodate additional file and snapshot storage requirements.
Oracle ACFS snapshots are administered with the acfsutil snap command. This section will provide an overview on how to create and retrieve Oracle ACFS snapshots.
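Both operations take the snapshot name followed by the mount point of the target file system:

/sbin/acfsutil snap create <snapshot_name> <mount_point>
/sbin/acfsutil snap delete <snapshot_name> <mount_point>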
Oracle ACFS Snapshot Location
Whenever you create an Oracle ACFS file system, a hidden directory named .ACFS is created as a sub-directory of the Oracle ACFS file system. (Note that hidden files and directories in Linux start with a leading period.)
[oracle@racnode1 ~]$ ls -lFA /documents3
total 2851148
drwxr-xr-x 5 root   root           4096 Nov 26 17:57 .ACFS/
-rw-r--r-- 1 oracle oinstall 1239269270 Nov 27 16:02 linux.x64_11gR2_database_1of2.zip
-rw-r--r-- 1 oracle oinstall 1111416131 Nov 27 16:03 linux.x64_11gR2_database_2of2.zip
-rw-r--r-- 1 oracle oinstall  555366950 Nov 27 16:03 linux.x64_11gR2_examples.zip
drwx------ 2 root   root          65536 Nov 26 17:57 lost+found/
Inside the .ACFS directory are two directories named repl and snaps. All Oracle ACFS snapshots are stored in the snaps directory.
[oracle@racnode1 ~]$ ls -lFA /documents3/.ACFS
total 12
drwx------ 2 root root 4096 Nov 26 17:57 .fileid/
drwx------ 6 root root 4096 Nov 26 17:57 repl/
drwxr-xr-x 2 root root 4096 Nov 27 15:53 snaps/
Since no Oracle ACFS snapshots exist, the snaps directory is empty.

[oracle@racnode1 ~]$ ls -lFA /documents3/.ACFS/snaps
total 0
Create Oracle ACFS Snapshot
Let's start by creating an Oracle ACFS snapshot named snap1 for the Oracle ACFS mounted on /documents3. This operation should be performed as root or the Oracle grid infrastructure owner:
[root@racnode1 ~]# /sbin/acfsutil snap create snap1 /documents3
acfsutil snap create: Snapshot operation is complete.
The data for the new snap1 snapshot will be stored in /documents3/.ACFS/snaps/snap1. Once the snapshot is created, any existing files and/or directories in the file system are automatically accessible from the snapshot directory. For example, when I created the snap1 snapshot, the three Oracle ZIP files were made available from the snapshot /documents3/.ACFS/snaps/snap1:
[oracle@racnode1 ~]$ ls -lFA /documents3/.ACFS/snaps/snap1
total 2851084
drwxr-xr-x 5 root   root           4096 Nov 26 17:57 .ACFS/
-rw-r--r-- 1 oracle oinstall 1239269270 Nov 27 16:02 linux.x64_11gR2_database_1of2.zip
-rw-r--r-- 1 oracle oinstall 1111416131 Nov 27 16:03 linux.x64_11gR2_database_2of2.zip
-rw-r--r-- 1 oracle oinstall  555366950 Nov 27 16:03 linux.x64_11gR2_examples.zip
?--------- ? ?      ?                 ?              ? lost+found
It is important to note that when the snapshot is created, nothing is actually stored in the snapshot directory, so there is no additional space consumption. The snapshot directory will only contain the before-images of file blocks once a file is updated or deleted.
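You can see this for yourself by checking the snapshot space usage immediately after creating a snapshot; it should report zero (or very nearly zero, allowing for a small amount of snapshot metadata):

[oracle@racnode1 ~]$ /sbin/acfsutil info fs /documents3 | grep "snapshot space usage"
    snapshot space usage: 0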
Restore Files From an Oracle ACFS Snapshot
When a file is deleted (or modified), this triggers an automatic backup of all modified file blocks to the snapshot. For example, if I delete the file /documents3/linux.x64_11gR2_examples.zip, the previous images of the file blocks are copied to the snap1 snapshot, where they can be restored from at a later time if necessary:
[oracle@racnode1 ~]$ rm /documents3/linux.x64_11gR2_examples.zip
If you were looking for functionality in Oracle ACFS to perform a rollback of the current file system to a snapshot, then I have bad news; one doesn't exist. Hopefully this will be a feature introduced in future versions!
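Until then, a rough workaround is to copy the snapshot contents back over the live file system yourself. Below is a minimal sketch using rsync, assuming the snap1 snapshot exists and that applications are quiesced while the copy runs. Note that this is a plain file copy, not a true atomic rollback, and files created after the snapshot was taken will not be removed:

# Copy everything from the snapshot back into the live file system,
# skipping the internal .ACFS and lost+found directories.
[root@racnode1 ~]# rsync -a --exclude='.ACFS' --exclude='lost+found' \
      /documents3/.ACFS/snaps/snap1/ /documents3/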
In the case where you accidentally delete a file from the current file system, it can be restored by copying it from the snapshot back to the current file system:
[oracle@racnode1 ~]$ cp /documents3/.ACFS/snaps/snap1/linux.x64_11gR2_examples.zip /documents3
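It's worth verifying that the restored file matches the copy preserved in the snapshot, for example by comparing checksums:

[oracle@racnode1 ~]$ md5sum /documents3/linux.x64_11gR2_examples.zip \
      /documents3/.ACFS/snaps/snap1/linux.x64_11gR2_examples.zip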
Display Oracle ACFS Snapshot Information
The '/sbin/acfsutil info fs' command can provide file system information as well as limited information on any Oracle ACFS snapshots:
[oracle@racnode1 ~]$ /sbin/acfsutil info fs /documents3
/documents3
    ACFS Version: 11.2.0.1.0.0
    flags:        MountPoint,Available
    mount time:   Sat Nov 27 03:07:50 2010
    volumes:      1
    total size:   26843545600
    total free:   23191826432
    primary volume: /dev/asm/docsvol3-300
        label:                 DOCSVOL3
        flags:                 Primary,Available
        on-disk version:       39.0
        allocation unit:       4096
        major, minor:          252, 153603
        size:                  26843545600
        free:                  23191826432
    number of snapshots:  1
    snapshot space usage: 560463872
From the example above, you can see that I have only one active snapshot, consuming approximately 560MB of disk space. This coincides with the size of the file I removed earlier (/documents3/linux.x64_11gR2_examples.zip), which triggered a backup of all modified file blocks.
To query all snapshots, simply list the directories under '<ACFS_MOUNT_POINT>/.ACFS/snaps'. Each directory under the snaps directory is an Oracle ACFS snapshot.
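When several Oracle ACFS file systems are in play, a quick shell loop does the job; the mount point list below simply matches the examples used throughout this guide:

# List the snapshots under each ACFS mount point (mount points are examples).
[oracle@racnode1 ~]$ for mp in /documents1 /documents2 /documents3; do
    echo "${mp}:"
    ls "${mp}/.ACFS/snaps"
done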
Another useful technique used to obtain information about Oracle ACFS snapshots is to query the view V$ASM_ACFSSNAPSHOTS from the Oracle ASM instance:
column snap_name   format a15 heading "Snapshot Name"
column fs_name     format a15 heading "File System"
column vol_device  format a25 heading "Volume Device"
column create_time format a20 heading "Create Time"

SQL> select snap_name, fs_name, vol_device,
  2         to_char(create_time, 'DD-MON-YYYY HH24:MI:SS') as create_time
  3  from v$asm_acfssnapshots
  4  order by snap_name;

Snapshot Name   File System     Volume Device             Create Time
--------------- --------------- ------------------------- --------------------
snap1           /documents3     /dev/asm/docsvol3-300     27-NOV-2010 16:11:29
Delete Oracle ACFS Snapshot
Use the 'acfsutil snap delete' command to delete an existing Oracle ACFS snapshot:
[root@racnode1 ~]# /sbin/acfsutil snap delete snap1 /documents3
acfsutil snap delete: Snapshot operation is complete.
Managing ACFS
Oracle ACFS and Dismount or Shutdown Operations
If you take anything away from this article, know and understand the importance of dismounting any active file system configured with an Oracle ASM Dynamic Volume Manager (ADVM) volume device BEFORE shutting down an Oracle ASM instance or dismounting a disk group! Failure to do so will result in I/O failures and very angry users!
After the file system(s) have been dismounted, all open references to Oracle ASM files are removed and associated disk groups can then be dismounted or the Oracle ASM instance shut down.
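In other words, the safe shutdown sequence on each node looks like the following sketch (using the mount points and grid infrastructure path from this guide):

# 1. Dismount all Oracle ACFS file systems on this node.
[root@racnode1 ~]# umount /documents1
[root@racnode1 ~]# umount /documents2
[root@racnode1 ~]# umount /documents3

# 2. Only then stop the stack (or shut down the Oracle ASM instance).
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster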
If the Oracle ASM instance or disk group is forcibly shut down or fails while an associated Oracle ACFS is active, the file system is placed into an offline error state. When the file system is placed in an offline error state, applications will start to encounter I/O failures, and any Oracle ACFS user data and metadata being written at the time of the termination may not be flushed to ASM storage before it is fenced. If a SHUTDOWN ABORT operation on the Oracle ASM instance is required and you are not able to dismount the file system, issue the sync command twice to flush any cached file system data and metadata to persistent storage:
[root@racnode1 ~]# sync
[root@racnode1 ~]# sync
Using a two-node Oracle RAC, I forced an Oracle ASM instance shutdown on node 1 to simulate a failure. (Note: This should go without saying, but I'll say it anyway. DO NOT attempt the following in a production environment.)
SQL> shutdown abort
ASM instance shutdown
Any subsequent attempt to access an offline file system on that node will result in an I/O error:
[oracle@racnode1 ~]$ ls -l /documents3
ls: /documents3: Input/output error

[oracle@racnode1 ~]$ df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                     145344992  22459396 115383364  17% /
/dev/sdb1            151351424    192072 143346948   1% /local
/dev/sda1               101086     12632     83235  14% /boot
tmpfs                  2019256         0   2019256   0% /dev/shm
df: `/documents1': Input/output error
df: `/documents2': Input/output error
df: `/documents3': Input/output error
domo:PUBLIC         4799457152 1901758592 2897698560  40% /domo
Recovering a file system from an offline error state requires dismounting and remounting the Oracle ACFS file system. Dismounting an active file system, even one that is offline, requires stopping all applications using the file system, including any shell references. For example, I had a shell session that had previously changed directory (cd) into the /documents3 file system before the forced shutdown:
[root@racnode1 ~]# umount /documents1
[root@racnode1 ~]# umount /documents2
[root@racnode1 ~]# umount /documents3
umount: /documents3: device is busy
umount: /documents3: device is busy
Use the Linux fuser or lsof command to identify the processes holding references, and kill them if necessary:
[root@racnode1 ~]# fuser /documents3
/documents3:         16263c

[root@racnode1 ~]# kill -9 16263

[root@racnode1 ~]# umount /documents3
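Alternatively, lsof offers more detail about each offending process before you kill it:

# Identify open files under the mount point (alternative to fuser).
[root@racnode1 ~]# lsof /documents3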
Restart the Oracle ASM instance (or, in my case, all Oracle grid infrastructure services, which were stopped as a result of terminating the Oracle ASM instance):
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster
All of my Oracle ACFS volumes were added to the Oracle ACFS mount registry and will therefore mount automatically when Oracle grid infrastructure starts, as the mount output below confirms. If you ever need to mount a file system manually, verify that the volume is enabled before attempting the mount (a sketch follows the listing):
[root@racnode1 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdb1 on /local type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
domo:PUBLIC on /domo type nfs (rw,addr=192.168.1.121)
/dev/asm/docsvol1-300 on /documents1 type acfs (rw)
/dev/asm/docsvol2-300 on /documents2 type acfs (rw)
/dev/asm/docsvol3-300 on /documents3 type acfs (rw)
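If a file system does not come back on its own, the manual sequence looks something like this sketch: first confirm the volume state with ASMCMD (look for State: ENABLED in the output), enable it if necessary, and then mount:

# 1. Check the volume state.
[grid@racnode1 ~]$ asmcmd volinfo -G docsdg1 docsvol3

# 2. Enable the volume if it is disabled.
[grid@racnode1 ~]$ asmcmd volenable -G docsdg1 docsvol3

# 3. Mount the file system on this node.
[root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3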
Resize File System
With Oracle ACFS, as long as free space exists within the ASM disk group, any of the ASM volumes can be dynamically expanded, which expands the file system as a result. Note that file systems other than Oracle ACFS can also be dynamically re-sized, provided they support online resizing. The one exception with third-party file systems is online shrinking; ext3, for example, supports online resizing but does not support online shrinking.
Use the following steps to add space to an Oracle ACFS on the fly, without taking any type of outage.
First, verify there is enough space in the current Oracle ASM disk group to extend the volume:
SQL> select name, total_mb, free_mb, round((free_mb/total_mb)*100,2) pct_free
  2  from v$asm_diskgroup
  3  where total_mb != 0
  4  order by name;

Disk Group        Total (MB)    Free (MB)  % Free
--------------- ------------ ------------ -------
CRS                    2,205        1,809   82.04
DOCSDG1               98,303       12,187   12.40
FRA                   33,887       22,795   67.27
RACDB_DATA            33,887       30,584   90.25
The same task can be accomplished using the ASMCMD command-line utility:
[grid@racnode1 ~]$ asmcmd lsdg
With 12GB of free space in the DOCSDG1 ASM disk group, let's extend the file system (volume) by another 5GB. Note that this can be performed while the file system is online and accessible by clients; no outage is required:
[root@racnode1 ~]# /sbin/acfsutil size +5G /documents3
acfsutil size: new file system size: 26843545600 (25600MB)
Verify the new size of the file system from all Oracle RAC nodes:
[root@racnode1 ~]# df -k
Filesystem             1K-blocks       Used  Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       145344992   21952712  115890048  16% /
/dev/sdb1              151351424     192072  143346948   1% /local
/dev/sda1                 101086      12632      83235  14% /boot
tmpfs                    2019256    1135852     883404  57% /dev/shm
domo:PUBLIC           4799457152 1901103872 2898353280  40% /domo
/dev/asm/docsvol1-300   33554432     197668   33356764   1% /documents1
/dev/asm/docsvol2-300   33554432     197668   33356764   1% /documents2
/dev/asm/docsvol3-300   26214400     183108   26031292   1% /documents3

[root@racnode2 ~]# df -k
Filesystem             1K-blocks       Used  Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       145344992   13803084  124039676  11% /
/dev/sdb1              151351424     192072  143346948   1% /local
/dev/sda1                 101086      12632      83235  14% /boot
tmpfs                    2019256    1135852     883404  57% /dev/shm
domo:Public           4799457152 1901103872 2898353280  40% /domo
/dev/asm/docsvol1-300   33554432     197668   33356764   1% /documents1
/dev/asm/docsvol2-300   33554432     197668   33356764   1% /documents2
/dev/asm/docsvol3-300   26214400     183108   26031292   1% /documents3
Useful ACFS Commands
This section contains several useful commands that can be used to administer Oracle ACFS. Note that many of the commands described in this section have already been discussed throughout this guide.
ASM Volume Driver
Load the Oracle ASM volume driver:
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s
Unload the Oracle ASM volume driver:
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsload stop
Check if Oracle ASM volume driver is loaded:
[root@racnode1 ~]# lsmod | grep oracle
oracleacfs            877320  4
oracleadvm            221760  8
oracleoks             276880  2 oracleacfs,oracleadvm
oracleasm              84136  1
ASM Volume Management
Create a new Oracle ASM volume using ASMCMD:
[grid@racnode1 ~]$ asmcmd volcreate -G docsdg1 -s 20G --redundancy unprotected docsvol3
Resize Oracle ACFS file system (add 5GB):
[root@racnode1 ~]# /sbin/acfsutil size +5G /documents3
acfsutil size: new file system size: 26843545600 (25600MB)
Delete Oracle ASM volume using ASMCMD:
[grid@racnode1 ~]$ asmcmd voldelete -G docsdg1 docsvol3
Disk Group / File System / Volume Information
Get detailed Oracle ASM disk group information:
[grid@racnode1 ~]$ asmcmd lsdg
Format an Oracle ASM cluster file system:
[grid@racnode1 ~]$ /sbin/mkfs -t acfs -b 4k /dev/asm/docsvol3-300 -n "DOCSVOL3"
mkfs.acfs: version         = 11.2.0.1.0.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume          = /dev/asm/docsvol3-300
mkfs.acfs: volume size     = 21474836480
mkfs.acfs: Format complete.
Get detailed file system information:
[root@racnode1 ~]# /sbin/acfsutil info fs
/documents1
    ACFS Version: 11.2.0.1.0.0
    flags:        MountPoint,Available
    mount time:   Fri Nov 26 18:38:48 2010
    volumes:      1
    total size:   34359738368
    total free:   34157326336
    primary volume: /dev/asm/docsvol1-300
        label:
        flags:                 Primary,Available,ADVM
        on-disk version:       39.0
        allocation unit:       4096
        major, minor:          252, 153601
        size:                  34359738368
        free:                  34157326336
        ADVM diskgroup         DOCSDG1
        ADVM resize increment: 268435456
        ADVM redundancy:       unprotected
        ADVM stripe columns:   4
        ADVM stripe width:     131072
    number of snapshots:  0
    snapshot space usage: 0
/documents2
    ACFS Version: 11.2.0.1.0.0
    flags:        MountPoint,Available
    mount time:   Fri Nov 26 18:38:48 2010
    volumes:      1
    total size:   34359738368
    total free:   34157326336
    primary volume: /dev/asm/docsvol2-300
        label:
        flags:                 Primary,Available,ADVM
        on-disk version:       39.0
        allocation unit:       4096
        major, minor:          252, 153602
        size:                  34359738368
        free:                  34157326336
        ADVM diskgroup         DOCSDG1
        ADVM resize increment: 268435456
        ADVM redundancy:       unprotected
        ADVM stripe columns:   4
        ADVM stripe width:     131072
    number of snapshots:  0
    snapshot space usage: 0
Get ASM volume information:
[grid@racnode1 ~]$ asmcmd volinfo -a
Diskgroup Name: DOCSDG1

         Volume Name: DOCSVOL1
         Volume Device: /dev/asm/docsvol1-300
         State: ENABLED
         Size (MB): 32768
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /documents1

         Volume Name: DOCSVOL2
         Volume Device: /dev/asm/docsvol2-300
         State: ENABLED
         Size (MB): 32768
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /documents2

         Volume Name: DOCSVOL3
         Volume Device: /dev/asm/docsvol3-300
         State: ENABLED
         Size (MB): 25600
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /documents3
Get volume status using ASMCMD command:
[grid@racnode1 ~]$ asmcmd volstat

DISKGROUP NUMBER / NAME: 2 / DOCSDG1
---------------------------------------
VOLUME_NAME   READS  BYTES_READ  READ_TIME  READ_ERRS  WRITES  BYTES_WRITTEN  WRITE_TIME  WRITE_ERRS
-----------------------------------------------------------------------------------------------------
DOCSVOL1        517      408576       1618          0   17007       69280768       63456           0
DOCSVOL2        512      406016       2547          0   17007       69280768       66147           0
DOCSVOL3      13961    54525952     172007          0   10956       54410240       41749           0
Enable a volume using the ASMCMD command:
[grid@racnode1 ~]$ asmcmd volenable -G docsdg1 docsvol3
Disable a volume using the ASMCMD command:
[root@racnode1 ~]# umount /documents3
[root@racnode2 ~]# umount /documents3

[grid@racnode1 ~]$ asmcmd voldisable -G docsdg1 docsvol3
Mount Commands
Mount single Oracle ACFS volume on the local node:
[root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3
Unmount single Oracle ACFS volume on the local node:
[root@racnode1 ~]# umount /documents3
Mount all Oracle ACFS volumes on the local node using the metadata found in the Oracle ACFS mount registry:
[root@racnode1 ~]# /sbin/mount.acfs -o all
Unmount all Oracle ACFS volumes on the local node using the metadata found in the Oracle ACFS mount registry:
[root@racnode1 ~]# /bin/umount -t acfs -a
Oracle ACFS Mount Registry
Register new mount point in the Oracle ACFS mount registry:
[root@racnode1 ~]# /sbin/acfsutil registry -f -a /dev/asm/docsvol3-300 /documents3
acfsutil registry: mount point /documents3 successfully added to Oracle Registry
Query the Oracle ACFS mount registry:
[root@racnode1 ~]# /sbin/acfsutil registry
Mount Object:
  Device: /dev/asm/docsvol1-300
  Mount Point: /documents1
  Disk Group: DOCSDG1
  Volume: DOCSVOL1
  Options: none
  Nodes: all
Mount Object:
  Device: /dev/asm/docsvol2-300
  Mount Point: /documents2
  Disk Group: DOCSDG1
  Volume: DOCSVOL2
  Options: none
  Nodes: all
Mount Object:
  Device: /dev/asm/docsvol3-300
  Mount Point: /documents3
  Disk Group: DOCSDG1
  Volume: DOCSVOL3
  Options: none
  Nodes: all
Unregister volume and mount point from the Oracle ACFS mount registry:
[root@racnode1 ~]# acfsutil registry -d /documents3
acfsutil registry: successfully removed ACFS mount point /documents3 from Oracle Registry
Oracle ACFS Snapshots
Use the 'acfsutil snap create' command to create an Oracle ACFS snapshot named snap1 for an Oracle ACFS mounted on /documents3:
[root@racnode1 ~]# /sbin/acfsutil snap create snap1 /documents3
acfsutil snap create: Snapshot operation is complete.
Use the 'acfsutil snap delete' command to delete an existing Oracle ACFS snapshot:
[root@racnode1 ~]# /sbin/acfsutil snap delete snap1 /documents3
acfsutil snap delete: Snapshot operation is complete.
Oracle ASM / ACFS Dynamic Views
This section contains information about using dynamic views to display Oracle Automatic Storage Management (Oracle ASM), Oracle Automatic Storage Management Cluster File System (Oracle ACFS), and Oracle ASM Dynamic Volume Manager (Oracle ADVM) information. These views are accessible from the Oracle ASM instance.
| View Name | Description |
| --- | --- |
| V$ASM_ALIAS | Contains one row for every alias present in every disk group mounted by the Oracle ASM instance. |
| V$ASM_ATTRIBUTE | Displays one row for each attribute defined. In addition to attributes specified by CREATE DISKGROUP and ALTER DISKGROUP statements, the view may show other attributes that are created automatically. Attributes are only displayed for disk groups where COMPATIBLE.ASM is set to 11.1 or higher. |
| V$ASM_CLIENT | In an Oracle ASM instance, identifies databases using disk groups managed by the Oracle ASM instance. In a DB instance, contains information about the Oracle ASM instance if the database has any open Oracle ASM files. |
| V$ASM_DISK | Contains one row for every disk discovered by the Oracle ASM instance, including disks that are not part of any disk group. This view performs disk discovery every time it is queried. |
| V$ASM_DISK_IOSTAT | Displays information about disk I/O statistics for each Oracle ASM client. In a DB instance, only the rows for that instance are shown. |
| V$ASM_DISK_STAT | Contains the same columns as V$ASM_DISK, but to reduce overhead, does not perform a discovery when it is queried. It only returns information about disks that are part of mounted disk groups in the storage system. To see all disks, use V$ASM_DISK instead. |
| V$ASM_DISKGROUP | Describes a disk group (number, name, size-related information, state, and redundancy type). This view performs disk discovery every time it is queried. |
| V$ASM_DISKGROUP_STAT | Contains the same columns as V$ASM_DISKGROUP, but to reduce overhead, does not perform a discovery when it is queried. It does not return information about disk groups that are not mounted. To see all disk groups, use V$ASM_DISKGROUP instead. |
| V$ASM_FILE | Contains one row for every Oracle ASM file in every disk group mounted by the Oracle ASM instance. |
| V$ASM_OPERATION | In an Oracle ASM instance, contains one row for every active Oracle ASM long-running operation executing in the Oracle ASM instance. In a DB instance, contains no rows. |
| V$ASM_TEMPLATE | Contains one row for every template present in every disk group mounted by the Oracle ASM instance. |
| V$ASM_USER | Contains the effective operating system user names of connected database instances and names of file owners. |
| V$ASM_USERGROUP | Contains the creator for each Oracle ASM File Access Control group. |
| V$ASM_USERGROUP_MEMBER | Contains the members for each Oracle ASM File Access Control group. |
| View Name | Description |
| --- | --- |
| V$ASM_ACFSSNAPSHOTS | Contains snapshot information for every mounted Oracle ACFS file system. |
| V$ASM_ACFSVOLUMES | Contains information about mounted Oracle ACFS volumes, correlated with V$ASM_FILESYSTEM. |
| V$ASM_FILESYSTEM | Contains columns that display information for every mounted Oracle ACFS file system. |
| V$ASM_VOLUME | Contains information about each Oracle ADVM volume that is a member of an Oracle ASM instance. |
| V$ASM_VOLUME_STAT | Contains information about statistics for each Oracle ADVM volume. |
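As a quick example of putting these views to work, the ACFS views can be correlated through their FS_NAME columns to report each mounted file system alongside its volume device; a sketch run from the Oracle ASM instance:

SQL> select fs.fs_name, v.vol_device, fs.total_size, fs.total_free
  2  from v$asm_filesystem fs, v$asm_acfsvolumes v
  3  where fs.fs_name = v.fs_name
  4  order by fs.fs_name;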
Use fsck to Check and Repair the Cluster File System
Use the regular Linux fsck command to check and repair the Oracle ACFS. This only needs to be performed from one of the Oracle RAC nodes:
[root@racnode1 ~]# /sbin/fsck -t acfs /dev/asm/docsvol3-300
fsck 1.39 (29-May-2006)
fsck.acfs: version = 11.2.0.1.0.0
fsck.acfs: ACFS-00511: /dev/asm/docsvol3-300 is mounted on at least one node of the cluster.
fsck.acfs: ACFS-07656: Unable to continue
The fsck operation cannot be performed while the file system is online. Unmount the cluster file system from all Oracle RAC nodes:
[root@racnode1 ~]# umount /documents3
[root@racnode2 ~]# umount /documents3
Now check the cluster file system with the file system unmounted:
[root@racnode1 ~]# /sbin/fsck -t acfs /dev/asm/docsvol3-300
fsck 1.39 (29-May-2006)
fsck.acfs: version = 11.2.0.1.0.0
Oracle ASM Cluster File System (ACFS) On-Disk Structure Version: 39.0
*****************************
********** Pass 1: **********
*****************************
The ACFS volume was created at Fri Nov 26 17:20:27 2010
Checking primary file system...
Files checked in primary file system: 100%
Checking if any files are orphaned...
0 orphans found
fsck.acfs: Checker completed with no errors.
Remount the cluster file system on all Oracle RAC nodes:
[root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3
[root@racnode2 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3
Drop ACFS / ASM Volume
Unmount the cluster file system from all Oracle RAC nodes:
[root@racnode1 ~]# umount /documents3
[root@racnode2 ~]# umount /documents3
Log in to the ASM instance and drop the ASM dynamic volume from one of the Oracle RAC nodes:
[grid@racnode1 ~]$ sqlplus / as sysasm

SQL> ALTER DISKGROUP docsdg1 DROP VOLUME docsvol3;

Diskgroup altered.
The same task can be accomplished using the ASMCMD command-line utility:
[grid@racnode1 ~]$ asmcmd voldelete -G docsdg1 docsvol3
Unregister the volume and mount point from the Oracle ACFS mount registry from one of the Oracle RAC nodes:
[root@racnode1 ~]# acfsutil registry -d /documents3
acfsutil registry: successfully removed ACFS mount point /documents3 from Oracle Registry
Finally, remove the mount point directory from all Oracle RAC nodes (if necessary):
[root@racnode1 ~]# rmdir /documents3
[root@racnode2 ~]# rmdir /documents3