Today I wanted to create a VM with an Oracle SOA/BPM Suite 12c installation, since I'm about to give a workshop on installing it. I used Oracle Linux 6 for my installations, and over the last few years I played around with it quite a lot (for someone who is not a core systems administrator): upgrading all my VMs to the latest update, removing obsolete kernels, adding volumes for installations, and so on. I used Oracle Database 11g, which in the last few months I upgraded to the latest 11gR2 patch set, 11.2.0.4.
I could have made do with an OL6U6 VM running that upgraded 11gR2 database; I upgraded a fairly clean VM only yesterday. But since OL7 has been in the field for a while now, and DB12c even for a few years, I thought I'd try my luck with those.
However, I found that OL7 behaves quite differently from OL6. Gnome is different, and tools like the graphical Logical Volume Manager are absent: there is apparently no graphical LVM tool available in OL7 at all. Since I'm not the only one who has looked for it in vain, I assume it's really not there. By the way: there is a Disks tool, but it only lets you format a bare disk, not create LVs.
Luckily I found this great article on a new tool from Red Hat: the System Storage Manager (ssm). Apparently it is open source, since you can find it on SourceForge, and it is available for Oracle Linux as well.
Install ssm
Yep, you need to install it first:

$ sudo yum install system-storage-manager

Or do it as root (I didn't set up sudo for my one-user virtual course environments):
[root@darlin-vce-db ~]# yum install system-storage-manager

By the way: system-config-lvm, the graphical LVM tool of previous OL releases, is apparently deprecated.
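To make sure the package actually landed, a plain rpm query (nothing ssm-specific) does the trick:

[root@darlin-vce-db ~]# rpm -q system-storage-manager

It should print the installed package name and version.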
List volumes
First list the current devices and volumes using 'ssm list':

[root@darlin-vce-db ~]# ssm list
---------------------------------------------------------------
Device          Free       Used      Total  Pool  Mount point
---------------------------------------------------------------
/dev/sda                          20.00 GB        PARTITIONED
/dev/sda1                        500.00 MB        /boot
/dev/sda2   40.00 MB   19.47 GB   19.51 GB  ol
/dev/sdb                         100.00 GB
---------------------------------------------------------------
-------------------------------------------------
Pool  Type  Devices      Free      Used     Total
-------------------------------------------------
ol    lvm   1        40.00 MB  19.47 GB  19.51 GB
-------------------------------------------------
--------------------------------------------------------------------------------
Volume        Pool  Volume size  FS     FS size       Free  Type    Mount point
--------------------------------------------------------------------------------
/dev/ol/root  ol       17.47 GB  xfs   17.46 GB   12.81 GB  linear  /
/dev/ol/swap  ol        2.00 GB                             linear
/dev/sda1            500.00 MB  xfs  496.67 MB  305.97 MB  part    /boot
--------------------------------------------------------------------------------

As you can see, I added a new disk to my VM, which is listed as /dev/sdb. And you can't find it among the volumes, because I didn't do anything with it yet.
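If you want to cross-check what ssm reports against the kernel's own view of the block devices, plain lsblk works as well:

[root@darlin-vce-db ~]# lsblk /dev/sda /dev/sdb

It should show /dev/sdb as a bare 100GB disk without partitions or LVM children, matching the ssm listing above.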
Add a new LV mounted on /u01
In the past, you needed to perform quite a few steps to create a volume: you had to prepare a disk, create a volume group, add a volume to it, assign space to the volume, make a filesystem on it, and mount it (I'll sketch those classic steps further below for comparison). Now, here's where ssm pays off. Let's first create a folder to use as a mount point.
[root@darlin-vce-db ~]# mkdir /u01
I picked up the name of this mount point during my Oracle days, with my first steps on Linux. I can't remember the story or rationale behind 'u01', but it works for me, and it shows up in the Oracle docs, so I'll stick with it.
Now, let's create a volume called disk01 in a pool called pool01 with /dev/sdb assigned to it, and create the new default filesystem, xfs, on it. Oh, and my /dev/sdb was created with a size of 100GB:
[root@darlin-vce-db ~]# ssm create -s 100GB -n disk01 --fstype xfs -p pool01 /dev/sdb /u01
Not enough space (104853504.0 KB) in the pool 'pool01' to create volume!
Adjust (N/y/q) ? Y
  Logical volume "disk01" created.
meta-data=/dev/pool01/disk01     isize=256    agcount=4, agsize=6553344 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=26213376, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=12799, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Apparently this can all be done in one go. Since the usable space in the pool is slightly less than the full 100GB I asked for (LVM reserves a bit of the disk for its own metadata), ssm asked whether it should adjust the volume size, which I confirmed.
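For comparison, here is roughly the classic sequence that this single ssm command replaces. This is just a sketch using the same disk and names, not what I actually ran:

[root@darlin-vce-db ~]# pvcreate /dev/sdb                      # prepare the disk as a physical volume
[root@darlin-vce-db ~]# vgcreate pool01 /dev/sdb               # create a volume group on it
[root@darlin-vce-db ~]# lvcreate -n disk01 -l 100%FREE pool01  # assign all free space to a logical volume
[root@darlin-vce-db ~]# mkfs.xfs /dev/pool01/disk01            # make a filesystem on it
[root@darlin-vce-db ~]# mount /dev/pool01/disk01 /u01          # and mount it

Five commands and several man pages versus one line of ssm.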
Now do a list again
[root@darlin-vce-db ~]# ssm list
------------------------------------------------------------------
Device          Free        Used       Total  Pool    Mount point
------------------------------------------------------------------
/dev/sda                             20.00 GB         PARTITIONED
/dev/sda1                           500.00 MB         /boot
/dev/sda2   40.00 MB    19.47 GB    19.51 GB  ol
/dev/sdb     0.00 KB   100.00 GB   100.00 GB  pool01
------------------------------------------------------------------
------------------------------------------------------
Pool    Type  Devices      Free       Used      Total
------------------------------------------------------
ol      lvm   1        40.00 MB   19.47 GB   19.51 GB
pool01  lvm   1         0.00 KB  100.00 GB  100.00 GB
------------------------------------------------------
----------------------------------------------------------------------------------------
Volume              Pool    Volume size  FS     FS size       Free  Type    Mount point
----------------------------------------------------------------------------------------
/dev/ol/root        ol         17.47 GB  xfs   17.46 GB   12.81 GB  linear  /
/dev/ol/swap        ol          2.00 GB                             linear
/dev/pool01/disk01  pool01    100.00 GB  xfs   99.95 GB   99.95 GB  linear  /u01
/dev/sda1                     500.00 MB  xfs  496.67 MB  305.97 MB  part    /boot
----------------------------------------------------------------------------------------
Here you find that there is now a pool called 'pool01', with a volume named 'disk01', mounted on /u01.
To list the filesystem on /u01, issue the command 'df /u01':
[root@darlin-vce-db ~]# df /u01
Filesystem                 1K-blocks   Used  Available Use% Mounted on
/dev/mapper/pool01-disk01  104802308  32928  104769380   1% /u01
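And if you want to inspect the filesystem geometry again later, xfs_info on the mount point prints the same meta-data summary that mkfs.xfs showed during creation:

[root@darlin-vce-db ~]# xfs_info /u01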
I want it added to /etc/fstab, so that it is mounted automatically at boot. So edit the file as follows:
[root@darlin-vce-db u01]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon May 11 20:20:14 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/ol-root                        /      xfs   defaults  0 0
UUID=7a285d9f-1812-4d72-9bd2-12e50eddc855  /boot  xfs   defaults  0 0
/dev/mapper/ol-swap                        swap   swap  defaults  0 0
/dev/mapper/pool01-disk01                  /u01   xfs   defaults  0 0
I duplicated the first line, the one with /dev/mapper/ol-root, to the end of the file, changed the device name to match the filesystem listed for /u01 above, and changed the mount point to /u01, of course.
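To check the new entry without rebooting, you can unmount the volume and let mount re-read fstab; this assumes nothing is using /u01 yet:

[root@darlin-vce-db ~]# umount /u01
[root@darlin-vce-db ~]# mount -a
[root@darlin-vce-db ~]# df /u01

If df shows /dev/mapper/pool01-disk01 on /u01 again, the fstab entry is good.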
Create group oinstall and add oracle to it
I want to use the new volume for my Oracle installations. So first let's create the group oinstall and add the oracle user to it:

[root@darlin-vce-db u01]# groupadd oinstall
[root@darlin-vce-db u01]# usermod -a -G oinstall oracle
[root@darlin-vce-db u01]# groups oracle
oracle : oracle oinstall

Then add an app folder and make oracle the owner of it:
[root@darlin-vce-db ~]# cd /u01
[root@darlin-vce-db u01]# mkdir app
[root@darlin-vce-db u01]# chown oracle:oinstall app
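A quick sanity check that the membership and ownership took:

[root@darlin-vce-db u01]# id oracle
[root@darlin-vce-db u01]# ls -ld app

id should list oinstall among oracle's groups, and ls should show the app folder owned by oracle:oinstall. Note that the new group membership only takes effect in sessions started after the usermod.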