OCI – Patching a DB System

In this blog post I will detail the steps to apply patches to bare metal and virtual machine DB systems and their database homes using DBCLI.

Yes, I enjoy the black screen 🙂

No, this is not the only option. You can also use the Console and the APIs.

This procedure is not applicable for Exadata DB systems.

Prerequisites

1) Access to the Oracle Cloud Infrastructure Object Storage service, including connectivity to the applicable Swift endpoint for Object Storage. Oracle recommends using a Service Gateway to enable this access.

2) /u01 file system with at least 15 GB of free space (see the quick checks after this list)

3) Clusterware running

4) All DB system nodes running
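
A quick way to verify items 2, 3 and 4 from one of the nodes is shown below (crsctl lives under the Grid Infrastructure home, so use the full path if it is not already in your PATH):

df -h /u01                 # needs at least 15 GB free
crsctl check cluster -all  # clusterware must be up on all nodes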

Backup

Back up your database prior to the patch event.

Non Prod first

Test this patch on a non-prod (or test) server first.

Patching

1) Update the CLI

 cliadm update-dbcli

2) Wait for the job to complete

dbcli list-jobs

3) Check for installed and available patches

dbcli describe-component

4) Display the latest patch versions available

dbcli describe-latestpatch

5) Run pre check

dbcli update-server --precheck

Run describe-job to check the job status:

dbcli describe-job -i <jobId>
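
If you prefer to wait on the command line rather than re-running describe-job by hand, here is a minimal polling sketch; it assumes the job status shows up on a "Status:" line of the describe-job output, so adjust the grep if your dbcli version formats it differently:

JOB_ID=<jobId>
while true; do
  STATUS=$(dbcli describe-job -i "$JOB_ID" | grep -i 'Status:' | head -1 | awk '{print $NF}')
  echo "$(date) job $JOB_ID status: $STATUS"
  case "$STATUS" in Success|Failure) break ;; esac
  sleep 60
done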

6) Update the server components. This step will patch GI.

dbcli update-server

Once successfully completed, proceed with DB home patching.

7) List db homes

dbcli list-dbhomes

8) Run update-dbhome on the selected Oracle home

dbcli update-dbhome -i <Id>

And check the status with the describe-job command.

Logs

You can find the DCS Logs at:

/opt/oracle/dcs/log/

Under the hood, dbcli relies on opatchauto, so you can also check the $ORACLE_HOME/cfgtoollogs/opatchauto directory for logs.
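
For example, while a patch job is running you can watch the most recent DCS log; the exact file names vary between DCS versions, so list the directory first and tail the newest file:

cd /opt/oracle/dcs/log/
ls -ltr                        # the newest file is the one the current job writes to
tail -100f <most_recent_log_file>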

There is also a nice doc about troubleshooting in case something goes wrong.

DB system – OS patching

When using Oracle Cloud Database Service on VM or bare metal, the customer is responsible for OS updates and GI/DB patches, so let's first go through the steps to update an OL7 server.

This procedure is not applicable for Exadata DB systems.

General Recommendations

1) Don’t touch oraenv or .bash_profile

2) Don’t touch default local firewall rules

Backup

Back up your database prior to the OS update.

NonProd first

Test this procedure on a non-prod server first.

OS Update

1) Check the kernel version. If it is 4.1.12-124.27.1.el7uek, you need to change the bootefi label before updating the OS.

uname -r

To change the bootefi label:

Edit /etc/fstab and change the label bootefi to BOOTEFI (uppercase), reboot the server, and run ls -l /etc/grub2-efi.cfg to check that the required link was created.
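
A minimal sketch of that change, assuming the EFI partition is referenced in /etc/fstab as LABEL=bootefi (take a backup of the file and review the edit before rebooting):

cp /etc/fstab /etc/fstab.$(date +%Y%m%d)
sed -i 's/LABEL=bootefi/LABEL=BOOTEFI/' /etc/fstab
reboot
# after the reboot, confirm the required link exists
ls -l /etc/grub2-efi.cfg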

2) Run the command below to identify the region the server is running in:

curl -s http://169.254.169.254/opc/v1/instance/ |grep region

3) Download the repo file:

wget https://swiftobjectstorage.<region>.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/oci_dbaas_ol7repo -O /tmp/oci_dbaas_ol7repo

and copy it to the yum directory:

cp /tmp/oci_dbaas_ol7repo /etc/yum.repos.d/ol7.repo

4) Download the version lock file and overwrite the existing one (keeping a dated backup):

wget https://swiftobjectstorage.<region>.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/versionlock_ol7.list -O /tmp/versionlock.list

cp /etc/yum/pluginconf.d/versionlock.list /etc/yum/pluginconf.d/versionlock.list-`date +%Y%m%d`
cp /tmp/versionlock.list /etc/yum/pluginconf.d/versionlock.list
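
If you want to script steps 3 and 4, here is a minimal sketch; set REGION to the region identifier you found in step 2 (e.g. us-ashburn-1):

#!/bin/bash
REGION=us-ashburn-1   # region identifier from step 2
BASE=https://swiftobjectstorage.${REGION}.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches

# step 3: repo file
wget ${BASE}/oci_dbaas_ol7repo -O /tmp/oci_dbaas_ol7repo
cp /tmp/oci_dbaas_ol7repo /etc/yum.repos.d/ol7.repo

# step 4: version lock file (keep a dated backup of the current one)
wget ${BASE}/versionlock_ol7.list -O /tmp/versionlock.list
cp /etc/yum/pluginconf.d/versionlock.list /etc/yum/pluginconf.d/versionlock.list-$(date +%Y%m%d)
cp /tmp/versionlock.list /etc/yum/pluginconf.d/versionlock.list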

5) Run yum update

yum update

6) Reboot the server

7) Check the new kernel version

uname -r

Hopefully I can rely on OS Management to update this type of server next time 🙂 So stay tuned!

OCI – Unknown error when running DB system patch pre-check

This week I’ve been working on a DB system Patch for an Oracle 12.2 system.

Task prerequisites are as follows:

1) Access to the Oracle Cloud Infrastructure Object Storage service, including connectivity to the applicable Swift endpoint for Object Storage.

2) /u01 file system with at least 15 GB of free space

3) Clusterware running

4) All DB system nodes running

With all prerequisites checked, I then ran the pre-check option from the console to ensure the patch could be applied successfully.

But I got an unknown error on the work requests page.

Such an error does not help at all, but if you run the same pre-check command from dbcli you will get a different, and actually meaningful, error:

[root@node ~]# dbcli update-server --precheck
DCS-10214:Minimum Kernel version required is: 4.1.12-124.20.3.el6uek.x86_64 Current Kernel version is: 4.1.12-112.16.7.el6uek.x86_64.

Much better, right?

So in order to apply the DB system patch I will need to first update the OS kernel.

This information should be on the prerequisites list but it is not 😦

So from now on I would suggest running the pre-check from dbcli instead of the OCI console by default, or at least when facing unexpected behaviour.

Full documentation on patching DB systems is here, and to update the kernel you can check the OCI docs here.

Hope this helps !

OCI – Object storage cross-region copy

Let’s say you have your Oracle DB backups in Ashburn and, for DR requirements, you must have a copy of those files in Object Storage in Phoenix.

Is it possible?

Yes, it is, but OCI doesn’t have a native replication solution like AWS S3 does, and it also does not allow you to run a bulk copy.

Exactly: at this time you have to copy object by object to the DR region.

Which is a very simple task when using the OCI CLI and bash. A code example is below:

#!/bin/bash 
export OCIHOME=/home/oracle/ocicli 
export OBUCKET=ashbucket
export DBUCKET=phobucket
export NAMESPACE=namespace 

# object store list 
$OCIHOME/oci os object list --bucket-name=$OBUCKET --all | grep name | awk '{ print $2 }' | sed 's/"//g' | sed 's/,//g' > /tmp/ostore.list 

# syntax:
# oci os object copy --namespace-name <object_storage_namespace> --bucket-name <source_bucket_name> --source-object-name <source_object> --destination-namespace <destination_namespace_string> --destination-region <destination_region> --destination-bucket <destination_bucket_name> --destination-object-name <destination_object_name>

# copy each object to the DR region
for id in `cat /tmp/ostore.list`
do
   $OCIHOME/oci os object copy --namespace-name=$NAMESPACE --bucket-name=$OBUCKET --source-object-name=$id --destination-namespace=$NAMESPACE --destination-region=us-phoenix-1 --destination-bucket=$DBUCKET
done

Object Storage is a regional service, so you must authorize the Object Storage service of each region that carries out copy operations on your behalf.

In this example, you have to authorize the Object Storage service in region US East (Ashburn) to manage objects on your behalf.

To accomplish this, create the following policy:

Allow service objectstorage-us-ashburn-1 to manage object-family in tenancy
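
If you prefer to create that policy with the CLI instead of the console, something like this should work (the policy name, description and tenancy OCID below are placeholders):

oci iam policy create --compartment-id <tenancy_ocid> --name objectstorage-copy-policy --description "Allow Object Storage in Ashburn to copy objects on my behalf" --statements '["Allow service objectstorage-us-ashburn-1 to manage object-family in tenancy"]'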

Hopefully OCI will soon deploy a native replication capability 🙂

For more information, Copying Objects documentation is here.

Region id’s can be found here.

Hope it helps,

Rogerio

Update 1: About two weeks after I published this post, OCI launched object storage bucket replication. Docs are here and a new blog post is coming soon!

OCI – File storage snapshot management

OCI File Storage snapshots are not managed automatically by OCI the way block volume backups are when using backup policies.

Which means that you have to create and delete the snapshots by yourself.

In this blog post I will share a shell script to accomplish this task using the OCI CLI.

Here is an example to create daily snapshots: fssnap.sh

#!/bin/bash
export CLI=/home/oracle/ocicli
export FSSOCID=<file storage ocid>
export TSTAMP=`date +%Y%m%d`
export ENV=prod

# create snap
$CLI/oci fs snapshot create --file-system-id=$FSSOCID --name=$ENV.$TSTAMP

You can get the file storage OCID from the OCI console.
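
If you prefer the CLI, you can also list the file systems and grab the OCID from the output (the compartment OCID and availability domain below are placeholders):

oci fs file-system list --compartment-id <compartment_ocid> --availability-domain <AD_name>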

Create a group (StorageAdmins in the example below) and assign the proper privileges, like:

Allow group StorageAdmins to manage file-family in compartment PROD

Schedule this shell script to run on a regular interval that fits your needs, for example with the cron entry shown below.
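
A crontab entry like the one below runs it daily at 01:00; the script path is just an assumption, adjust it to wherever you saved fssnap.sh:

0 1 * * * /home/oracle/scripts/fssnap.sh >> /home/oracle/scripts/fssnap.log 2>&1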

But in my case I had to keep a limited number of backups based on some information such as the environment (prod or dev) and the customer retention policy (bronze, silver or gold).

So I wrote another simple shell script to accomplish this: fscleanup.sh

#!/bin/bash
export CI=/home/oracle/ocicli
export FSSOCID=<file storage ocid>
export TSTAMP=`date +%Y%m%d`
export KEEP=42

# dump backups to tempfile
$CI/oci fs snapshot list --file-system-id=$FSSOCID | grep '"id"' | awk '{print $2}' | sed 's/"//g' | sed 's/,//g' > /tmp/fss.bkp

#count
CT=`cat /tmp/fss.bkp | wc -l`

# remove backups older than $KEEP
if [ "$CT" -gt $KEEP ]; then
    DIFF=$(expr $CT - $KEEP)
    for id in `tail -$DIFF /tmp/fss.bkp`
    do
       $CI/oci fs snapshot delete --force --snapshot-id $id
    done
else
    echo "less then KEEP: $KEEP"
fi

Please check OCI doc about managing snapshots for more info.

Let’s wait for an OCI-native and automated way of doing this, but until then this is the workaround.

OCI VCN – Don’t forget DNS

I’ve found an interesting situation when using different VCN configurations.

Let’s get started.

I’ve created two VCNs:

VCN1:

oci network vcn create --cidr-block 10.2.0.0/16 --compartment-id ocid1.compartment.oc1..aaaaaaaayjazkpkwmzys6xolc4kwncsj3p54iluporxw2iens4qutkjxatpa --display-name vcn1
 {
   "data": {
     "cidr-block": "10.2.0.0/16",
     "compartment-id": "ocid1.compartment.oc1..aaaaaaaayjazkpkwmzys6xolc4kwncsj3p54iluporxw2iens4qutkjxatpa",
     "default-dhcp-options-id": "ocid1.dhcpoptions.oc1.phx.aaaaaaaanuue6l7pre4vtyu6pp2ygucjbcwnvejmezuyfxxxmft76lroemmq",
     "default-route-table-id": "ocid1.routetable.oc1.phx.aaaaaaaaqybsv6xnwe6gbjo74zom4jtxequtnk5bwbe7qvkwyjjfwbusi7fq",
     "default-security-list-id": "ocid1.securitylist.oc1.phx.aaaaaaaactpod3l5kgukj7dkuq4gi2nhi4jojng4eetvhk5googboy3l5poq",
     "defined-tags": {},
     "display-name": "vcn1",
     "dns-label": null,
     "freeform-tags": {},
     "id": "ocid1.vcn.oc1.phx.aaaaaaaaklvs2bjyw3tx5fzfd76n2ab2fhbx2v4afgckksniqidudntoegwq",
     "ipv6-cidr-block": null,
     "ipv6-public-cidr-block": null,
     "lifecycle-state": "AVAILABLE",
     "time-created": "2019-11-13T14:50:57.323000+00:00",
     "vcn-domain-name": null
   },
   "etag": "8d3a4408"
 }

VCN2:

oci network vcn create --cidr-block 10.3.0.0/20 --compartment-id ocid1.compartment.oc1..aaaaaaaayjazkpkwmzys6xolc4kwncsj3p54iluporxw2iens4qutkjxatpa --display-name vcn2 --dns-label vcn2
 {
   "data": {
     "cidr-block": "10.3.0.0/20",
     "compartment-id": "ocid1.compartment.oc1..aaaaaaaayjazkpkwmzys6xolc4kwncsj3p54iluporxw2iens4qutkjxatpa",
     "default-dhcp-options-id": "ocid1.dhcpoptions.oc1.phx.aaaaaaaaka2ja2efstff2el3pw4t46co6jyjx3cq2xl46zr4cstle7s6mlya",
     "default-route-table-id": "ocid1.routetable.oc1.phx.aaaaaaaa2wqwmsiu32n3tc33pylycv7u75b66xuycn7ij2ilwwjeju5hofkq",
     "default-security-list-id": "ocid1.securitylist.oc1.phx.aaaaaaaaas3yhwpvilophj2nshwfdyp2g3o5vykgooj27xt2kqaqsghc5rjq",
     "defined-tags": {},
     "display-name": "vcn2",
     "dns-label": "vcn2",
     "freeform-tags": {},
     "id": "ocid1.vcn.oc1.phx.aaaaaaaauzosk3jx4mhxwwxngnvx5wco3ckoylqu4nioudm5zgrb5o6w6a7a",
     "ipv6-cidr-block": null,
     "ipv6-public-cidr-block": null,
     "lifecycle-state": "AVAILABLE",
     "time-created": "2019-11-13T15:47:09.602000+00:00",
     "vcn-domain-name": "vcn2.oraclevcn.com"
   },
   "etag": "d8b160ad"
 }

As you can see, for VCN2 I provided the dns-label parameter.

At the OCI console the two VCNs show up accordingly: vcn1 has no VCN domain name, while vcn2 got vcn2.oraclevcn.com.

Let’s now create one subnet on each VCN.

oci network subnet create --cidr-block 10.2.1.0/24 --compartment-id ocid1.compartment.oc1..aaaaaaaayjazkpkwmzys6xolc4kwncsj3p54iluporxw2iens4qutkjxatpa --vcn-id=ocid1.vcn.oc1.phx.aaaaaaaa7gr26rfluz43crfypp7qp3dscuqsibfrfq6iai7z5sxz5uhel4va --display-name=sub1pub --availability-domain="xbee:PHX-AD-1"
 {
   "data": {
     "availability-domain": "xbee:PHX-AD-1",
     "cidr-block": "10.2.1.0/24",
     "compartment-id": "ocid1.compartment.oc1..aaaaaaaayjazkpkwmzys6xolc4kwncsj3p54iluporxw2iens4qutkjxatpa",
     "defined-tags": {},
     "dhcp-options-id": "ocid1.dhcpoptions.oc1.phx.aaaaaaaaba6d7pjjlkj6ectw4facyrxvsdfs5ppxdfpliuqgkzpz5q6j4z2a",
     "display-name": "sub1pub",
     "dns-label": null,
     "freeform-tags": {},
     "id": "ocid1.subnet.oc1.phx.aaaaaaaas3fh55h46kadophdob7tj3o26pbcjogxyyqeh2jisrhpqfrrbnqa",
     "ipv6-cidr-block": null,
     "ipv6-public-cidr-block": null,
     "ipv6-virtual-router-ip": null,
     "lifecycle-state": "AVAILABLE",
     "prohibit-public-ip-on-vnic": false,
     "route-table-id": "ocid1.routetable.oc1.phx.aaaaaaaaoycbxt5tkp3e5jei2bu74qnm7x2h3hvydwdtqzc4lciayal466wq",
     "security-list-ids": [
       "ocid1.securitylist.oc1.phx.aaaaaaaadjzgagusqdrppvtp3hrhl4coocyayjozwthirnpjpv2vnfyl4laq"
     ],
     "subnet-domain-name": null,
     "time-created": "2019-11-13T16:07:42.192000+00:00",
     "vcn-id": "ocid1.vcn.oc1.phx.aaaaaaaa7gr26rfluz43crfypp7qp3dscuqsibfrfq6iai7z5sxz5uhel4va",
     "virtual-router-ip": "10.2.1.1",
     "virtual-router-mac": "00:00:17:11:DA:D5"
   },
   "etag": "da7f5e26"
 }

oci network subnet create --cidr-block 10.3.1.0/24 --compartment-id ocid1.compartment.oc1..aaaaaaaayjazkpkwmzys6xolc4kwncsj3p54iluporxw2iens4qutkjxatpa --vcn-id=ocid1.vcn.oc1.phx.aaaaaaaauzosk3jx4mhxwwxngnvx5wco3ckoylqu4nioudm5zgrb5o6w6a7a --display-name=sub1pub --availability-domain="xbee:PHX-AD-1"
 {
   "data": {
     "availability-domain": "xbee:PHX-AD-1",
     "cidr-block": "10.3.1.0/24",
     "compartment-id": "ocid1.compartment.oc1..aaaaaaaayjazkpkwmzys6xolc4kwncsj3p54iluporxw2iens4qutkjxatpa",
     "defined-tags": {},
     "dhcp-options-id": "ocid1.dhcpoptions.oc1.phx.aaaaaaaaka2ja2efstff2el3pw4t46co6jyjx3cq2xl46zr4cstle7s6mlya",
     "display-name": "sub1pub",
     "dns-label": null,
     "freeform-tags": {},
     "id": "ocid1.subnet.oc1.phx.aaaaaaaa5jfcmbumzpebx6svtfp75yqsjtw2r34g3qnfbwvyxo6jps3fytea",
     "ipv6-cidr-block": null,
     "ipv6-public-cidr-block": null,
     "ipv6-virtual-router-ip": null,
     "lifecycle-state": "AVAILABLE",
     "prohibit-public-ip-on-vnic": false,
     "route-table-id": "ocid1.routetable.oc1.phx.aaaaaaaa2wqwmsiu32n3tc33pylycv7u75b66xuycn7ij2ilwwjeju5hofkq",
     "security-list-ids": [
       "ocid1.securitylist.oc1.phx.aaaaaaaaas3yhwpvilophj2nshwfdyp2g3o5vykgooj27xt2kqaqsghc5rjq"
     ],
     "subnet-domain-name": null,
     "time-created": "2019-11-13T16:08:31.031000+00:00",
     "vcn-id": "ocid1.vcn.oc1.phx.aaaaaaaauzosk3jx4mhxwwxngnvx5wco3ckoylqu4nioudm5zgrb5o6w6a7a",
     "virtual-router-ip": "10.3.1.1",
     "virtual-router-mac": "00:00:17:BB:57:17"
   },
   "etag": "16e1d3e4"
 }

Great. Now the interesting part: let’s try to launch a DB system on each VCN.

You have to provide a lot of parameters, so I’m using a JSON file.
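
If you don’t have a template for that JSON yet, the CLI can generate a full skeleton for you; fill in the values and remove the optional parameters you don’t need:

oci db system launch --generate-full-command-json-input > db_19c_vcn1.json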

The attempt to launch a DB system on VCN1 fails with the error below:

oci db system launch --from-json file://db_19c_vcn1.json

ServiceError:
{
    "code": "InvalidParameter",
    "message": "domain name cannot be an empty string.",
    "opc-request-id": "6B793936B1BB4D33BC0DDE6399C21B8B/6049601F5F54B75274E10E48542D39AD/D048ADC38DCDE42EA972F2D3E65FB9AE",
    "status": 400
}

Now on VCN2, the DB system launch works! (output truncated):

reguchi@macpro bin % ./oci db system launch --from-json file://db_19c_vcn2.json
 {
   "data": {
     "availability-domain": "xbee:PHX-AD-1",
     "backup-network-nsg-ids": null,
     "backup-subnet-id": null,
     "cluster-name": "db19c",
     "compartment-id": "ocid1.compartment.oc1..aaaaaaaayjazkpkwmzys6xolc4kwncsj3p54iluporxw2iens4qutkjxatpa",
     "cpu-core-count": 1,
     "data-storage-percentage": 80,
     "data-storage-size-in-gbs": 256,
     "database-edition": "ENTERPRISE_EDITION_EXTREME_PERFORMANCE",
     "db-system-options": {
       "storage-management": "ASM"
     },
     "defined-tags": {},
     "disk-redundancy": "HIGH",
     "display-name": "myTestDB",
     "domain": "sub1pub.vcn2.oraclevcn.com",
     "fault-domains": [
       "FAULT-DOMAIN-1"
     ],
     "freeform-tags": {},
     "hostname": "db1",
     "id": "ocid1.dbsystem.oc1.phx.abyhqljtpqwao42mvywxggjk5xhk6sg6oez65vjopetg5fjcgvtpychuvcma",
     "iorm-config-cache": null,
     "last-patch-history-entry-id": null,
     "license-model": "BRING_YOUR_OWN_LICENSE",
     "lifecycle-details": null,
     "lifecycle-state": "PROVISIONING",
     "listener-port": 1521,
     "node-count": 1,
     "nsg-ids": null,
     "reco-storage-size-in-gb": 256,
     "scan-dns-record-id": null,
     "scan-ip-ids": null,
     "shape": "VM.Standard1.1",

Bottom line: don’t forget to define DNS when using the OCI CLI for VCN provisioning. It is not required by default, but you may miss it later, and it is not possible to add the DNS label after the VCN is created.
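
For reference, this is what the vcn1 create commands would look like with the DNS labels supplied (the label values are just examples):

oci network vcn create --cidr-block 10.2.0.0/16 --compartment-id <compartment_ocid> --display-name vcn1 --dns-label vcn1
oci network subnet create --cidr-block 10.2.1.0/24 --compartment-id <compartment_ocid> --vcn-id <vcn1_ocid> --display-name sub1pub --dns-label sub1pub --availability-domain "xbee:PHX-AD-1"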

Reference: https://docs.cloud.oracle.com/iaas/tools/oci-cli/latest/oci_cli_docs/cmdref/network/vcn/create.html

If you rely on the OCI console, the DNS checkbox is already checked and you are asked to define the dns_label 🙂