OCI – Object storage cross-region copy

Let’s say you have your Oracle DB backups in Ashburn and, for DR requirements, you must keep a copy of those files in Object Storage at Phoenix.

Is it possible?

Yes, it is, but OCI doesn’t have a native replication solution like AWS S3 has, and it also does not allow you to run a bulk copy.

That means you have to copy object by object to the DR region, at least at this time.

This is a simple task when using the OCI CLI and bash. A code example is below:

#!/bin/bash
export OCIHOME=/home/oracle/ocicli
export OBUCKET=ashbucket
export DBUCKET=phobucket
export NAMESPACE=namespace

# list the objects in the source bucket
$OCIHOME/oci os object list --bucket-name=$OBUCKET --all | grep name | awk '{ print $2 }' | sed 's/"//g' | sed 's/,//g' > /tmp/ostore.list

# syntax:
# oci os object copy --namespace-name <object_storage_namespace> --bucket-name <source_bucket_name> --source-object-name <source_object> --destination-namespace <destination_namespace_string> --destination-region <destination_region> --destination-bucket <destination_bucket_name> --destination-object-name <destination_object_name>

# copy each object to the DR region
for id in `cat /tmp/ostore.list`
do
    $OCIHOME/oci os object copy --namespace-name=$NAMESPACE --bucket-name=$OBUCKET --source-object-name=$id --destination-namespace=$NAMESPACE --destination-region=us-phoenix-1 --destination-bucket=$DBUCKET
done
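The grep/awk/sed pipeline in the listing step depends on the pretty-printed JSON layout the CLI emits. Here is a standalone sketch (no OCI access needed) that runs the same pipeline against a made-up sample of that format, so you can verify what ends up in the list file:

```shell
#!/bin/bash
# Standalone sketch: the sample JSON below mimics the shape of
# "oci os object list" output (the object names are made up).
cat > /tmp/oslist.json <<'EOF'
{
  "data": [
    {
      "name": "backup01.bkp",
      "size": 1048576
    },
    {
      "name": "backup02.bkp",
      "size": 2097152
    }
  ]
}
EOF

# Same extraction pipeline as the script above: grab the "name"
# lines, take the value field, strip quotes and trailing commas.
grep name /tmp/oslist.json | awk '{ print $2 }' | sed 's/"//g' | sed 's/,//g'
```

This prints one object name per line (backup01.bkp, backup02.bkp), which is exactly what the copy loop consumes.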

Object Storage is a regional service, so you must authorize the Object Storage service of each region to carry out copy operations on your behalf.

In this example, you have to authorize the Object Storage service in the US East (Ashburn) region to manage objects on your behalf.

To accomplish this, create the following policy:

Allow service objectstorage-us-ashburn-1 to manage object-family in tenancy

Hopefully OCI will soon deploy a native replication capability 🙂

For more information, the Copying Objects documentation is here.

Region IDs can be found here.

Hope it helps,

Rogerio

OCI – File storage snapshot management

OCI File Storage snapshots are not managed automatically by OCI, unlike block volume backups when using policy-managed backups.

This means that you have to create and delete the snapshots yourself.

In this blog post I will share a shell script to accomplish this task using the OCI CLI.

Here is an example to create daily snapshots: fssnap.sh

#!/bin/bash
export CLI=/home/oracle/ocicli
export FSSOCID=<file storage ocid>
export TSTAMP=`date +%Y%m%d`
export ENV=prod

# create the snapshot, named <env>.<date>
$CLI/oci fs snapshot create --file-system-id=$FSSOCID --name=$ENV.$TSTAMP
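The --name value built above is simply <env>.<yyyymmdd>; a quick standalone check of the format (no OCI calls):

```shell
#!/bin/bash
# Reproduce the snapshot name used above: <env>.<yyyymmdd>
ENV=prod
TSTAMP=$(date +%Y%m%d)
SNAPNAME="$ENV.$TSTAMP"
echo "$SNAPNAME"    # e.g. prod.20190801
```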

You can get the file storage OCID at the OCI console.

Create a group for your storage admins and assign the proper privileges, for example:

Allow group StorageAdmins to manage file-family in compartment PROD

Schedule this shell script to run on a regular interval that fits your needs.
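For example, a daily cron entry could look like this (the script and log paths are assumptions):

```shell
# run fssnap.sh every day at 01:00 (paths are assumptions)
0 1 * * * /home/oracle/scripts/fssnap.sh >> /home/oracle/scripts/fssnap.log 2>&1
```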

But in my case, I had to keep a limited number of backups based on information such as environment (prod or dev) and customer retention policy (bronze, silver, or gold).

So I wrote another simple shell script to accomplish this: fscleanup.sh

#!/bin/bash
export CLI=/home/oracle/ocicli
export FSSOCID=<file storage ocid>
export TSTAMP=`date +%Y%m%d`
export KEEP=42

# dump the snapshot ids to a tempfile
$CLI/oci fs snapshot list --file-system-id=$FSSOCID | grep '"id"' | awk '{print $2}' | sed 's/"//g' | sed 's/,//g' > /tmp/fss.bkp

# count
CT=`cat /tmp/fss.bkp | wc -l`

# remove backups older than $KEEP
if [ "$CT" -gt $KEEP ]; then
    DIFF=$(expr $CT - $KEEP)
    for id in `tail -$DIFF /tmp/fss.bkp`
    do
       $CLI/oci fs snapshot delete --force --snapshot-id $id
    done
else
    echo "less than KEEP: $KEEP"
fi
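The retention logic can be sanity-checked standalone. This sketch (no OCI calls, made-up snapshot ids) shows which entries the tail -$DIFF step selects for deletion; note it assumes the snapshot list is ordered newest-first, so the trailing lines are the oldest snapshots:

```shell
#!/bin/bash
# Standalone sketch of the cleanup selection above (no OCI calls):
# keep the newest $KEEP ids and print the ones the real script
# would delete. Assumes the list is ordered newest-first.
KEEP=3
printf 'snap5\nsnap4\nsnap3\nsnap2\nsnap1\n' > /tmp/fss.demo

CT=$(wc -l < /tmp/fss.demo)
if [ "$CT" -gt "$KEEP" ]; then
    DIFF=$(expr "$CT" - "$KEEP")
    tail -"$DIFF" /tmp/fss.demo    # the entries beyond KEEP
fi
```

With five entries and KEEP=3, the last two ids (snap2 and snap1) are selected, which is what the delete loop would receive.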

Please check the OCI documentation about managing snapshots for more info.

Let’s wait for an OCI-native and automated way of doing this, but until then this is the workaround.

OGB Appreciation Day: parallel ops #ThanksOGB

Time is money, so let’s get things done faster with parallel operations like:

  • Expdp/Impdp:
expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=expdpSCOTT.log

impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=impdpSCOTT.log

A must-read note is here.

  • RMAN
CONFIGURE DEVICE TYPE disk PARALLELISM 2;
CONFIGURE DEVICE TYPE sbt PARALLELISM 2;


RMAN always allocates the number of channels specified in PARALLELISM, using specifically configured channels if you have configured them and generic channels if you have not. Note that if you configure specific channels with numbers higher than the parallelism setting, RMAN will not use these channels.
  • Parallel Execution of SQL statements

There is a nice Oracle paper here and, of course, the Oracle docs.

  • Parallel Upgrade Utility (catctl.pl/dbupgrade)
$ORACLE_HOME/bin/dbupgrade -n 4

The -n option specifies the number of processes to use for parallel operations.

Non-CDBs: The -n parameter specifies the number of SQL processes to use when upgrading the database.

Multitenant architecture databases (CDBs): The number of PDBs upgraded concurrently is controlled by the value of the -n parameter. Multiple PDB upgrades are processed together. Starting in Oracle Database 12c, the default value for multitenant architecture databases is the number of CPUs on your system. A cpu_count equal to 24 equates to a default value of 24 for -n.

Values for the -n parameter:

Non-CDBs: The maximum value for -n is 8. The minimum value is 1. The default value is 4.

Multitenant architecture databases (CDBs): The maximum value for -n is unlimited. The minimum value is 4. The maximum number of PDB upgrades running concurrently is the value of -n divided by the value of -N.

The -N option specifies the number of SQL processes to use when upgrading PDBs.

For non-CDBs, this parameter is ignored.
For CDBs, the maximum value is 8. The minimum value is 1. The default value is 2.

So, suppose you do not specify values for the parameters -n and -N (that is, you accept the default value of -N, which is 2, and the default value of -n, which equals CPU_COUNT, 24 in this example).

The following parallel processing occurs:

12 PDBs are upgraded in parallel (CPU_COUNT divided by 2)
2 parallel processes run for each PDB
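The arithmetic above can be sketched as a quick shell check (the CPU count of 24 comes from the example in the text):

```shell
#!/bin/bash
# Sanity check of the arithmetic above:
# concurrent PDB upgrades = (-n value) / (-N value)
N_PROCS=24   # -n: defaults to CPU_COUNT for CDBs (24 in this example)
N_SQL=2      # -N: SQL processes per PDB (default is 2)
PDBS=$((N_PROCS / N_SQL))
echo "PDBs upgraded in parallel: $PDBS"
# prints: PDBs upgraded in parallel: 12
```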

That is what came off the top of my head, so there must be more parallel options 🙂

Thanks, Tim, for getting the community together every year! #ThanksOGB

GUOB TECH DAY / Oracle Groundbreakers LAD Tour 2019 – about the event

In my opinion, the event was better than last year’s in every aspect. The organizers take continuous improvement very seriously, so it is very important that everyone fills out the evaluation form so that next year’s event is even better!

The venue, the facilities, the speakers, and the audience were incredible!

In every session I attended (and also in the one I presented), the room was full of people interested in learning and exchanging experiences.

We had Oracle Product Managers bringing exclusive content and making themselves fully available for questions. I myself asked Ricardo Gonzalez (PM for Oracle Cloud Migration, ACFS & RHP) a ton of questions and ended up winning an event mug 🙂

The last session was by Alex Gorbachev (CTO of Pythian) about the evolution of his career and how he sees the future of the Oracle DBA. Yes, we will have to change and evolve a lot!

In short: the best Oracle technology event in the country … better than Oracle Open World Brasil by far.

Missed this year’s event? Register for free at guob.com.br and watch the available presentations.

But don’t miss next year’s 🙂

ORA-19563 on RMAN restore

I hit error ORA-19563 during an RMAN restore last week.

I was restoring a database backup from Oracle 11.2 on ASM to Oracle 11.2 on a filesystem.

Although it does not cover the same procedure, MOS note RMAN Duplicate Fails With RMAN-06136 ORA-19563 (Doc ID 453123.1) helped me with this issue when it mentioned I could have duplicate filenames.

There were duplicate filenames indeed!

My restore script was:

run {
     set newname for database to '/oradata/%b';
     restore database;
     switch datafile all;
 }

This means that datafile 55 was overwritten by datafile 56 (both had the same basename), hence the RMAN error on the switch step.

Quick fix:

  • remove the file restored for datafile 56
  • restore only datafiles 55 and 56 using the set newname clause, but specifying different filenames:
run {
         set newname for datafile 55 to '/oradata/svm910t02.dbf';
         set newname for datafile 56 to '/oradata/svm910t03.dbf';
         restore datafile 55;
         restore datafile 56;
 }
  • run the original script again, which this time will just perform the switch step

How to prevent this from happening again?

  • Check if you have duplicate filenames before restoring:
select substr(file_name, instr(file_name, '/', -1)) file_name, count(*)
  from dba_data_files
 group by substr(file_name, instr(file_name, '/', -1))
having count(*) > 1
/
  • If yes, run the rman restore like this:
run {
         set newname for datafile 56 to '/oradata/svm910t03.dbf';
         set newname for database to '/oradata/%b';
         restore database;
         switch datafile all;
 }
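The duplicate-basename check from the first bullet above can also be sketched in plain shell against a list of datafile paths (the sample paths below are made up):

```shell
#!/bin/bash
# Standalone sketch: print basenames that appear more than once in
# a datafile list, mimicking the query above (sample paths made up).
printf '%s\n' \
  '+DATA/svm910/datafile/users01.dbf' \
  '+DATA/svm910/datafile/svm910t02.dbf' \
  '+FRA/svm910/datafile/svm910t02.dbf' > /tmp/dbf.list

# take the last path component, then report duplicates
awk -F/ '{ print $NF }' /tmp/dbf.list | sort | uniq -d
# prints: svm910t02.dbf
```

Any basename printed here would collide under the '/oradata/%b' newname template and needs its own set newname clause.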

More information can be found at Oracle 11.2 docs here.

Restart from a failed Oracle upgrade

Interesting situation this morning.

I was performing a database upgrade from 11.2 to 12.2 when my VPN crashed.

I realized that I had forgotten to start the upgrade inside a Linux screen session, which means my upgrade process was lost when the VPN dropped.

Well, Oracle 12.2 has the ability to automatically resume a failed upgrade process from the failed step!

From the Oracle Docs:

Oracle Database 12c release 2 (12.2) includes a new Resume option for Parallel Upgrade Utility. This option is available for both CDBs and Non-CDBs. You are not required to identify failed or incomplete phases when you rerun or restart the upgrade. When you use the Parallel Upgrade Utility using the resume option (-R), the utility automatically detects phases from the previous upgrade that are not completed successfully. The Parallel Upgrade Utility then reruns or restarts just these phases that did not complete successfully, so that the upgrade is completed. Bypassing steps that already completed successfully reduces the amount of time it takes to rerun the upgrade. 

So I just ran $ORACLE_HOME/bin/dbupgrade -n 4 -R -l $ORACLE_HOME/diagnostics and the upgrade process was restarted.

Cool new feature, right?

More info about it here.

Data transfer from AWS Oracle RDS to S3

It looks simple, right? It is, but there is a lot to do before you can actually perform the copy.

Prerequisites:

  • Create an IAM policy that gives RDS read/write/list permissions to the S3 bucket
  • Create an IAM role that gives RDS access to the S3 bucket
  • Associate the IAM role to the DB instance
  • Create a new Option Group or add the S3_INTEGRATION option to an existing one

You can check the details on how to perform these steps here.

OK, now you can perform the data transfer by running, for example, the command below:

SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
       p_bucket_name    =>  'mybucket', 
       p_prefix         =>  '', 
       p_s3_prefix      =>  '', 
       p_directory_name =>  'DATA_PUMP_DIR') 
    AS TASK_ID FROM DUAL; 

It will copy all files in the DATA_PUMP_DIR directory to the S3 bucket mybucket.

This command returns a task_id that is useful for monitoring the transfer status.

You can rely on AWS RDS Events at the console or on the stored procedure below to monitor the transfer job.

 SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-<task_id>.log')); 
