Variable substitution using new_val

This is not new, but I wasn't aware of this cool feature!

You can assign a column value to a variable using new_val in SQL*Plus.

From the doc: “Specifies a variable to hold a column value”

Example:

column PROPERTY_VALUE new_val temporary_tablespace

select PROPERTY_VALUE from database_properties where PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';

This way I can easily assign the output of the SELECT command to the substitution variable temporary_tablespace.

You can check the variable's value using the DEFINE command:

define temporary_tablespace
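From there you can reuse the value in later statements. A minimal sketch (the user name is hypothetical):

-- &temporary_tablespace expands to the value captured by the SELECT above
alter user scott temporary tablespace &temporary_tablespace;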

Easy, right?

Recommended reading here.

Uploading files to MOS

I want to share a very useful tip on how to upload files to MOS.

I think most people use the browser interface to upload files to the support.oracle.com website.

But if you want to upload the file from the Linux host itself, or the file is larger than 2 GB, go with curl instead.

The syntax is:

curl -T <file> -u <metalink_id> https://transport.oracle.com/upload/issue/<SR_number>/

Example:

curl -T /var/tmp/sosreport-c0329121-3-23234203192-2020-06-02-edpizve.tar.xz -u rogerio.eguchi@velocitycloud.com https://transport.oracle.com/upload/issue/3-23223232323/

You just need to enter your password when prompted.
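If you script this, curl's standard .netrc support avoids the interactive prompt. A sketch (protect the file with chmod 600; the password is a placeholder):

# ~/.netrc
machine transport.oracle.com login rogerio.eguchi@velocitycloud.com password <your_password>

curl -n -T <file> https://transport.oracle.com/upload/issue/<SR_number>/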

Relying on this approach saves time, as you don't have to download the file(s) to your laptop and then upload them to Oracle Support.

Easier, right?

Fixing udev ASM rules with kfed

I’ve been working on a very interesting project: DRaaS (Disaster Recovery as a Service).

Yes, that is it. We are configuring a DR on AWS for a huge on-premises SAP system.

Regarding Oracle, we are relying on two technologies:

AWS CloudEndure

Oracle Data Guard

So during a test cutover with CloudEndure, we realised that the on-premises server was using udev rules and the iSCSI protocol to access the storage.

On AWS EBS you don't have this option, so a config change must be made for the disks to be available to ASM.

Oracle ASM kfed to the rescue!

But what is kfed?

It is an Oracle ASM tool to gather ASM disk header information when ASM diskgroups cannot be mounted.

Script to fix the udev rules:

_kfed="$(grep ^+ASM /etc/oratab | cut -d\: -f 2)/bin/kfed"
for disk in `blkid | grep oracleasm | awk -F\: '{print $1}'`;do
device_name=`echo $disk | sed 's,/dev/,,g'`
echo -n "KERNEL==\"$device_name\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0664\", SYMLINK+=\"oracleasm/disks/"
$_kfed read $disk 2>/dev/null | grep dskname | awk '{print $2 "\""}'
done > /etc/udev/rules.d/99-asm.rules

After fixing the udev ASM rules, you must reload them with:

udevadm control --reload-rules
udevadm trigger

Then check that the Oracle ASM disk permissions are correct under /dev.
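For example (the symlink directory comes from the SYMLINK+= rule above):

ls -l /dev/oracleasm/disks/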

For more information on ASM tools you can check MOS: ASM tools used by Support : KFOD, KFED, AMDU (Doc ID 1485597.1)

Oracle Gold Image

A gold image is a copy of a software-only, installed Oracle home. It is used to copy an image of an Oracle home to a new host on a new file system to serve as an active, usable Oracle home.

Why should you use it?

It will allow you to have a “perfect” version of the OH you need to deploy.

I think until now most DBAs have installed the base Oracle version and then applied the needed (or latest) RU.

From now on you can just build an Oracle Gold Image and create your OHs from it.

Very simple example: Creating a Gold Image for Oracle 19.7

First you need to install Oracle 19.3, then apply patch 30783556.
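Applying the RU is the usual OPatch flow. A sketch (the zip name and directory layout are assumed; always follow the patch README):

[oracle@server ~]$ unzip -q p30783556_190000_Linux-x86-64.zip -d /u01/patches
[oracle@server ~]$ cd /u01/patches/30783556
[oracle@server 30783556]$ $ORACLE_HOME/OPatch/opatch apply

After that, lspatches confirms the home is at 19.7: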

[oracle@server ~]$ $ORACLE_HOME/OPatch/opatch lspatches
30894985;OCW RELEASE UPDATE 19.7.0.0.0 (30894985)
30869156;Database Release Update : 19.7.0.0.200414 (30869156)

Finally to create the Gold Image:

[oracle@server ~]$ $ORACLE_HOME/runInstaller -silent -createGoldImage -destinationLocation /u01/soft
Launching Oracle Database Setup Wizard…
Successfully Setup Software.
Gold Image location: /u01/soft/db_home_2020-05-22_08-08-26PM.zip

So this zip file db_home_2020-05-22_08-08-26PM.zip will allow you to deploy Oracle 19.7.

How?

Just unzip it and execute runInstaller 🙂

Example:

[oracle@server ~]$ mkdir -p /u01/c0s6/app/oracle/product/19.7.0.0/dbhome_1

[oracle@server ~]$ unzip -q /u01/soft/db_home_2020-05-22_08-08-26PM.zip -d /u01/c0s6/app/oracle/product/19.7.0.0/dbhome_1

export ORACLE_HOME=/u01/c0s6/app/oracle/product/19.7.0.0/dbhome_1
export ORA_INVENTORY=/u01/c0s6/app/oraInventory
export ORACLE_BASE=/u01/c0s6/app/oracle

[oracle@server ~]$ ${ORACLE_HOME}/runInstaller -ignorePrereq -waitforcompletion -silent \
-responseFile ${ORACLE_HOME}/install/response/db_install.rsp \
oracle.install.option=INSTALL_DB_SWONLY \
ORACLE_HOSTNAME=${ORACLE_HOSTNAME} \
UNIX_GROUP_NAME=oinstall \
INVENTORY_LOCATION=${ORA_INVENTORY} \
SELECTED_LANGUAGES=en \
ORACLE_HOME=${ORACLE_HOME} \
ORACLE_BASE=${ORACLE_BASE} \
oracle.install.db.InstallEdition=EE \
oracle.install.db.OSDBA_GROUP=dba \
oracle.install.db.OSOPER_GROUP=dba \
oracle.install.db.OSBACKUPDBA_GROUP=dba \
oracle.install.db.OSDGDBA_GROUP=dba \
oracle.install.db.OSKMDBA_GROUP=dba \
oracle.install.db.OSRACDBA_GROUP=dba \
oracle.install.db.ConfigureAsContainerDB=false \
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
DECLINE_SECURITY_UPDATES=true
Launching Oracle Database Setup Wizard…

The response file for this session can be found at:
/u01/c0s6/app/oracle/product/19.7.0.0/dbhome_1/install/response/db_2020-05-22_08-54-24PM.rsp
You can find the log of this install session at:
/u01/c0s6/app/oraInventory/logs/InstallActions2020-05-22_08-54-24PM/installActions2020-05-22_08-54-24PM.log
As a root user, execute the following script(s):
1. /u01/c0s6/app/oracle/product/19.7.0.0/dbhome_1/root.sh
Execute /u01/c0s6/app/oracle/product/19.7.0.0/dbhome_1/root.sh on the following nodes:
[server]
Successfully Setup Software.

[root@server tmp]# /u01/c0s6/app/oracle/product/19.7.0.0/dbhome_1/root.sh
Check /u01/c0s6/app/oracle/product/19.7.0.0/dbhome_1/install/root_c0s64150_2020-05-22_20-56-10-578402868.log for the output of root script

[oracle@server ~]$ $ORACLE_HOME/OPatch/opatch lspatches
30894985;OCW RELEASE UPDATE 19.7.0.0.0 (30894985)
30869156;Database Release Update : 19.7.0.0.200414 (30869156)

How cool is that?

And it gets better with Fleet Provisioning.

OCI – Object storage cross-region copy

Let’s say you have your Oracle DB backups in Ashburn and, for DR requirements, you must have a copy of those files on object storage in Phoenix.

Is it possible?

Yes, it is, but OCI doesn't have a native replication solution like AWS S3, and it also doesn't allow you to run a bulk copy.

Exactly: at this time you have to copy object by object to the DR region.

Which is a very simple task when using the OCI CLI and bash. A code example below:

#!/bin/bash 
export OCIHOME=/home/oracle/ocicli 
export OBUCKET=ashbucket
export DBUCKET=phobucket
export NAMESPACE=namespace 

# list the objects in the source bucket
$OCIHOME/oci os object list --bucket-name=$OBUCKET --all | grep name | awk '{ print $2 }' | sed 's/"//g' | sed 's/,//g' > /tmp/ostore.list

# syntax:
# oci os object copy --namespace-name <object_storage_namespace> --bucket-name <source_bucket_name> --source-object-name <source_object> --destination-namespace <destination_namespace_string> --destination-region <destination_region> --destination-bucket <destination_bucket_name> --destination-object-name <destination_object_name>

# copy each object to the DR region
for id in `cat /tmp/ostore.list`; do
    $OCIHOME/oci os object copy --namespace-name=$NAMESPACE --bucket-name=$OBUCKET --source-object-name=$id --destination-namespace=$NAMESPACE --destination-region=us-phoenix-1 --destination-bucket=$DBUCKET
done
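Each copy call returns a work request ID, which you can poll to confirm completion. A sketch (the OCID is a placeholder):

oci os work-request get --work-request-id <work_request_ocid>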

Object Storage is a regional service, so you must authorize the Object Storage service in each region that carries out copy operations on your behalf.

In this example, you have to authorize the Object Storage service in the US East (Ashburn) region to manage objects on your behalf.

To accomplish this, create the following policy:

Allow service objectstorage-us-ashburn-1 to manage object-family in tenancy

Hopefully OCI will soon deploy a native replication capability 🙂

For more information, the Copying Objects documentation is here.

Region IDs can be found here.

Hope it helps,

Rogerio

Update 1: About two weeks after I published this post, OCI launched object storage bucket replication. Docs here and a new blog post soon!

OCI – File storage snapshot management

OCI file storage snapshots are not managed automatically by OCI the way block volume backups are when using policy-managed backups.

This means you have to create and delete the snapshots yourself.

In this blog post I will share shell scripts to accomplish this task using the OCI CLI.

Here is an example to create daily snapshots: fssnap.sh

#!/bin/bash
export CLI=/home/oracle/ocicli
export FSSOCID=<file storage ocid>
export TSTAMP=`date +%Y%m%d`
export ENV=prod

# create snap
$CLI/oci fs snapshot create --file-system-id=$FSSOCID --name=$ENV.$TSTAMP

You can get the file storage OCID from the OCI console.

Create a user and group and assign the below policy to the group:

Allow group StorageAdmins to manage file-family in compartment PROD

Schedule this shell script to run on a regular interval that fits your needs.
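For example, a daily 01:00 run via cron could look like this (the script path is assumed):

0 1 * * * /home/oracle/scripts/fssnap.sh >> /home/oracle/scripts/fssnap.log 2>&1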

But in my case I had to keep a limited number of backups based on some information such as: environment (prod or dev) and customer retention policy (bronze, silver or gold).

So I wrote another simple shell script to accomplish this: fscleanup.sh

#!/bin/bash
export CI=/home/oracle/ocicli
export FSSOCID=<file storage ocid>
export TSTAMP=`date +%Y%m%d`
export KEEP=42

# dump backups to tempfile
$CI/oci fs snapshot list --file-system-id=$FSSOCID | grep '"id"' | awk '{print $2}' | sed 's/"//g' | sed 's/,//g' > /tmp/fss.bkp

# count
CT=`cat /tmp/fss.bkp | wc -l`

# remove backups older than $KEEP (assumes the list is ordered newest first)
if [ "$CT" -gt $KEEP ]; then
    DIFF=$(expr $CT - $KEEP)
    for id in `tail -$DIFF /tmp/fss.bkp`
    do
       $CI/oci fs snapshot delete --force --snapshot-id $id
    done
else
    echo "less then KEEP: $KEEP"
fi

Please check the OCI documentation on managing snapshots for more info.

Let's wait for a native, automated way of doing this on OCI, but until then this is the workaround.

Data transfer from AWS Oracle RDS to S3

It looks simple, right? It is, but there is a lot to do before you can actually perform the copy.

Prerequisites:

  • Create an IAM policy that gives RDS read/write/list permissions to the S3 bucket
  • Create an IAM role that gives RDS access to the S3 bucket
  • Associate the IAM role with the DB instance
  • Create a new Option Group or associate the S3_INTEGRATION option to an existing one

You can check the details on how to perform these steps here.

OK, now you can perform the data transfer by running, for example, the command below:

SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
       p_bucket_name    =>  'mybucket', 
       p_prefix         =>  '', 
       p_s3_prefix      =>  '', 
       p_directory_name =>  'DATA_PUMP_DIR') 
    AS TASK_ID FROM DUAL; 

It will copy all the files in the DATA_PUMP_DIR directory to the S3 bucket mybucket.

This command returns a task_id that is useful for monitoring the transfer status.

You can rely on AWS RDS events in the console or on the query below to monitor the transfer job.

 SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-<task_id>.log')); 


ORA-1031 on Logical Standby

I started studying for the Oracle 12c Data Guard Administrator exam a couple of days ago, and during my lab tests I hit error ORA-1031 on the logical standby.

The truth is that everything related to logical standby is new to me, since I have never had the opportunity to work with this technology!

Before getting to error ORA-1031, the DG broker showed the generic error ORA-16810:

[screenshot: logical_error]

You can get to the real error in two ways:

  • Checking the alert.log from the logical standby

[screenshot: alert_stdb_err]

The trace file from the apply process (chicago_as01_18532.trc) also shows error ORA-1031:

[oracle@oralab2 ~]$ tail -5 /u01/app/oracle/diag/rdbms/chicago/chicago/trace/chicago_as01_18532.trc
*** 2018-12-10 06:40:38.414
ORA-01031: insufficient privileges
OPIRIP: Uncaught error 447. Error stack:
ORA-00447: fatal error in background process
ORA-01031: insufficient privileges
  • Running the DG broker command show database <db> ‘StatusReport’

[screenshot: logical_ora1031]

The issue started when I ran the command “grant sysdg to sysdg”, since the logical standby apply process does not run with SYSDBA privileges.

To fix it we need to create a procedure to skip grant errors on specific schemas.

Example:

CREATE OR REPLACE PROCEDURE sys.handle_error_ddl (
    old_stmt  IN  VARCHAR2,
    stmt_type IN  VARCHAR2,
    schema    IN  VARCHAR2,
    name      IN  VARCHAR2,
    xidusn    IN  NUMBER,
    xidslt    IN  NUMBER,
    xidsqn    IN  NUMBER,
    error     IN  VARCHAR2,
    new_stmt  OUT VARCHAR2
) AS
BEGIN
    -- Default to what we already have
    new_stmt := old_stmt;

    -- Ignore any GRANT errors on the SYS or SYSDG schemas
    IF INSTR(UPPER(old_stmt), 'GRANT') > 0
    THEN
        IF schema IS NULL
           OR (schema IS NOT NULL AND
               (UPPER(schema) = 'SYS' OR UPPER(schema) = 'SYSDG'))
        THEN
            new_stmt := NULL;
            -- record the fact that we just skipped an error on the SYS or SYSDG schemas
            -- code not shown here
        END IF;
    END IF;
END handle_error_ddl;
/
EXECUTE DBMS_LOGSTDBY.SKIP_ERROR (stmt => 'NON_SCHEMA_DDL', schema_name => NULL, object_name => NULL, proc_name => 'SYS.HANDLE_ERROR_DDL');

After that you just have to restart the logical standby apply process.
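On the logical standby, that would be something like:

-- stop and restart SQL Apply on the logical standby
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;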

Update 1:

Another situation where error ORA-1031 can happen is when the password file (orapw<sid>) is out of sync between the primary and standby databases.

Oracle Data Guard Database Compare

A new procedure called DBMS_DBCOMP.DBCOMP (available in 12.2) compares the data blocks stored on the primary database and on its physical standby. The tool looks for disk errors that cannot be detected by other tools such as the DBVERIFY utility.

Note that Data Guard itself already validates hot data (data that is read or changed) against corruption, both on the primary and on the standby.

But the new tool goes further, since it also validates cold data, giving the DBA confidence that the environment is free of physical corruption.

Usage:

DBMS_DBCOMP.DBCOMP (datafile IN varchar2, outputfile IN varchar2, block_dump IN boolean);

Where:

datafile – can be the datafile number or name. Use 'ALL' to compare all datafiles.

outputfile – sets a prefix for the output file name. By default the file is generated in the $ORACLE_HOME/dbs directory. One output file is generated per datafile.

block_dump – boolean parameter specifying whether the block contents should be dumped to the output file when a block that differs between primary and standby is found. False by default.

Example (run it on the primary):

exec sys.dbms_dbcomp.dbcomp('ALL','standbypdbComparison', true);

and use the V$SESSION_LONGOPS view to track progress.
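A simple progress query might look like this (a sketch; you may need to adjust the filter for your session):

select sid, opname, sofar, totalwork, round(sofar/totalwork*100,1) pct
  from v$session_longops
 where totalwork > 0
   and sofar <> totalwork;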

Purging histograms

Oracle will automatically create histograms during the nightly maintenance window. This might impact query performance, so use dbms_stats to purge them. You can also change or disable the automatic statistics collection job.

The following command will purge a column histogram:

exec dbms_stats.delete_column_stats( ownname=>'&OWNER', tabname=>'&TABLE', colname=>'&COLUMN', col_stat_type=>'HISTOGRAM');

Or, if you need to purge all histograms from a table, use this query to generate the commands:

select 'exec dbms_stats.delete_column_stats('''||owner||''','''||table_name||''','''||column_name||''', col_stat_type=> ''HISTOGRAM''); '
from dba_tab_columns
where owner = '&OWNER' and table_name = '&TABLE_NAME' and histogram not in ('NONE')
/
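To keep the auto stats job from recreating a histogram, you can also set a table preference; a sketch (the owner, table and column names are hypothetical):

exec dbms_stats.set_table_prefs('HR', 'EMPLOYEES', 'METHOD_OPT', 'FOR ALL COLUMNS SIZE AUTO, FOR COLUMNS SIZE 1 SALARY');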

More good info at:

https://blogs.oracle.com/optimizer/how-do-i-drop-an-existing-histogram-on-a-column-and-stop-the-auto-stats-gathering-job-from-creating-it-in-the-future

https://blogs.oracle.com/optimizer/how-does-the-methodopt-parameter-work

http://docs.oracle.com/database/121/TGSQL/tgsql_histo.htm