ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_CUBE_EXP"

If you get the error above when running expdp, you most likely have OLAP objects remaining in the data dictionary while OLAP itself is not installed or was de-installed.

In my situation, this happened on Oracle 12.2, but it can also happen on 11.2.

This can be confirmed if the query below returns rows:

SELECT * FROM SYS.EXPPKGACT$ WHERE PACKAGE = 'DBMS_CUBE_EXP';
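
You can also verify that the OLAP components are indeed missing or removed; APS, XOQ and AMD are the OLAP-related entries in DBA_REGISTRY:

-- OLAP-related components: APS (OLAP Analytic Workspace),
-- XOQ (Oracle OLAP API), AMD (OLAP Catalog)
SELECT comp_id, comp_name, status
  FROM dba_registry
 WHERE comp_id IN ('APS', 'XOQ', 'AMD');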

To fix it, just run:

-- back up the exppkgact$ table
create table exppkgact$_bck as select * from exppkgact$;

-- delete the DBMS_CUBE_EXP entry from exppkgact$
delete from exppkgact$ where package = 'DBMS_CUBE_EXP' and schema = 'SYS';

commit;
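
To confirm the cleanup worked, count the remaining entries; the query should now return 0:

SELECT COUNT(*) FROM SYS.EXPPKGACT$ WHERE PACKAGE = 'DBMS_CUBE_EXP';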

And run expdp again.

OCI – Object storage cross-region copy

Let’s say you have your Oracle DB backups in Ashburn and, for DR requirements, you must have a copy of those files in Object Storage at Phoenix.

Is it possible?

Yes, it is, but OCI doesn’t have a native replication feature like AWS S3, and it also doesn’t let you run a bulk copy.

That’s right: at this time you have to copy object by object to the DR region.

This is a very simple task using the OCI CLI and bash. A code example below:

#!/bin/bash
export OCIHOME=/home/oracle/ocicli
export OBUCKET=ashbucket
export DBUCKET=phobucket
export NAMESPACE=namespace

# list the objects in the source bucket and extract the object names
$OCIHOME/oci os object list --bucket-name=$OBUCKET --all | grep name | awk '{ print $2 }' | sed 's/"//g' | sed 's/,//g' > /tmp/ostore.list

# syntax:
# oci os object copy --namespace-name <object_storage_namespace> --bucket-name <source_bucket_name> --source-object-name <source_object> --destination-namespace <destination_namespace_string> --destination-region <destination_region> --destination-bucket <destination_bucket_name> --destination-object-name <destination_object_name>

# copy each object to the DR region
for id in `cat /tmp/ostore.list`
do
    $OCIHOME/oci os object copy --namespace-name=$NAMESPACE --bucket-name=$OBUCKET --source-object-name=$id --destination-namespace=$NAMESPACE --destination-region=us-phoenix-1 --destination-bucket=$DBUCKET
done
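
Each copy call is asynchronous: the CLI returns an opc-work-request-id that can be polled to track the copy progress. A minimal sketch, assuming the os work-request subcommand of the OCI CLI:

# poll the copy status; replace <work_request_id> with the id returned by the copy command
$OCIHOME/oci os work-request get --work-request-id <work_request_id>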

Object Storage is a regional service, so you must authorize the Object Storage service of each region to carry out copy operations on your behalf.

In this example, you have to authorize the Object Storage service in region US East (Ashburn) to manage objects on your behalf.

To accomplish this, create the following policy:

Allow service objectstorage-us-ashburn-1 to manage object-family in tenancy

Hopefully OCI will soon deploy a native replication capability 🙂

For more information, the Copying Objects documentation is here.

Region IDs can be found here.

Hope it helps,

Rogerio

Update 1: About two weeks after I published this post, OCI launched Object Storage bucket replication. Docs here and a new blog post soon!

OCI – File storage snapshot management

OCI File Storage snapshots are not managed automatically by OCI the way block volume backups are when using policy-managed backups.

This means you have to create and delete the snapshots yourself.

In this blog post I will share a shell script to accomplish this task using the OCI CLI.

Here is an example to create daily snapshots: fssnap.sh

#!/bin/bash
export CLI=/home/oracle/ocicli
export FSSOCID=<file storage ocid>
export TSTAMP=`date +%Y%m%d`
export ENV=prod

# create a snapshot named $ENV.$TSTAMP (e.g. prod.YYYYMMDD)
$CLI/oci fs snapshot create --file-system-id=$FSSOCID --name=$ENV.$TSTAMP

You can get the file system OCID at the OCI console.
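
Alternatively, the file system OCIDs can be listed with the CLI; a sketch, where the compartment OCID and availability domain are placeholders you must fill in:

# list file systems in a given compartment and availability domain
oci fs file-system list --compartment-id <compartment_ocid> --availability-domain <availability_domain>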

Create a group for your users and assign the proper privileges, like:

Allow group StorageAdmins to manage file-family in compartment PROD

Schedule this shell script to run on a regular interval that fits your needs.
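
For example, a crontab entry like the one below would take a snapshot every day at 01:00 (the script path is hypothetical):

# run fssnap.sh daily at 01:00 and append its output to a log
0 1 * * * /home/oracle/scripts/fssnap.sh >> /home/oracle/scripts/fssnap.log 2>&1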

But in my case I had to keep a limited number of backups based on some information such as environment (prod or dev) and customer retention policy (bronze, silver or gold).

So I wrote another simple shell script to accomplish this: fscleanup.sh

#!/bin/bash
export CI=/home/oracle/ocicli
export FSSOCID=<file storage ocid>
export TSTAMP=`date +%Y%m%d`
export KEEP=42

# dump the snapshot OCIDs to a tempfile
$CI/oci fs snapshot list --file-system-id=$FSSOCID | grep '"id"' | awk '{print $2}' | sed 's/"//g' | sed 's/,//g' > /tmp/fss.bkp

# count
CT=`cat /tmp/fss.bkp | wc -l`

# remove backups older than $KEEP
if [ "$CT" -gt "$KEEP" ]; then
    DIFF=$(expr $CT - $KEEP)
    for id in `tail -$DIFF /tmp/fss.bkp`
    do
        $CI/oci fs snapshot delete --force --snapshot-id $id
    done
else
    echo "snapshot count is below KEEP: $KEEP"
fi
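
One assumption worth noting: the cleanup relies on the snapshot list coming back newest-first, so that tail picks the oldest ones. If you want to make the ordering explicit, the CLI's JMESPath --query support can enforce it; a sketch:

# sort newest-first explicitly so tail -$DIFF always picks the oldest snapshots
$CI/oci fs snapshot list --file-system-id=$FSSOCID --query 'reverse(sort_by(data, &"time-created"))[].id' | grep ocid | sed 's/[",]//g' > /tmp/fss.bkp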

Please check the OCI documentation about managing snapshots for more info.

Let’s wait for a native, automated way of doing this in OCI, but until then this is the workaround.

ORA-19563 on rman restore

I hit error ORA-19563 during an rman restore last week.

I was restoring a database backup from Oracle 11.2 on ASM to Oracle 11.2 on a filesystem.

Although it covers a different procedure, MOS note RMAN Duplicate Fails With RMAN-06136 ORA-19563 (Doc ID 453123.1) helped me with this issue by mentioning that I could have duplicate filenames.

There were duplicate filenames indeed!

My restore script was:

run {
    set newname for database to '/oradata/%b';
    restore database;
    switch datafile all;
}

Since %b expands to the base filename only, two datafiles with the same base filename got restored to the same path: datafile 55 was overwritten by datafile 56, hence the rman error on the switch step.

Quick fix:

  • remove the restored datafile 56 from disk
  • restore only datafiles 55 and 56, using the set newname clause but specifying different filenames:
run {
    set newname for datafile 55 to '/oradata/svm910t02.dbf';
    set newname for datafile 56 to '/oradata/svm910t03.dbf';
    restore datafile 55;
    restore datafile 56;
}
  • run the original script again, which this time will just perform the switch step

How to prevent this from happening again?

  • Check if you have duplicate filenames before restoring:
select substr(file_name, instr(file_name, '/', -1)) file_name, count(*)
  from dba_data_files
 group by substr(file_name, instr(file_name, '/', -1))
having count(*) > 1
/
  • If yes, run rman restore like this:
run {
    set newname for datafile 56 to '/oradata/svm910t03.dbf';
    set newname for database to '/oradata/%b';
    restore database;
    switch datafile all;
}

More information can be found at Oracle 11.2 docs here.

Data transfer from AWS Oracle RDS to S3

It looks simple, right? It is, but there is a lot to do before you can actually perform the copy.

Prerequisites:

  • Create an IAM policy that gives RDS read/write/list permissions to the S3 bucket
  • Create an IAM role that gives RDS access to the S3 bucket
  • Associate the IAM role to the DB instance
  • Create a new Option Group or associate the S3_INTEGRATION option to an existing one

You can check the details on how to perform these steps here.
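
For reference, the S3 permissions part of the IAM policy typically looks like the sketch below, granting the read/write/list access mentioned above (mybucket is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::mybucket"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::mybucket/*"]
    }
  ]
}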

OK, now you can perform the data transfer by running, for example, the command below:

SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
       p_bucket_name    =>  'mybucket', 
       p_prefix         =>  '', 
       p_s3_prefix      =>  '', 
       p_directory_name =>  'DATA_PUMP_DIR') 
    AS TASK_ID FROM DUAL; 

It will copy all the files in the DATA_PUMP_DIR directory to the S3 bucket mybucket.

This command returns a task_id that is useful for monitoring the transfer status.

You can rely on AWS RDS Events at the console or on the query below to monitor the transfer job.

 SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-<task_id>.log')); 
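
The reverse direction works the same way; a minimal sketch, assuming the download_from_s3 procedure from the same rdsadmin package:

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
       p_bucket_name    =>  'mybucket',
       p_s3_prefix      =>  '',
       p_directory_name =>  'DATA_PUMP_DIR')
    AS TASK_ID FROM DUAL;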


Impdp Slowdown due to StreamsAQ: enqueue blocked on low memory

This week my impdp process took longer than expected. Great! One more issue to play with 🙂

Checking Grid Control, it was clear that the impdp session was waiting on the Oracle event StreamsAQ: enqueue blocked on low memory.

With the three documents below I could crack this issue:

  1. Datapump Expdp Or Impdp Slowdown Due To Frequent Waits On "Streams AQ: Enqueue Blocked On Low Memory" (Doc ID 2386566.1)
  2. What DataPump And Oracle RDBMS Parameters And Features Can Significantly Affect DataPump Performance ? (Doc ID 1611373.1)
  3. Bug 17365043 : “STREAMS AQ: ENQUEUE BLOCKED ON LOW MEMORY” WHEN REDUCING STREAMS_POOL_SIZE

The fix was to increase the parameter STREAMS_POOL_SIZE to the value returned by the query below:

-- generate the ALTER SYSTEM command: current streams pool size + 64 MB
select 'ALTER SYSTEM SET STREAMS_POOL_SIZE='||(max(to_number(trim(c.ksppstvl)))+67108864)||' SCOPE=SPFILE;'
  from sys.x$ksppi a, sys.x$ksppcv b, sys.x$ksppsv c
 where a.indx = b.indx and a.indx = c.indx and lower(a.ksppinm) in ('__streams_pool_size','streams_pool_size');
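
To see the effect, you can check the current streams pool allocation before and after the change:

-- current (possibly auto-tuned) streams pool size in MB
select component, current_size/1024/1024 as size_mb
  from v$sga_dynamic_components
 where component = 'streams pool';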

The impdp process went back to normal after increasing STREAMS_POOL_SIZE.

Control File and Server Parameter File Autobackups

I found an interesting situation in my environment today: control file and spfile autobackups were piling up in the FRA.

My backup scripts purge the expired backupsets to free space, and these autobackups should have fallen under that purge, respecting the retention policy, obviously.

I then started searching MOS and found two technical notes about this issue: 

1) autobackup of Spfile+controlfile was not reported as obsolete as expected (Doc ID 2365626.1)

2) Bug 25943271 – rman report obsolete does not report controlfile backup as obsolete (Doc ID 25943271.8)

Fix: apply patch 25943271. A backport is available for several RDBMS versions.

In my case I could successfully work around this issue by running the following rman command:

DELETE OBSOLETE RECOVERY WINDOW OF 10 DAYS;
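
If you want to preview what would be removed first, REPORT OBSOLETE accepts the same clause:

REPORT OBSOLETE RECOVERY WINDOW OF 10 DAYS;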

Useful links:

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/bradv/rman-backup-concepts.html#GUID-95840C84-1595-49AC-923D-310DA750676B

https://blog.dbi-services.com/oracle-12c-automatic-control-file-backups/