-ocmrf is no longer needed

I was patching an Oracle 12.1 Restart installation with the latest bundle patch this week and realized that the -ocmrf option is no longer needed.

This enhancement started with OPatch 12.2.0.1.5, so now you have one more reason to update your OPatch 🙂
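If you want to confirm your OPatch is new enough before skipping the ocmrf step, a dotted-version comparison like the sketch below works. The min_ok helper and the use of sort -V are my own illustration, not anything shipped with OPatch; feed it the version printed by opatch version.

```shell
# Sketch: check whether an OPatch version string is >= 12.2.0.1.5,
# the first release that drops the -ocmrf requirement.
# min_ok is a hypothetical helper, not an OPatch feature.
min_ok() {
  # sort -V orders dotted versions numerically; if the minimum sorts
  # first (or equal), the candidate version is new enough.
  [ "$(printf '%s\n' "12.2.0.1.5" "$1" | sort -V | head -n1)" = "12.2.0.1.5" ]
}

min_ok "12.2.0.1.17" && echo "ocmrf step can be skipped"
```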

Don’t even bother looking for the emocmrsp binary… it is no longer there!

So things just got easier! All you have to do is run:

opatchauto apply <UNZIPPED_PATCH_LOCATION>/<BUG NO> 

Example for 12.1 BP:

export PATH=$PATH:/u01/app/12.1.0.2/grid/OPatch/
opatchauto apply /home/oracle/stage/29176139/29141038

Reference: MOS notes 2161861.1 and 1591616.1, and the readme.html for patch 29176139.

OPatch update for EMGC

Download the latest OPatch version from My Oracle Support (patch 6880880).

For EMGC 13.1 and above, select the version “13.9.0.0.0” (Release – OPatch 13.9.0.0.0). Unzip the file into a staging directory such as /u01/stage/.

Stop the OMS with $ORACLE_HOME/bin/emctl stop oms -all, where $ORACLE_HOME is the middleware home.

Run: java -jar <STAGE_DIR>/6880880/opatch_generic.jar -silent oracle_home=$ORACLE_HOME

To validate the installation, run $ORACLE_HOME/OPatch/opatch version

The output should match the version listed in the readme file.
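If you want to script that check, the version number can be parsed out of the opatch version output. The sample line below is an assumption of how OPatch 13.9 prints it, so adjust the pattern to your actual output.

```shell
# Sketch: pull the version number out of `opatch version` output.
# In a real run: sample="$($ORACLE_HOME/OPatch/opatch version | head -n1)"
sample='OPatch version: 13.9.4.2.1'   # example output line (assumption)
ver=$(printf '%s\n' "$sample" | awk -F': ' '{print $2}')
echo "$ver"
```

Comparing $ver against the value in the patch readme then tells you whether the update took effect.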



CRS crash after upgrade to Oracle 12.2


I hit an interesting bug this week after upgrading to Oracle 12.2.


MOS note Bug 28298447: cluster crashed due to mellanox driver related issue (Doc ID 2460394.1) has the details, but basically your cluster will bounce the database from time to time.


So if you are running Exadata, upgrade the kernel to version 4.1.12-94.8.5 before upgrading the database to 12.2.


This kernel fix is also included in image version 12.2.1.1.7 (or higher) or 18.1.5.0.0 (or higher), i.e., the April 2018 QFSDP.

ORA-20003: Configuring job Load_opatch_inventory_1on node and on instancefailed


Right after a database upgrade to Oracle RAC 12.2.0.1, the alert.log reported the error below:

Unable to obtain current patch information due to error: 20003, ORA-20003: Configuring job Load_opatch_inventory_1on node and on instancefailed
ORA-06512: at "SYS.DBMS_QOPATCH", line 777
ORA-06512: at "SYS.DBMS_QOPATCH", line 479
ORA-06512: at "SYS.DBMS_QOPATCH", line 455
ORA-06512: at "SYS.DBMS_QOPATCH", line 574
ORA-06512: at "SYS.DBMS_QOPATCH", line 2247

===========================================================
Dumping current patch information
===========================================================
Unable to obtain current patch information due to error: 20003
===========================================================


Time to open MOS and start researching!


And just a few minutes later I found another Oracle bug 😦

12.2 RAC Database Alert.log reports Unable to obtain current patch information due to error: 20003, ORA-20003: Configuring job Load_opatch_ inventory_1on node and on instancefailed (Doc ID 2364768.1)


This error is harmless and can be ignored, or patch 23333567 can be applied to fix it.

 

Control File and Server Parameter File Autobackups


I found an interesting situation in my environment today: control file and spfile autobackups were piling up in the FRA.


My backup scripts purge expired backupsets to free space, and these autobackups should have been marked as obsolete according to the retention policy.


I then started searching MOS and found two technical notes about this issue: 

1) autobackup of Spfile+controlfile was not reported as obsolete as expected (Doc ID 2365626.1)

2) Bug 25943271 – rman report obsolete does not report controlfile backup as obsolete (Doc ID 25943271.8)


Fix: Apply patch 25943271. A backport is available for several RDBMS versions.


In my case I successfully worked around the issue by running the following RMAN command:

DELETE OBSOLETE RECOVERY WINDOW OF 10 DAYS;
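Before deleting anything, it is worth previewing what RMAN will treat as obsolete under that window. A sketch of the session (the REPORT step is my addition; run it against your own target):

```
rman target / <<'EOF'
REPORT OBSOLETE RECOVERY WINDOW OF 10 DAYS;
DELETE OBSOLETE RECOVERY WINDOW OF 10 DAYS;
EOF
```

Note that DELETE OBSOLETE prompts for confirmation; in unattended scripts, DELETE NOPROMPT OBSOLETE skips the prompt.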

Useful links:

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/bradv/rman-backup-concepts.html#GUID-95840C84-1595-49AC-923D-310DA750676B

https://blog.dbi-services.com/oracle-12c-automatic-control-file-backups/

 

BUG 23300142 – Redo Transport Slave Process


My week started with a high-occupation alarm on the /u01 filesystem. I then started looking into what was going on.


I usually use the find command below to search for huge files:

find / -type f -size +800M -exec ls -lh {} \; | awk '{ print $NF ": " $5 }'
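The awk at the end only reorders each ls -lh line into "path: size" ($NF is the last field, the path; $5 is the size). A minimal illustration with a fabricated line (the trace file path is made up):

```shell
# Fabricated `ls -lh` output line; $5 = size, $NF = path (path is hypothetical)
line='-rw-r----- 1 oracle dba 6.0G Feb 8 10:00 /u01/app/oracle/diag/ora_tt01_1234.trc'
echo "$line" | awk '{ print $NF ": " $5 }'
# -> /u01/app/oracle/diag/ora_tt01_1234.trc: 6.0G
```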


Yes! I found a trace file from the TT01 process (Redo Transport Slave Process) consuming 6 GB!


Looking at the contents of the trace file, the messages:

Error in determining current logfile for thread 1
ASYNC ignored current log: KCCLENAL clear thread open


were repeating over and over.


A quick search on MOS confirmed another bug for my collection: Bug 23300142 – TT background process trace file message: async ignored current log: kcclenal clear thread open (Doc ID 23300142.8)


The fix is included in the following patches:

12.2.0.1.170919 (Sep 2017) Database Release Update (DB RU)
12.1.0.2.180116 (Jan 2018) Database Proactive Bundle Patch
12.2.0.1.171017 (Oct 2017) Bundle Patch for Windows Platforms

 

Failure applying the OJVM 12.1.0.2.180116 patch


I got an error applying patch 27001733 (Oracle JavaVM Component 12.1.0.2.180116 Database PSU) that had not happened with the OJVM patch from Oct/2017, nor with the Jan/2018 OJVM on 12.2.


You will find the OPatch logs at $ORACLE_HOME/cfgtoollogs/opatch.


The error reported in the log was:

[Feb 8, 2018 2:56:00 PM] [WARNING] OUI-67200:Make failed to invoke "/usr/bin/make -f ins_rdbms.mk javavm_refresh patchset_opt_all ioracle ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/db_1"....'make: perl: Command not found
 make: *** [javavm_refresh] Error 127
 '
[Feb 8, 2018 2:56:00 PM] [INFO] Re-link fails on target "javavm_refresh patchset_opt_all ioracle".
[Feb 8, 2018 2:56:00 PM] [INFO] --------------------------------------------------------------------------------
 Failed to run make commands. Please contact Oracle Support.
[Feb 8, 2018 2:56:00 PM] [SEVERE] OUI-67115:OPatch failed to restore OH '/u01/app/oracle/product/12.1.0.2/db_1'. Consult OPatch document to restore the home manually before proceeding.
[Feb 8, 2018 2:56:00 PM] [WARNING] OUI-67124:
 NApply was not able to restore the home. Please invoke the following scripts:
 - restore.[sh,bat]
 - make.txt (Unix only)
 to restore the ORACLE_HOME. They are located under
 "/u01/app/oracle/product/12.1.0.2/db_1/.patch_storage/NApply/2018-02-08_14-53-50PM"
[Feb 8, 2018 2:56:00 PM] [SEVERE] OUI-67073:UtilSession failed: Re-link fails on target "javavm_refresh patchset_opt_all ioracle".


To fix this issue, set the two variables below:

export PATH=$ORACLE_HOME/perl/bin:$PATH
export PERL5LIB=$ORACLE_HOME/perl/lib
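A quick sanity check before re-running opatch: confirm the Oracle-shipped perl directory now leads the PATH. The home path below is the one from the log above; the check itself is generic shell.

```shell
ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/db_1
export PATH=$ORACLE_HOME/perl/bin:$PATH
export PERL5LIB=$ORACLE_HOME/perl/lib

# The first PATH entry should now be the Oracle home's perl/bin:
case ":$PATH:" in
  ":$ORACLE_HOME/perl/bin:"*) echo "perl path OK" ;;
  *) echo "perl path NOT set" ;;
esac
```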


and then rerun opatch; this time the update will succeed 🙂