the definitive guide to oracle 19c out-of-place patching: zero downtime architecture


why out-of-place patching is the enterprise gold standard
in the high-stakes arena of enterprise database administration, executing a flawless maintenance window is not a luxury; it is a strict mandate. traditionally, database engineers relied on "in-place" patching: shutting down the active system, overwriting the live binaries with new patches, and hoping the database would restart successfully. this method is fraught with peril. if an opatch process fails mid-execution or corrupts the central inventory, the production system is effectively destroyed, requiring hours of chaotic, high-stress restoration from tape backups while business operations hemorrhage revenue.

to mitigate this catastrophic risk, modern architecture demands the out-of-place switch home methodology. instead of modifying the active binaries, we construct a completely parallel, isolated oracle home directory (a “shadow” home). we deploy the base software into this shadow home, apply all required release updates, and meticulously verify the binary integrity—all while the primary database continues to serve live production traffic without interruption. only when the shadow home is perfectly staged do we perform a rapid, controlled cutover. while this guide utilizes release update 27 (ru27) as its primary reference framework, the underlying principles, scripts, and architectural concepts detailed here constitute a timeless, universal blueprint applicable to any oracle 19c release update lifecycle.

1. establishing the forensic baseline: environment verification

before a single binary is downloaded or a directory created, an architect must establish a precise baseline of the current production environment. in my extensive experience, over thirty percent of catastrophic patching failures originate not from defective oracle patches, but from applying valid patches to an environment whose architectural state was undocumented or misunderstood.

automating the inventory capture
we deploy a custom shell script, make_all_orainfo.sh, to generate a unified, immutable snapshot of the operating system kernel, the existing grid infrastructure home, and the database home. this automates the tedious execution of the opatch lspatches utility across multiple user contexts, guaranteeing we possess forensic proof of our starting coordinates.

# deploy and execute the baseline capture script
[cdb1:oracle@tdb02t]$ cat > make_all_orainfo.sh <<'EOF'
#!/bin/bash
echo "========================================"
echo "        oracle/grid os info summary"
echo "========================================"
echo -n "hostname     : "; hostname
echo -n "os name      : "; grep -i pretty_name /etc/os-release | cut -d= -f2 | tr -d '"'
echo -n "kernel ver   : "; uname -r

echo "========================================"
if [[ -n "$GRID_HOME" ]]; then
    echo "[GRID_HOME: $GRID_HOME]"
    su - grid -c "$GRID_HOME/OPatch/opatch lspatches"
fi

echo "========================================"
if [[ -n "$DB_HOME" ]]; then
    echo "[DB_HOME: $DB_HOME]"
    su - oracle -c "$DB_HOME/OPatch/opatch lspatches"
fi
EOF

# run as root so the su calls can switch users without a password prompt
[root@tdb02t]$ sh make_all_orainfo.sh
... [snip: captures exact kernel 5.4.17 and base 19.0.0.0 inventory] ...
    
field experience: the hidden component trap
examine the output of your baseline script closely. i must emphasize this: i once endured a maintenance window that ballooned by four hours simply because our team assumed the acfs (asm cluster file system) drivers had been updated in a previous cycle, when in reality, they had been skipped. our application fundamentally relied on specific acfs drivers that proved incompatible with the newly booted kernel. by executing this script, we confirm the exact, undeniable inventory state. always cross-reference your exact kernel version against the oracle support matrix to confirm compatibility with your target release update before proceeding.
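a lightweight gate can make the kernel cross-check mechanical rather than visual. the sketch below is a minimal illustration: the allow-list is a hypothetical placeholder you would populate from the my oracle support certification matrix for your target release update.

```shell
#!/bin/bash
# Hypothetical allow-list -- populate from the MOS certification matrix
# for your target release update; the values below are examples only.
SUPPORTED_KERNELS="5.4.17 4.18.0"

kernel_supported() {
    # Compare the version prefix of a kernel string (build suffix stripped,
    # e.g. 5.4.17-2136.el8uek.x86_64 -> 5.4.17) against the allow-list.
    local kver="${1%%-*}"
    local k
    for k in $SUPPORTED_KERNELS; do
        [[ "$kver" == "$k" ]] && return 0
    done
    return 1
}

if kernel_supported "$(uname -r)"; then
    echo "kernel $(uname -r): listed as supported"
else
    echo "kernel $(uname -r): not in the allow-list -- check the support matrix" >&2
fi
```

run this on each node before staging any media; a failed check costs seconds here and hours if discovered mid-window.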

2. strategic payload delivery: downloading and staging patches

the out-of-place methodology requires significantly more storage capacity than traditional patching because we must stage the massive 19.3 base software alongside the new release update patchsets. however, the most critical element in this phase is not disk space, but the rigorous management of staging directory permissions.

the opatch engine mandate
i cannot overstate this operational mandate: do not use the opatch utility bundled within the base 19.3 zip file. it is the 2019-era version and is too old to process current release update payloads; every release update's readme specifies a minimum opatch level. independently download the latest opatch framework (patch 6880880) and inject it into your staging workflow.

# execute as the grid os user
# stage the base 19.3 binaries and the target release update
[+asm:grid@tdb02t]$ cp LINUX.X64_193000_grid_home.zip /tmp
[+asm:grid@tdb02t]$ cp p37641958_190000_Linux-x86-64.zip /tmp

# stage the latest opatch engine (patch 6880880)
[+asm:grid@tdb02t]$ cp p6880880_190000_Linux-x86-64.zip /tmp

# crucial permission normalization
[+asm:grid@tdb02t]$ chmod 775 /tmp/LINUX.X64*
[+asm:grid@tdb02t]$ chmod 775 /tmp/p*.zip
    
field experience: resolving generic file errors
observe the explicit chmod 775 commands implemented above. throughout my career, i have witnessed countless maintenance windows violently derailed because an engineer uploaded the patch media via scp as root, leaving the files owned by root with restrictive permissions (often 600). when the oracle or grid user subsequently attempts to decompress the archives, the operating system throws generic "permission denied" or wildly misleading "file not found" errors. normalizing ownership and permissions in your staging area immediately after upload is a five-second administrative task that universally prevents thirty minutes of frantic troubleshooting.
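this trap can be caught automatically before anyone attempts an unzip. the following is a minimal sketch, assuming a /tmp staging area; `stat -c` is gnu coreutils, so this is linux-specific.

```shell
#!/bin/bash
# Sketch: flag staged zips that the grid/oracle users will not be able to
# read (the classic symptom of an scp performed as root).
check_stage_perms() {
    local dir="$1" bad=0 f mode grp oth
    for f in "$dir"/*.zip; do
        [[ -e "$f" ]] || continue
        mode=$(stat -c '%a' "$f")        # e.g. 600, 775
        grp=${mode: -2:1}                # group permission digit
        oth=${mode: -1}                  # other permission digit
        if (( grp < 4 || oth < 4 )); then
            echo "WARN: $f is mode $mode -- run: chmod 775 $f" >&2
            bad=1
        fi
    done
    return $bad
}

# typical usage right after upload:
#   check_stage_perms /tmp || echo "fix permissions before extraction"
```

the function returns nonzero if any zip in the staging directory lacks group or world read, so it drops cleanly into a pre-flight script.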

3. architectural prep: constructing the shadow grid home

we now begin the physical construction of the parallel universe. we define a completely new directory path (e.g., /u01/app/19cru27/grid) that will serve as the shadow home. into this void, we will extract the pristine 19.3 code base and immediately perform our preemptive opatch upgrade before initiating any configuration scripts.

establishing the new directory structure
the creation of this path requires root privileges to ensure correct baseline ownership, followed by an immediate assignment to the grid installation group. we then leverage temporary environment variables to carefully manage the extraction process without polluting the active system's paths.

# 1. directory creation (execute as root)
[root@tdb02t]$ mkdir -p /u01/app/19cru27/grid
[root@tdb02t]$ chown -R grid:oinstall /u01/app/19cru27/grid
[root@tdb02t]$ chmod -R 775 /u01/app/19cru27/grid

# 2. path definition and extraction (execute as grid)
[+asm:grid@tdb02t]$ export NEWGRIDHOME=/u01/app/19cru27/grid
[+asm:grid@tdb02t]$ unzip /tmp/LINUX.X64_193000_grid_home.zip -d $NEWGRIDHOME

# 3. preemptive opatch replacement
[+asm:grid@tdb02t]$ mv $NEWGRIDHOME/OPatch $NEWGRIDHOME/OPatch.bak.19.3
[+asm:grid@tdb02t]$ unzip /tmp/p6880880_190000_Linux-x86-64.zip -d $NEWGRIDHOME

# 4. verification of the bundled java engine
[+asm:grid@tdb02t]$ $NEWGRIDHOME/OPatch/opatch version
OPatch version: 12.2.0.1.46
OPatch succeeded.
    
field experience: the sequence of operations
the sequence of operations here is absolute: extract the base binaries first, forcibly relocate the obsolete opatch directory, and then extract the modernized opatch framework. i have observed engineers reverse this logic, inadvertently overwriting their freshly downloaded opatch tool with the legacy 2019 version contained within the base installer. furthermore, successfully querying the opatch version at this stage is not merely a formality; it definitively proves that the internal java runtime environment bundled within the new grid home is fully operational.
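the version query can also become a hard gate rather than an eyeball check. the sketch below compares versions with `sort -V`; the minimum version shown is an assumed placeholder, and the authoritative floor comes from the release update's readme.

```shell
#!/bin/bash
# Assumed minimum -- take the real floor from the RU's README.
MIN_OPATCH="12.2.0.1.37"

version_ge() {
    # True if version $1 >= version $2 (dot-separated numeric fields).
    [[ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -1)" == "$2" ]]
}

# Gate the workflow on the staged OPatch (skipped if the home is absent).
if [[ -x "${NEWGRIDHOME:-}/OPatch/opatch" ]]; then
    ver=$("$NEWGRIDHOME/OPatch/opatch" version | awk '/OPatch version:/ {print $3}')
    if version_ge "$ver" "$MIN_OPATCH"; then
        echo "OPatch $ver OK (>= $MIN_OPATCH)"
    else
        echo "OPatch $ver below $MIN_OPATCH -- re-stage patch 6880880" >&2
        exit 1
    fi
fi
```

wiring this into the staging script means a stale opatch fails loudly days before the window, not during it.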

4. executing the software-only grid infrastructure deployment

this phase represents the core technical execution of the out-of-place paradigm. rather than running a standard installation, we invoke the gridSetup.sh utility with the powerful -applyRU parameter.

the automated slipstream process
the -applyRU directive forces the oracle universal installer (oui) into a highly specialized mode. it applies the release update to the freshly extracted 19.3 binaries before any configuration runs, performs the relinking of the affected c libraries, and lays down a fully patched footprint onto the filesystem. crucially, we command it to perform a "software only" installation, completely bypassing the cluster configuration phase.

# execute the slipstream deployment (execute as grid)
[+asm:grid@tdb02t]$ unzip /tmp/p37641958_190000_Linux-x86-64.zip -d /tmp
[+asm:grid@tdb02t]$ cd $NEWGRIDHOME
[+asm:grid@tdb02t]$ ./gridSetup.sh -applyRU /tmp/37641958 -silent \
                   -responseFile $NEWGRIDHOME/install/response/gridsetup.rsp \
                   oracle.install.option=CRS_SWONLY \
                   ORACLE_HOME_NAME=GRID_19c_RU27

... [snip: oui log showing automated patching and relinking] ...
successfully applied the patch.
the installation of oracle grid infrastructure was successful.
    
field experience: the golden rule of out-of-place
here is the absolute, unbreakable rule of out-of-place patching: when the installer completes and explicitly prompts you to execute root.sh from the newly created directory—you must absolutely ignore it. executing root.sh at this premature juncture will aggressively attempt to start the high availability services from the shadow home while your primary production system is still actively utilizing those identical service names. this collision will trigger an immediate, chaotic node eviction and catastrophic database crash. walk away from the terminal if you feel the urge to run that script.

5. parallel construction: deploying the new database engine

with the grid infrastructure shadow home fully compiled, we must meticulously replicate the exact same process for the relational database management system (rdbms) engine. the architectural symmetry must be maintained.

constructing the database shadow home
we forge a new directory (e.g., /u01/app/oracle/product/19cru27/db_1), inject the pristine base software, upgrade the opatch engine, and execute the silent installer. this parallel structure ensures that when the cutover occurs, both tiers of the oracle stack transition to the new release update simultaneously.

# 1. directory creation (execute as root)
[root@tdb02t]$ mkdir -p /u01/app/oracle/product/19cru27/db_1
[root@tdb02t]$ chown -R oracle:oinstall /u01/app/oracle/product/19cru27
[root@tdb02t]$ chmod -R 775 /u01/app/oracle/product/19cru27

# 2. path definition and extraction (execute as oracle)
# (stage LINUX.X64_193000_db_home.zip into /tmp first, as in section 2)
[cdb1:oracle@tdb02t]$ export NEWDBHOME=/u01/app/oracle/product/19cru27/db_1
[cdb1:oracle@tdb02t]$ unzip /tmp/LINUX.X64_193000_db_home.zip -d $NEWDBHOME

# 3. opatch replacement and slipstream installation
[cdb1:oracle@tdb02t]$ mv $NEWDBHOME/OPatch $NEWDBHOME/OPatch.bak
[cdb1:oracle@tdb02t]$ unzip /tmp/p6880880_190000_Linux-x86-64.zip -d $NEWDBHOME
[cdb1:oracle@tdb02t]$ cd $NEWDBHOME
[cdb1:oracle@tdb02t]$ ./runInstaller -applyRU /tmp/37641958 -silent \
                   -responseFile $NEWDBHOME/install/response/db_install.rsp \
                   oracle.install.option=INSTALL_DB_SWONLY \
                   ORACLE_HOME_NAME=DB_19c_RU27

... [snip: oui log confirming software-only database deployment] ...
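before cutover it is worth cross-checking that the release update really landed in both shadow homes. opatch lspatches prints one patch per line as id;description, so the ids are easy to extract and compare. a minimal sketch follows; the sample text used in the comments is illustrative, not real output from this system.

```shell
#!/bin/bash
# Extract the leading numeric patch IDs from "opatch lspatches" text on stdin.
patch_ids() {
    awk -F';' '/^[0-9]+;/ {print $1}' | sort
}

# Typical usage against the shadow homes built in this guide:
#   "$NEWGRIDHOME/OPatch/opatch" lspatches | patch_ids > /tmp/grid.ids
#   "$NEWDBHOME/OPatch/opatch"  lspatches | patch_ids > /tmp/db.ids
#   comm -12 /tmp/grid.ids /tmp/db.ids    # IDs common to both homes
# Note: the grid home will legitimately list extra components (OCW, ACFS);
# what matters is that the database RU id appears in both lists.
```

keeping the id lists as flat files also gives you a dated artifact to attach to the change record.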
    

6. the critical cutover: switching engines and migrating configs

we have reached the nexus of the operation. our shadow homes are fully patched, verified, and dormant. this is the exact moment the maintenance window officially begins and production downtime is initiated. we must surgically redirect the clusterware to utilize the new paths and migrate all security and networking configurations.

executing the pointer switch
this maneuver involves three coordinated steps. first, manually transfer the network configurations (tnsnames.ora, sqlnet.ora) and the critical administrative password file to the shadow home. next, use the server control utility (srvctl) to modify the database resource pointer. finally, execute the prepatch and postpatch scripts to stop the active stack and restart it from the new binaries.

# 1. migrate critical configuration files (execute as oracle)
[cdb1:oracle@tdb02t]$ cp $OLD_ORACLE_HOME/dbs/orapwcdb1 $NEWDBHOME/dbs/
[cdb1:oracle@tdb02t]$ cp $OLD_ORACLE_HOME/network/admin/*.ora $NEWDBHOME/network/admin/

# 2. redirect the clusterware pointer to the shadow home
[cdb1:oracle@tdb02t]$ srvctl modify database -d $ORACLE_UNQNAME -o $NEWDBHOME

# 3. execute the clusterware cutover (execute as root)
[root@tdb02t]$ export ORACLE_HOME=/u01/app/19cru27/grid
[root@tdb02t]$ $ORACLE_HOME/crs/install/roothas.sh -prepatch -dstcrshome $ORACLE_HOME
... [snip: clusterware unlocking confirmed] ...

# warning: the following command initiates hard downtime
[root@tdb02t]$ $ORACLE_HOME/crs/install/roothas.sh -postpatch -dstcrshome $ORACLE_HOME
... [snip: clusterware terminating active services and restarting from the shadow home] ...
CLSRSC-672: Post-patch steps for patching GI home successfully completed.
    
field experience: the configuration trap
the migration of the password file (orapw*) is non-negotiable. if you fail to replicate this specific file into the new database home, the automated background scripts attempting to initialize the data guard broker or specific high-availability listeners will fail authentication, leaving the database mounted but utterly inaccessible across the network.
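a quick post-cutover audit closes the loop on both traps: the srvctl pointer and the password file. this is a minimal sketch using the example paths from this guide; the parsing helper reads `srvctl config database` output on stdin rather than invoking the tool directly.

```shell
#!/bin/bash
# Example path from this guide -- adjust to your layout.
NEWDBHOME=/u01/app/oracle/product/19cru27/db_1

home_from_config() {
    # Parse the "Oracle home:" line from "srvctl config database" output.
    awk -F': ' '/^Oracle home/ {print $2}'
}

# Typical usage:
#   srvctl config database -d "$ORACLE_UNQNAME" | home_from_config
# should print $NEWDBHOME. Also confirm the password file migrated:
if [[ -f "$NEWDBHOME/dbs/orapwcdb1" ]]; then
    echo "password file present in the new home"
else
    echo "password file missing -- copy it before services restart" >&2
fi
```

run this immediately after the postpatch script completes, while a rollback is still trivially cheap.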

7. standardizing the operating system profiles

the clusterware and the database engine are now successfully operating from the patched shadow directories. however, if an administrator opens a new secure shell (ssh) session, their default paths will still route to the obsolete directories. we must update the environment variables in the login profiles to reflect the new reality.

updating the bash profiles
we must meticulously edit the .bash_profile files for the root, grid, and oracle user accounts, redefining the GRID_HOME and ORACLE_HOME variables to permanently point to the /19cru27/ pathways.

# example modification for the oracle user profile
[cdb1:oracle@tdb02t]$ vi ~/.bash_profile
# modify the following line:
# export ORACLE_HOME=/u01/app/oracle/product/19c/db_1
export ORACLE_HOME=/u01/app/oracle/product/19cru27/db_1

[cdb1:oracle@tdb02t]$ source ~/.bash_profile
[cdb1:oracle@tdb02t]$ echo $ORACLE_HOME
/u01/app/oracle/product/19cru27/db_1
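editing three profiles by hand invites a missed one. the following is a minimal audit sketch, assuming the old home path and the usual profile locations for this layout:

```shell
#!/bin/bash
# Assumed pre-patch path -- adjust to your environment.
OLD_PATH=/u01/app/oracle/product/19c/db_1

stale_profile() {
    # True if the given profile still references the old home.
    grep -q "$OLD_PATH" "$1" 2>/dev/null
}

for p in /root/.bash_profile /home/grid/.bash_profile /home/oracle/.bash_profile; do
    if stale_profile "$p"; then
        echo "WARN: $p still points at $OLD_PATH" >&2
    fi
done
```

a stale profile surfaces days later as a baffling "wrong sqlplus" or "wrong opatch" problem, so this check pays for itself.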
    

8. logical synchronization: executing datapatch against the new home

while the physical binaries driving the system are now fully updated, the database's internal logical architecture—the massive matrix of sql dictionary tables and pl/sql packages—remains completely unaware of the new software. this volatile discrepancy must be resolved immediately.

synchronizing the data dictionary
we run the datapatch utility from the OPatch directory of the newly activated oracle home. this script connects to the running instance, analyzes the current state of the internal registry (dba_registry_sqlpatch), and systematically applies the delta sql scripts needed to align the data dictionary with the physical binaries.

# execute logical synchronization (execute as oracle)
[cdb1:oracle@tdb02t]$ cd $ORACLE_HOME/OPatch
[cdb1:oracle@tdb02t]$ ./datapatch -verbose

... [snip: datapatch connecting to pdbs and applying sql scripts] ...
sql patch application completed successfully.

# verify registry status via sql*plus
[cdb1:oracle@tdb02t]$ sqlplus / as sysdba
SQL> select patch_id, action_time, description from dba_registry_sqlpatch order by action_time;
    

9. the emergency parachute: executing a full rollback procedure

the defining brilliance of the out-of-place patching methodology lies in its risk mitigation. if the new release update triggers a catastrophic application compatibility error, reverting to the previous state does not require a complex uninstallation sequence or tape restorations. we simply redirect the clusterware pointers back to the original, untouched directories.

executing the instantaneous reversion
if a critical failure is detected, we revert the database pointer via srvctl, restore the old bash profiles, and execute the identical roothas scripts, but aimed at the original binary paths. this maneuver restores the system to its precise pre-maintenance baseline within minutes.

# 1. redirect the database pointer to the original home
[cdb1:oracle@tdb02t]$ srvctl modify database -d $ORACLE_UNQNAME -o /u01/app/oracle/product/19c/db_1

# 2. execute the clusterware reversion (execute as root)
[root@tdb02t]$ export OLD_GRID=/u01/app/19c/grid
[root@tdb02t]$ $OLD_GRID/crs/install/roothas.sh -prepatch -dstcrshome $OLD_GRID
[root@tdb02t]$ $OLD_GRID/crs/install/roothas.sh -postpatch -dstcrshome $OLD_GRID

# 3. logically revert the dictionary via datapatch
[cdb1:oracle@tdb02t]$ cd /u01/app/oracle/product/19c/db_1/OPatch
[cdb1:oracle@tdb02t]$ ./datapatch -verbose
    
concluding thoughts on enterprise resilience
out-of-place patching fundamentally transforms database maintenance from a high-wire act of faith into a structured, predictable engineering workflow. by isolating the volatile binary upgrades from the active production traffic and guaranteeing an instantaneous rollback path, architects can ensure maximum service availability. master this methodology, and you will eliminate the terror of the 3 am patching window forever.
