Wednesday, February 28, 2024

Patching 19.3 to 19.16 step by step

 


In this article, we are going to learn how to patch Oracle Database from 19.3 to 19.16 step by step. We will be applying patch 34133642 on the Linux platform.

Note:
Before applying the patch, make sure the following points are clear.

  • You must go through the README file
  • Download patch p34133642_19.16.00_Linux-x86-64.zip
  • Make sure the minimum OPatch version is 12.2.0.1.30
  • Download the new OPatch version

    To update the OPatch version, we need to download it first and then replace the existing OPatch directory in ORACLE_HOME. Follow the steps below.

    Download the patch for 19.16

    To upgrade Oracle 19.3 to 19.16 we need patch 34133642.

  • Steps to apply the patch from 19.3 to 19.16

    After downloading OPatch and the patch, copy both files to the database server. In my case, I have copied both files to the /u02 mount point.

    Step 1. Update the OPatch utility

    Update the OPatch utility using the following commands. Before upgrading OPatch, make sure you take a backup of it.

    Check the OPatch version

    Note:
    Before updating OPatch, first check the current version.

    $cd $ORACLE_HOME/OPatch
    $./opatch version
    OPatch Version: 12.2.0.1.17
    OPatch succeeded.

    In my case the OPatch version is 12.2.0.1.17, and we need the minimum OPatch version to be 12.2.0.1.30.

    $ cd $ORACLE_HOME/
    $mv OPatch OPatch_bkp

    Upgrade OPatch

    To upgrade the OPatch utility, just unzip the newly downloaded OPatch zip file directly into ORACLE_HOME.


$ unzip /u02/p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME/

Once the above command completes, check the OPatch version again.

$ ./opatch version
OPatch Version: 12.2.0.1.35
OPatch succeeded.

This time you can see the OPatch version has been updated successfully.

Step 2. Verify the current database version

Verify the current version and related database details using the command below.

SQL> select BANNER_FULL from v$version;
BANNER_FULL
--------------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

Check dba_registry status

Registry verification is an important part of the patching activity.

SQL>col comp_id for a10
SQL>col version for a11
SQL>col status for a10
SQL>col comp_name for a37
SQL>select comp_id,comp_name,version,status from dba_registry;
COMP_ID    COMP_NAME                             VERSION     STATUS
---------- ------------------------------------- ----------- ----------
CATALOG    Oracle Database Catalog Views         19.0.0.0.0  VALID
CATPROC    Oracle Database Packages and Types    19.0.0.0.0  VALID
RAC        Oracle Real Application Clusters      19.0.0.0.0  OPTION OFF
JAVAVM     JServer JAVA Virtual Machine          19.0.0.0.0  VALID
XML        Oracle XDK                            19.0.0.0.0  VALID
CATJAVA    Oracle Database Java Packages         19.0.0.0.0  VALID
APS        OLAP Analytic Workspace               19.0.0.0.0  VALID
XDB        Oracle XML Database                   19.0.0.0.0  VALID
OWM        Oracle Workspace Manager              19.0.0.0.0  VALID
CONTEXT    Oracle Text                           19.0.0.0.0  VALID
ORDIM      Oracle Multimedia                     19.0.0.0.0  VALID
SDO        Spatial                               19.0.0.0.0  VALID
XOQ        Oracle OLAP API                       19.0.0.0.0  VALID
OLS        Oracle Label Security                 19.0.0.0.0  VALID
DV         Oracle Database Vault                 19.0.0.0.0  VALID
15 rows selected.

Step 3. Rollback Plan

If patching fails due to any type of error, you must have a backup plan.

  • Take Oracle Home Backup
  • Full Database RMAN Backup or Create Guaranteed Restore Point.

Take Oracle Home backup

$ cd $ORACLE_HOME
$ tar -cvf /u02/oracle_home_15jan2023.tar $ORACLE_HOME

(Write the archive outside ORACLE_HOME so the tar file is not captured inside its own archive.)
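For the guaranteed restore point option, here is a minimal sketch (assuming the database runs in ARCHIVELOG mode with a Fast Recovery Area configured; the restore point name is illustrative):

SQL> CREATE RESTORE POINT before_19_16_patch GUARANTEE FLASHBACK DATABASE;
SQL> select name, guarantee_flashback_database from v$restore_point;

If the binary patch itself needs to be backed out later, opatch rollback -id 34133642 can be used from the patch directory.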

Before patching 19.3 to 19.16

Let's start the preparation for applying the patch. Unzip the patch into the /u02 directory using the following command.

Step 4. Unzip the Patch

$unzip /u02/p34133642_19.16.00_Linux-x86-64.zip -d /u02

Note:
Always check the prerequisites before applying the patch, using the following command.

Step 5. Check prerequisites

$cd /u02/34133642
$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./

Output of the above command, if everything is fine:

Oracle Interim Patch Installer version 12.2.0.1.35
Copyright (c) 2023, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u02/app/oracle/product/19.3.0/db_home
Central Inventory : /u01/app/oraInventory
   from           : /u02/app/oracle/product/19.3.0/db_home/oraInst.loc
OPatch version    : 12.2.0.1.35
OUI version       : 12.2.0.7.0
Log file location : /u02/app/oracle/product/19.3.0/db_home/cfgtoollogs/opatch/opatch2023-01-12_23-58-36PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

Step 6. Stop all database services

Check how many instances and listeners are running on the machine.

$ ps -ef|grep pmon
oracle    3027 30927  0 00:00 pts/3    00:00:00 grep --color=auto pmon
oracle    3376     1  0 Jan12 ?        00:00:01 ora_pmon_orcl

Check the Listener status.

$ ps -ef|grep tns
root        14     2  0 Jan12 ?        00:00:00 [netns]
oracle    3035 30927  0 00:00 pts/3    00:00:00 grep --color=auto tns
oracle    6197     1  0 Jan12 ?        00:00:01 /u02/app/oracle/product/19.3.0/db_home/bin/tnslsnr LISTENER -inherit

On my machine, one instance and one listener are currently running, so let's bring down both services.

$ lsnrctl stop LISTENER
LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 13-JAN-2023 00:00:55
Copyright (c) 1991, 2019, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=noida)(PORT=1521)))
TNS-12543: TNS:destination host unreachable
 TNS-12560: TNS:protocol adapter error
  TNS-00513: Destination host unreachable
   Linux Error: 113: No route to host

Shut down the database (both CDB and PDB)

SQL> show pdbs
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  YES
         3 PDB                            MOUNTED
SQL>shut immediate

Apply Patching 19.3 to 19.16

Apply the patch using the following command.

Note:
Make sure at least 7 GB of disk space is free at your Oracle binary location, otherwise the patch will fail.
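A quick way to check the free space on the binary mount point (a sketch; /u02 is the mount point used in this example):

$ df -h /u02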

Step 7. Execute opatch apply

$ cd /u02/34133642
34133642]$ $ORACLE_HOME/OPatch/opatch apply

Patching component oracle.precomp.common, 19.0.0.0.0...
Patching component oracle.precomp.lang, 19.0.0.0.0...
Patching component oracle.jdk, 1.8.0.201.0...
Patch 34133642 successfully applied.
Sub-set patch [29517242] has become inactive due to the application of a super-set patch [34133642].
Please refer to Doc ID 2161861.1 for any possible further required actions.
Log file location: /u02/app/oracle/product/19.3.0/db_home/cfgtoollogs/opatch/opatch2023-01-13_00-09-07AM_1.log
OPatch succeeded.

Start all database services

Once opatch apply completes successfully, start all the database services.

Start Listener

$ lsnrctl start LISTENER

Start Database

$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jan 13 00:33:15 2023
Version 19.16.0.0.0
Copyright (c) 1982, 2022, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup
ORACLE instance started.
Total System Global Area 2466250360 bytes
Fixed Size                  9137784 bytes
Variable Size             570425344 bytes
Database Buffers         1879048192 bytes
Redo Buffers                7639040 bytes
Database mounted.
Database opened.

SQL> show pdbs
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  YES
         3 PDB                            MOUNTED
 
SQL> alter session set container=PDB;
Session altered.

SQL> ALTER DATABASE OPEN;
Database altered.

SQL> SHOW PDBS
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 PDB                            READ WRITE YES

Post-patch steps

Now it's time to run datapatch -verbose, which applies the patch's SQL changes and updates the patch information in the database dictionary (the binaries were already updated by opatch apply).

Run datapatch -verbose

34133642]$ cd $ORACLE_HOME/OPatch
OPatch]$ ./datapatch -verbose

Execute utlrp.sql

Once datapatch -verbose has completed, execute the utlrp.sql script to recompile invalid objects.

SQL> @?/rdbms/admin/utlrp.sql
Session altered.

Verify the patch

Check the applied patch details using the lsinventory command.

OPatch]$ ./opatch lsinventory

Oracle Interim Patch Installer version 12.2.0.1.35
Copyright (c) 2023, Oracle Corporation.  All rights reserved.
Oracle Home       : /u02/app/oracle/product/19.3.0/db_home
Central Inventory : /u01/app/oraInventory
   from           : /u02/app/oracle/product/19.3.0/db_home/oraInst.loc
OPatch version    : 12.2.0.1.35
OUI version       : 12.2.0.7.0
Log file location : /u02/app/oracle/product/19.3.0/db_home/cfgtoollogs/opatch/opatch2023-01-13_07-35-21AM_1.log
Lsinventory Output file location : /u02/app/oracle/product/19.3.0/db_home/cfgtoollogs/opatch/lsinv/lsinventory2023-01-13_07-35-21AM.txt
--------------------------------------------------------------------------------
Local Machine Information::
Hostname: noida
ARU platform id: 226
ARU platform description:: Linux x86-64
Installed Top-level Products (1):
Oracle Database 19c                                                  19.0.0.0.0
There are 1 products installed in this Oracle Home.
Interim patches (2) :
Patch  34133642     : applied on Fri Jan 13 00:17:16 IST 2023
Unique Patch ID:  24865470
Patch description:  "Database Release Update : 19.16.0.0.220719 (34133642)"
   Created on 14 Jul 2022, 16:09:56 hrs UTC
   Bugs fixed:

Verify at the Database Level

SQL> select patch_uid,patch_id,SOURCE_VERSION, TARGET_VERSION, action, status,action_time,description from dba_registry_sqlpatch;

Check Invalid Objects

SQL>COLUMN object_name FORMAT A30
SQL>SELECT owner,object_type,object_name,status FROM dba_objects
WHERE status = 'INVALID'
ORDER BY owner, object_type, object_name;
no rows selected

Check dba_registry

SQL>col comp_id for a10
SQL>col version for a11
SQL>col status for a10
SQL>col comp_name for a37
SQL>select comp_id,comp_name,version,status from dba_registry;
COMP_ID    COMP_NAME                             VERSION     STATUS
---------- ------------------------------------- ----------- ----------
CATALOG    Oracle Database Catalog Views         19.0.0.0.0  VALID
CATPROC    Oracle Database Packages and Types    19.0.0.0.0  VALID
RAC        Oracle Real Application Clusters      19.0.0.0.0  OPTION OFF
JAVAVM     JServer JAVA Virtual Machine          19.0.0.0.0  VALID
XML        Oracle XDK                            19.0.0.0.0  VALID
CATJAVA    Oracle Database Java Packages         19.0.0.0.0  VALID
APS        OLAP Analytic Workspace               19.0.0.0.0  VALID
XDB        Oracle XML Database                   19.0.0.0.0  VALID
OWM        Oracle Workspace Manager              19.0.0.0.0  VALID
CONTEXT    Oracle Text                           19.0.0.0.0  VALID
ORDIM      Oracle Multimedia                     19.0.0.0.0  VALID
SDO        Spatial                               19.0.0.0.0  VALID
XOQ        Oracle OLAP API                       19.0.0.0.0  VALID
OLS        Oracle Label Security                 19.0.0.0.0  VALID
DV         Oracle Database Vault                 19.0.0.0.0  VALID
15 rows selected.

PostgreSQL Partitioning Overview

 


 

Partitioning has seen a lot of improvements in the Postgres database.

 

Overview


Before we get into the enhancements done in PG 13, here is a quick introduction to partitioning and the partitioning features added to PG. Partitioning is a way of splitting or dividing a large table into smaller pieces; a table can be divided using the List, Range, or Hash partitioning techniques offered by PG.

The parent partitioned table doesn't store any data; the data is stored in the partitions defined when creating the partitioned table. While the partitions can be accessed directly, queries are typically directed at the logical parent relation: tuples are routed to the correct partition for inserts and updates, and for a read query the relevant partitions are scanned to execute the client query.

Several key partitioning features have made it into PG over the years.




 

Here I explain how to take advantage of these features effectively. PostgreSQL partitioning features:

- Partition pruning

- Partition-wise join

- Partition-wise aggregation

Partition pruning

Partition pruning is a feature that narrows down the partitions to be accessed by SQL statements, which in effect means partitions that do not need to be scanned are excluded.


A partitioned table is always created with a specific column as the partition key.

For example, in the example below, the sale_date column is set as the partition key.

By using the partition key in the WHERE clause, the partitions to be accessed are narrowed down. Since the search is performed only on the targeted partitions, an efficient scan is possible.

Note the following about partition pruning:

-  It is enabled by default.

-  It can be enabled/disabled by setting enable_partition_pruning in postgresql.conf or by using SET.
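For example, it can be toggled per session (a minimal sketch):

mydb=# SET enable_partition_pruning TO off;
mydb=# SET enable_partition_pruning TO on;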

 

Partition pruning example:

Let's see the effect of partition pruning by comparing the scan performance for partitioned and non-partitioned tables.

 

The sales table in the sample below uses range partitioning, with sale_date as the partition key. The table is divided into 3 partitions, while nonpartition_sales is not partitioned.
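For reference, here is a minimal sketch of how such a range-partitioned table can be created (the column layout is taken from the \d+ output below; names are illustrative):

CREATE TABLE sales (
    id        integer,
    p_name    text,
    amount    integer,
    sale_date date
) PARTITION BY RANGE (sale_date);

CREATE TABLE sales_2019_Q4 PARTITION OF sales
    FOR VALUES FROM ('2019-10-01') TO ('2020-01-01');
CREATE TABLE sales_2020_Q1 PARTITION OF sales
    FOR VALUES FROM ('2020-01-01') TO ('2020-04-01');
CREATE TABLE sales_2020_Q2 PARTITION OF sales
    FOR VALUES FROM ('2020-04-01') TO ('2020-07-01');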

 

--Partitioned table

mydb=# \d+ sales
                   Table "public.sales"
  Column   |  Type   |
-----------+---------+
 id        | integer |
 p_name    | text    |
 amount    | integer |
 sale_date | date    |
Partition key: RANGE (sale_date)
Partitions: sales_2019_Q4 FOR VALUES FROM ('2019-10-01') TO ('2020-01-01'),
            sales_2020_Q1 FOR VALUES FROM ('2020-01-01') TO ('2020-04-01'),
            sales_2020_Q2 FOR VALUES FROM ('2020-04-01') TO ('2020-07-01')

 


mydb=# SELECT COUNT(*) FROM sales;
  count
---------
 3000000
(1 row)

--Non-partitioned table

mydb=# \d+ nonpartition_sales
        Table "public.nonpartition_sales"
  Column   |  Type   |
-----------+---------+
 id        | integer |
 p_name    | text    |
 amount    | integer |
 sale_date | date    |

mydb=# SELECT COUNT(*) FROM nonpartition_sales;
  count
---------
 3000000
(1 row)

 

--Comparing estimated execution times

Let's check the execution plan of a SELECT statement that uses the partition key (sale_date) on the non-partitioned table.

mydb=# EXPLAIN ANALYZE SELECT * FROM nonpartition_sales
mydb-# WHERE sale_date < '2020-04-01' AND id=1;


                         QUERY PLAN
------------------------------------------------------------
 Gather
   Workers Planned: 2
   Workers Launched: 2
   ->  Parallel Seq Scan on nonpartition_sales          (A)
         Filter: ((sale_date < '2020-04-01'::date) AND (id = 1))
         Rows Removed by Filter: 986797
 Planning Time: 0.249 ms
 Execution Time: 3156.240 ms                            <<<<****
(8 rows)

- A sequential scan is executed over the entire table (A)
- Estimated execution time for the non-partitioned table (<<<<****)

Now let's see how the plan changes when executing the same SQL on the partitioned table.

mydb=# EXPLAIN ANALYZE SELECT * FROM sales
mydb-# WHERE sale_date < '2020-04-01' AND id=1;

                         QUERY PLAN
------------------------------------------------------------
 Gather
   Workers Planned: 2
   Workers Launched: 2
   ->  Parallel Append
         ->  Parallel Seq Scan on sales_2019_Q4         (A)
               Filter: ((sale_date < '2020-04-01'::date) AND (id = 1))
               Rows Removed by Filter: 292428
         ->  Parallel Seq Scan on sales_2020_Q1         (A)
               Filter: ((sale_date < '2020-04-01'::date) AND (id = 1))
               Rows Removed by Filter: 539199
 Planning Time: 0.642 ms
 Execution Time: 423.267 ms                             <<<<****
(12 rows)

- A sequential scan is performed only on the relevant partitions (A)
- Estimated execution time for the partitioned table (<<<<****)

 

As we can see above, the execution time for the partitioned table is almost 7.5x faster (3156.240 ms vs 423.267 ms).

 

Partition-wise join

Partition-wise join is a feature that joins matching partitions directly during a JOIN operation on partitioned tables.

Partition-wise join currently applies only when the join conditions include all the partition keys, which must be of the same data type and have one-to-one matching sets of child partitions. Because partition-wise join planning can use significantly more CPU time and memory during planning, the default is off.

 

Combining partitions that have the same range and values eliminates unnecessary JOIN processing, thus improving performance.

 

Suppose both tables are partitioned into the same ranges: values 1 to 1000 in the first partition, 1001 to 2000 in the second, and 2001 to 3000 in the third.

 

Our JOIN condition is emp.id = emp_info.id, so the emp_1 and emp_info_1 partitions are joined, since they meet the join condition (each id column takes values from 1 to 1000).

 

However, the emp_1 and emp_info_2 partitions are not joined, since they have been partitioned with different values. Other partition pairs created with the same partitioning range values are likewise joined.

Note the following about partition-wise joins:

- They are disabled by default. This is because a lot of CPU and memory may be consumed when creating execution plans; verify that they are effective in your environment before using them.

- They can be enabled/disabled by setting enable_partitionwise_join in postgresql.conf or by using SET.

- Their performance may be amplified even further by using parallel query; we will discuss this later in this article.

 

Partition-wise join example

Let's compare the estimated times in execution plans with partition-wise joins disabled and then enabled.

The emp and emp_info tables in this example use hash partitioning, with the id column set as the partition key. Both tables are divided into 3 partitions under the same conditions, as shown in the sketch and \d+ output below.
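A minimal sketch of how two such hash-partitioned tables can be created (the column layout is taken from the \d+ output below):

CREATE TABLE emp (
    id   integer PRIMARY KEY,
    name text,
    dept integer
) PARTITION BY HASH (id);

CREATE TABLE emp_0 PARTITION OF emp FOR VALUES WITH (MODULUS 3, REMAINDER 0);
CREATE TABLE emp_1 PARTITION OF emp FOR VALUES WITH (MODULUS 3, REMAINDER 1);
CREATE TABLE emp_2 PARTITION OF emp FOR VALUES WITH (MODULUS 3, REMAINDER 2);

CREATE TABLE emp_info (
    id     integer PRIMARY KEY,
    addr   text,
    rating integer
) PARTITION BY HASH (id);

CREATE TABLE emp_info_0 PARTITION OF emp_info FOR VALUES WITH (MODULUS 3, REMAINDER 0);
CREATE TABLE emp_info_1 PARTITION OF emp_info FOR VALUES WITH (MODULUS 3, REMAINDER 1);
CREATE TABLE emp_info_2 PARTITION OF emp_info FOR VALUES WITH (MODULUS 3, REMAINDER 2);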

 

Table emp

mydb=# \d+ emp
         Table "public.emp"
 Column |  Type   |
--------+---------+
 id     | integer |
 name   | text    |
 dept   | integer |
Partition key: HASH (id)
Indexes:
    "emp_pkey" PRIMARY KEY, btree (id)
Partitions: emp_0 FOR VALUES WITH (modulus 3, remainder 0),
            emp_1 FOR VALUES WITH (modulus 3, remainder 1),
            emp_2 FOR VALUES WITH (modulus 3, remainder 2)

mydb=# SELECT COUNT(*) FROM emp;
  count
---------
 3000000
(1 row)

 

Table emp_info

mydb=# \d+ emp_info
       Table "public.emp_info"
 Column |  Type   |
--------+---------+
 id     | integer |
 addr   | text    |
 rating | integer |
Partition key: HASH (id)
Indexes:
    "emp_info_pkey" PRIMARY KEY, btree (id)
Partitions: emp_info_0 FOR VALUES WITH (modulus 3, remainder 0),
            emp_info_1 FOR VALUES WITH (modulus 3, remainder 1),
            emp_info_2 FOR VALUES WITH (modulus 3, remainder 2)

 


mydb=# SELECT COUNT(*) FROM emp_info;
  count
---------
 3000000
(1 row)

 

Comparing estimated execution times

Check that partition-wise join is disabled.

mydb=# SHOW enable_partitionwise_join;
 enable_partitionwise_join
---------------------------
 off
(1 row)

 

Check the execution plan of a JOIN of the partitioned tables above, using id as the join key. The scan results of the partitions of each table are obtained separately, and the join is performed at the end.

mydb=# EXPLAIN ANALYZE
mydb-# SELECT emp.id, emp.name, emp_info.rating
mydb-# FROM emp LEFT OUTER JOIN emp_info
mydb-#   ON emp.id = emp_info.id                        (A)
mydb-# WHERE emp_info.rating = 10;

                         QUERY PLAN
------------------------------------------------------------
 Gather
   Workers Planned: 2
   Workers Launched: 2
   ->  Parallel Hash Join
         Hash Cond: (emp_1.id = emp_info_1.id)
         ->  Parallel Append                            (B)
               ->  Parallel Seq Scan on emp_1
               ->  Parallel Seq Scan on emp_0
               ->  Parallel Seq Scan on emp_2
         ->  Parallel Hash
               ->  Parallel Append                      (C)
                     ->  Parallel Seq Scan on emp_info_1
                           Filter: (rating = 10)
                           Rows Removed by Filter: 326749
                     ->  Parallel Seq Scan on emp_info_0
                           Filter: (rating = 10)
                           Rows Removed by Filter: 980068
                     ->  Parallel Seq Scan on emp_info_2
                           Filter: (rating = 10)
                           Rows Removed by Filter: 489780
 Planning Time: 0.538 ms
 Execution Time: 11080.809 ms                           <<<<****
(23 rows)

- Join key (A)
- Sequential scans of the partitions of table emp (B)
- Sequential scans of the partitions of table emp_info (C)
- Estimated execution time (<<<<****)

Next, we enable partition-wise joins.

mydb=# SET enable_partitionwise_join TO on;
SET

mydb=# SHOW enable_partitionwise_join;
 enable_partitionwise_join
---------------------------
 on
(1 row)

Lastly, we check the execution plan for the same SQL again. Note below that each partition pair is joined by a "Nested Loop", and the results are summarised at the end.

mydb=# EXPLAIN ANALYZE
mydb-# SELECT emp.id, emp.name, emp_info.rating
mydb-# FROM emp LEFT OUTER JOIN emp_info
mydb-#   ON emp.id = emp_info.id                        (A)
mydb-# WHERE emp_info.rating = 10;

                         QUERY PLAN
------------------------------------------------------------
 Gather
   Workers Planned: 2
   Workers Launched: 2
   ->  Parallel Append
         ->  Nested Loop
               ->  Parallel Seq Scan on emp_info_1      (B)
                     Filter: (rating = 10)
                     Rows Removed by Filter: 980248
               ->  Index Scan using emp_1_pkey on emp_1 (C)
                     Index Cond: (id = emp_info_1.id)
         ->  Nested Loop
               ->  Parallel Seq Scan on emp_info_0
                     Filter: (rating = 10)
                     Rows Removed by Filter: 980068
               ->  Index Scan using emp_0_pkey on emp_0
                     Index Cond: (id = emp_info_0.id)
         ->  Nested Loop
               ->  Parallel Seq Scan on emp_info_2
                     Filter: (rating = 10)
                     Rows Removed by Filter: 326520
               ->  Index Scan using emp_2_pkey on emp_2
                     Index Cond: (id = emp_info_2.id)
 Planning Time: 1.290 ms
 Execution Time: 1136.806 ms                            <<<<<*****
(25 rows)

- Join key (A)
- Sequential scan of a partition of table emp_info (B)
- Index scan of a partition of table emp (C)
- Estimated execution time (<<<<<*****)

 

As we can see above, the execution time using partition-wise join is almost 10x faster (11080.809 ms vs 1136.806 ms).

 

The Nested Loop makes a big difference in speed because the data range to scan during the join processing of partitions is reduced.

The execution plan changes depending on whether partition-wise join is enabled.

With this feature, you can optimise JOIN by filtering the scan target in the most efficient way.

 

Partition-wise aggregation

With partition-wise aggregation, aggregation is performed for each partition of a partitioned table, and the results are combined at the end. Processing time can be shortened by aggregating each partition separately.

Similarly to partition-wise joins, note the following about partition-wise aggregations:

- They are disabled by default. This is because a lot of CPU and memory may be consumed when creating execution plans; verify that they are effective in your environment before using them.

- They can be enabled/disabled by setting enable_partitionwise_aggregate in postgresql.conf or by using SET.

- Their performance may be amplified even further by using parallel query. Continue reading for an explanation of parallel query.

 

Partition-wise aggregation example

Again, let's compare the estimated times in execution plans for the same SQL statement, with partition-wise aggregation disabled and then enabled.

The sales table in this example uses range partitioning, with sale_date set as the partition key. The table is divided into three partitions.

--Table sales

mydb=# \d+ sales
                   Table "public.sales"
  Column   |  Type   |
-----------+---------+
 id        | integer |
 p_name    | text    |
 amount    | integer |
 sale_date | date    |
Partition key: RANGE (sale_date)
Indexes:
    "sales_id_idx" btree (id)
Partitions: sales_2019_Q4 FOR VALUES FROM ('2019-10-01') TO ('2020-01-01'),
            sales_2020_Q1 FOR VALUES FROM ('2020-01-01') TO ('2020-04-01'),
            sales_2020_Q2 FOR VALUES FROM ('2020-04-01') TO ('2020-07-01')


mydb=# SELECT COUNT(*) FROM sales;
  count
---------
 3000000
(1 row)

Comparing estimated execution times

Check that partition-wise aggregation is disabled.

mydb=# SHOW enable_partitionwise_aggregate;
 enable_partitionwise_aggregate
--------------------------------
 off
(1 row)

Check the execution plan of an SQL statement performing an aggregation on the partitioned table.

mydb=# EXPLAIN ANALYZE
mydb-# SELECT p_name, sum(amount) sales_total FROM sales
mydb-# WHERE p_name = 'prod_A' GROUP BY p_name;


                         QUERY PLAN
------------------------------------------------------------
 GroupAggregate
   Group Key: sales_2019_Q4.p_name
   ->  Append
         ->  Bitmap Heap Scan on sales_2019_Q4          (A)
               Recheck Cond: (p_name = 'prod_A'::text)
               Heap Blocks: exact=5483
               ->  Bitmap Index Scan on sales_2019_Q4_p_name_idx
                     Index Cond: (p_name = 'prod_A'::text)
         ->  Bitmap Heap Scan on sales_2020_Q1          (A)
               Recheck Cond: (p_name = 'prod_A'::text)
               Heap Blocks: exact=6716
               ->  Bitmap Index Scan on sales_2020_Q1_p_name_idx
                     Index Cond: (p_name = 'prod_A'::text)
         ->  Bitmap Heap Scan on sales_2020_Q2          (A)
               Recheck Cond: (p_name = 'prod_A'::text)
               Heap Blocks: exact=6120
               ->  Bitmap Index Scan on sales_2020_Q2_p_name_idx
                     Index Cond: (p_name = 'prod_A'::text)
 Planning Time: 0.345 ms
 Execution Time: 503.067 ms                             <<<<<****
(20 rows)

- Heap scan of each partition (A)
- Estimated execution time (<<<<<****)

Next, we enable partition-wise aggregation.

mydb=# SET enable_partitionwise_aggregate TO on;
SET

mydb=# SHOW enable_partitionwise_aggregate;
 enable_partitionwise_aggregate
--------------------------------
 on
(1 row)


 

Then we check the execution plan's estimated time again.

mydb=# EXPLAIN ANALYZE
mydb-# SELECT p_name, sum(amount) sales_total FROM sales
mydb-# WHERE p_name = 'prod_A' GROUP BY p_name;


                         QUERY PLAN
------------------------------------------------------------
 Finalize GroupAggregate
   Group Key: sales_2019_Q4.p_name
   ->  Append
         ->  Partial GroupAggregate                     (A)
               Group Key: sales_2019_Q4.p_name
               ->  Bitmap Heap Scan on sales_2019_Q4
                     Recheck Cond: (p_name = 'prod_A'::text)
                     Heap Blocks: exact=5483
                     ->  Bitmap Index Scan on sales_2019_Q4_p_name_idx
                           Index Cond: (p_name = 'prod_A'::text)
         ->  Partial GroupAggregate                     (A)
               Group Key: sales_2020_Q1.p_name
               ->  Bitmap Heap Scan on sales_2020_Q1
                     Recheck Cond: (p_name = 'prod_A'::text)
                     Heap Blocks: exact=6716
                     ->  Bitmap Index Scan on sales_2020_Q1_p_name_idx
                           Index Cond: (p_name = 'prod_A'::text)
         ->  Partial GroupAggregate                     (A)
               Group Key: sales_2020_Q2.p_name
               ->  Bitmap Heap Scan on sales_2020_Q2
                     Recheck Cond: (p_name = 'prod_A'::text)
                     Heap Blocks: exact=6120
                     ->  Bitmap Index Scan on sales_2020_Q2_p_name_idx
                           Index Cond: (p_name = 'prod_A'::text)
 Planning Time: 0.522 ms
 Execution Time: 394.091 ms                             <<<<<*****
(26 rows)

- Partial aggregation on each partition (A)
- Estimated execution time (<<<<<*****)

 

As we can see above, the execution time using partition-wise aggregation is almost 30% faster (503.067 ms vs 394.091 ms). This is because aggregation is executed within each partition, with fewer rows to scan per aggregation, so the overall speed is increased.

 

Parallel query for partitioned tables

Parallel query is a feature that executes a single SQL statement in parallel using multiple processes. Performance is improved because the processing is distributed across multiple CPUs.

Parallel queries can be executed on partitioned tables, allowing the performance benefits of partition-wise joins and partition-wise aggregations to be enhanced even further.

This is because processing is carried out in parallel for each partition, with the results then combined at the end.


In the example below where we query a partitioned table, we see that a sequential scan is performed on each partition in parallel.

 


mydb=# EXPLAIN SELECT COUNT(*) FROM sales;
                         QUERY PLAN
------------------------------------------------------------
 Finalize Aggregate
   ->  Gather
         Workers Planned: 2
         ->  Partial Aggregate
               ->  Parallel Append
                     ->  Parallel Seq Scan on sales_2020_Q2
                     ->  Parallel Seq Scan on sales_2020_Q1
                     ->  Parallel Seq Scan on sales_2019_Q4
(8 rows)

Parallel queries on partitioned tables are enabled by default.

They can be enabled/disabled by setting enable_parallel_append in postgresql.conf or by using SET.
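For example (a minimal sketch):

mydb=# SET enable_parallel_append TO off;
mydb=# SET enable_parallel_append TO on;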

 
