The use of bulk insert
in the loop, one row at a time, does not really speed things up (although there is a small, unexplained gain), since only one row is inserted per bulk insert. Bulk insert is intended for large volumes of rows, which are split into batches of thousands of rows and loaded through a specialized loader.
The measurements here were considerably worse:
--
-- 23057 rows per second.
--
create or replace table tst@inmemorystorage
as
select *
from calendar@datadictionary
where 1=0
--
-- 25 rows per second.
--
begin
for r
in
( select *
from calendar@datadictionary
)
loop
insert into tst@inmemorystorage
(id, day_date, day_in_week, week_number, month_number, day_in_month, month_nr_day1_in_week, month_nr_day7_in_week, iso_week_number, day_in_year_number, year_number, year_nr_day1_in_week, year_nr_day7_in_week, iso_year_number, day_since_19000101_number, century_number, era)
values
(r.id, r.day_date, r.day_in_week, r.week_number, r.month_number, r.day_in_month, r.month_nr_day1_in_week, r.month_nr_day7_in_week, r.iso_week_number, r.day_in_year_number, r.year_number, r.year_nr_day1_in_week, r.year_nr_day7_in_week, r.iso_year_number, r.day_since_19000101_number, r.century_number, r.era)
;
end loop;
end;
--
-- 50 rows per second.
--
begin
for r
in
( select *
from calendar@datadictionary
)
loop
bulk insert into tst@inmemorystorage
(id, day_date, day_in_week, week_number, month_number, day_in_month, month_nr_day1_in_week, month_nr_day7_in_week, iso_week_number, day_in_year_number, year_number, year_nr_day1_in_week, year_nr_day7_in_week, iso_year_number, day_since_19000101_number, century_number, era)
values
(r.id, r.day_date, r.day_in_week, r.week_number, r.month_number, r.day_in_month, r.month_nr_day1_in_week, r.month_nr_day7_in_week, r.iso_week_number, r.day_in_year_number, r.year_number, r.year_nr_day1_in_week, r.year_nr_day7_in_week, r.iso_year_number, r.day_since_19000101_number, r.century_number, r.era)
;
end loop;
end;
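For comparison: when the source can be addressed directly as a table, as calendar@datadictionary can here, bulk insert can be fed by a single set-based select so the rows can actually be batched. A sketch (not measured; the column list is shortened to id and day_date for readability):
begin
bulk insert into tst@inmemorystorage
( id
, day_date
)
select id
,      day_date
from   calendar@datadictionary
;
end;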
A nice extension would be support for a syntax such as:
bulk insert into TABLE
execute native ''
analogous to the existing for r in ( execute native ... ) construct. An idea for this has been registered.
Alternative
As a workaround, one of the following can be used:
- a view,
- a temporary table.
View
Create a view in PostgreSQL that contains the output from the query. Then use:
--
-- Optionally precede this with a 'drop view' and 'create view' when more
-- flexibility is needed.
--
create or replace table MYTABLE@InMemoryStorage
as
select *
from VIEW@postgresql
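The view itself only needs to be created once on the PostgreSQL side, for example through execute native. A sketch, where VIEW and '... query ...' are placeholders, assuming the connection is allowed to run DDL just like in the temporary table approach below:
begin
execute native 'create or replace view VIEW as ... query ...';
end;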
Temporary table
An alternative, which is easier to set up, is to insert the results into a PostgreSQL table and then retrieve that table, as in:
begin
execute native 'create table if not exists TEMPTABLE(...)';
execute native 'insert into TEMPTABLE ... query ...';
bulk insert into MYTABLE@InMemoryStorage
( columns
)
select ...
from TEMPTABLE@postgresql
;
end;
Please note that on the first execution the TEMPTABLE
is created, but its existence is not yet known to the Invantive SQL metadata register. A reconnect is necessary to refresh the metadata.
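Also note that the example uses 'create table if not exists', so a repeated execution appends to any rows already present in TEMPTABLE. A sketch of emptying the staging table before reloading it, assuming a plain PostgreSQL truncate is acceptable here:
begin
execute native 'truncate table TEMPTABLE';
end;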
The performance of the original SQL statements on the same device is then approximately 10,000 rows per second.