OK, we don't know your system, so it's hard to tell you much; your question really depends on your environment. In any case, here are some tests that show the time and resources your method needs compared with an alternative method:
Let's say your method is method 1 and the other method is method 2.
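For reference, these are the two statements being compared; both appear verbatim in the transcript below:

insert into t (select * from u);        -- method 1: conventional INSERT ... SELECT
create table t as (select * from u);    -- method 2: CREATE TABLE ... AS SELECT (CTAS)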
20:43:24 SQL> set autotrace on;
20:43:30 SQL> alter session set SQL_TRACE=TRUE;
Session altered.
20:43:39 SQL> --let's make sure we are reading from disk (for argument's sake)
20:43:45 SQL> alter system flush shared_pool;
System altered.
20:43:45 SQL> alter system flush buffer_cache;
System altered.
20:43:45 SQL>
20:43:45 SQL> --clear my examples
20:43:45 SQL> drop table t;
drop table t
*
ERROR at line 1:
ORA-00942: table or view does not exist
20:43:49 SQL> drop table u;
drop table u
*
ERROR at line 1:
ORA-00942: table or view does not exist
20:43:49 SQL>
20:43:49 SQL> --create table u - we will populate this with random numbers
20:43:49 SQL> create table u (y varchar2(4000));
Table created.
20:43:50 SQL>
20:43:50 SQL> --insert 1 million rows of random numbers
20:43:50 SQL> insert into u
20:43:50 2 (select dbms_random.normal
20:43:50 3 from dual
20:43:50 4 CONNECT BY level <= 1000000);
1000000 rows created.
Execution Plan
----------------------------------------------------------
Plan hash value: 1236776825
------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | 2 (0)| 00:00:01 |
| 1 | LOAD TABLE CONVENTIONAL | U | | | |
|* 2 | CONNECT BY WITHOUT FILTERING| | | | |
| 3 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(LEVEL<=1000000)
Statistics
----------------------------------------------------------
4175 recursive calls
58051 db block gets
13118 consistent gets
47 physical reads
54277624 redo size
675 bytes sent via SQL*Net to client
647 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
56 sorts (memory)
0 sorts (disk)
1000000 rows processed
20:44:21 SQL>
20:44:21 SQL> --create table t - we will populate this from table u
20:44:21 SQL> create table t (x varchar2(4000));
Table created.
20:44:21 SQL>
20:44:21 SQL> --let's make sure we are reading from disk (for argument's sake)
20:44:21 SQL> alter system flush shared_pool;
System altered.
20:44:21 SQL> alter system flush buffer_cache;
System altered.
20:44:26 SQL>
20:44:26 SQL> --insert data from u to t (this is how you said you did this)
20:44:26 SQL> insert into t (select * from u);
1000000 rows created.
Execution Plan
----------------------------------------------------------
Plan hash value: 537870620
---------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 997K| 1905M| 1750 (1)| 00:00:21 |
| 1 | LOAD TABLE CONVENTIONAL | T | | | | |
| 2 | TABLE ACCESS FULL | U | 997K| 1905M| 1750 (1)| 00:00:21 |
---------------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement (level=2)
Statistics
----------------------------------------------------------
5853 recursive calls
58201 db block gets
24213 consistent gets
6551 physical reads
54591764 redo size
681 bytes sent via SQL*Net to client
599 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
57 sorts (memory)
0 sorts (disk)
1000000 rows processed
20:44:41 SQL>
20:44:41 SQL>
20:44:41 SQL> --now let's start over with a different method
20:44:41 SQL> drop table t;
Table dropped.
20:44:48 SQL> drop table u;
Table dropped.
20:44:50 SQL>
20:44:50 SQL> --create table u - we will populate this with random numbers
20:44:50 SQL> create table u (y varchar2(4000));
Table created.
20:44:51 SQL>
20:44:51 SQL> --insert 1 million rows of random numbers
20:44:51 SQL> insert into u
20:44:51 2 (select dbms_random.normal
20:44:51 3 from dual
20:44:51 4 CONNECT BY level <= 1000000);
1000000 rows created.
Execution Plan
----------------------------------------------------------
Plan hash value: 1236776825
------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | 2 (0)| 00:00:01 |
| 1 | LOAD TABLE CONVENTIONAL | U | | | |
|* 2 | CONNECT BY WITHOUT FILTERING| | | | |
| 3 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(LEVEL<=1000000)
Statistics
----------------------------------------------------------
2908 recursive calls
58153 db block gets
12831 consistent gets
10 physical reads
54284104 redo size
683 bytes sent via SQL*Net to client
647 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
31 sorts (memory)
0 sorts (disk)
1000000 rows processed
20:45:20 SQL>
20:45:20 SQL> --let's make sure we are reading from disk (for argument's sake)
20:45:20 SQL> alter system flush shared_pool;
System altered.
20:45:20 SQL> alter system flush buffer_cache;
System altered.
20:45:25 SQL>
20:45:25 SQL> --create table t using table u
20:45:25 SQL> create table t as (select * from u);
Table created.
20:45:36 SQL>
20:45:36 SQL> drop table t;
Table dropped.
20:45:41 SQL> drop table u;
Table dropped.
20:45:41 SQL>
20:45:41 SQL> commit;
Commit complete.
20:45:41 SQL> spool off
OK, so what we care about are the two methods we tested, i.e.
insert into t (select * from u);
, for which we get autotrace output, and
create table t as (select * from u);
, for which we get no autotrace output (SQL*Plus autotrace doesn't report on DDL such as CTAS).
Luckily, I also had sql_trace running, so I took TKPROF statistics as well.
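(The raw trace file was formatted from the OS shell with tkprof; the file name below is illustrative, since the real one depends on your instance name and server process id:

tkprof orcl_ora_12345.trc tkprof_report.txt sys=no

sys=no just keeps recursive SYS statements out of the report.)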
Here's what I get
for "insert into t (select * from u);":
********************************************************************************
SQL ID: bjdnhkhq8r6h4
Plan Hash: 537870620
insert into t (select * from u)
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.03 2 2 0 0
Execute 1 1.74 7.67 6201 22538 58121 1000000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 1.74 7.71 6203 22540 58121 1000000
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 91
Rows Row Source Operation
------- ---------------------------------------------------
0 LOAD TABLE CONVENTIONAL (cr=0 pr=0 pw=0 time=0 us)
1 TABLE ACCESS FULL U (cr=4 pr=5 pw=0 time=0 us cost=1750 size=1997891896 card=997948)
********************************************************************************
and for "create table t as (select * from u)" we get:
********************************************************************************
SQL ID: asawpwvdj1nbv
Plan Hash: 2321469388
create table t as (select * from u)
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.03 2 2 1 0
Execute 1 0.90 2.68 6372 12823 8573 1000000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.90 2.71 6374 12825 8574 1000000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 91
Rows Row Source Operation
------- ---------------------------------------------------
0 LOAD AS SELECT (cr=13400 pr=6382 pw=6370 time=0 us)
1000000 TABLE ACCESS FULL U (cr=12640 pr=6370 pw=0 time=349545 us cost=1750 size=2159012856 card=1078428)
********************************************************************************
So what does this tell us?
Well:
- Method 2 took about 65% less elapsed time overall than method 1 (5 seconds less per 1 million rows)
- Method 2 took about 48% less CPU time overall than method 1
- Method 2 read slightly more from disk than method 1 (6,374 vs. 6,203 blocks)
- Method 2 took far fewer buffer gets than method 1 (21,399 vs. 80,661 query + current gets)
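The row source operations explain the gap: the plain INSERT runs as LOAD TABLE CONVENTIONAL (rows pass through the buffer cache, with full undo), while the CTAS runs as LOAD AS SELECT, a direct-path load. If your target table must already exist and you want the INSERT to behave more like the CTAS, a direct-path insert via the APPEND hint is the standard Oracle technique; note this variant wasn't part of the tests above, so treat it as a sketch:

insert /*+ APPEND */ into t (select * from u);  -- direct-path load above the high-water mark; minimal undo for the table data
commit;  -- mandatory before this session can read t again (ORA-12838 otherwise)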
hope this helps :)