I am taking this opportunity to introduce myself to the world of Oracle. Friends, for those of you who are interested in performance analysis and like working with the Oracle SQL interface rather than third-party tools, please do explore the power of Oracle and use its built-in features, such as TKPROF.
The TKPROF program converts Oracle trace files into a more readable form. If you have a problem query, you can use TKPROF to get more information about it. To get the most out of the utility you must enable timed statistics, either by setting the init.ora parameter or by running the following command:
ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
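If you do not have the privileges (or the desire) to change the instance-wide setting, timed statistics can also be enabled for just your own session:

```sql
-- Enable timed statistics for the current session only,
-- leaving the rest of the instance untouched
ALTER SESSION SET TIMED_STATISTICS = TRUE;
```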
If a suitable plan table is not present, one can be created by running the following as the SYS user:
@?\rdbms\admin\utlxplan.sql
CREATE PUBLIC SYNONYM PLAN_TABLE FOR SYS.PLAN_TABLE;
GRANT SELECT, INSERT, UPDATE, DELETE ON SYS.PLAN_TABLE TO PUBLIC;
With this done we can trace a statement:
ALTER SESSION SET SQL_TRACE = TRUE;
SELECT COUNT(*)
FROM dual;
ALTER SESSION SET SQL_TRACE = FALSE;
The resulting trace file will be located in the USER_DUMP_DEST directory. It can then be interpreted using TKPROF at the command prompt as follows:
TKPROF <trace-file> <output-file> explain=user/password@service table=sys.plan_table
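Before running TKPROF you need to locate the trace file itself. As a sketch (assuming you have SELECT access to V$PARAMETER), you can query the trace directory from the session, and optionally tag the file name so it is easy to spot:

```sql
-- Where will the trace file be written?
SELECT value
FROM   v$parameter
WHERE  name = 'user_dump_dest';

-- Optionally embed a marker in the trace file name
-- before enabling SQL_TRACE, to make it easy to find
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'my_trace';
```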
The TKPROF utility can be enhanced with parameters such as:
SYS = YES/NO : This option lets the user request that recursive SQL statements (those issued by the SYS user) not be displayed in the output. The default is YES. Recursive SQL usually includes internal calls and any table maintenance, such as adding an extent to a table during an insert.
RECORD = filename : This option produces a file of all the user's SQL statements.
INSERT = filename : This option creates a script to create a table and store the TRACE file statistics for each SQL statement traced.
SORT = parameters : The order in which to display the statements in the output. There are about 20 options in all for sorting the output.
A sample command for readers is below:
tkprof ora_0076.trc Atul.txt explain=system/manager sort=(FCHCPU, EXECPU, PRSCPU) print=5 sys=no record=record1.sql
The resulting output file contains the following type of information for all SQL statements processed, including the ALTER SESSION commands:
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
Note that the trace file is a point-in-time picture of what happened on the system at a given moment. In contrast, the explain plan (detailed analysis) is generated when the TKPROF listing is analyzed.
SELECT COUNT(*)
FROM dual
call count cpu elapsed disk query current rows
------- ----- ----- ------- ------- ------- ------- -------
Parse 1 0.02 0.02 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 1 4 1
------- ----- ----- ------- ------- ------- ------- -------
total 4 0.02 0.02 0 1 4 1
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 121
Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE
1 TABLE ACCESS FULL DUAL
Things to look out for in the trace include:
When tracing lots of statements at once, such as batch processes, you can quickly discard those statements which have acceptable CPU times. It's often better to focus on the statements that are taking most of the CPU time.
Inefficient statements are mostly associated with a high number of block visits. The query column indicates block visits for read consistency, including all query and subquery processing. The current column indicates visits not related to read consistency, including segment headers and blocks that are going to be updated.
The number of blocks read from disk is shown in the disk column. Since disk reads are slower than memory reads you would expect this value to be significantly lower than the sum of the query and current columns. If it is not you may have an issue with your buffer cache.
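One rough way to follow up on a suspected buffer cache problem is the classic hit-ratio query against V$SYSSTAT (assuming you have SELECT access to it); treat the result as a hint rather than a verdict, since cumulative ratios can hide short-lived problems:

```sql
-- Approximate buffer cache hit ratio from cumulative
-- instance statistics: 1 - physical reads / logical reads
SELECT 1 - phy.value / (db.value + con.value) AS cache_hit_ratio
FROM   v$sysstat phy, v$sysstat db, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    db.name  = 'db block gets'
AND    con.name = 'consistent gets';
```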
Locking problems and inefficient PL/SQL loops may lead to high cpu/elapsed values even when block visits are low.
Multiple parse calls for a single statement imply a library cache issue.
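To follow up on suspected library cache trouble, a quick check (assuming SELECT access to V$LIBRARYCACHE) is to look at the get-hit ratio and reload count for the SQL AREA namespace:

```sql
-- A low GETHITRATIO or a high RELOADS count for the SQL AREA
-- namespace suggests statements are being re-parsed unnecessarily
SELECT namespace, gethitratio, reloads
FROM   v$librarycache
WHERE  namespace = 'SQL AREA';
```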
Once you've identified your problem statements you can check the execution plan to see why the statement is performing badly.
Once you know what to look for in the output, the rest is practice. I hope this review is helpful to technology users; I feel there is a lot yet to be explored, but this is just the beginning. Stay happy!!