Overview
In RedPrairie, we often have to analyze trace files. Trace files are generally very helpful because when you trace everything nothing is missed, but that also means nothing is skipped, so it is not unusual to get a trace file with several million lines. Most new MOCA clients have nice trace viewers, but they all struggle with extremely large trace files, and even then the analysis itself remains difficult.
Solution
One solution is to load the trace into the database. With a proper schema that represents the trace, the analysis becomes manageable. The following tables represent a possible schema:
usr_temp_cmd_trace_analysis
| Column | Specification | Comments | Example |
| --- | --- | --- | --- |
| uc_trace_file | String(500) | Name of the trace file | /blah/trace.log |
| uc_cmd | String(4000) | Command name | USRint/list order lines available for wave planning |
| uc_lvl | String(100) | Command stack level in trace | 1 |
| uc_start_ts | String(100) | Start timestamp from trace | 17:15:32,571 |
| uc_end_ts | String(100) | End timestamp from trace | 17:15:34,059 |
| uc_elapsed_ms | Float | Elapsed time of this command execution (ms) | 1488 |
| uc_start_line_num | Float | Starting line number in trace file | 202 |
| uc_end_line_num | Float | Ending line number in trace file | 121208 |
| uc_args | String(4000) | Pipe-separated list of arguments and values | \|wh_id=xyz(STRING) |
usr_temp_sql_trace_analysis
| Column | Specification | Comments | Example |
| --- | --- | --- | --- |
| uc_trace_file | String(400) | Name of the trace file | /blah/trace.log |
| uc_bound_sql | String(4000) | SQL statement with bind variables | select super_usr_flg from les_usr_ath where usr_id = :var0 |
| uc_unbound_sql | String(4000) | SQL statement with bind variables replaced by values | select super_usr_flg from les_usr_ath where usr_id = 'XXX' |
| uc_rows_returned | String(100) | Number of rows returned | 1 |
| uc_elapsed | String(100) | Elapsed time in seconds | 0.002135123 |
| uc_line_num | Float | Line number in trace file | 377 |
| uc_cmd | String(4000) | Command that executed this SQL | MCSbase/list users |
| uc_lvl | String(100) | Command stack level in trace | 10 |
| uc_cmd_start_line_num | Float | Starting line number of the calling command in the trace file | 352 |
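For reference, a minimal DDL sketch for these two tables, assuming an Oracle database. Mapping the String/Float specifications above to varchar2/float is an assumption; adjust the types for SQL Server.

```
create table usr_temp_cmd_trace_analysis
(
    uc_trace_file       varchar2(500),
    uc_cmd              varchar2(4000),
    uc_lvl              varchar2(100),
    uc_start_ts         varchar2(100),
    uc_end_ts           varchar2(100),
    uc_elapsed_ms       float,
    uc_start_line_num   float,
    uc_end_line_num     float,
    uc_args             varchar2(4000)
);

create table usr_temp_sql_trace_analysis
(
    uc_trace_file         varchar2(400),
    uc_bound_sql          varchar2(4000),
    uc_unbound_sql        varchar2(4000),
    uc_rows_returned      varchar2(100),
    uc_elapsed            varchar2(100),
    uc_line_num           float,
    uc_cmd                varchar2(4000),
    uc_lvl                varchar2(100),
    uc_cmd_start_line_num float
);
```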
Loading the trace to these tables
A simple MOCA command can be developed to read the trace file and load these tables; the latest version of the Oracular MOCA Client will include this functionality as well. Since trace files can be very large, use Groovy to stream the file rather than the "read file" command, and commit the data periodically as it is being loaded. A minimal sketch follows.
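This hypothetical loader sketch streams the file line by line with Groovy inside a MOCA script. The regular expression and the trace-line format it matches are placeholders that must be adapted to the actual trace output of your MOCA version; moca.executeCommand and moca.commit are the MocaContext calls available to Groovy blocks.

```
publish data where uc_trace_file = '/blah/trace.log'
|
[[
  /* Stream the file with Groovy so a multi-million-line trace
     never has to fit in memory at once. */
  int lineNum = 0
  int pending = 0
  new File(uc_trace_file).eachLine { String line ->
    lineNum++
    // Placeholder pattern: adapt to the real trace format in use.
    def m = (line =~ /Executing SQL: (.*)/)
    if (m.find()) {
      def sql = m.group(1).replace("'", "''")
      moca.executeCommand(
        "[insert into usr_temp_sql_trace_analysis " +
        "   (uc_trace_file, uc_bound_sql, uc_line_num) " +
        " values ('${uc_trace_file}', '${sql}', ${lineNum})]")
      // Commit in batches so a huge trace does not build up
      // one enormous uncommitted transaction.
      if (++pending % 1000 == 0) {
        moca.commit()
      }
    }
  }
  moca.commit()
]]
```

A real loader would also track the command stack in the same pass, populating usr_temp_cmd_trace_analysis (uc_cmd, uc_lvl, start/end lines) from the command begin and end lines of the trace.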
Typical Analysis
Once data is loaded into these tables, analyzing huge trace files becomes quite manageable. For example (sketches of these queries follow the list):
- Using usr_temp_cmd_trace_analysis, you can pinpoint which execution of a command took the longest, i.e. the row with the highest uc_elapsed_ms.
- Because usr_temp_sql_trace_analysis.uc_bound_sql retains the bind variables, identical statements group together regardless of their bound values. Grouping by this column, summing uc_elapsed, and taking count(*) makes it easy to find SQL statements that are executed very often or are cumulatively expensive.
- usr_temp_sql_trace_analysis.uc_bound_sql can be run through an explain plan to see how the database will execute the statement.
- Comparing the total time in usr_temp_sql_trace_analysis against usr_temp_cmd_trace_analysis shows whether the performance issue lies outside the database. For example, if usr_temp_cmd_trace_analysis shows 2 minutes but the total time in usr_temp_sql_trace_analysis is only 1 minute, we know that 50% of the overhead is outside the database.
- If usr_temp_cmd_trace_analysis shows that a command is executed very frequently, caching techniques may be employed or the command may be disabled altogether.
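A few query sketches along these lines, written against the tables above and runnable from any MOCA client (Oracle syntax; to_number is needed because uc_elapsed is stored as a string):

```
/* Slowest individual command executions. */
[select uc_cmd, uc_lvl, uc_elapsed_ms, uc_start_line_num
   from usr_temp_cmd_trace_analysis
  order by uc_elapsed_ms desc]

/* SQL grouped by its bound form: execution count and cumulative cost. */
[select uc_bound_sql,
        count(*)                   exec_count,
        sum(to_number(uc_elapsed)) total_sec
   from usr_temp_sql_trace_analysis
  group by uc_bound_sql
  order by total_sec desc]

/* Total database time, to compare against the command elapsed time. */
[select sum(to_number(uc_elapsed)) total_db_sec
   from usr_temp_sql_trace_analysis]

/* Commands executed very frequently: candidates for caching or disabling. */
[select uc_cmd, count(*) exec_count
   from usr_temp_cmd_trace_analysis
  group by uc_cmd
  order by exec_count desc]
```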
We will add some of these features to future versions of the Oracular MOCA Client.