The facilities supporting performance analysis generate the following different kinds of output files:
- Data collector log files
- Simulation performance reports
- Replication performance reports
- Replication log files
- Confidence interval files
- gnuplot scripts
Data collector log files
For each data collector monitor, the numerical data values that are extracted from a net during a simulation can be saved in a log file. The Logging option for a data collector monitor determines whether the data values are saved.
Data collector log files are saved in simulation output directories.
Each data collector log file contains four columns of numbers. The first column holds the data values observed by the data collector. The second column is a counter. The third column indicates the simulation step at which each data value was collected, and the fourth column indicates the corresponding model time.
Here is an excerpt from a data collector log file.
#data counter step time
0 1 0 0
1 2 2 127
0 3 3 127
1 4 5 179
0 5 6 179
1 6 8 453
0 7 9 453
1 8 11 532
The first line of the excerpt is a comment indicating the meaning of the four columns. The second line indicates that the value 0 was the first value collected, at step 0 and model time 0. Since it was collected at step 0, it must have been the value returned by the initialization function for the monitor. The last line of the excerpt indicates that the value 1 was the 8th value collected, after step 11 at model time 532.
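The four-column log format described above is easy to read programmatically. The following is a minimal sketch of a parser for it; the helper name and the inline sample are illustrative, not part of CPN Tools.

```python
# Parse a CPN Tools data collector log file with the four columns
# described above: data value, counter, simulation step, model time.
# Lines starting with '#' are comments.

def parse_dc_log(lines):
    """Return a list of (data, counter, step, time) tuples."""
    records = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip the comment header and blank lines
        data, counter, step, time = line.split()
        records.append((float(data), int(counter), int(step), float(time)))
    return records

# Sample taken from the excerpt above
log = """\
#data counter step time
0 1 0 0
1 2 2 127
0 3 3 127
"""
records = parse_dc_log(log.splitlines())
print(records[1])  # (1.0, 2, 2, 127.0)
```

Data values and model times are read as floats, since both may be non-integral in general; the counter and step are always integers.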
Simulation performance reports
Simulation performance reports provide an overview of the statistics that are calculated during one simulation for the data collector monitors in a net. A simulation performance report is saved if the net contains data collector monitors. It is saved in a simulation output directory immediately after the stop functions for the monitors are invoked. Simulation performance reports are saved as HTML files.
Here is an example of a simulation performance report for a net that contains 6 data collector monitors.
Simulation performance report options can be used to select which statistics should be included in simulation performance reports. These options are found in the net overview in the index.
Note that if confidence intervals are included in simulation performance reports, the confidence interval for a particular data collector will not be accurate unless the data values for the data collector are independent and identically distributed (IID). For more information about IID values, see Calculating statistics.
There may be a difference between the model time shown in the simulation performance report and the current model time shown in the index under the net overview. The simulation performance report shown above was generated after the Fast Forward tool was applied, and it was saved when the model time was 11131. However, the figure below shows the current model time to be 11163 after the simulation feedback was updated when the simulation stopped.
If simulation stop criteria are met in a timed simulation at a time when there are no more enabled transitions, then CPN Tools will attempt to increase the model time to the next time at which a transition is enabled when simulation feedback is updated. This is what caused the difference between the model time in the simulation performance report and the current model time that was shown in the net overview after the simulation stopped.
Replication log files
When simulation replications are run and a net contains data collector monitors, some of the statistics that are calculated at the end of each simulation are used to calculate more reliable statistics based on data from the independent simulation replications.
Suppose that a data collector named DC is defined for a net. A number of different statistics are calculated for that data collector at the end of a simulation; for example, the average of the data values at the end of the simulation is just one estimate of the true average for the data collector. Running another simulation would most likely result in a different estimate of the average for the data collector in question. By running multiple simulations, several IID estimates of a particular value, such as average marking size or minimum list length, can be collected.
The IID values for a number of different statistics are collected at the end of each simulation when running simulation replications. These values are then saved in replication log files. For each data collector, the following values are saved in replication log files:
- count
- min
- max
- avrg
- sum (only if the data collector calculates untimed statistics)
The replication log files are saved in the Replication Log Files Directory.
Below is an excerpt of a replication log file named Queue_Delay_avrg_iid.log. The i’th line shows the average for the Queue_Delay data collector after the i’th simulation completed when running simulation replications. The values in a given replication log file are independent, and they are assumed to be identically distributed.
Note that these values can also be found in the simulation performance reports for the corresponding three simulations.
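A replication log file holds one IID estimate per line, line i being the statistic after the i’th replication. A minimal reader might look like the following sketch; the values in it are illustrative, not taken from an actual run.

```python
# Read a replication log file: one IID estimate per line,
# line i holding the statistic after the i'th replication.

def read_replication_log(text):
    """Return the IID estimates in the file, in replication order."""
    return [float(line) for line in text.split() if line]

# Hypothetical averages from 3 replications (not from the excerpt)
log_text = "211.4\n357.6\n503.8\n"
values = read_replication_log(log_text)
print(len(values), sum(values) / len(values))
```

The mean of these per-replication estimates is the value that a replication performance report would summarize for the corresponding statistic.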
Replication performance reports
Replication performance reports contain statistics that are calculated for the data values that are found in replication log files. Since it is very likely that the values in the replication log files are IID, the confidence intervals in replication performance reports are more likely to be accurate than those in the simulation performance reports.
In the figure below, the row for avrg_iid in the section under Queue_Delay contains statistics for the values in the replication log file shown above. Replication performance reports are saved as HTML files.
Replication performance report options can be used to select which statistics should be included in replication performance reports. These options are found in the net overview in the index.
Confidence interval files
The confidence intervals that can be found in replication performance reports are also saved in plain text files. Three different confidence intervals can be calculated: 90%, 95%, and 99%. All confidence intervals for a certain level, e.g. 95%, will be saved in a single file.
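A confidence interval for n IID values is built from their average and a Student-t critical value with n-1 degrees of freedom. The following is a sketch of the 95% case; the critical values are hard-coded for small n, since Python's standard library has no t quantile function, and the input values are hypothetical.

```python
import math

# Two-sided 95% Student-t critical values, keyed by n (df = n - 1).
# Hard-coded because Python's standard library has no t quantile.
T_95 = {2: 12.706, 3: 4.303, 4: 3.182, 5: 2.776}

def ci_95(values):
    """Return (avrg, half_length) of a 95% confidence interval."""
    n = len(values)
    avrg = sum(values) / n
    # Sample standard deviation (divide by n - 1)
    s = math.sqrt(sum((v - avrg) ** 2 for v in values) / (n - 1))
    half = T_95[n] * s / math.sqrt(n)
    return avrg, half

avrg, half = ci_95([100.0, 102.0, 104.0])  # hypothetical IID estimates
print(avrg, avrg + half, avrg - half)      # average, then the two endpoints
```

With only 3 replications the critical value (4.303) is large, which is why confidence intervals from few replications tend to be wide; running more replications narrows them.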
Confidence interval files are saved in replication output directories, and are named confidenceintervalsX.txt, where X is either 90, 95, or 99.
Below is an excerpt from a confidence interval report named
confidenceintervals95.txt. The first column indicates the data collector and statistic for which the confidence interval has been calculated. The second column indicates the level of the confidence interval, i.e. 90, 95, or 99. The third column indicates the number of data values for which the average and confidence interval was calculated. The fourth column is the length of half of the confidence interval. The final two columns are the lower and upper endpoints for the confidence interval, i.e. [avrg-half length, avrg+half length].
#name, percent, n, avrg, half length, lower ci endpoint, upper ci endpoint
Marking_size_Server’Busy_1_avrg_iid 95 3 0.899389 0.069259 0.968648 0.830130
Marking_size_Server’Busy_1_count_iid 95 3 201.000000 0.000000 201.000000 201.000000
Marking_size_Server’Busy_1_max_iid 95 3 1.000000 0.000000 1.000000 1.000000
Marking_size_Server’Busy_1_min_iid 95 3 0.000000 0.000000 0.000000 0.000000
Queue_Delay_avrg_iid 95 3 357.610000 316.552047 674.162047 41.057953
Queue_Delay_count_iid 95 3 100.000000 0.000000 100.000000 100.000000
Queue_Delay_max_iid 95 3 1102.333333 979.460644 2081.793977 122.872689
Queue_Delay_min_iid 95 3 0.000000 0.000000 0.000000 0.000000
Queue_Delay_sum_iid 95 3 35761.000000 31655.204653 67416.204653 4105.795347
Queue_Length_avrg_iid 95 3 3.753231 3.136438 6.889669 0.616793
Queue_Length_count_iid 95 3 204.666667 6.252114 210.918781 198.414553
Queue_Length_max_iid 95 3 12.666667 12.748634 25.415300 -0.081967
Queue_Length_min_iid 95 3 0.000000 0.000000 0.000000 0.000000
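Each row of the file is self-checking: the two endpoint columns should equal avrg+half length and avrg-half length. A short check over one row reproduced from the excerpt above:

```python
# Check one row of the confidence interval file shown above:
# the endpoint columns should be avrg + half and avrg - half.
row = "Queue_Delay_avrg_iid 95 3 357.610000 316.552047 674.162047 41.057953"
name, percent, n, avrg, half, upper, lower = row.split()
avrg, half, upper, lower = map(float, (avrg, half, upper, lower))
assert abs((avrg + half) - upper) < 1e-6
assert abs((avrg - half) - lower) < 1e-6
print("endpoints consistent for", name)
```

The same check applied to every row is a quick sanity test when post-processing these files with external tools.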
It can be useful to compare confidence intervals for different levels for one set of simulation replications. Below is a figure showing the 90%, 95%, and 99% confidence intervals for average queue delay based on data from 3 simulations.
It can also be useful to run several sets of simulation replications with different numbers of simulations, to investigate the effect on both the estimate of the performance measure and the confidence intervals.
gnuplot scripts
When multiple simulation replications are run, scripts are generated for plotting the data collector log files with gnuplot. gnuplot is not part of CPN Tools, and it must be installed separately. Information regarding gnuplot can be found on the homepage for gnuplot.
One gnuplot script is generated for each data collector monitor. Suppose that n simulation replications have been run. The script that is generated for a data collector monitor can then be used to plot the log files from each of the n simulations for that data collector.
Here is an example of a gnuplot script that will plot the data from three simulations for the
Queue Length data collector monitor.
# gnuplot script generated by CPN Tools
# plot 3 files
title "Queue_Length 1" \
title "Queue_Length 2" \
title "Queue_Length 3" \
Here is the result of loading the above script into gnuplot.
An additional gnuplot script can be used to plot each of the data collector gnuplot scripts one after another.
Here is an example of an automatically generated script that will load the gnuplot scripts that plot the log files for 6 different data collector monitors.
pause -1 "Queue_Length"
pause -1 "Queue_Delay"
pause -1 "Processed_A_Jobs"
pause -1 "Count_trans_occur_Arrivals'Arrive_1"
pause -1 "Server_Utilization"
pause -1 "Marking_size_Server'Busy_1"
pause -1 "Done"