Analyze module.
Micro-benchmarks usually use a linear regression to estimate the execution time of a code segment. For example, the following table might represent a {!Measurement_raw.t} array collected by Benchmark.run:
Bechamel records 3000 samples, and the number of iterations grows geometrically (see Benchmark.run). Bechamel can then use one of two algorithms:
Ordinary Least Squares (OLS)
RANdom SAmple Consensus (RANSAC)
The user can choose either of them; currently, OLS is the recommended one. These algorithms estimate the actual execution time of the code segment. Using OLS with the above data would yield an estimated execution time of 9.6 nanoseconds with a goodness of fit (r²) of 0.992.
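To make the estimation concrete, here is a minimal, self-contained sketch of the idea behind the OLS analysis (not Bechamel's implementation): fit time ≈ t × iterations through the origin and report the slope together with r². The function name `ols_estimate` and the sample data are hypothetical.

```ocaml
(* A sketch of ordinary least squares through the origin, as used to
   estimate per-iteration execution time. Not Bechamel's actual code. *)
let ols_estimate iterations times =
  let n = Array.length iterations in
  (* slope t = sum(x*y) / sum(x*x) *)
  let sxy = ref 0. and sxx = ref 0. in
  for i = 0 to n - 1 do
    sxy := !sxy +. (iterations.(i) *. times.(i));
    sxx := !sxx +. (iterations.(i) *. iterations.(i))
  done;
  let t = !sxy /. !sxx in
  (* goodness of fit: r2 = 1 - SS_res / SS_tot *)
  let mean = Array.fold_left ( +. ) 0. times /. float_of_int n in
  let ss_res = ref 0. and ss_tot = ref 0. in
  for i = 0 to n - 1 do
    let r = times.(i) -. (t *. iterations.(i)) in
    ss_res := !ss_res +. (r *. r);
    let d = times.(i) -. mean in
    ss_tot := !ss_tot +. (d *. d)
  done;
  (t, 1. -. (!ss_res /. !ss_tot))

let () =
  (* hypothetical samples: iteration counts and measured nanoseconds *)
  let iters = [| 1.; 2.; 4.; 8.; 16. |] in
  let times = [| 10.; 19.; 39.; 77.; 154. |] in
  let t, r2 = ols_estimate iters times in
  Printf.printf "time/iter = %.2f ns, r2 = %.4f\n" t r2
```

Fitting through the origin (no intercept) matches the intuition that zero iterations take zero time; the slope is then the estimated cost of one iteration.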
More generally, Bechamel lets the user choose the predictors and the responder. The user can plug in other metrics (such as perf events), and the API allows such metrics to be analyzed together.
val ols : r_square:bool -> bootstrap:int -> predictors:string array -> OLS.t
ols ~r_square ~bootstrap ~predictors is an Ordinary Least Squares analysis on predictors. It calculates r² if r_square = true. bootstrap defines how many times Bechamel resamples the measurements.
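The role of the bootstrap count can be sketched as follows. This is an illustration of the general bootstrap technique (an assumption based on the description above, not Bechamel's code): the samples are resampled with replacement bootstrap times, and a through-the-origin slope estimator is re-run on each resample to gauge the stability of the estimate. The names `slope` and `bootstrap_estimates` are hypothetical.

```ocaml
(* Through-the-origin least-squares slope: t = sum(x*y) / sum(x*x). *)
let slope xs ys =
  let sxy = ref 0. and sxx = ref 0. in
  Array.iteri
    (fun i x ->
      sxy := !sxy +. (x *. ys.(i));
      sxx := !sxx +. (x *. x))
    xs;
  !sxy /. !sxx

(* Resample (with replacement) [bootstrap] times and re-estimate the
   slope on each resample; the spread of the results indicates how
   stable the estimate is. *)
let bootstrap_estimates ~bootstrap xs ys =
  let n = Array.length xs in
  Array.init bootstrap (fun _ ->
      let rx = Array.make n 0. and ry = Array.make n 0. in
      for i = 0 to n - 1 do
        let j = Random.int n in
        rx.(i) <- xs.(j);
        ry.(i) <- ys.(j)
      done;
      slope rx ry)

let () =
  Random.init 42;
  (* hypothetical samples: iteration counts and nanoseconds *)
  let xs = [| 1.; 2.; 4.; 8.; 16. |] and ys = [| 10.; 19.; 39.; 77.; 154. |] in
  let est = bootstrap_estimates ~bootstrap:100 xs ys in
  let mean = Array.fold_left ( +. ) 0. est /. 100. in
  Printf.printf "mean bootstrap estimate: %.2f ns/iter\n" mean
```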
val ransac : filter_outliers:bool -> predictor:string -> RANSAC.t
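The general RANSAC idea can be sketched in a few lines (a toy illustration of the technique, not Bechamel's implementation): repeatedly fit on a randomly chosen sample, count how many samples agree with that fit within a threshold, and keep the fit with the most inliers. This makes the estimate robust to occasional outliers such as a GC pause. The function name `ransac_slope` and the data are hypothetical.

```ocaml
(* Toy RANSAC for a through-the-origin slope: pick a random sample,
   derive a candidate slope from it, count inliers, keep the best. *)
let ransac_slope ?(iterations = 50) ~threshold xs ys =
  let n = Array.length xs in
  let best = ref (ys.(0) /. xs.(0)) and best_inliers = ref 0 in
  for _ = 1 to iterations do
    let j = Random.int n in
    let t = ys.(j) /. xs.(j) in
    let inliers = ref 0 in
    for i = 0 to n - 1 do
      if abs_float (ys.(i) -. (t *. xs.(i))) < threshold then incr inliers
    done;
    if !inliers > !best_inliers then begin
      best := t;
      best_inliers := !inliers
    end
  done;
  !best

let () =
  Random.init 1;
  (* hypothetical samples with one outlier (a GC pause, say) *)
  let xs = [| 1.; 2.; 4.; 8.; 16. |] and ys = [| 10.; 19.; 500.; 77.; 154. |] in
  Printf.printf "robust estimate: %.2f ns/iter\n"
    (ransac_slope ~threshold:5. xs ys)
```

Note that the outlier at x = 4 barely affects the result, whereas a plain least-squares fit would be pulled toward it.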
one analysis measure { Benchmark.stat; lr; kde; } estimates the actual value of the given measure for one predictor. For instance, one analysis time { Benchmark.stat; lr; kde; } estimates the actual run-time (execution time), where analysis was initialized with the run predictor.
merge witnesses tbls returns a dictionary whose keys are the labels of the measures (from the given witnesses) and whose values are the results of those specific measures.
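The shape of the merged result can be sketched with plain stdlib types (the real function works on Bechamel's own witness and result types; `merge_tables` and the label strings here are hypothetical): several labelled result tables are folded into one dictionary keyed by the measure's label.

```ocaml
(* Sketch: fold several (label, result) tables into one dictionary
   keyed by the measure's label. Int stands in for a result value. *)
let merge_tables (tbls : (string * int) list list) : (string, int) Hashtbl.t =
  let merged = Hashtbl.create 16 in
  List.iter
    (List.iter (fun (label, result) -> Hashtbl.replace merged label result))
    tbls;
  merged

let () =
  let merged =
    merge_tables [ [ ("monotonic-clock", 96) ]; [ ("minor-allocated", 3) ] ]
  in
  Printf.printf "%d measures merged\n" (Hashtbl.length merged)
```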