Analyze module.
Micro-benchmarking usually uses a linear regression to estimate the execution time of a code segment. For example, the following table might represent a {!Measurement_raw.t} array collected by Benchmark.run:
Bechamel records 3000 samples, and the number of iterations can grow geometrically (see Benchmark.run). Bechamel can then use one of two algorithms:
Ordinary Least Squares (OLS)
RANdom SAmple Consensus (RANSAC)
The user can choose either of them; currently, OLS is the recommended choice. These algorithms estimate the actual execution time of the code segment. Using OLS with the above data would yield an estimated execution time of 9.6 nanoseconds with a goodness of fit (r²) of 0.992.
More generally, Bechamel lets the user choose the predictors and the responder. Indeed, the user can record other metrics (such as perf counters), and the API allows analysing such metrics against each other.
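To build intuition for what the OLS estimate means, here is a minimal, self-contained sketch (not Bechamel's actual implementation; the sample data is invented) of an ordinary-least-squares fit through the origin, estimating a per-iteration time from (iterations, total time) samples:

```ocaml
(* Hypothetical samples: (number of iterations, total time in ns).
   The slope of the best-fit line through the origin is the
   estimated time of one iteration. *)
let samples = [| (1., 10.); (2., 19.); (4., 38.); (8., 77.); (16., 154.) |]

let ols samples =
  (* slope through the origin: b = Σxy / Σx² *)
  let sxy = Array.fold_left (fun acc (x, y) -> acc +. (x *. y)) 0. samples in
  let sxx = Array.fold_left (fun acc (x, _) -> acc +. (x *. x)) 0. samples in
  let b = sxy /. sxx in
  (* goodness of fit: r² = 1 - SS_res / SS_tot *)
  let n = float_of_int (Array.length samples) in
  let mean_y = Array.fold_left (fun acc (_, y) -> acc +. y) 0. samples /. n in
  let ss_res =
    Array.fold_left
      (fun acc (x, y) -> let e = y -. (b *. x) in acc +. (e *. e))
      0. samples
  in
  let ss_tot =
    Array.fold_left
      (fun acc (_, y) -> let d = y -. mean_y in acc +. (d *. d))
      0. samples
  in
  (b, 1. -. (ss_res /. ss_tot))

let () =
  let b, r2 = ols samples in
  Printf.printf "estimate: %f ns/iteration, r2: %f\n" b r2
```

With these made-up samples, the slope comes out near 9.6 ns per iteration with r² close to 1, which is the kind of result the analysis below reports.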
val ols : r_square:bool -> bootstrap:int -> predictors:string array -> OLS.t
ols ~r_square ~bootstrap ~predictors is an Ordinary Least Squares analysis on predictors. It calculates r² if r_square = true. bootstrap is the number of times Bechamel resamples the measurements.
val ransac : filter_outliers:bool -> predictor:string -> RANSAC.t
one analyze measure { Benchmark.stat; lr; kde; } estimates the given measure for one set of predictors. So one analyze time { Benchmark.stat; lr; kde; }, where analyze is initialized with the run predictor, estimates the actual run-time (or execution-time) value.
merge witnesses tbls returns a dictionary where each key is the label of a measure (from the given witnesses) and the value is the result of that specific measure.
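Putting these functions together, a typical analysis pipeline looks like the following sketch (based on Bechamel's documented usage; it assumes the bechamel package is installed and that instances and raw_results come from a prior Benchmark run with the chosen Toolkit instances):

```ocaml
(* Sketch only: [instances] (e.g. Toolkit.Instance.monotonic_clock) and
   [raw_results] are assumed to have been produced by a Benchmark run. *)
open Bechamel

let analyze instances raw_results =
  (* OLS on the "run" predictor, with r² reported and no bootstrapping *)
  let ols =
    Analyze.ols ~r_square:true ~bootstrap:0 ~predictors:Measure.[| run |]
  in
  (* analyze every benchmark for every measure, ... *)
  let results =
    List.map (fun instance -> Analyze.all ols instance raw_results) instances
  in
  (* ... then merge into one dictionary keyed by the measure's label *)
  Analyze.merge ols instances results
```

The resulting dictionary maps each measure's label to the per-benchmark estimates, which is the shape the reporters consume.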