It is basically a "performance regression detector." It looks at a time series of benchmark results (e.g. test runtime, latency, or memory usage across commits) and tries to answer one question: did something actually change, or is this just noise?
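To make the idea concrete, here is a minimal sketch of that question as code. This is not Otava's algorithm (Otava's analysis is more sophisticated); it just illustrates the core move: scan the series for the split point that best separates two regimes, scored by mean shift relative to noise.

```python
# Toy change-point detector for a benchmark time series.
# NOT Otava's implementation -- an illustrative sketch only.
import statistics

def find_change_point(series, min_segment=3):
    """Return the index that best splits `series` into two segments
    with differing means, plus a crude signal-to-noise score."""
    best_idx, best_score = None, 0.0
    for i in range(min_segment, len(series) - min_segment):
        left, right = series[:i], series[i:]
        shift = abs(statistics.mean(right) - statistics.mean(left))
        # Guard against zero variance with a tiny floor.
        noise = (statistics.pstdev(left) + statistics.pstdev(right)) or 1e-9
        score = shift / noise
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx, best_score

# Benchmark runtimes (ms): noisy around 100, then a regression to ~110.
runtimes = [101, 99, 100, 102, 98, 100, 111, 109, 110, 112, 108, 110]
idx, score = find_change_point(runtimes)
print(idx, round(score, 1))  # split at index 6, where the regression starts
```

A real detector also has to decide whether the best score is *significant*, which is where the statistics in tools like Otava come in.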
In a performance-critical project I am working on, I had a pre-commit hook that ran microbenchmarks to catch perf regressions, but noise was a real issue. I'll have to try it to be sure whether it can help, but it seems like a solution to a problem I had.
Change Detection for Continuous Performance Engineering: Otava performs statistical analysis of performance test results stored in CSV files, PostgreSQL, BigQuery, or Graphite. It finds change points and notifies you about possible performance regressions.
You can also read "8 Years of Optimizing Apache Otava: How disconnected open source developers took an algorithm from n³ to constant time" - https://arxiv.org/abs/2505.06758v1
I would be interested in a layperson summary of how this code deals with the wacky distributions produced by computer systems. I am too stupid to understand anything more complex than introductory SPC, which struggles with non-normal distributions.
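One standard answer to the non-normality problem is to avoid distributional assumptions entirely. The sketch below uses a permutation test, which works on arbitrary distributions (heavy tails, outliers, multi-modality) because it only asks how often a shuffled relabeling of the data produces a shift as large as the observed one. This is illustrative only, not Otava's actual method.

```python
# Illustrative nonparametric test -- NOT Otava's implementation.
# A permutation test assumes nothing about the shape of the distribution.
import random

def permutation_pvalue(before, after, n_permutations=2000, seed=0):
    """Estimate the probability of a mean shift this large arising
    by chance if both samples came from the same distribution."""
    rng = random.Random(seed)
    observed = abs(sum(after) / len(after) - sum(before) / len(before))
    pooled = list(before) + list(after)
    hits = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # relabel the pooled measurements at random
        left, right = pooled[:len(before)], pooled[len(before):]
        diff = abs(sum(right) / len(right) - sum(left) / len(left))
        if diff >= observed:
            hits += 1
    return hits / n_permutations

# Heavy-tailed benchmark data: occasional outliers (e.g. a GC pause).
before = [100, 150, 98, 102, 300, 99]
after  = [115, 160, 113, 118, 310, 114]
p = permutation_pvalue(before, after)
print(p)
```

The trade-off is cost: permutation tests are expensive, which is part of why the algorithmic speedups described in the linked papers matter for running this in CI.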
E.g. the About page, as of this writing, does not say anything about what the project does.
https://otava.apache.org/docs/overview
Also relevant: "The Use of Change Point Detection to Identify Software Performance Regressions in a Continuous Integration System" - https://arxiv.org/abs/2003.00584 (and this MongoDB engineering blog post: https://www.mongodb.com/company/blog/engineering/using-chang...)
(https://en.wikipedia.org/wiki/Change_detection explains what change detection is.)