<html>
<head><meta charset="utf-8"><title>Quantifying noise in measurements · t-compiler/performance · Zulip Chat Archive</title></head>
<h2>Stream: <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/index.html">t-compiler/performance</a></h2>
<h3>Topic: <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html">Quantifying noise in measurements</a></h3>

<hr>

<base href="https://rust-lang.zulipchat.com">

<link href="https://rust-lang.github.io/zulip_archive/style.css" rel="stylesheet">

<a name="229713061"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/229713061" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> Simon Vandel Sillesen <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#229713061">(Mar 10 2021 at 18:01)</a>:</h4>
<p>I think the perf site does not currently do a great job of showing how noisy a benchmark is.</p>
<p>What are your thoughts on adding a summary statistic, like standard deviation or error bars in percentage, beside each benchmark on the compare page? Implementation-wise, we could extend the noise run to have more iterations (3-5) and then display the summary statistic of that population.</p>



<a name="229714688"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/229714688" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#229714688">(Mar 10 2021 at 18:12)</a>:</h4>
<p>I'd personally love this. It's hard to know when to trust a benchmark and when not to.</p>



<a name="229715215"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/229715215" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> oliver <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#229715215">(Mar 10 2021 at 18:15)</a>:</h4>
<p>which tools are there to integrate summary statistics into static code analysis?</p>



<a name="229727317"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/229727317" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> Simon Vandel Sillesen <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#229727317">(Mar 10 2021 at 19:27)</a>:</h4>
<p><span class="user-mention" data-user-id="281739">@oliver</span> for what I'm imagining above, there's no static code analysis involved. It's just calculating the summary statistic based on 3-5 repeated runs of the benchmarks. It's pretty simple, but hopefully better than the current state</p>



<a name="229730788"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/229730788" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> oliver <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#229730788">(Mar 10 2021 at 19:48)</a>:</h4>
<p>but then those parts of the code could be collected and traced, right? Indexed so they point to where in the code the measurement came from.</p>



<a name="229740331"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/229740331" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> Simon Vandel Sillesen <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#229740331">(Mar 10 2021 at 20:41)</a>:</h4>
<p>Right, okay, you mean identifying <em>where</em> in the code the noise comes from. That could also be useful, and probably also more difficult.</p>



<a name="230083767"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/230083767" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> oliver <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#230083767">(Mar 12 2021 at 19:23)</a>:</h4>
<p>it would be interesting to try to predict the perf outcome when changing a particular code section</p>



<a name="230086510"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/230086510" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> oliver <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#230086510">(Mar 12 2021 at 19:45)</a>:</h4>
<p><a href="https://arxiv.org/abs/1810.05286">https://arxiv.org/abs/1810.05286</a><br>
"While we cannot compute exactly the set of impacted tests for a particular change, we can approximate this computation by learning to identify which tests would have reported a failure, based on historical data."</p>



<a name="230086852"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/230086852" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> oliver <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#230086852">(Mar 12 2021 at 19:47)</a>:</h4>
<p>"Change level features consist of:<br>
•Change history for files is useful to identify active areas of development which are more prone to breakages. We thus use features indicating number of changes made to modified files in the last 3, 14, and 56 days.<br>
•File cardinality, or number of files touched in a change. Large changes are harder to review and we assume that probability of a test failure is lower for small changes.<br>
•Target cardinality, i.e. number of test targets triggered by a change. If certain files are used in many projects then a small change in them might trigger unexpected behavior.<br>
•Our projects use multiple programming languages, which have different breakage patterns. We use a fixed-size bit vector to identify extensions of files modified in a change.<br>
•Number of distinct authors for files in a change might indicate common code that is used in multiple projects and requires extra attention.<br>
Target level features consist of:<br>
•Historical failure rates of a target are a good baseline for the probability of failure. We include a vector of failure rates in the last 7, 14, 28 and 56 days as a feature.<br>
•Project name is useful to identify an area the target covers and categorize breakage patterns based on a project.<br>
•Number of tests in a target can be used as a proxy of the code area covered by it.<br>
Cross features are:<br>
•Minimal distance between one of the files touched in a change and the prediction target. The feature approximates how close are changes to a given target and the significance of the impact on it.<br>
•Number of common tokens shared by paths of modified files and test defines lexical distance to proxy human perceived relevance."</p>



<a name="245339478"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245339478" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245339478">(Jul 08 2021 at 16:46)</a>:</h4>
<p>Revisiting this thread. As I see it there are two types of noise:</p>
<ul>
<li>Noise over N runs of the same benchmark for the same commit. This is what <span class="user-mention" data-user-id="139182">@Simon Vandel Sillesen</span> mentions above</li>
<li>Historic noise of a benchmark - i.e., std deviation over the last N runs of this benchmark </li>
</ul>
<p>Right now we could keep track of a rolling average of deviation for some number of commits in the past. For instance, when recording the result of a benchmark run, look at the results from the previous N runs of that benchmark, calculate their mean, and compute the std deviation from that mean for those historic runs. Record that std deviation as the noisiness of that benchmark at the point the new run happens.</p>
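<p>The rolling-deviation idea above could be sketched like this in stdlib Rust (a hypothetical helper, not actual rustc-perf code):</p>

```rust
// Hypothetical helper (not rustc-perf code): given the measurements from
// the previous N runs of a benchmark, compute their mean and the standard
// deviation from that mean as a "noisiness" score for the benchmark.
fn noisiness(previous_runs: &[f64]) -> Option<(f64, f64)> {
    if previous_runs.is_empty() {
        return None;
    }
    let n = previous_runs.len() as f64;
    let mean = previous_runs.iter().sum::<f64>() / n;
    // Population variance: mean squared deviation from the mean.
    let variance = previous_runs
        .iter()
        .map(|x| (x - mean).powi(2))
        .sum::<f64>()
        / n;
    Some((mean, variance.sqrt()))
}
```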



<a name="245339743"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245339743" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245339743">(Jul 08 2021 at 16:48)</a>:</h4>
<p>It seems like the best time to do this is when recording the results of a perf run, but we probably could also do this after the fact if we're able to find the results that immediately preceded the commit in question.</p>



<a name="245339993"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245339993" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245339993">(Jul 08 2021 at 16:50)</a>:</h4>
<p>I'm still not familiar with the details of how the collector collects results, but if everyone agrees that it would be useful to collect historical noise as measured by the std deviation of N previous runs of the benchmark, I could try to take a look at implementing it.</p>



<a name="245340566"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245340566" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> simulacrum <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245340566">(Jul 08 2021 at 16:55)</a>:</h4>
<p>We run several times on the same benchmark</p>



<a name="245340592"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245340592" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> simulacrum <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245340592">(Jul 08 2021 at 16:55)</a>:</h4>
<p>So there's likely at least 2 runs available to give a little measure of noise</p>



<a name="245340676"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245340676" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> simulacrum <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245340676">(Jul 08 2021 at 16:56)</a>:</h4>
<p>But I think historical is likely to be more practical, or having dedicated "noise measurements" every few months</p>



<a name="245345064"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245345064" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245345064">(Jul 08 2021 at 17:31)</a>:</h4>
<p>In theory one could also get more signal out of looking at all benchmarks in a run instead of individual benchmarks. But the resulting statement is more nebulous "overall it looks like a small regression, but we can't point to any individual benchmark since they're all in their recent noise range", so that's probably less useful.</p>



<a name="245375882"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245375882" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245375882">(Jul 08 2021 at 21:39)</a>:</h4>
<blockquote>
<p>Right now we could keep track of a rolling average of deviation for some number of commits in the past.</p>
</blockquote>
<p>You do NOT want the rolling average. Averages are sensitive to big outliers. And when we're measuring performance deltas then real optimizations or regressions are the big changes. To get the noise we need something histogram-based so we can exclude the big changes.</p>



<a name="245455031"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245455031" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> pnkfelix <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245455031">(Jul 09 2021 at 15:06)</a>:</h4>
<p>mm. I heard a recent argument that trimmed mean is a good way to go</p>
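<p>For reference, a trimmed mean drops the extreme ends of the sample before averaging, so a single anomalous run can't drag the estimate around. An illustrative stdlib-Rust sketch (names are made up):</p>

```rust
// Illustrative sketch of a trimmed mean: sort the samples, drop the lowest
// and highest `trim` fraction, and average what's left.
// `trim` is assumed to be in [0.0, 0.5).
fn trimmed_mean(samples: &[f64], trim: f64) -> Option<f64> {
    let mut sorted = samples.to_vec();
    sorted.sort_by(|a, b| a.partial_cmp(b).unwrap());
    // Number of samples to drop from each end.
    let k = (sorted.len() as f64 * trim) as usize;
    let kept = &sorted[k..sorted.len() - k];
    if kept.is_empty() {
        return None;
    }
    Some(kept.iter().sum::<f64>() / kept.len() as f64)
}
```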



<a name="245455112"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245455112" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> pnkfelix <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245455112">(Jul 09 2021 at 15:06)</a>:</h4>
<p>But to be fair that was more about metrics w.r.t. availability and latency</p>



<a name="245455140"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245455140" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> pnkfelix <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245455140">(Jul 09 2021 at 15:07)</a>:</h4>
<p>(well, I guess compile-times <em>are</em> latency… heh.)</p>



<a name="245458691"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245458691" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245458691">(Jul 09 2021 at 15:35)</a>:</h4>
<p>There have to be good statistical methods for determining variance that take into account outliers, no? Histograms are great, and we'll want those, but we'll also want a method for categorizing noise that automated systems can take advantage of, and histograms don't allow for that.</p>



<a name="245458718"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245458718" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245458718">(Jul 09 2021 at 15:35)</a>:</h4>
<p>Well, since we'd be taking abs(delta) trimming only makes sense on one end of the distribution. And to do that kind of trimming you need to keep multiple samples in memory rather than just calculating a moving average. At that point you might as well build a histogram and later pick a percentile as your cutoff point based on bucket populations.</p>



<a name="245459157"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245459157" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245459157">(Jul 09 2021 at 15:39)</a>:</h4>
<p>How do histograms not let you cut off outliers? They're basically a discrete version of a CDF. So you can cut off at some percentile.</p>
<p>Just to make sure we're talking about the same thing: I mean a histogram of deltas. With noise we expect a left-heavy distribution with the small buckets containing all the noise and the significant changes being the outliers on the right</p>
<p>So we can say that most changes aren't performance-impacting (this is a heuristic based on looking at past charts, not an iron law) so we expect a large percentile of deltas to be noise.</p>



<a name="245459176"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245459176" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245459176">(Jul 09 2021 at 15:39)</a>:</h4>
<p>I see - I think we agree.</p>



<a name="245459395"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245459395" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245459395">(Jul 09 2021 at 15:41)</a>:</h4>
<p>That'll ignore the small but significant changes which are within the noise floor of a single change and only become apparent over multiple samples. Getting those is where things start to get tricky.</p>



<a name="245463837"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245463837" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245463837">(Jul 09 2021 at 16:18)</a>:</h4>
<p>On a practical level, I'm wondering _when_ the histogram should be calculated. We could calculate it inside the collector when recording the results of a run, looking up previous runs that match the same description (i.e., the same "pstat_series"), but we could also do this on demand, since storing something new in the database would only serve to cache the lookup.</p>



<a name="245463989"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245463989" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245463989">(Jul 09 2021 at 16:19)</a>:</h4>
<p>One issue is that we don't currently keep track of _when_ runs are done. Most likely we'll want to find the N runs that happened most recently relative to the run in question (by the way, by "run" I mean what the DB refers to as a "collection").</p>



<a name="245464575"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245464575" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245464575">(Jul 09 2021 at 16:24)</a>:</h4>
<p><span class="user-mention" data-user-id="116122">@simulacrum</span> what do you think about adding a created_at field to the collection table that would allow us to know _when_ perf runs happened? This could be used to compare runs temporally.</p>



<a name="245465052"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245465052" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> simulacrum <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245465052">(Jul 09 2021 at 16:28)</a>:</h4>
<p>Hm, I'm not sure what you would want to use it for. I think it seems fine to add if necessary, but I feel like "previous N commits" is a better notion of recent.</p>



<a name="245465336"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245465336" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245465336">(Jul 09 2021 at 16:30)</a>:</h4>
<p>Two concerns with previous N commits:</p>
<ul>
<li>Try runs are from different branches and don't have a parent-child relationship. If two try runs happen, don't we want the results of one to factor into the noise calculation of the other?</li>
<li>Walking commit graphs sounds more expensive than a db lookup, but maybe not....</li>
</ul>



<a name="245466282"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245466282" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> simulacrum <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245466282">(Jul 09 2021 at 16:38)</a>:</h4>
<p>previous N commits in the 'parent commit' sense</p>



<a name="245466346"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245466346" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> simulacrum <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245466346">(Jul 09 2021 at 16:38)</a>:</h4>
<p>we can store a vec for main, and try commits in theory have one lookup to get to the parent, and then you're on main</p>



<a name="245466667"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245466667" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245466667">(Jul 09 2021 at 16:41)</a>:</h4>
<p>Ok cool we'll start with that. I might just prototype something next week.</p>



<a name="245605323"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245605323" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245605323">(Jul 11 2021 at 12:43)</a>:</h4>
<p>So I have something working, but one question I have is how we handle benchmarks where a change has led to a new mean. The proposal put forth by <span class="user-mention" data-user-id="330154">@The 8472</span> works really well for benchmarks that maintain a steady mean over time. But some changes fundamentally shift the mean, which introduces more variance that is <em>not</em> noise. A histogram approach won't necessarily help here, as the mean is thrown off and <em>all</em> deltas will have more variance from the mean than they should.</p>



<a name="245605413"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245605413" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245605413">(Jul 11 2021 at 12:45)</a>:</h4>
<p>Looks like step detection can help with this. I'll look into that.</p>



<a name="245605487"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245605487" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245605487">(Jul 11 2021 at 12:47)</a>:</h4>
<p>Don't measure deltas relative to a mean, measure deltas between adjacent commits.</p>



<a name="245605602"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245605602" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245605602">(Jul 11 2021 at 12:50)</a>:</h4>
<p>Given recent commits <em>a, b, c, d</em>, calculate the deltas <em>abs(a-b), abs(b-c), abs(c-d)</em> and build a histogram of the absolute deltas.<br>
For a new perf run <em>e</em>, calculate the delta <em>abs(d-e)</em> and then look at which bucket it falls into on the histogram. The higher the percentile, the more likely it's not noise.</p>
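<p>The adjacent-delta scheme could be sketched roughly like this in stdlib Rust (illustrative names, not rustc-perf code; a real implementation would use hdrhistogram buckets instead of the naive percentile lookup):</p>

```rust
// Absolute deltas between adjacent commits: for measurements a, b, c, d
// this yields abs(a-b), abs(b-c), abs(c-d).
fn adjacent_deltas(history: &[f64]) -> Vec<f64> {
    history.windows(2).map(|w| (w[0] - w[1]).abs()).collect()
}

// Fraction of historical deltas strictly below `delta`: a naive stand-in
// for the histogram percentile lookup. The higher the fraction, the less
// likely the new delta is noise.
fn percentile_of(deltas: &[f64], delta: f64) -> f64 {
    if deltas.is_empty() {
        return 0.0;
    }
    let below = deltas.iter().filter(|&&d| d < delta).count();
    below as f64 / deltas.len() as f64
}
```

<p>For a new run <em>e</em>, one would compute <em>abs(d-e)</em> and query its percentile against the historical deltas; a value near 1.0 suggests a real change rather than noise.</p>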



<a name="245605610"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245605610" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245605610">(Jul 11 2021 at 12:50)</a>:</h4>
<p>How would you define the buckets?</p>



<a name="245605663"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245605663" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245605663">(Jul 11 2021 at 12:52)</a>:</h4>
<p>look at hdrhistogram. you configure it with a value range (a low and high bound) and a desired precision in significant figures, and it figures out the bucket sizes. you can query it for percentiles.</p>



<a name="245605684"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245605684" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245605684">(Jul 11 2021 at 12:53)</a>:</h4>
<p><a href="https://crates.io/crates/hdrhistogram">https://crates.io/crates/hdrhistogram</a></p>



<a name="245605751"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245605751" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245605751">(Jul 11 2021 at 12:54)</a>:</h4>
<p>I suppose we could also just play it by ear since at first we only want the binary decision of "is this noise"</p>



<a name="245605810"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245605810" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245605810">(Jul 11 2021 at 12:56)</a>:</h4>
<p>Oh, you meant how to choose which bucket to pick as threshold and not the bucket sizes?</p>



<a name="245605817"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245605817" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245605817">(Jul 11 2021 at 12:57)</a>:</h4>
<p>Yes</p>



<a name="245605876"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245605876" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245605876">(Jul 11 2021 at 12:59)</a>:</h4>
<p>yeah, I'd just print a histogram for each benchmark and pick a threshold by hand so that whatever we manually deemed noteworthy in the past is above it</p>



<a name="245605887"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245605887" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245605887">(Jul 11 2021 at 12:59)</a>:</h4>
<p>I guess it's a pretty high quantile since there are lots of commits each week but only a few noteworthy items</p>



<a name="245605995"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245605995" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245605995">(Jul 11 2021 at 13:01)</a>:</h4>
<p>I also want to classify benchmarks as noisy or not. I think I should be able to measure the mean and standard deviation of changes (i.e., the delta between two commits) and then calculate the coefficient of variation on that population (trimming outliers at the top, which are more likely to be real changes).</p>
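<p>A minimal sketch of that classification, with made-up delta values (these are not rustc-perf numbers; in practice the deltas would come from adjacent artifacts):</p>

```rust
// Sketch: classify a benchmark as noisy via the coefficient of
// variation of its commit-to-commit deltas, after trimming the
// largest deltas (which are more likely to be real changes).
fn coefficient_of_variation(deltas: &[f64]) -> f64 {
    let n = deltas.len() as f64;
    let mean = deltas.iter().sum::<f64>() / n;
    let var = deltas.iter().map(|d| (d - mean).powi(2)).sum::<f64>() / n;
    var.sqrt() / mean
}

fn main() {
    // Hypothetical deltas; the 5.0 stands in for a real change we trim away.
    let mut deltas = vec![0.2, 0.3, 0.25, 0.28, 0.22, 5.0];
    deltas.sort_by(|a, b| a.partial_cmp(b).unwrap());
    deltas.truncate(deltas.len() * 3 / 4); // drop the top 25%
    println!("cv = {:.3}", coefficient_of_variation(&deltas));
}
```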



<a name="245606055"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245606055" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245606055">(Jul 11 2021 at 13:03)</a>:</h4>
<p>If those deltas have a high coefficient of variation, then the benchmark is noisy. And we can tweak that threshold over time.</p>



<a name="245606066"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245606066" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245606066">(Jul 11 2021 at 13:03)</a>:</h4>
<p>You can also get that by looking at the histogram: pick a lower percentile to figure out where the noise is, then take the ratio of a low-quantile delta (the noise) to the magnitude of a non-delta sample.</p>



<a name="245606134"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245606134" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245606134">(Jul 11 2021 at 13:05)</a>:</h4>
<p>I guess I just am having a harder time translating "looking at the histogram" into code</p>



<a name="245607103"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245607103" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245607103">(Jul 11 2021 at 13:27)</a>:</h4>
<p>Yeah that's the heuristic part based on past experience, the one parameter we need to tune.</p>
<p>If 50% of all rustc contributors were people that made big feature changes no matter the cost in performance, and the other 50% then did crazy optimizations on the next commit to fix the regression, then the charts would go up and down like a sawtooth. Then most changes would be significant and occupy a major fraction of the histogram, and you'd only find noise in the bottom decile, for example.</p>
<p>But that's not what happens, big performance changes that we care about are rare, so the noise probably dominates more than 50% of it and the interesting changes must be somewhere in the top decile or maybe even higher.</p>
<p>Looking at the graphs from 2021-06-02 onwards, most benchmarks experienced at least one big step change and often several smaller ones that obviously rise above the noise.<br>
But a few are troublesome. E.g. <em>deep-vector-check</em> is noisy but clearly moves the mean over time, though often within the noise. The histogram approach will have trouble categorizing something like that from a single perf run: you either suffer false positives or false negatives from single samples. But the histogram for something like that should look different. The easy ones should be positive-skewed while the difficult ones are flatter or even negative-skewed.</p>



<a name="245607478"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245607478" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245607478">(Jul 11 2021 at 13:36)</a>:</h4>
<p>I guess we don't really need a histogram. The sample count is small enough that one could calculate statistical measures directly on them.</p>



<a name="245607742"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245607742" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245607742">(Jul 11 2021 at 13:45)</a>:</h4>
<p>I think I'm settling on calculating commit deltas, ordering them from smallest to largest, removing the top 25% (since we're not after large changes), taking the mean of that subsection of deltas, and then dividing by the mean of the benchmark population to normalize.</p>



<a name="245607748"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245607748" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245607748">(Jul 11 2021 at 13:45)</a>:</h4>
<p><a href="https://docs.rs/average/0.9.2/average/index.html">https://docs.rs/average/0.9.2/average/index.html</a></p>



<a name="245607753"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245607753" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245607753">(Jul 11 2021 at 13:45)</a>:</h4>
<p>That should answer the question of how much non-significant change happens on average.</p>



<a name="245607761"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245607761" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245607761">(Jul 11 2021 at 13:45)</a>:</h4>
<p>If it's above a certain percentage, we can label that noise.</p>



<a name="245607982"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245607982" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245607982">(Jul 11 2021 at 13:50)</a>:</h4>
<blockquote>
<p>removing the top 25% since we're not after large changes, measuring the mean of that subsection of deltas</p>
</blockquote>
<p>That smells like bad statistics. I think that would result in classifying <em>more</em> than 25% of all changes as non-noise.</p>
<p>Taking the mean of an already one-side-trimmed sample set just means you get an even lower value than whatever the highest one below the 75th percentile was.</p>



<a name="245608240"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245608240" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245608240">(Jul 11 2021 at 13:57)</a>:</h4>
<p>I agree. But I think what we need is a measure of how many changes are above some arbitrary noise threshold. If a large percentage of changes are above that threshold, then we can also classify the benchmark as highly variable. Highly variable differs from noisy in that highly variable benchmarks might actually be measuring real performance sensitivity. Ideally our benchmarks would only experience significant change some small percentage of the time.</p>
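<p>As a sketch of that binary classification (the 1% threshold and 10% budget below are made-up, tunable numbers, not anything rustc-perf uses):</p>

```rust
// Sketch: call a benchmark "highly variable" when too many of its
// relative deltas exceed a significance threshold. Both parameters
// (threshold and budget) are arbitrary and meant to be tuned.
fn is_highly_variable(relative_deltas: &[f64], threshold: f64, budget: f64) -> bool {
    let significant = relative_deltas.iter().filter(|d| d.abs() > threshold).count();
    significant as f64 / relative_deltas.len() as f64 > budget
}

fn main() {
    let quiet = [0.001, -0.002, 0.0005, 0.003, -0.001];
    let jumpy = [0.02, -0.015, 0.001, 0.03, -0.025];
    println!("quiet: {}", is_highly_variable(&quiet, 0.01, 0.10));
    println!("jumpy: {}", is_highly_variable(&jumpy, 0.01, 0.10));
}
```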



<a name="245608331"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245608331" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245608331">(Jul 11 2021 at 13:59)</a>:</h4>
<p>Right, e.g. the CGU partitioning stuff isn't really noise; it's just highly responsive to perturbations</p>



<a name="245673257"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245673257" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245673257">(Jul 12 2021 at 11:25)</a>:</h4>
<p>The first PR of what is likely to be many is up: <a href="https://github.com/rust-lang/rustc-perf/pull/902">https://github.com/rust-lang/rustc-perf/pull/902</a>.</p>



<a name="245735853"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245735853" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> anp <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245735853">(Jul 12 2021 at 19:27)</a>:</h4>
<p>fwiw the "change detector" i wrote for lolbench used kernel density estimation and deviations from the mean of a trailing window (30 nightlies or sth iirc) <a href="https://github.com/anp/lolbench/blob/main/src/analysis.rs#L356">https://github.com/anp/lolbench/blob/main/src/analysis.rs#L356</a></p>
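<p>A minimal sketch of the trailing-window part of that idea (the window contents, window size, and the 3-sigma cutoff here are made up; lolbench's actual analysis also uses kernel density estimation, which this omits):</p>

```rust
// Sketch of a trailing-window detector: flag a new sample that sits
// more than `k` standard deviations from the mean of the preceding
// window of samples.
fn is_anomalous(window: &[f64], sample: f64, k: f64) -> bool {
    let n = window.len() as f64;
    let mean = window.iter().sum::<f64>() / n;
    let std = (window.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n).sqrt();
    (sample - mean).abs() > k * std
}

fn main() {
    let window = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.1, 10.0];
    println!("{}", is_anomalous(&window, 10.05, 3.0)); // within the noise
    println!("{}", is_anomalous(&window, 11.5, 3.0)); // a clear jump
}
```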



<a name="245735959"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245735959" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> anp <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245735959">(Jul 12 2021 at 19:28)</a>:</h4>
<p>you can get good results with k-means clustering if you know the <code>k</code>, but that's hard if you have a bunch of time series</p>



<a name="245736223"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245736223" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> anp <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245736223">(Jul 12 2021 at 19:30)</a>:</h4>
<p>i kept looking at time series analysis techniques and i think it might be a rabbit hole to try and treat it like a time series vs. just a sample from an unknown distribution</p>



<a name="245738892"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245738892" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245738892">(Jul 12 2021 at 19:52)</a>:</h4>
<p>It depends on whether you want to pull things out of the noise floor or not. If not then just looking at the distribution of deltas  should work well enough. If you want to find jumps that are only apparent over multiple samples you have to treat it as a time series.</p>



<a name="245800798"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245800798" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245800798">(Jul 13 2021 at 09:46)</a>:</h4>
<p>This is certainly pushing the limits of the statistics 101 course I took in uni, so forgive me if I have some catching up to do. Looking into the KDE technique, I'm not sure it buys us anything here, which I believe is what <span class="user-mention" data-user-id="330154">@The 8472</span> is saying. I don't think we need to detect jumps if we're looking at deltas, though it might eventually be interesting to detect large noise through time series analysis. But if we assume that noise is equally likely to happen on every perf run, we should be able to detect noise by seeing too many "significant" changes.</p>



<a name="245866031"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245866031" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> anp <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245866031">(Jul 13 2021 at 18:19)</a>:</h4>
<p>mm yeah that makes sense, your analysis is only running pairwise, not on historical data?</p>



<a name="245867581"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245867581" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> rylev <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245867581">(Jul 13 2021 at 18:31)</a>:</h4>
<p>Currently the analysis is on historical data, but we're only looking at deltas between adjacent performance runs. We don't do any analysis on the population as a whole (other than population analysis of the deltas).</p>



<a name="245871125"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245871125" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245871125">(Jul 13 2021 at 18:58)</a>:</h4>
<blockquote>
<p>Looking into the KDE technique, I'm not sure if it buys us anything here which I believe is what @The 8472 is saying.</p>
</blockquote>
<p>Partially. Let's say we have an artificial sample set of 0.4, 0.5, 0.4, 0.5, 0.4, 0.5, 0.4, 0.5, 0.4, 0.5, 0.41, 0.51, 0.41, 0.51, 0.41, 0.51, 0.41, 0.51, 0.41, 0.51... then obviously the mean jumped by 0.01. The delta approach will discard that change because, for a single sample, it's much smaller than what we qualified as noise, even though it is significant once you look at many samples. But trying to extract these from arbitrary timelines is hard and beyond my statistics knowledge. Clustering seems like it might find some of those, on good days :D</p>
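<p>The same artificial series makes the point concrete: every per-commit delta is dominated by the ±0.1 alternation, so a delta check misses the shift, but comparing the means of the two halves exposes it (a sketch, not rustc-perf code):</p>

```rust
// The alternating series from above: ten samples of 0.4/0.5, then
// ten of 0.41/0.51. The 0.01 shift is invisible in any single delta
// but shows up directly in the difference of the window means.
fn mean(xs: &[f64]) -> f64 {
    xs.iter().sum::<f64>() / xs.len() as f64
}

fn main() {
    let mut series = Vec::new();
    for _ in 0..5 {
        series.push(0.4);
        series.push(0.5);
    }
    for _ in 0..5 {
        series.push(0.41);
        series.push(0.51);
    }
    let (before, after) = series.split_at(10);
    println!("shift = {:.3}", mean(after) - mean(before)); // shift = 0.010
}
```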



<a name="245871947"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245871947" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> anp <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245871947">(Jul 13 2021 at 19:04)</a>:</h4>
<p>yeah this makes sense!</p>



<a name="245969034"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245969034" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> Eh2406 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245969034">(Jul 14 2021 at 14:39)</a>:</h4>
<p>(Not statistically justified, but) Something inspired by "Jenks natural breaks" would call that out pretty easily. <br>
Split the history into N contiguous chunks. Pick the change dates to minimize <code>sum(standard deviation(data within chunk))</code>.</p>



<a name="245969277"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245969277" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> Eh2406 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245969277">(Jul 14 2021 at 14:41)</a>:</h4>
<p>And yes, this is a reformulation of one dimensional k-means clustering.</p>



<a name="245969498"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/245969498" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> Eh2406 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#245969498">(Jul 14 2021 at 14:43)</a>:</h4>
<p>but this formulation supports efficient dynamic programming solutions.</p>
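<p>A sketch of that dynamic program. One liberty taken: it minimizes the sum of within-chunk <em>squared errors</em> (the classic 1-D k-means / Jenks objective, which prefix sums make O(1) per chunk) rather than the sum of standard deviations, and all data is made up:</p>

```rust
// O(k * n^2) dynamic program for splitting a series into k contiguous
// chunks while minimizing the total within-chunk squared error.
fn chunk_cost(prefix: &[f64], prefix_sq: &[f64], i: usize, j: usize) -> f64 {
    // squared error of the chunk covering samples i..j (j exclusive)
    let n = (j - i) as f64;
    let s = prefix[j] - prefix[i];
    (prefix_sq[j] - prefix_sq[i]) - s * s / n
}

fn best_split(data: &[f64], k: usize) -> f64 {
    let n = data.len();
    let mut prefix = vec![0.0; n + 1];
    let mut prefix_sq = vec![0.0; n + 1];
    for (i, &x) in data.iter().enumerate() {
        prefix[i + 1] = prefix[i] + x;
        prefix_sq[i + 1] = prefix_sq[i] + x * x;
    }
    // dp[j] = minimal cost of covering data[..j] with the current chunk count
    let mut dp: Vec<f64> = (0..=n)
        .map(|j| if j == 0 { 0.0 } else { chunk_cost(&prefix, &prefix_sq, 0, j) })
        .collect();
    for _ in 1..k {
        let mut next = vec![f64::INFINITY; n + 1];
        for j in 2..=n {
            for i in 1..j {
                let c = dp[i] + chunk_cost(&prefix, &prefix_sq, i, j);
                if c < next[j] {
                    next[j] = c;
                }
            }
        }
        dp = next;
    }
    dp[n]
}

fn main() {
    // A step change at index 4: two chunks fit it almost perfectly.
    let data = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0];
    println!("k=1 cost: {:.2}", best_split(&data, 1));
    println!("k=2 cost: {:.2}", best_split(&data, 2));
}
```

<p>Plotting this cost against k and looking for the bend is one way to pick the number of chunks.</p>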



<a name="246026366"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/246026366" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> anp <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#246026366">(Jul 14 2021 at 21:49)</a>:</h4>
<p>my fear with that kind of approach is being able to pick a good N for dozens/hundreds of different series that might have very different clustering. are there good ways to also iterate on the number of clusters?</p>



<a name="246026629"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/246026629" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> The 8472 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#246026629">(Jul 14 2021 at 21:52)</a>:</h4>
<p>machine learning? <span aria-label="sweat smile" class="emoji emoji-1f605" role="img" title="sweat smile">:sweat_smile:</span>  (I recently saw some paper on clustering beating sota normal algorithms)</p>



<a name="246027089"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/246027089" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> Eh2406 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#246027089">(Jul 14 2021 at 21:56)</a>:</h4>
<p>I don't know of a statistically justified one. But I bet the <code>minimize(sum(standard deviation(data within chunk)))</code> vs. number-of-chunks graph will have a clear bend in it.<br>
Or do the calculation for all N in 0..=data-points, and plot, for each day, a histogram of the number of times that day was a break point.</p>



<a name="246028902"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/246028902" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> anp <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#246028902">(Jul 14 2021 at 22:14)</a>:</h4>
<p>you joke about ML but i was vaguely wondering if it would be worth it last time i spent a while trying to wrap my brain around this</p>



<a name="246028966"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/246028966" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> anp <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#246028966">(Jul 14 2021 at 22:15)</a>:</h4>
<p>hm maybe it's worth trying to do clustering and pick the cluster number after all, that could be a pretty reasonable approach</p>



<a name="246030098"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/246030098" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> Eh2406 <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#246030098">(Jul 14 2021 at 22:27)</a>:</h4>
<p>(I'd want to constrain the clusters to be continuous time ranges)</p>



<a name="246030181"></a>
<h4><a href="https://rust-lang.zulipchat.com#narrow/stream/247081-t-compiler/performance/topic/Quantifying%20noise%20in%20measurements/near/246030181" class="zl"><img src="https://rust-lang.github.io/zulip_archive/assets/img/zulip.svg" alt="view this post on Zulip" style="width:20px;height:20px;"></a> anp <a href="https://rust-lang.github.io/zulip_archive/stream/247081-t-compiler/performance/topic/Quantifying.20noise.20in.20measurements.html#246030181">(Jul 14 2021 at 22:27)</a>:</h4>
<p>agreed</p>



<hr><p>Last updated: Aug 07 2021 at 22:04 UTC</p>
</html>