<!DOCTYPE html>
<html lang="en">
<head>

<meta charset="utf-8">

    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta name="description" content="">
    <meta name="author" content="">

    <!-- Styles -->
    <link href="css/bootstrap.css" rel="stylesheet">
    <style>
      body {
        padding-top: 60px; /* 60px to make the container go all the way to the bottom of the topbar */
      }
    </style>
    <link href="css/bootstrap-responsive.css" rel="stylesheet">


<title>Poisson Binomial Test</title>
<script type="text/javascript">

  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-32661354-1']);
  _gaq.push(['_trackPageview']);

  (function() {
    var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
  })();

</script>
</head>
<body>

    <div class="navbar navbar-fixed-top">
      <div class="navbar-inner">
        <div class="container">
          <a class="brand" href="index.html">Poisson Binomial Test</a>
            <ul class="nav">
              <li><a href="index.html">Produce a report</a></li>
              <li><a href="fileFormat.html">File Format</a></li>
              <li class="active"><a href="#">Experimental Setup</a></li>
            </ul>
        </div>
      </div>
    </div>

<div class="container">
<h2>Experimental Setup</h2>

<h3>Building the Context (selecting the collection of datasets)</h3>
<p>
In the Poisson binomial test, the datasets are assumed to be sampled i.i.d. from a <i>context</i>, an unknown probability distribution over all possible tasks. This means that choosing the datasets after observing your algorithm's performance is invalid. Instead, you should select the collection of datasets a priori, according to the type of problem you want to solve. For example, if you build a learning algorithm that is supposed to be <i>good</i> on machine vision tasks, you should collect several machine vision datasets and compare your algorithm against some state-of-the-art algorithms.
</p>
<p>
It is not valid to take one big dataset and break it into several datasets to increase the number of tasks in the context. However, you may consider that your context is the uniform distribution over the collection of currently observed datasets. In this case, you can split your datasets at will, but you must mention it in your paper, and you cannot claim that your algorithm would generalize to other similar tasks.
</p>

<h3>Performing the experiment</h3>
<p>
You should split each dataset into a training set and a testing set. Each algorithm must be trained on the same training set to produce its predictor, and the predictor must then be evaluated on each testing sample. Keeping only the average empirical loss is not sufficient; you should keep the full stream of per-sample errors. Moreover, the streams from the different algorithms must be aligned, i.e., the i-th entry of each stream must correspond to the same test sample. This requirement makes it possible to account for the dependencies induced by comparing different algorithms on the same training and testing data.
</p>
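<p>
As a minimal sketch, this is one way to produce aligned error streams (the toy data, the two placeholder predictors, and the helper name <code>zero_one_errors</code> are hypothetical illustrations; only the idea of keeping the full, aligned per-sample error streams comes from the requirement above):
</p>

```python
# Sketch: aligned per-sample error streams for two algorithms.
# Both predictors are evaluated on the same test samples, in the same order,
# so entry i of each stream refers to the same test sample.

def zero_one_errors(predict, X_test, y_test):
    """Return the full stream of per-sample 0/1 errors, not just their mean."""
    return [int(predict(x) != y) for x, y in zip(X_test, y_test)]

# Toy data and two trivial "algorithms" (placeholders for real learners,
# both of which would be fit on the same shared training set).
X_test = [[0], [1], [2]]
y_test = [0, 1, 1]

predictor_a = lambda x: x[0] % 2   # hypothetical learned predictor
predictor_b = lambda x: 1          # hypothetical constant baseline

stream_a = zero_one_errors(predictor_a, X_test, y_test)
stream_b = zero_one_errors(predictor_b, X_test, y_test)

# Alignment check: one entry per test sample, in the same order.
assert len(stream_a) == len(stream_b) == len(y_test)
```

<p>
Storing the streams column by column (one column per algorithm, one row per test sample) preserves the pairing between algorithms on each sample; averaging before storage would destroy exactly the dependency information the test needs.
</p>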
<!--
<h3>Adding some more precision</h3>
<p>
To obtain better precision on the final probability P( algorithmA better algorithmB | context ) provided in the report, you can repeat the experiment several times and compute the expectation. This should be done by randomly re-splitting the training and testing sets several times. Note that the different experiments won't be independent, so the variance of those measurements will be meaningless. However, the expectation is valid and will provide a more reliable answer.
</p>
-->
</div>
</body>
</html>
