<!DOCTYPE html>
<html>
    <head>
    <style type="text/css">
        body, p, td, h1, h2, h3 {font-family: "museo-sans-1", "museo-sans-2", arial, helvetica, sans-serif;}
    </style>
    </head>

<body>

<h1>Grml Feedback Benchmarking</h1>

<hr noshade size=1>
    
<p>
    In the present setup, 1000 requests were sent with a parallelism of 50 (see the ab command line near each chart). When a pair of charts is displayed, the left one plots the Apache benchmark and the right one the integrated Django test server.
</p>

<table border="0" cellpadding="0" cellspacing="2">
<tr>
	<td colspan="2">
        <h2>Basic load test</h2>
        <p>The following two charts show the testing of the application's main page, which returns static content. As expected, the Apache server performs better, with a mean latency of 23 ms (standard deviation 7.2), versus the Django test server at 1802 ms (standard deviation 7588.3). The Django standard deviation, larger than the mean connection time, indicates that this particular measurement can't really be trusted and should either be repeated with a setup that reduces the variance or be discarded.
        </p>
        <b>Other stats:</b>
    </td>
</tr>
<tr>
    <td valign="top">
        Time taken for tests:   0.514 seconds<br>
        Requests per second: 1943.72 [#/sec] (mean)<br>
        Time per request: 25.724 [ms] (mean)<br>
        Time per request: 0.514 [ms] (mean, across all concurrent requests)<br>
        Transfer rate: 5007.36 [Kbytes/sec] received<br>
    </td>
        <td valign="top">
        Time taken for tests:   74.022 seconds<br>
        Requests per second: 13.51 [#/sec] (mean)<br>
        Time per request: 3701.102 [ms] (mean)<br>
        Time per request: 74.022 [ms] (mean, across all concurrent requests)<br>
        Transfer rate: 30.85 [Kbytes/sec] received<br>
        </td>
</tr>
<tr>
	<td valign="top"><img src="01_apache.png"></td>
    <td valign="top"><img src="01_apache_vs_django.png"></td>
</tr>
</table>
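<p>ab's derived statistics follow directly from the raw totals. A minimal sketch (Python) recomputing them from the Apache run above; small differences from ab's printed values come from ab using an unrounded total time internally:</p>

```python
# Recompute ab's derived statistics from the raw totals of the Apache
# run above: 1000 requests, concurrency 50, 0.514 s total.
requests = 1000
concurrency = 50
total_seconds = 0.514

# "Requests per second" is simply requests divided by total time.
requests_per_second = requests / total_seconds          # ~1945 (ab printed 1943.72)

# "Time per request (across all concurrent requests)" is total time
# spread over every request, in milliseconds.
per_request_all_ms = total_seconds / requests * 1000    # 0.514 ms

# "Time per request (mean)" is what a single client sees: the above
# value multiplied by the concurrency level.
per_request_mean_ms = per_request_all_ms * concurrency  # 25.7 ms (ab printed 25.724)

print(requests_per_second, per_request_all_ms, per_request_mean_ms)
```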

<p>&nbsp;</p>
    
<table border="0" cellpadding="0" cellspacing="2">
    <tr>
        <td colspan="2">
            <h2>Load test with DB growth</h2>
            <p>From <b>1000 rows</b> to <b>1999 rows</b></p>
            <p>The following two charts show the testing of the application's /happy/ page, which loads data from the database. The table the data is pulled from currently has 1000 rows. The Apache server shows an increased latency of 1025 ms (standard deviation 319.9), versus the Django test server at 7787 ms (standard deviation 16918.7).
            </p>
            <b>Other stats:</b>
        </td>
    </tr>
    <tr>
        <td valign="top">
            Time taken for tests:   74.022 seconds <br>
            Requests per second:    13.51 [#/sec] (mean)<br>
            Time per request:       3701.102 [ms] (mean)<br>
            Time per request:       74.022 [ms] (mean, across all concurrent requests)<br>
            Transfer rate:          30.85 [Kbytes/sec] received<br>
        </td>
        <td valign="top">
            Time taken for tests:   168.219 seconds<br>
            Requests per second:    47.39 [#/sec] (mean)<br>
            Time per request:       1055.125 [ms] (mean)<br>
            Time per request:       21.103 [ms] (mean, across all concurrent requests)<br>
            Transfer rate:          5712.26 [Kbytes/sec] received<br>
        </td>
    </tr>
    <tr>
        <td valign="top"><img src="02_apachehappy.png"></td>
        <td valign="top"><img src="02_apachehappy_vs_djangohappy.png"></td>
    </tr>
    <tr>
        <td colspan="2">
            <p>In the next test the DB was 999 rows larger (1999 rows in total), and the performance of the application degrades considerably. The Apache server's latency is 1921 ms (standard deviation 912.2), versus the Django test server's 9677 ms (standard deviation 18675.3).
                </p>
                <b>Other stats:</b>
        </td>
    </tr>
    <tr>
        <td valign="top">
            Time taken for tests:   38.802 seconds<br>
            Requests per second:    25.77 [#/sec] (mean)<br>
            Time per request:       1940.116 [ms] (mean)<br>
            Time per request:       38.802 [ms] (mean, across all concurrent requests)<br>
            Transfer rate:          6160.31 [Kbytes/sec] received<br>
        </td>
        <td valign="top">
            Time taken for tests:   206.630 seconds<br>
            Requests per second:    4.84 [#/sec] (mean)<br>
            Time per request:       10331.493 [ms] (mean)<br>
            Time per request:       206.630 [ms] (mean, across all concurrent requests)<br>
            Transfer rate:          929.32 [Kbytes/sec] received<br>
        </td>
        </tr>
    <tr>
        <td valign="top"><img src="02_apachehappy1999.png"></td>
        <td valign="top"><img src="02_apachehappy1999_vs_djangohappy1999.png"></td>
    </tr>
</table>
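<p>The mean latencies above also give a rough sense of how cost scales with table size. A minimal sketch (Python) comparing the Apache-side latency growth against the row-count growth, deliberately ignoring the very large variance:</p>

```python
# Compare Apache latency growth against row-count growth using the
# means reported above (the large standard deviations mean this is
# only a rough indication).
rows_small, latency_small_ms = 1000, 1025
rows_large, latency_large_ms = 1999, 1921

latency_ratio = latency_large_ms / latency_small_ms  # ~1.87
rows_ratio = rows_large / rows_small                 # ~2.00

# The two ratios being close is consistent with latency growing
# roughly linearly in the number of rows, i.e. with the /happy/
# view touching the whole table on each request.
print(latency_ratio, rows_ratio)
```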
    
<p>&nbsp;</p>
    
<table border="0" cellpadding="0" cellspacing="2">
    <tr>
        <td>
            <h2>Load test via proxy</h2>
            <p>Unfortunately I wasn't able to configure squid properly: according to the logs, my requests weren't cached and returned TCP_MISS/200. The overall performance shows an increased latency compared with the first chart: 180 ms (standard deviation 889.4).
            </p>
            <b>Other stats:</b>
        </td>
    </tr>
    <tr>
        <td valign="top">
            Time taken for tests:   5.635 seconds<br>
            Requests per second:    177.46 [#/sec] (mean)<br>
            Time per request:       281.761 [ms] (mean)<br>
            Time per request:       5.635 [ms] (mean, across all concurrent requests)<br>
            Transfer rate:          475.52 [Kbytes/sec] received<br>
        </td>
    </tr>
    <tr>
        <td valign="top"><img src="03_apache_via_proxy.png"></td>
    </tr>
</table>
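<p>One way to check whether squid is actually serving from cache is to tally the result codes in its access log. A minimal sketch (Python) with synthetic log lines; the fields surrounding the TCP_* result code are illustrative:</p>

```python
from collections import Counter

def cache_result_counts(log_lines):
    """Count squid result codes (TCP_MISS, TCP_HIT, ...) in access.log lines."""
    counts = Counter()
    for line in log_lines:
        for field in line.split():
            # squid logs the result as e.g. "TCP_MISS/200".
            if field.startswith("TCP_"):
                counts[field.split("/")[0]] += 1
                break
    return counts

# Two synthetic lines mimicking squid's default access.log format:
sample = [
    "1336499100.123     23 127.0.0.1 TCP_MISS/200 4210 GET http://localhost/ - DIRECT/127.0.0.1 text/html",
    "1336499101.456      1 127.0.0.1 TCP_HIT/200 4210 GET http://localhost/ - NONE/- text/html",
]
print(cache_result_counts(sample))  # Counter({'TCP_MISS': 1, 'TCP_HIT': 1})
```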

<p>&nbsp;</p>
    
<table border="0" cellpadding="0" cellspacing="2">
    <tr>
        <td>
            <h2>Load test under environment limits</h2>
            <p>For the testing, the virtual memory limit was set to 500000 KB, as specified in the Apache configuration file:<br/>
            APACHE_ULIMIT_MAX_FILES='ulimit -v 500000'</p>
            <p>Apache refused to start with a smaller memory allocation:<br />
            apache2: Syntax error on line 210 of /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/mods-enabled/mime.load: Cannot load /usr/lib/apache2/modules/mod_mime.so into server: /usr/lib/apache2/modules/mod_mime.so: failed to map segment from shared object: Cannot allocate memory</p>
            
            <p>With the 500000 KB allocation the app performed fine: latency 25 ms (standard deviation 2.4).</p>
            <b>Other stats:</b>
        </td>
    </tr>
    <tr>
        <td valign="top">
            Time taken for tests:   0.506 seconds<br>
            Requests per second:    1976.27 [#/sec] (mean)<br>
            Time per request:       25.300 [ms] (mean)<br>
            Time per request:       0.506 [ms] (mean, across all concurrent requests)<br>
            Transfer rate:          5091.21 [Kbytes/sec] received<br>
        </td>
    </tr>
    <tr>
        <td valign="top"><img src="04_apache_ulimit500000.png"></td>
    </tr>
</table>
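<p>Note that <code>ulimit -v</code> takes a value in KiB, while the underlying kernel limit (RLIMIT_AS) is expressed in bytes. A minimal sketch (Python) of the conversion and of how a process can inspect the limit it inherited:</p>

```python
import resource

# 'ulimit -v 500000' caps virtual memory at 500000 KiB; the kernel
# resource behind it is RLIMIT_AS, expressed in bytes.
limit_kib = 500000
limit_bytes = limit_kib * 1024   # 512000000 bytes (~488 MiB)

# Any process started under that shell (e.g. an Apache worker) can
# inspect the soft/hard limits it inherited:
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
print(limit_bytes, soft, hard)
```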
    
<p><b>(g) How would you improve performance for 10x, 100x, 1000x the qps levels?</b></p>

    <p>The simplest things I can think of without analysing the code would be: replicate the app and DB services across many machines, and use dedicated cache servers.</p>


</body>
</html>


