---
title: Nginx web server on Debian with LetsEncrypt HTTPS and Certbot
x-toc-enable: true
...

Introduction
============

Hosting a website has been a rite of passage since the 1990s, for those who
wish to truly have a voice on the internet. It can be done inexpensively, on commonly
available hardware. Any decent operating system (e.g. FreeBSD, OpenBSD, Linux
distros) can easily run a web server, and many servers exist (nginx, lighttpd,
Apache, OpenBSD's own *httpd* and more).

This tutorial will teach you how to set up a secure web server on [Debian
Linux](https://www.debian.org/), using nginx. We will use *Let's Encrypt* as
our Certificate Authority, enabling the use of encryption via `https://` URLs
with nginx listening on port 443.

Let's Encrypt is a non-profit Certificate Authority run by
the [Internet Security Research Group](https://www.abetterinternet.org/). You
can learn more about Let's Encrypt on their website:

<https://letsencrypt.org/>

You can read about nginx here:

<https://nginx.org/en/>

Requirements
============

Operating system
----------------

This guide talks about Debian, but these instructions could easily be adapted
for other distros, or FreeBSD. Always read the manual!

IP addresses
------------

*One* IPv4 address and *one* IPv6 address for the host. Both IP addresses must
be publicly routed, and pingable from the internet.

Port forwarding is also acceptable.

Ports
-----

You must also ensure that ports 80 and 443 are *open*. IP routing and packet
filtering are beyond the scope of this article, but you might check the [router
section](../router/) for further guidance.

DNS
---

You need `A` (IPv4) and `AAAA` (IPv6) pointers in your DNS configuration, for
your domain name, pointing to the IPv4 and IPv6 address of the host that will
run your web server.

You might consider [hosting your own DNS](../dns/), using the guides provided
by Fedfree.

This tutorial assumes that you have configured the following:

* `example.com.` (bare domain) A and AAAA records
* `www` (`www.example.com`) A and AAAA records

Example entries (from the ISC BIND zone file used
for [libreboot.org](http://libreboot.org/)):

```
libreboot.org.  IN  A    81.187.172.132
www             IN  A    81.187.172.132
libreboot.org.  IN  AAAA 2001:8b0:b95:1bb5::4
www             IN  AAAA 2001:8b0:b95:1bb5::4
```

Optional: DNS CAA
-----------------

You may wish to configure DNS CAA (Certificate Authority Authorization) for
your domains. Something like this would be placed inside your DNS zone
file (the syntax below is for an ISC BIND zone file):

```
example.com.    IN  CAA 0 issue "letsencrypt.org"
example.com.    IN  CAA 0 iodef "mailto:you@example.com"
```

Where `example.com` is specified, substitute your own domain name (and change
the email address as well).

More information is available here: \
<https://letsencrypt.org/docs/caa/>

More information about ISC BIND zone files: \
[../dns/zonefile-bind.md](../dns/zonefile-bind.md)

The specified email address should ideally match what you provide to certbot,
while generating new certificates. With these CAA records in your zone file,
only LetsEncrypt will be permitted to issue certificates for the domain name.

Software installation
---------------------

Firstly, install nginx. Debian provides these packages to choose from:

* [nginx-core](https://packages.debian.org/bullseye/nginx-core)
* [nginx-light](https://packages.debian.org/bullseye/nginx-light)
* [nginx-full](https://packages.debian.org/bullseye/nginx-full)
* [nginx-extras](https://packages.debian.org/bullseye/nginx-extras)

If unsure, choose `nginx-core`, as this is the default version chosen by
virtual package `nginx`. As root:

	apt-get install nginx-core

Next, install certbot and openssl. As root:

	apt-get install certbot openssl

Diffie-Hellman key exchange
===========================

Introduction
------------

Diffie-Hellman key exchange is used during TLS
[handshakes](https://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_handshake),
between a client's browser and the nginx server. You can learn more about
Diffie-Hellman on Wikipedia:

<https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange>

And about TLS here:

<https://en.wikipedia.org/wiki/Transport_Layer_Security>

Diffie-Hellman parameters
-------------------------

Run this command:

	openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

You should now have this file: `/etc/ssl/certs/dhparam.pem` - please verify its
contents. You will later make use of this, while configuring nginx.

You may find these resources insightful (read every page/thread), regarding
key size for dhparams:

* <https://github.com/certbot/certbot/issues/489>
* <https://gnupg.org/faq/gnupg-faq.html#no_default_of_rsa4096>
* <https://www.keylength.com/en/4/>
* <https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-131Ar2.pdf>
* <https://github.com/certbot/certbot/issues/2080>
* Nice page showing the performance impact of 4096-bit vs 2048-bit RSA: \
  <https://blog.nytsoi.net/2015/11/02/nginx-https-performance>

Changing this key every few months would be good practice. You could do it
when you renew certificates with certbot.

A key size of 2048 bits is still secure enough, at least until around the
year 2028. If you *do* want to use something stronger, write *4096* instead,
and use `--rsa-key-size 4096` later on when running certbot.
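As a preview of how this file will eventually be used: the nginx TLS
configuration, covered later in this guide, references it with the
`ssl_dhparam` directive, something like this (a sketch only; the full TLS
configuration comes later):

```
	ssl_dhparam /etc/ssl/certs/dhparam.pem;
```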

Certbot
=======

Introduction
------------

Certbot implements the
[ACME](https://en.wikipedia.org/wiki/Automatic_Certificate_Management_Environment)
protocol used by LetsEncrypt, interacting with it for the creation, renewal and
revocation of certificates. For more information, please refer to the following
resources:

* <https://certbot.eff.org/>
* <https://letsencrypt.org/docs/challenge-types/>
* <https://letsencrypt.org/docs/client-options/>

Certbot is the *reference implementation*, but alternative programs are
available. This tutorial will make use of *certbot*, because that is the one
recommended by Let's Encrypt.

First, stop nginx
-----------------

*If you already have certificates in place, you can skip this step.*

We will set up certificates *first*. When you installed nginx, it will have
been started automatically. You must stop it now:

	systemctl stop nginx

Although certbot *does* provide nginx integration, we will *not* be using it,
because it is not as flexible as we would like. Instead, we will be using
certbot's *certonly* mode.

Generate new certificate
------------------------

*If you already have certificates in place, you can skip this step.*

**STOP! Before you continue, please ensure that DNS is working. You can try to
ping your server on `example.com` and `www.example.com`, where `example.com` is
to be substituted with your actual domain name.**

If you've already got DNS properly configured, you can simply run certbot now
to generate your brand new certificates.

**STOP! DO NOT run these commands straight away, but read instead and keep them
for reference:**

	certbot certonly -d example.com
	certbot certonly -d www.example.com

Read the following sections, and really learn about *certbot*. When you're
ready to continue, run the above commands (adapted for your purposes).

Read the manual first!
----------------------

First, read the entire certbot manual:

<https://eff-certbot.readthedocs.io/en/stable/using.html>

OCSP Must-Staple
----------------

You might consider adding the `--must-staple` argument to certbot, when making
your keys. [OCSP stapling](https://en.wikipedia.org/wiki/OCSP_stapling) is
enabled in the example nginx config, per this guide, but browsers need not use
it; they can still choose to query LetsEncrypt directly. *Stapling* enables
greater performance and security, which we'll have more to say about later.

LetsEncrypt certificates support use of OCSP. You can enable OCSP stapling,
but `--must-staple` configures the generated certificate in such a way where
conformant browsers will fail validation, *unless* the server performs OCSP
stapling. This provides a useful (if brutal) way of informing you when stapling
is either disabled or misconfigured. More on this later.

The `--must-staple` argument is *optional*, but *highly recommended*.
LetsEncrypt doesn't enable adding this by default because they know a lot of
webmasters won't enable OCSP stapling, a fact they alluded to in this article
from 15 December 2022:

<https://letsencrypt.org/2022/12/15/ocspcaching.html>
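For reference, a certonly run with must-staple enabled might look something
like this (a sketch only; substitute your own domain name):

	certbot certonly --must-staple -d example.com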

RSA key sizes
-------------

In certbot, the default size is 2048 bits. If you've
generated 2048-bit `dhparam.pem`, you should use the default RSA 2048-bit size
in certbot as well. *If you specified 4096 bits, then you should use that in
certbot.*

You can pass `--rsa-key-size 4096` in certbot for the higher 4096-bit key size,
but please do consider performance requirements (especially on high-traffic
servers).

RSA key sizes of 2048 bits are still perfectly acceptable, until around the
year ~2028. Some of the elliptic curve-based ciphers that you'll configure in
nginx, for TLS, do also have an equivalent strength of 7680-bit RSA.

certonly mode
-------------

When certbot generates a certificate, it will ask you whether you wish to spin
up a temporary web server (provided by certbot), or place files in a webroot
directory (provided by your current httpd, in this case nginx).

This is because LetsEncrypt does, via the ACME protocol, verify that the host
machine running `certbot` answers at the IP address specified by the A/AAAA
records in the DNS zone file, for a given domain. It will place files in a
directory relative to your *document root*, showing that you do actually have
authority over the host. This is done precisely for authentication purposes.

You *do not* have to stop nginx every time. We only did it as a first-time
measure; later, we'll configure certbot to work in *certonly* mode with nginx
running, so stopping nginx will *not* be required when renewing
certificates. More on this later.
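As a preview, a renewal-friendly certonly run uses certbot's *webroot* mode,
something like the command below. This is a sketch: the `-w` path is a
hypothetical document root, and webroot mode only works once nginx is up and
serving that directory:

	certbot certonly --webroot -w /var/www/example.com -d example.com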

First-time certificate creation
-------------------------------

If this is your first time using certbot on this server, certbot will ask you
other questions, such as:

* your email address
* acceptance of the terms of service
* whether to share your email address with the EFF (**say NO**)

If all went well, certbot will tell you that it ran successfully.

Why run certbot twice?
----------------------

You may notice I did two runs:

* `-d example.com`
* `-d www.example.com`

This is down to you. You *could* do it all in one run:

	certbot certonly -d example.com -d www.example.com

However, in this instance, it would mean that you have both domains (which are
technically different domains) handled by one set of keys. While this may seem
efficient, it may prove to be a headache later on.

Verify that the certificates exist
----------------------------------

Check inside this directory: `/etc/letsencrypt/live`

You should now see directories for `example.com` and `www.example.com`, or
whatever your domain is.

BACK IT UP
----------

MAKE SURE to always keep backups of `/etc/letsencrypt`, on external media. Use
of `rsync` is recommended, as it copies incrementally and is generally quite
robust. Refer to the `rsync` manpage for info.

Although it may not be readily apparent from the certbot output, you will now
have an account with Let's Encrypt, defined by a generated set of keys;
losing them could be a headache later, as it may prevent authentication,
especially when renewing keys.

Nginx configurations explained
==============================

Navigate to the directory `/etc/nginx`. Inside this directory, you will see a
lot of files, and subdirectories.

Debian provides documentation, for default configuration files
inside `/etc/nginx`, which you can read about here: \
<https://wiki.debian.org/Nginx/DirectoryStructure>

More general documentation, specific to Debian, can be found here: \
<https://wiki.debian.org/Nginx/>

Comments on nginx config files
------------------------------

When you see lines beginning with the `#` character, please know that
they are *comments*. They do not alter any behaviour; instead, they are used to
disable parts of a configuration or to provide annotation.

For example, if the following configuration line were commented like so:

```
	# gzip on;
```

To uncomment that line, and enable it, you would change it to:

```
	gzip on;
```

It is important that all config lines end with the `;` character, as you will
see in the following configuration examples.

/etc/nginx/nginx.conf (default)
-------------------------------

Open this file, so that you can study it. It is important that you get to know
the default configuration, so that you know what you're doing later when you
learn (from Fedfree) what to change.

This is the main configuration file for nginx. Nginx has its own
documentation, explaining what each entry means, but Debian ships its own
default configuration, so it's worth walking through *that* configuration
here. Let's go through some of the entries, to explain what they mean:

* `user www-data;`: this specifies that the user `www-data` is what runs the
  nginx processes. The `www-data` user is largely Debian-specific. More info:
  <https://nginx.org/en/docs/ngx_core_module.html#user>
* `worker_processes auto;`: this specifies the number of threads that nginx
  runs on. The `auto` setting means that it will use the total number of
  cores/threads on your CPU. You could make this run on a finite number of
  threads, but we'll leave this alone. More info:
  <https://nginx.org/en/docs/ngx_core_module.html#worker_processes> - you
  might set this to a value *lower* than the number of physical CPU cores, if
  you need cores for something other than http, on high-traffic servers.
* `pid /run/nginx.pid;`: this is the path to a file where nginx will store its
  process ID. The `/run` directory is the standard place on Linux for runtime
  data such as PID files. You can learn more about the `/run` directory here:
  <https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s15.html>
* `include /etc/nginx/modules-enabled/*.conf;`: This pulls in configuration
  files that load nginx modules. It's not strictly necessary, and
  the `modules-enabled` directory is empty by default, at least on Debian's
  nginx package.

Now, we see this block:

```
events {
	worker_connections 768;
	# multi_accept on;
}
```

^ The `worker_connections` directive specifies the maximum number of
simultaneous connections that each worker process can handle. 768 is a nice
conservative number, for typical hardware specifications these days, but you
might actually set this to a higher number if your machine can handle it.
Nginx handles HTTP connections in parallel, across its worker processes.

The `multi_accept` directive is commented, and the default setting is `off`.
When set to `on`, each worker process will accept multiple new connections at
a time, rather than one. Basically, you can assume in this setup that each
worker can handle 768 simultaneous connections at any given time. Again, you
can tweak all of this based on what your hardware can handle. The defaults are
very conservative, and should work nicely on most hardware.

If you'll get a lot of traffic to your site, you might increase the number for
the `worker_connections` setting, and turn `multi_accept` on. It's up to you.

The number of connections in total is calculated by the number of worker
processes multiplied by the maximum number of worker connections. In our case,
worker processes is set to auto; if we assume a CPU with two cores, then 1536
connections are possible at any given time.

If we assume that each request takes 10 seconds to serve (a bold assumption,
given that it'll probably be much less), and that we have 1536 connections at
any given time, that's *millions of requests per day*. If you're only getting
a few thousand site visitors every day, then the default settings are already
overkill. You should leave them alone, or otherwise tweak if needed.
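The arithmetic above can be sketched like this (the core count here is an
assumption for illustration):

```shell
# Assumed: worker_processes resolves to 2 (a two-core CPU with "auto"),
# and worker_connections is the Debian default of 768.
workers=2
connections=768

# Total simultaneous connections = workers * connections per worker.
echo "$((workers * connections)) simultaneous connections"
```

This prints `1536 simultaneous connections`; substitute your own core count to
estimate your server's capacity.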

Now we move on to the `http` block below that. Here, we will paste the whole
lot into the article, broken up with commentary below each part,
telling you what each part does:

```
http {

	##
	# Basic Settings
	##

	sendfile on;
```

^ On traditional POSIX, programs handle data with a combination of `read()`
and `write()`. The `read()` function copies the contents of a file into memory,
and `write()` writes it back (from memory) to a file. In other words, I/O is
done in userspace.

With the `sendfile` directive turned on, nginx will use the `sendfile()`
system call, which also exists on FreeBSD. It avoids buffering in and out of
userspace memory when simply copying files: `sendfile()` copies the file
directly from disk to the network socket, without buffering in memory. It
happens in kernel space.

This option should be turned *off* if you're planning to run a reverse proxy
in nginx, but otherwise it should be left on for performance optimisation.

More info on these pages:

* Linux: <https://man7.org/linux/man-pages/man2/sendfile.2.html>
* FreeBSD: <https://www.freebsd.org/cgi/man.cgi?query=sendfile&sektion=2>
* OpenBSD: does not have `sendfile()`, so nginx's `sendfile` directive is
  useless there.

NOTE: If you're using a filter such as gzip (for page compression), sendfile
won't work and nginx will default to using normal `read()` and `write()`
buffering. You have to balance performance concerns, taking into account CPU,
disk, and bandwidth. Nginx lets you tweak, depending on your use-case, so you
should adapt it to your environment.

The `sendfile` directive is covered in more detail here:
<https://docs.nginx.com/nginx/admin-guide/web-server/serving-static-content/>

Not enabled in the default Debian nginx config: `sendfile_max_chunk`. To stop
someone with a very fast internet connection hogging a worker process, you
might add the line below it, e.g.:

	sendfile_max_chunk 5m; # single sendfile() call will only transfer 5MB

Again, tweak according to your own needs. Nginx *lets you tweak things*.

```
	tcp_nopush on;
```

^ This option pertains to your network's MTU and MSS. The *MTU* defines the
maximum size of packets on your network, measured in bytes (on a typical PPPoE
network this might be 1492, while plain Ethernet is typically 1500). MTU is
short for *Maximum Transmission Unit*.

If a packet exceeds MTU size, it will become fragmented, sending part of it in
a new packet. This is handled automatically, by your router.

MSS (Maximum Segment Size) is defined in the TCP header, for any given
connection on the internet; it defines the largest size, in bytes, of *data*
that can be sent in a single TCP segment, where the *segment* consists of a
header and then the data part. It is *this* context that we are most interested
in.

The `tcp_nopush` directive makes nginx *wait* until the *data* part is full,
per MSS rule, before sending, so that lots of data can be sent simultaneously,
instead of pushing out additional packets.

Related: `tcp_nodelay`, while not set explicitly here, can also be set (nginx
defaults it to `on`). When enabled, the last packet of a response is sent
immediately, rather than being held back.

Information:

* <https://en.wikipedia.org/wiki/Transmission_Control_Protocol>
* <https://nginx.org/en/docs/http/ngx_http_core_module.html#tcp_nopush>
* <https://nginx.org/en/docs/http/ngx_http_core_module.html#tcp_nodelay>

```
	types_hash_max_size 2048;
```

^ Size of hash table storing MIME types, in bytes. Conservative default value;
you might consider increasing it. See:
<https://nginx.org/en/docs/http/ngx_http_core_module.html#types_hash_max_size>

```
	# server_tokens off;
```

^ You should *turn this setting off*. Uncomment the line, removing the `#` so
that it says:

	server_tokens off;

If `server_tokens` is turned *on*, your HTTP server will yield information to
clients, such as server version and operating system. Turning this *off* will
hide such information, making it harder for wily individuals to know what
vulnerabilities you have based purely on server/OS version.

More info:
<https://nginx.org/en/docs/http/ngx_http_core_module.html#server_tokens>

```
	# server_names_hash_bucket_size 64;
```

^ See: <https://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size>

Leave this at the default value. Values possible: 32, 64 and 128.

If you have a particularly long domain name in use, you might consider
increasing this to 128. For example:

`extra.ludicrously.long.to.the.point.of.being.comically.absurd.sub.example.com`

For `blog.johndoe.com`, the default setting is fine.

Debian leaves this commented, by default. On a 64-bit processor, this will
probably default to 64. If unsure, uncomment the line and set it to 64.

Setting it to 128 may negatively affect system performance, depending on your
machine, so leaving it at 64 seems wise; increase it if you need to.

```
	# server_name_in_redirect off;
```

^ *Leave this turned off*. It may even be prudent to uncomment this, and turn
it off explicitly. When set to off, the primary `server_name` (default landing
page) will not be used in redirects; instead, the name given in the "Host"
request header field, or the IP address of the server, will be used.

The default is off anyway. We need this turned *off*, because we'll be using
virtual hosts, and redirects.

More info:
<https://nginx.org/en/docs/http/ngx_http_core_module.html#server_name_in_redirect>

```
	include /etc/nginx/mime.types;
	default_type application/octet-stream;
```

^ MIME: <https://en.wikipedia.org/wiki/Media_type>

The `mime.types` file assigns MIME types to specific file extensions.

For file extensions not explicitly defined, the `default_type` directive is
used.

For example: `text/html` would be treated as HTML and rendered as such by your
browser, whereas `text/plain` would just be a standard text file rendered as
such in your browser (it would just display the raw contents of the file).
The `application/octet-stream` type denotes a binary file, which would be
presented for you to download (for example, tar archives should not be
rendered as text by your browser).

Incorrectly configured MIME types will lead to whacky results. The `nginx`
server, at least how Debian configures it, provides sane defaults.

```
	##
	# SSL Settings
	##

	ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
	ssl_prefer_server_ciphers on;
```

^ Wholly inadequate encryption config that we will *nuke* later in this guide.

The `ssl_protocols` directive is self-explanatory.

These directives shall be documented, later in the guide.

```
	##
	# Logging Settings
	##

	access_log /var/log/nginx/access.log;
	error_log /var/log/nginx/error.log;
```

^ This should be fairly obvious.

The nginx daemon writes to these files throughout the day. At the beginning of
a new day, logrotate copies the current logs to `access.log.1`
and `error.log.1`, and the main ones are written fresh.

With your server in operation, you could try these commands:

	tail -f /var/log/nginx/access.log
	goaccess -f /var/log/nginx/access.log

You could also tail the error log. These can let you see, in real-time,
accesses via the HTTP daemon.
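You can also summarise the log with standard tools. A small sketch follows,
using inlined sample lines in the usual access-log format rather than the real
log file, so you can try it anywhere:

```shell
# Sketch: count requests per client IP from access-log-style lines.
log=$(mktemp)
printf '%s\n' \
  '203.0.113.7 - - [10/Oct/2023:13:55:36 +0000] "GET / HTTP/1.1" 200 612' \
  '203.0.113.7 - - [10/Oct/2023:13:55:37 +0000] "GET /x HTTP/1.1" 404 153' \
  '198.51.100.4 - - [10/Oct/2023:13:55:38 +0000] "GET / HTTP/1.1" 200 612' \
  > "$log"

# Field 1 of each line is the client address; count and rank them.
awk '{print $1}' "$log" | sort | uniq -c | sort -rn
```

Point the same pipeline at `/var/log/nginx/access.log` on a live server to see
your busiest clients.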

```
	##
	# Gzip Settings
	##

	gzip on;
```

This setting, when turned on as above, enables the server to *compress* data
sent to clients, if the client supports it (most of them do).

On *very slow* server hardware (really really really old, weak CPUs), you might
turn this *off*, bandwidth permitting.

The vast majority of people should leave this turned *on*, especially if they
have data limits on their internet connection.

```
	# gzip_vary on;
```

^ Commented, so the default setting (off) is used. If enabled, nginx adds
a `Vary: Accept-Encoding` response header, which tells caches and proxies to
store both the regular and gzip-compressed versions of any given file.

It is recommended to turn this *on*, *unless* you don't have a lot of memory,
or you do have a lot of memory but you have a *lot of files* and a lot of
visitors.

You needn't worry about this setting unless your content is served through
caching proxies.

For this and other gzip-related items referenced below, see:
<https://nginx.org/en/docs/http/ngx_http_gzip_module.html>

```
	# gzip_proxied any;
```

^ Default *off*. If enabled, it allows compressed responses for proxied requests. See:
<https://nginx.org/en/docs/http/ngx_http_gzip_module.html#gzip_proxied>

If you're not running a proxy, you needn't worry about this setting.

```
	# gzip_comp_level 6;
```

^ Default *1*, this sets the compression level on gzip-compressed responses.
Here, you must take into account your server's CPU capacity. The suggested
value of *6* here may be a nice compromise. You will see little benefit
setting this to *9*, in most cases.

NOTE: nginx does not cache gzipped files in memory, so compression is run for
every response, but the overhead of gzip is quite low. With a lower
compression level set, you would have lower CPU usage, if that became a
problem on high-traffic servers. You should tweak this according to your
needs.

*Think about it.* Most files that a web server will serve, are *text files*,
and text compresses more easily. Text files are typically small, and so it
makes more sense to compress them, in terms of CPU cycles. The savings on
bandwidth usage are easily measurable. On the other hand, most binaries that
you serve are going to be things like images and videos, many of which are
already compressed. Ergo, it makes sense to disable compression for binaries.
For example, compressing (in nginx) a JPEG file would likely yield little
benefit in terms of compression ratio, while wasting more CPU cycles. Relying
on `read()` and `write()` also makes little sense, for large files, if
the `sendfile()` function is available!

You should uncomment this, and try the value of *6* to start off with.

```
	# gzip_buffers 16 8k;
```

Set the number of buffers to send per gzip-compressed response, and the size
of each buffer. The buffer *size* should ideally be set to the same size
as `BUFSIZ` on your system, which on most 64-bit hosts (at least `x86_64`)
is 8KB. The `BUFSIZ` number is a `#define` provided by your libc, which is most
likely the *GNU C Library* if you're running Debian.

If you have the Debian package named `libc6-dev` installed, you shall find
BUFSIZ defined (as a number of bytes) in `/usr/include/stdio.h`. Example entry:

```
/* Default buffer size.  */
#define BUFSIZ 8192
```

It is recommended that you *always* set buffer size the same as `BUFSIZ`, and
specify the number of buffers as required.

It is recommended that you set this, explicitly, and you might try `16 8k`
first as suggested above (though the directive is commented, in the above
example).

If you're running a 32-bit host, it may be more efficient to use 32 4KB buffers
instead. Nginx defaults to `32 4k` or `16 8k`, depending on the host platform.

In some situations, you might set this much higher, e.g. `32 8k`, but it is
recommended to use a more conservative configuration (for your setup).

```
	# gzip_http_version 1.1;
```

Defaulting to `1.1`, this says that the client must support a minimum HTTP
version to receive a gzip-compressed response. It can be set to `1.0` or `1.1`.

Nginx also supports operating as an `HTTP/2` server, which this guide will
later show you how to do, but `HTTP/1.1` clients are compatible
with `HTTP/2` compliant servers (via backwards compatibility, in
the `HTTP/2` specification).

It is recommended that you explicitly set this to `1.1`, as that will ensure
maximum compatibility. Later in this guide, you will also be advised to disable
client usage of TLS prior to *version 1.2*, and *TLS 1.2* was first defined
in year 2008:

<https://www.ietf.org/rfc/rfc5246.txt>

The `HTTP/1.1` specification became canon in *year 1999*, so it's more than
likely that your clients will support it, but since we'll be mandating use of
TLS 1.2 or newer, there is little point in `gzip_http_version` being set
to `1.0`. The `HTTP/1.1` specification is defined, here:

<https://www.ietf.org/rfc/rfc2616.txt>

The newer `HTTP/2` specification is defined here, as of year 2015:

<https://www.ietf.org/rfc/rfc7540.txt>

We'll have more to say about this, later in the guide.

```
	# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
```

^ Where gzip is enabled, files conforming to `text/html` MIME type will always
be sent out compressed, if the client supports it (per your configuration of
MIME types as already described above).

See: <https://nginx.org/en/docs/http/ngx_http_gzip_module.html#gzip_types>

The `gzip_types` directive specifies additional files of given MIME types, that
you wish to compress in responses.

In the above example, MIME types are declared explicitly. Uncomment it to
enable. The value `*` shall declare that files of *all* MIME types are to be
compressed.

The default value for this is `text/html`, which means that Debian's default
nginx configuration *only* compresses `text/html`. Therefore, you should
uncomment this line. Again, you do *not* need to write `text/html` in here,
as nginx will always compress that MIME type when gzip is enabled.

```
	gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
```

Finally, we get to the meat of the pie:

```
	##
	# Virtual Host Configs
	##

	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;
}
```

^ the `conf.d` directory shall contain *additional* configuration, as desired.
In some cases, it makes more sense to modify the main `nginx.conf` file.

In the *default* setup, provided by Debian, this directory is empty.

A good use of `conf.d` is when you have *one* `nginx.conf` applicable to many
hosts, but those hosts each define their own configurations on top. For example,
you may wish to run a *reverse proxy* and special directives for that (aside
from those provided for each *virtual host*) may be placed inside `conf.d`.

The exact order in which files of `conf.d` apply shall be *alphanumerical*,
and *ascending*. For this reason, it is good practice to prefix a number to
each configuration file.

For example:

* `0000-do-this-first.conf`
* `0001-do-this-second.conf`
* `0002-do-this-third.conf`

^ The `sites-enabled` directory shall contain *symbolic links* pointing to
files inside of `sites-available`, located at `/etc/nginx/sites-available`.

The basic premise is that you shall enable or disable specific websites based
on the presence of those links. This is at least how Debian recommends doing
it, and I'm inclined to agree. It makes sense.

Inside `sites-available` you shall find a file named `default`. Inside
of `sites-enabled`, you shall find that a symlink named `default` *also*
exists, pointing to the one in `sites-available`.

The rules are identical, in that files/links in `sites-enabled` are loaded
by name in alphanumerical order, ascending. *However*, this is entirely
irrelevant for our purposes, because you will be shown how to configure each
domain name in its own file, isolated from all other virtual host
configurations. For example, your domain name of `domainname.com` would be
defined in `/etc/nginx/sites-available/domainname.com`, pointed to
in `sites-enabled/` and it will define all hosts for that domain, including
sub-domains.
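The enable/disable mechanism can be sketched with throwaway directories. In
reality these would be `/etc/nginx/sites-available`
and `/etc/nginx/sites-enabled`, and you would run `nginx -t` and then reload
nginx after changing the links:

```shell
# Sketch: enabling a site means creating a symlink; disabling means removing it.
avail=$(mktemp -d)    # stands in for /etc/nginx/sites-available
enabled=$(mktemp -d)  # stands in for /etc/nginx/sites-enabled

echo 'server { }' > "$avail/example.com"
ln -s "$avail/example.com" "$enabled/example.com"   # "enable" the site

# nginx would load everything matched by sites-enabled/*
ls "$enabled"
```

Removing the symlink (`rm "$enabled/example.com"`) "disables" the site, while
the real configuration stays safely in `sites-available`.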

We shall cover these files in the next section, but first, one more block
that we haven't covered:

The final block that you'll see, in Debian's default nginx configuration, looks
something like this:

```
#mail {
#	   # See sample authentication script at:
#	   # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#	   # auth_http localhost/auth.php;
#	   # pop3_capabilities "TOP" "USER";
#	   # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#	   server {
#		   listen	 localhost:110;
#		   protocol   pop3;
#		   proxy	  on;
#	   }
#
#	   server {
#		   listen	 localhost:143;
#		   protocol   imap;
#		   proxy	  on;
#	   }
#}
```

^ This pertains to mail proxying, which you can read about here: \
<https://docs.nginx.com/nginx/admin-guide/mail-proxy/mail-proxy/>

Nginx can act as a load balancer for email, which Fedfree could cover in a
future tutorial, but we'll leave this alone for now. Leave this commented, for
the time being, unless you wish to configure it yourself later.

/etc/nginx/sites-available/default
----------------------------------

There are plenty of comments in this file. I'm going to show you its
contents without comments, so that they don't over-run this page.

In this file, you shall see:

```
server {
	listen 80 default_server;
	listen [::]:80 default_server;
	root /var/www/html;

	# Add index.php to the list if you are using PHP
	index index.html index.htm index.nginx-debian.html;

	server_name _;

	location / {
		# First attempt to serve request as file, then
		# as directory, then fall back to displaying a 404.
		try_files $uri $uri/ =404;
	}
}
```

The `server {}` block, as above, defines rules for a given hostname. In the
example above, a *default* name is assigned, which applies to all undefined
hostnames pointing to your nginx server. This shall also apply to direct IP
addresses (typed into the user's browser) that your nginx server listens on.

The `listen 80` directive shall specify that this server listens on port 80,
via IPv4.

The `listen [::]:80` directive shall specify that this server listens on
port 80, via IPv6.

In the examples above, to be more specific: `default_server` refers to the
situation where no `Host` field is defined in the HTTP request, or the defined
Host field pertains to a hostname that we ourselves have not configured.

So, in the above examples, `listen 80 default_server` means nginx will listen
on port 80, *for undefined hostnames or http requests without a defined host*.

Other options are possible, which we will cover later in this guide. For
example, listening on port 443 for HTTPS can be specified like so:

```
listen 443 ssl;
listen [::]:443 ssl;	
```

Additionally, you could enable `HTTP/2` (only works for HTTPS):

```
listen 443 ssl http2;
listen [::]:443 ssl http2;	
```

More on this later.

The `server_name` directive shall define a hostname, e.g. `boobworld.com`.
In the example above, the `_` name is used, which refers to `default_server`.

The `root` directive shall define your *document root*, which is the root
directory of your website, containing your home page (e.g. `index.html`).

Other entries and directives, mostly commented, exist in the file. The file's
comments imply a best practice of configuring *all* websites as virtual hosts
in the same file.

Fedfree recommends that you *only* use the `default` file for a
default *landing page*, in the event that an undefined hostname (or IP
address) is used.

Specific hostnames, as defined by `server_name`, should be handled in a file
per each domain name. That file should specify `server {}` blocks for each
host of that domain name; for example, `boobworld.com`, `www.boobworld.com`
and `chat.boobworld.com` should all be handled in the same file. This also
includes any redirects, e.g. HTTP to HTTPS, www to non-www (or non-www to
www).
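As a sketch (using the hypothetical `boobworld.com` hosts from above, with
all TLS directives omitted for brevity), such a per-domain file might be
structured like this:

```
# /etc/nginx/sites-available/boobworld.com (sketch only)
server {
	# port 80: redirect everything to HTTPS
	server_name boobworld.com www.boobworld.com chat.boobworld.com;
	listen 80;
	listen [::]:80;
	return 301 https://$host$request_uri;
}

server {
	# HTTPS: www redirects to non-www
	server_name www.boobworld.com;
	listen 443 ssl;
	listen [::]:443 ssl;
	return 301 https://boobworld.com$request_uri;
}

server {
	# HTTPS: the actual website
	server_name boobworld.com;
	listen 443 ssl;
	listen [::]:443 ssl;
	root /var/www/boobworld.com;
}

# ...plus a similar HTTPS block for chat.boobworld.com
```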

The `location` block defines rules for a specific location. In this case, a
hardcoded path of `/`, referring to the entire website, is used. The rule in
here is very sensible, and you might consider using it on virtual hosts:

```
		# First attempt to serve request as file, then
		# as directory, then fall back to displaying a 404.
		try_files $uri $uri/ =404;
```

More info about the `try_files` directive written here:
<https://nginx.org/en/docs/http/ngx_http_core_module.html#try_files>

Basically, `try_files` checks for the existence of files or directories in a
given order, and serves (or internally redirects to) the first match.
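For instance, a common (hypothetical) pattern for single-page applications is
to fall back to `index.html` instead of returning a 404:

```
	location / {
		# serve the matching file if it exists, then the
		# directory, otherwise hand the request to /index.html
		try_files $uri $uri/ /index.html;
	}
```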

/etc/nginx/fastcgi.conf
-----------------------

Configuration for FastCGI. This will not be covered, at all, by this tutorial,
because it is intended that this and related configuration will be covered in
a follow-up tutorial.

This file is used when configuring PHP. In the default configuration as
provided by `nginx-core` (which this tutorial assumes you installed), FastCGI
is not enabled at all; this file is therefore irrelevant, for now.

You may find the following resource insightful: \
<https://wiki.debian.org/nginx/FastCGI>

/etc/nginx/fastcgi\_params
--------------------------

Ditto.

/etc/nginx/koi-win
------------------

See: <http://nginx.org/en/docs/http/ngx_http_charset_module.html>

The `charset` option (in `nginx.conf`) is not enabled by this guide, or by
Debian. If enabled, this file is an example of what can be set. It defines
a *charset*, which nginx would provide in the `Content-Type` field of a
response header.

Fedfree recommends that you do not worry about this.

/etc/nginx/win-utf
------------------

Another file defining charsets. Unused, by default.

/etc/nginx/snippets/fastcgi-php.conf
------------------------------------

Example configuration file, for enabling PHP. Unused, by default.

FastCGI will be covered, in a future follow-up tutorial.

/etc/nginx/snippets/snakeoil.conf
---------------------------------

Useless, unused config that enables useless, self-signed certificates. You'll
be using LetsEncrypt, so pay it no mind.

/etc/nginx/proxy\_params
------------------------

Unused configuration. If proxying is to be enabled in nginx, this file could
be used to provide configuration for it.

Nginx can be configured for use as a reverse proxy server, mail proxy server
and generic UDP proxy. Proxying is not enabled or configured by this guide,
but it can (will) be covered in a future follow-up tutorial.

You may find the following resources insightful:

* <https://nginx.org/en/#mail_proxy_server_features>
* <https://nginx.org/en/#generic_proxy_server_features>
* <https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html>
* <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>

/var/www/html
-------------

This is the document root for your landing page, i.e. the `default_server`
host as defined by `sites-available/default`.

It is not recommended that you host *your* website here. This is best used as
a landing page.

The default website is an nginx welcome page, telling you to configure your
web server. You may as well leave it alone. In fact, Fedfree is advising you
to do so.

The *reason* we use this as a landing page, and use virtual hosts for real
websites, is that we can then more easily know if a given domain name has been
misconfigured in `sites-available/` and `sites-enabled/`.

TLS configuration
=================

We will now configure TLS, for `https://` URLs. It is recommended that any
modern website be *HTTPS-only*, with automatic HTTP-to-HTTPS redirection
and *HSTS* enabled. This is what we will cover.

/etc/nginx/nginx.conf (TLS)
---------------------------

The correctness of the following configuration will differ, per your
requirements and newer standards that come out. You should regularly check
online, to know when these settings need changing. We will configure which
ciphers are to be used.

Look for this section, in the file:

```
	##
	# SSL Settings
	##

	ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
	ssl_prefer_server_ciphers on;
```

Change it to say the following:

```
	##
	# SSL Settings
	##

	ssl_protocols TLSv1.2 TLSv1.3;
	ssl_prefer_server_ciphers off;
	ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
	ssl_ecdh_curve secp384r1;
	ssl_session_cache shared:SSL:10m;
	ssl_session_timeout 1d;
	ssl_session_tickets off;
	ssl_stapling on;
	ssl_stapling_verify on;
	resolver 1.1.1.1 1.0.0.1 valid=300s;
	resolver_timeout 5s;
	add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
	add_header X-Frame-Options DENY always;
	add_header X-Content-Type-Options nosniff always;
	ssl_dhparam /etc/ssl/certs/dhparam.pem;
```

NOTE: the `always` option on the `add_header` line forces those headers to
always be added, to all HTTP responses that go out.

See:
<https://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header> - as
you can see, there are *conditions* under which `add_header` is actually
applied. We want HSTS, nosniff and x-frame-options deny to always apply, no
matter what!

The configuration above shall:

* Provide TLS 1.2 and 1.3 support. For security purposes, older TLS
  versions *and SSL* are not enabled.
* Permit the client to pick which cipher they want, from the list advertised
  by your nginx server.
* Support only those ciphers ranging from reasonably to highly secure.

You might adapt the above, to your requirements. Mozilla provides a handy
dandy *configurator*, which can be used to tweak based on your needs:

<https://ssl-config.mozilla.org/>

It is *recommended* to adapt their config. The configuration above, provided by
Fedfree, is based upon Mozilla's *intermediate* recommendation on Nginx. This
provides reasonable compatibility with most browsers.

The *modern* configuration, as defined by Mozilla, is largely pointless, at
least on this day 2 January 2023, for most people. As of this day, there are no
major issues known with supporting TLS 1.2, and it provides a nice fallback for
those who have yet to update to more modern web browsers.

To explain some of those configurations, above:

```
	ssl_protocols TLSv1.2 TLSv1.3;
```

Enables TLS version 1.2 and 1.3, specifically. No other TLS versions are
enabled, and the older *SSL* is disabled. Older TLS/SSL are *insecure*, with
many known vulnerabilities (e.g. POODLE attacks on SSLv3).

```
	ssl_prefer_server_ciphers off;
```

^ The `ssl_prefer_server_ciphers` directive, when turned on, means that the
server's cipher preferences are used, rather than the client's. It is best to
actually turn this *off*, but support only secure ciphers in the first place;
the client can then use the one most performant for their hardware
configuration.

Nowadays, most CPUs have AES acceleration making encryption much more
performant, but some people on older CPUs may wish to pick one based on
(software) performance criteria, depending on which one is most optimised
for their use-case. Depending on your threat model, use of stronger encryption
may not actually be desirable, or beneficial; for example, if you're using Tor,
mainly browsing static-only sites and not providing sensitive data to websites,
it might be entirely superfluous.

The server should only support ciphers ranging from reasonably to highly
secure. We will cover this in more detail, when configuring ciphers later on.

On the other hand, your threat model might be that you run a secure database
of some kind, and you want to ensure that all accesses are as secure as
possible, with less chance of data leaking to adversaries (e.g. commercial
competitors to your company), so you might turn this setting *on*, forcing
clients to use a particular set of ciphers, in a particular order of
preference. If that is the case, you may also want to disable all versions of
TLS except the latest 1.3 spec.
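In that stricter scenario, the relevant settings (a sketch only, not what
this guide configures) might look like:

```
	ssl_protocols TLSv1.3;
	ssl_prefer_server_ciphers on;
```

Be aware that this will lock out clients without TLS 1.3 support.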

```
	ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
```

^ List of ciphers supported, advertised to clients by nginx. The client may
select from this list, which cipher they wish to use.

```
	ssl_ecdh_curve secp384r1;
```

Specify the elliptic curve to use, for ECDHE ciphers as defined
in `ssl_ciphers`. More information available here:

<https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve>

You might change this to `auto` instead, if you wish. If that setting is used,
nginx will defer to a built-in list provided by your version of OpenSSL,
defaulting to `prime256v1` with OpenSSL versions older than *1.0.2*.

From reading of the nginx documentation, it seems that these options are
recommended (by nginx):

* `prime256v1`
* `secp384r1`

Both can also be used, like so:

```
	ssl_ecdh_curve secp384r1:prime256v1;
```

This setting will be compatible with a few more clients. In this
example, `secp384r1` is preferred, with `prime256v1` as the fallback.

In older nginx versions, `prime256v1` was the default, but `secp384r1` is more
secure.

This advice *will* become obsolete, at some point in the future. When dealing
with encryption, you should always do your own research and make sure that
what you have is *up to date*.

```
	ssl_session_cache shared:SSL:10m;
	ssl_session_timeout 1d;
	ssl_session_tickets off;
```

^ These pertain to TLS *sessions*.

The timeout setting specifies that a given session should time out after one
day. This is a *conservative* choice, but you might consider setting it
to `5m`, like so:

```
	ssl_session_timeout 5m;
```

This makes it time out after five minutes. The lower the duration, the more
work your server (and clients) have to do, but it would increase security, by
mitigating the chance of one session being compromised. There is nothing really
wrong with it being *1 day*. You might alternatively set it to `60m` instead,
for 1-hour session expiration.

*Tickets* should never be enabled, as that would compromise *forward secrecy*.
Make sure `ssl_session_tickets` is turned *off*. Tickets enable a given
session to be resumed at a later date, and they would require a much larger
session cache.

The `ssl_session_cache` setting above specifies that the session cache
is `shared` between worker processes (see `worker_processes`, shown earlier
on), and that the size of the cache is *10MB*. This is a reasonable default,
but you can set it to whatever you like.

```
	ssl_stapling on;
	ssl_stapling_verify on;
```

^ This pertains to OCSP stapling, and it's recommended to turn these on. More
information about it can be found here:

<https://en.wikipedia.org/wiki/OCSP_stapling>

The `ssl_stapling_verify` setting makes nginx itself also verify OCSP
responses that *it* receives, in the same way a client might do so.

Use of `--must-staple` is assumed, when you ran `certbot`. If you didn't, then
stapling will still work. OCSP stapling is beneficial because the server deals
with OCSP requests and attaches a time stamped copy during TLS handshakes; this
is also beneficial when the user is browsing behind a web portal such as
coffee-shop or airport wifi. More information about its application can be
found here:

<https://blog.cloudflare.com/high-reliability-ocsp-stapling/>

(NEVER use CloudFlare for any of your hosting. This link is provided for
educational purposes only. Use of large centralised CDN providers is BAD for
the internet, because the more users they get, the more power they get, and
such power will *always* be abused)

(using their DNS is OK though, for the example below. You can change those
IPs to whatever you want:)

```
	resolver 1.1.1.1 1.0.0.1 valid=300s;
	resolver_timeout 5s;
```

^ For fetching OCSP records. Nginx performs its own non-blocking DNS lookups
rather than using the system resolver (`/etc/resolv.conf`), so it needs its
own `resolver` directive here.

The two IP addresses are public DNS resolvers. You can change these to
whatever you like. If you're running your own, you could use your own. You
could even set local ones here, if you have resolvers running on your local
network.

```
	add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
```

^ HTTP Strict Transport Security (HSTS) tells the browser to prefer HTTPS,
when fetching resources on your website, for `max-age` number of seconds (in
the above configuration, this equals *2 years*), *including sub-domains*. This
is a useful mitigation against *downgrade attacks*.
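You can verify the arithmetic behind that `max-age` value yourself:

```
# 63072000 seconds is exactly two 365-day years
seconds_per_day = 24 * 60 * 60        # 86400
max_age = 2 * 365 * seconds_per_day
print(max_age)  # 63072000
```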

If your CA falls over and dies, HSTS will not necessarily render your website
inaccessible; you can simply find another CA. I've used LetsEncrypt since 2015
when it first became available, on `lighttpd`. I switched to `nginx` in 2017.
LetsEncrypt is rock-solid, so I wouldn't worry if I were you.

You *must* always keep robust backups, safely and easily recoverable by you,
because HSTS *will* screw you over if you lose access to LetsEncrypt account
keys. Remember this wisdom:

The *best* backup is the one you don't need.

The *worst* backup is the one you wish you had.

```
	add_header X-Frame-Options DENY;
```

^ This tells browsers that your web pages should not be displayed inside HTML
iframes. This can protect against certain phishing attempts, where a site
pretends to be you while running malicious code of some kind.

This is browser-dependent, but any decent browser will honour this header.
More information available here:

<https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options>

```
	add_header X-Content-Type-Options nosniff;
```

^ If MIME types are improperly configured, some browsers may try to correct
this and apply behaviour according to what they think is the correct MIME
type. They might do this by looking at the file extension in the URI, for
instance. This is called *MIME sniffing*.

Certain MIME types represent executable content, and this could potentially be
used for illicit gains by attackers. This header option informs the browser
that sniffing should not be performed.

More information available here:
<https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options>

```
	ssl_dhparam /etc/ssl/certs/dhparam.pem;
```

^ This directive overrides use of OpenSSL's weaker defaults in favour of your
generated Diffie Hellman key, which you generated earlier on in this guide.

OCSP stapling (per domain name)
===============================

What is it?
-----------

When your browser accesses a website, it ought to know whether a given
certificate has expired or has been revoked. Historically, this was done using
a *Certificate Revocation List* (CRL), but this was only practical in very
early days when the internet was much smaller. These days, such files would be
too large to download, making HTTPS impractical because you would need to have
information about every website ever.

The [Online Certificate Status
Protocol](https://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol)
solves this problem, because you make a request only for one record at a time.
This is done by your browser communicating with a third party service on the
internet, but it has drawbacks:

* Added latency, because there are additional HTTP requests to make
* The OCSP responder becomes a single point of failure; if they're down, your
  browser might hard-fail and show a warning.
* Worse: your browser might silently fail, permitting access to the site. This
  could open you up to attack, if an adversary could knock the OCSP service
  offline via DDoS attack to disable validation, after they successfully gave
  you a dodgy certificate.

One solution is for *your* HTTP server to send a cached OCSP record, time
stamped, during TLS handshakes. This would bypass the need for extra HTTP
requests on the part of the client, thereby saving time (lower latency) and
improving security. This is called *OCSP stapling*.

Not only is this faster, and more secure, but it's also more reliable for
reasons already mentioned; in addition, it will bypass many issues when using
web portals like in hotels or airports, where client-driven OCSP validation
often fails. It also means that a third party (OCSP service, in this case
LetsEncrypt) won't be able to glimpse your browsing habits as easily (there are
not a lot of CAs out there).

There is one drawback: mess this up, and your site visitors get a nasty error
when trying to access pages.

Configuration
-------------

Earlier on in this tutorial, you were advised to pass `--must-staple` when
running `certbot`. You were also provided information about how to enable OCSP
stapling in nginx.

This section is for reference only. It provides context for configuration that
you will perform later, when you learn how to add configurations for each
specific domain name.

You will configure these entries, per domain name defined in each
port 443 `server {}` block, pertaining to a given hostname:

* `ssl_certificate`: this is: `/etc/letsencrypt/live/example.com/fullchain.pem`
* `ssl_certificate_key`: this
  is: `/etc/letsencrypt/live/example.com/privkey.pem`
* `ssl_trusted_certificate`: this
  is: `/etc/letsencrypt/live/example.com/chain.pem`

All of these must be present. In the above list, replace `example.com` with
your chosen domain name.

These entries should not be specified within `nginx.conf`, but they are
mentioned here for reference.
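For reference, inside a port 443 `server {}` block those directives would sit
like this (substitute your own domain for `example.com`):

```
server {
	listen 443 ssl;
	listen [::]:443 ssl;
	server_name example.com;

	ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
	ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
	ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
}
```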

SSL Labs provides a test suite, that will tell you whether OCSP stapling works:
<https://www.ssllabs.com/>

More easily add TLS certificates
================================

Later on, we will be *adding* a 2nd website, after nginx is up, and generating
TLS certificates *without* shutting down nginx like we did before. This section
is to be followed, in preparation for that.

When the server is operational, you don't want to kill active connections,
especially on a busy website, *and especially* if you're going to run
databases of some kind.

Early in this guide, you were instructed to use the `certonly` mode in certbot,
with certbot acting in standalone mode, rather than webroot mode. However, you
should be able to add new TLS certificates while nginx is running, for new
domain names that you wish to add.

/etc/nginx/sites-available/default
----------------------------------

Your current file will look something like this, once all the comments
are removed:

```
server {
        listen 80 default_server;
        listen [::]:80 default_server;

        server_name _;

        root /var/www/html;
        index index.html index.htm index.nginx-debian.html;

        location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
        }
}
```

Look at those lines:

```
        root /var/www/html;
        index index.html index.htm index.nginx-debian.html;

        location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
        }
```

Move the `root` and `index` directives into the location block, so that you
have something like this:

```
        location / {
                root /var/www/html;
                index index.html index.htm index.nginx-debian.html;

                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
        }
```

Now, below the `location /` block, you would add a special rule just for
LetsEncrypt ACME challenges, via HTTP-01 challenge type:

```
	location ^~ /.well-known/acme-challenge {
		default_type "text/plain";
		root /var/www/letsencrypt;
	}
```

The entire file should then look like this:

```
server {
        listen 80 default_server;
        listen [::]:80 default_server;

        server_name _;

        location / {
                root /var/www/html;
                index index.html index.htm index.nginx-debian.html;

                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
        }

        location ^~ /.well-known/acme-challenge {
                default_type "text/plain";
                root /var/www/letsencrypt;
        }       
}
```

/var/www/letsencrypt/.well-known/acme-challenge
-----------------------------------------------

*Create* this directory, as root:

	mkdir -p /var/www/letsencrypt/.well-known/acme-challenge

LetsEncrypt's challenge response, in the setup that we're using, only runs
on HTTP. This is perfectly OK for us, because we can point A/AAAA records at
the server without configuring hostnames under nginx, and then run certbot
in certonly mode *with a webroot specified*, so that we don't have to stop
nginx.

You must now reload nginx:

	systemctl reload nginx

Directory listings (indexing) is disabled by default, in nginx, so the contents
of your `acme-challenge` directory will not be publicly visible.
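Later, when you want a certificate for a new domain, a typical invocation
(the domain names here are placeholders) would then be:

```
certbot certonly --webroot -w /var/www/letsencrypt \
	-d example.com -d www.example.com
```

Because certbot writes its challenge files under the webroot, nginx keeps
serving all other traffic while the certificate is issued.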

Add a new website
=================

Finally, we get to adding a website. The previous sections of this guide have
already taught you everything you need to know. Commands (replace `example.com`
with your domain name that you made TLS certificates for):

Make website directory
----------------------

Your site will live in `/var/www/example.com`. It could actually live at any
location, so adapt according to your own requirement:

	mkdir -p /var/www/example.com

Create nginx host file, for the site:
---------------------------------------

Create the file:

	touch /etc/nginx/sites-available/example.com

Enable the website
------------------

(you'll still need to actually configure the site)

	cd /etc/nginx/sites-enabled/
	ln -s /etc/nginx/sites-available/example.com example.com

/etc/nginx/sites-available/example.com
--------------------------------------

This is a file that you've just created. Place these contents
in the file, replacing `example.com` with your own domain name:

```
# HTTP (plain, unencrypted)
# Automatically redirects to HTTPS,
# except for LetsEncrypt ACME challenges
server {
	server_name
		www.example.com example.com;

	# you could add subdomains to server_name as well
	# for example: server_name git.example.com www.example.com example.com;

	# you would then add an entry, similar to the `server_name example.com`
	# server below

	listen 80;
	listen [::]:80;

	location / {
		return 301 https://$host$request_uri;
	}

	location ^~ /.well-known/acme-challenge {
		# override the above rule, only for LetsEncrypt challenges.
		# this will enable certbot renew to work, without stopping
		# or otherwise reconfiguring nginx in any way

		default_type text/plain;
		root /var/www/letsencrypt;

		# in this case, the 301 redirect rule does not apply, because
		# this location block shall override that rule
	}
}

# HTTPS: redirect www.example.com to example.com
server {
	server_name www.example.com;
	listen 443 ssl http2;
	listen [::]:443 ssl http2;

	ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem;
	ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;
	ssl_trusted_certificate /etc/letsencrypt/live/www.example.com/chain.pem;

	disable_symlinks on;

	return 301 https://example.com$request_uri;
}

# HTTPS: this is your actual website configuration
server {
	server_name example.com;
	listen 443 ssl http2;
	listen [::]:443 ssl http2;

	ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
	ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
	ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;

	root /var/www/example.com;
	index index.html;

	disable_symlinks on;

	# uncomment this to enable autoindexing, otherwise directories
	# without html index files will return HTTP 403
	# DO NOT turn on autoindex unless you're sure you need it, because
	# it's a potential security threat under some circumstances

	# autoindex on;
}
```

Tests
=====

Now test your website!

nginx config
------------

Before you proceed, run this command:

	nginx -t

This will report any misconfiguration.

Start nginx, like so:

	systemctl start nginx

curl
----

Now try this:

	curl -I http://example.com/

You should see a 301 redirect. Ditto for `http://www.example.com`. Both should
lead to `https://example.com/` and `https://www.example.com/` respectively.

Now try this:

	curl -I https://www.example.com/

You should see a 301 redirect to `https://example.com/`

Now try:

	curl -I https://example.com/

If you've not placed an `index.html` file in your document root, you should
see an HTTP 403 response. You should see the HSTS and nosniff options too.

Browser
-------

Now try all of the above addresses in your browser.

SSL Labs
--------

SSL Labs hosts an excellent test suite, which can tell you many things, like
whether CAA, HSTS, TLS 1.3 and other features are enabled; make sure OCSP
stapling is also enabled. If you passed `--must-staple` in certbot, check that
too. The SSL Labs tester will find just about everything wrong with your
setup.

See: \
<https://www.ssllabs.com/ssltest/>

IPv6 test
---------

Mythic Beasts have an excellent IPv6 tester on their website. You should
also test it yourself, on IPv4. See:

<https://www.mythic-beasts.com/ipv6/health-check>

Remarks
=======

This section pertains to the host config that we just enabled, bringing the
target domain name online via the web.

Notes about 301 redirects
-------------------------

In the above configuration, `www.example.com` automatically redirects (via
HTTP 301 response) to `example.com`. It is recommended that you either do this,
or do it the other way round: `example.com` redirects to `www.example.com`.
This is for search-engine optimisation (search engines also favour sites that
are HTTPS-only, these days).

For non-www to www redirection, simply swap the HTTPS server blocks above, and
adapt accordingly. For SEO purposes, it doesn't matter whether you do www to
non-www or non-www to www, but you should pick one and stick with it.

For my purposes, I typically prefer that the main website run on `example.com`
instead of `www.example.com`, because I think that looks much cleaner. It's
the difference between Pepsi and Coca Cola, so pick your poison.

HTTP/2
------

You will note, that HTTP/2 is enabled in the above config, but only for HTTPS.
HTTP/2 was already covered earlier on in this guide, and it enables many speed
plus other improvements over the old HTTP/1.1 specification.

Symlinks disabled
-----------------

You'll also note that symlinks are disabled. This means that symlinks inside
the document root will not work, at all. This is for security purposes, and you
are encouraged to do this for every domain name that you host.

However, there may be some situations where symlinks are desired, so this is
done per host, rather than server-wide.

Troubleshooting
===============

Nginx
-----

Run `nginx -t` to test your configuration. In most cases, it will tell you
what you did wrong.

Certbot
-------

When renewing certificates with the command `certbot renew`, certbot expects
to operate on port 80, so we configured port 80 plain HTTP access just for
LetsEncrypt's ACME challenges.

The only viable challenge method used requires unencrypted HTTP, but our server
does away with that for websites. For anything other than the ACME challenge,
URIs automatically redirect to the corresponding HTTPS link.

Maintenance
===========

Debian
------

Basically, just keep it up to date with the latest patches: \
<https://www.debian.org/doc/manuals/debian-faq/uptodate.en.html>

Nginx
-----

Nginx is basically bullet-proof. You might otherwise
try [etckeeper](https://etckeeper.branchable.com/) which is a nice tool for
keeping track of changes you make to configs under `/etc`.

When you make a configuration change, you can do this:

	systemctl reload nginx

Or this:

	systemctl restart nginx

Nginx is very powerful, and highly configurable.

OpenSSL
-------

Always make sure to run the latest OpenSSL patches. Re-generate `dhparam.pem`
from earlier in this guide, every few months. (You could do it, scripted, as
part of automatic certificate renewal.)
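As a sketch of that idea (using the paths from earlier in this guide), a
regeneration snippet might look like:

```
# regenerate DH parameters into a temporary file first, so that
# nginx never reads a half-written dhparam.pem, then swap it in
openssl dhparam -out /etc/ssl/certs/dhparam.pem.new 2048
mv /etc/ssl/certs/dhparam.pem.new /etc/ssl/certs/dhparam.pem
systemctl reload nginx
```

Writing to a temporary file and renaming it is atomic on the same
filesystem, so nginx never sees a partial file.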

Renew certificates
==================

Before renewing for the first time, you should test that it will work. Certbot
provides a test function:

	certbot renew --dry-run --webroot -w /var/www/letsencrypt

You *should* absolutely make *sure* nginx is running, for the purpose of this
test. With this setup, the `HTTP-01` challenge type (via LetsEncrypt) is used,
and it happens while the server continues running.

Otherwise, if all is well, just do this:

	certbot renew --webroot -w /var/www/letsencrypt
	systemctl reload nginx

The `reload` command is so that nginx makes use of any newly generated
certificates. The `reload` command differs from `restart` in that existing
connections stay open until complete, and new connections are served under the
old configuration until the new one is applied, per site. In this way, the
change happens without anybody noticing, and your site remains 100% online.

If all is well, it should Just Work. If it doesn't, you'll need to intervene.

If there's something Fedfree can do to improve this tutorial, please get in
touch via the contact page.

Auto-renew certificates
=======================

Testing
-------

*Renewal* is very different from creating a *new* certificate; the latter is
covered in another section of this guide.

Firstly, test that your configuration works with a dry run:

	certbot renew --dry-run --webroot -w /var/www/letsencrypt

You should put certbot renewals on an automated crontab, though keep in mind:
although LetsEncrypt certificates last 90 days, you may have generated
multiple certificates at different times, so their renewal dates may get out
of sync.

Therefore, it is recommended to run `certbot renew` every week, just in case.

A more automated way to do it is like this:

```
#!/bin/bash

certbot renew --webroot -w /var/www/letsencrypt
systemctl reload nginx

# if you also have mail for example, with certs e.g. mail.example.com
# systemctl restart postfix
# systemctl restart dovecot
```

Add the above to a new file at `/sbin/reloadservers`, and mark it executable:

	chmod +x /sbin/reloadservers

OPTIONAL: also add, to the above script, the command from earlier in this
tutorial that generated the `dhparam.pem` file.

Then do:

	crontab -e

Add the following to crontab:

```
# run at 00:00 every Sunday (minute hour day-of-month month day-of-week)
0 0 * * 0 /sbin/reloadservers
```

HTTP-01 vs DNS-01 challenge
===========================

Problem
-------

See: <https://letsencrypt.org/docs/challenge-types/>

By default, `certbot renew` uses the `HTTP-01` challenge type in *standalone*
mode, which requires that certbot *bind* to port 80. This is a problem,
because nginx is already listening on port 80, so you would get an error.

Using the *webroot* method instead works *perfectly*: certbot merely writes
challenge files under the webroot, nginx keeps serving them on port 80
over `http://`, and your web server is configured such that ACME challenges
(at `/.well-known/acme-challenge`) do *not* redirect.

The only thing that can make use of `/.well-known/acme-challenge` is certbot,
and LetsEncrypt communicating with it. Everything else should continue to
redirect.

DNS-01
------

The `DNS-01` challenge type is not provided for, in any way, by this tutorial.
It is mentioned here for reference, because it's an interesting option anyway.

The `DNS-01` challenge type is practical, if:

* You run [your own authoritative name server](../dns/)
* You have it on the same host as nginx *OR* a secure way to control it from
  the host running nginx

The `DNS-01` challenge can be completed *without* killing nginx, which means
your site visitors will not lose their connection, no matter how briefly; you
would run `systemctl reload nginx` after all certificates are renewed. The
challenge must be completed *individually for each domain name*: you need to
actually be there, inserting responses to each challenge, in each DNS zone
file, for each domain... this is why it's only practical if you're running
your own DNS. You could probably do some kung-fu
with [sed](https://pubs.opengroup.org/onlinepubs/9699919799/utilities/sed.html)
and [awk](https://pubs.opengroup.org/onlinepubs/9699919799/utilities/awk.html)
to make the job easy, operating directly on your zone files, either locally
(if running on the same machine as nginx) or over ssh.
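As a hypothetical sketch of that approach (the zone file path, record layout
and token here are all made up, purely for demonstration):

```shell
# swap a new ACME token into the _acme-challenge TXT record of a zone file
ZONE=/tmp/example.com.zone
TOKEN="new-challenge-token"

# a fake zone fragment, for demonstration only
cat > "$ZONE" <<'EOF'
_acme-challenge 300 IN TXT "old-token"
www             300 IN A   192.0.2.1
EOF

# replace whatever is quoted on the _acme-challenge line with the new token
sed -i "s/^\(_acme-challenge[[:space:]].*TXT[[:space:]]*\)\".*\"/\1\"$TOKEN\"/" "$ZONE"

grep _acme-challenge "$ZONE"
```

Remember that, on a real zone, you must also bump the SOA serial and reload
your name server afterwards; and since the token changes on every renewal,
this only makes sense fully scripted.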

TLS-ALPN-01
-----------

The `TLS-ALPN-01` challenge type is what we would prefer but, according to
the challenge-types page linked above, it's not yet supported by nginx or
certbot.

The benefit of this method is that it can be done purely at the TLS level, so
we would not have to mess with redirect rules under nginx.

When this option becomes viable in the future, it may be documented on Fedfree.

Add a *2nd* new website
=======================

Introduction
------------

You will already know how nginx is configured, at this point. In this new
scenario, you're very happy with your current website but now you want to host
yet another one. You *can* host it on this machine, quite easily. Hosting
multiple websites on the same machine is trivial.

This guide had you set up Nginx with *SNI* for TLS purposes, so it's quite
possible. Most/all modern browsers support SNI these days. SNI (Server Name
Indication) is a feature in modern TLS whereby the client sends the desired
hostname during the TLS handshake, so the server can present the matching
certificate for that host. See:

<https://en.wikipedia.org/wiki/Server_Name_Indication>
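In nginx terms, SNI is what lets you declare several `server` blocks on the
same IP address and port, each with its own certificate; nginx picks the right
one based on the name the client sends. A minimal sketch, with placeholder
domains and the certbot paths used in this guide:

```
server {
	listen 443 ssl;
	listen [::]:443 ssl;
	server_name example.com;
	ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
	ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}

server {
	listen 443 ssl;
	listen [::]:443 ssl;
	server_name example.org;
	ssl_certificate     /etc/letsencrypt/live/example.org/fullchain.pem;
	ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;
}
```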

DO NOT configure the hostname first
-----------------------------------

Remember the `default_server` site, at `/var/www/html`?

If you want to point a new domain (`www.newdomain.com` and `newdomain.com`) to
your server, it will *work* on port 80 via the `default_server` option in
nginx. *This is assuming that you didn't already host the domain elsewhere with
HSTS, in which case you can simply copy the keys/certificate to your new
installation*.

You will note that we *included* the LetsEncrypt snippet enabling the `webroot`
method to work, via the `HTTP-01` challenge. In our setup, the `HTTP-01`
challenge will *work* perfectly, so long as the target domain is accessible on
port 80, which it *is* in this situation.

Add the new TLS certificate
---------------------------

If DNS is properly set up, just do this (for `newdomain.com`):

	certbot certonly --webroot --agree-tos --no-eff-email --email you@example.com -w /var/www/letsencrypt -d newdomain.com

and for `www.newdomain.com`:

	certbot certonly --webroot --agree-tos --no-eff-email --email you@example.com -w /var/www/letsencrypt -d www.newdomain.com

LetsEncrypt challenge response is done over port 80. If all went well, you
should have the new files under `/etc/letsencrypt/live/newdomain.com`
and `/etc/letsencrypt/live/www.newdomain.com`.

Later on, once the website is up, your crontab will auto-renew the
certificate.

If all went well with certbot, and you have the new certificate, you can simply
configure the new domain name, adapting the same procedures you already 
followed before on this page. When you're sure it's fine, you can then do:

	nginx -t

If nginx reports no problems, you can then do this:

	systemctl reload nginx

Again, this is only for *adding* a brand new certificate. For renewal,
you will instead rely on certbot's *renew* function.

How to revoke a certificate
===========================

Why?
----

If you believe the key is compromised, you should revoke it immediately.

Alternatively, you might have forgotten something in certbot, such as:

* `--rsa-key-size 4096` (if you wanted that)
* `--must-staple` (OCSP Must-Staple in the certificate)

In these circumstances, it is best to revoke the key. Certbot will also ask
whether you want to delete the key (say YES).

With the certificate revoked and deleted, you can then generate a new one.

How?
----

You do *not* need to stop nginx, but TAKE NOTE: while the certificate is
revoked, if you've also deleted it, nginx will fail to reload. Therefore,
when you do this, you should then do one of the following things:

* Re-issue a new certificate, for each one that was revoked
* Disable the target domain (`www.example.com` and `example.com` on port 443)

Sample commands:

	certbot revoke --webroot -w /var/www/letsencrypt --cert-path /etc/letsencrypt/live/example.com/cert.pem --key-path /etc/letsencrypt/live/example.com/privkey.pem --reason unspecified

	certbot revoke --webroot -w /var/www/letsencrypt --cert-path /etc/letsencrypt/live/www.example.com/cert.pem --key-path /etc/letsencrypt/live/www.example.com/privkey.pem --reason unspecified

Other options available for `--reason` are as follows:

* `unspecified`
* `keycompromise`
* `affiliationchanged`
* `superseded`
* `cessationofoperation`

You can then generate a new certificate, and restart nginx.

References
==========

Debian
------

* Debian documentation: <https://www.debian.org/doc/>

Let's Encrypt
-------------

* Let's Encrypt documentation: <https://letsencrypt.org/docs/>
* Let's Encrypt Chain of Trust: <https://letsencrypt.org/certificates/>
* Certbot documentation: <https://eff-certbot.readthedocs.io/en/stable/>

Nginx
-----

* Nginx upstream documentation: <https://nginx.org/en/docs/>

Fun fact:

At the time of publishing this guide, Nginx's own website did not enable HTTP
to HTTPS redirects or HSTS, but it did have HTTPS available, site-wide; some
links however would go back to unencrypted HTTP.

The following page shows you how to *force* use of HTTPS, in common web
browsers:

<https://www.eff.org/https-everywhere/set-https-default-your-browser>

Performance optimisations
-------------------------

This could be a separate guide at some point, but I did find this handy dandy
reference that someone made:

<https://github.com/denji/nginx-tuning/blob/f9f35f58433146c3af437d72ab6156b3eb8782c9/README.md>

As stated by that author, the examples in the link are from a non-production
server. You should not simply copy everything you see there. Adapt it for your
setup. Nginx is extremely powerful. It runs some of the biggest websites on
the internet.

The URL above is to a specific revision, of the guide in that repository. You
can clone the repository like so, to get the latest revision:

	git clone https://github.com/denji/nginx-tuning

The purpose of the Fedfree guide is simply to get you up and running. You are
highly encouraged to play around with your setup, until it performs exactly the
way you want it to.

Honourable mention: ETags
-------------------------

The HTTP `ETag` header is sent out by default, in nginx, for static resources
such as HTML pages.

Since this isn't mentioned anywhere in the default Debian configs, the nginx
default applies, which means ETags are enabled. You can learn more here:

<https://en.wikipedia.org/wiki/HTTP_ETag>

And here:

<https://nginx.org/en/docs/http/ngx_http_core_module.html#etag>

You might want to explicitly enable this, just in case nginx ever changes the
default to *off* in the future. It is a useful performance optimisation,
because it avoids re-sending the same unmodified page if a client has already
seen and cached it.
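Explicitly enabling it is a one-line change, inside the `http` block
of `/etc/nginx/nginx.conf` (or per `server` block):

```
etag on;
```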

Clients that cache will store this ETag value and, when requesting the
resource again, send it back in the `If-None-Match` request header; if nginx
sees that the local version has the same ETag, it sends back an `HTTP 304 Not
Modified` response, rather than the contents of the requested file.

Use of ETags and Gzip compression, as enabled by this guide, will save you a
lot of bandwidth. Have fun!

PS: You might read online that ETags are insecure, but they're really not,
and this article explains why: \
<https://www.pentestpartners.com/security-blog/vulnerabilities-that-arent-etag-headers/>

The security issue with ETags arises if you're also running an NFS share, on
really ridiculously old versions of NFS where the inodes of files were used
as file handles; if the inode were known, it could (on those older versions)
enable access to a file without authorisation... that is, if you're running a
version of NFS from the year 1989.

Nginx *does not use inodes when generating an ETag!*

Nginx's logic that handles ETag generation can be found here: \
<https://raw.githubusercontent.com/nginx/nginx/641368249c319a833a7d9c4256cd9fd1b3e29a39/src/http/ngx_http_core_module.c>

Look for the following function in that file:

```
ngx_int_t
ngx_http_set_etag(ngx_http_request_t *r)
```
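In short, nginx's default ETag is built from the file's last-modification time
and its size, both rendered in hex (`"%xT-%xO"` in that source file), with no
inode involved. You can reproduce the format yourself for any local file; this
sketch assumes GNU `stat`, as found on Debian:

```shell
# build an nginx-style ETag for a file: hex mtime, dash, hex size
f=/tmp/etag-demo.txt
printf 'hello\n' > "$f"

mtime=$(stat -c %Y "$f")   # last-modification time, seconds since epoch
size=$(stat -c %s "$f")    # size in bytes

printf '"%x-%x"\n' "$mtime" "$size"
```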

You'll see it all there. Fedfree recommends that you leave ETags *enabled*.
Nginx's implementation of ETags is perfectly safe, in the configuration that
Fedfree has provided for you.

That is all.
