It is speculated that Google and Amazon blocked domain fronting at the request of the Russian government, since domain fronting allows circumventing censorship. What is domain fronting, and how can you use it with Nginx?
In theory
Today, almost every connection on the web is encrypted using TLS, so eavesdroppers cannot read the content you exchange. However, since many websites are hosted on the same machine, the server needs to know ahead of time which certificate it should present to you. So, before the encrypted communication channel is set up, your client sends the unencrypted Server Name Indication (SNI) to the server. Any eavesdropper (or a government that can only exist if the free exchange of information is prohibited) is able to read the SNI and knows which website you want to visit. If the SNI violates censorship regulations, the request might be blocked.
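To get a feeling for how visible this is, anyone on the network path can extract the SNI from passing traffic, for example with tshark. The following is only a sketch: the interface name eth0 is a placeholder, and capturing requires suitable permissions.
$ tshark -i eth0 -Y 'tls.handshake.extensions_server_name' \
      -T fields -e tls.handshake.extensions_server_name
Each line of output is the plaintext server name taken from a TLS ClientHello observed on the wire.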
There are developments to overcome these privacy issues, for example, by encrypting the initial Hello message from the client (Encrypted Client Hello).
A much simpler method is domain fronting, which works by supplying a benign SNI.
A client establishes a secured communication channel to the server using one of the domains hosted on the machine. In the case of large CDNs, there are possibly millions of valid domains, for example, including example.com. Eavesdroppers see the SNI and must assume you visit example.com. Inside the encrypted communication channel, however, your client can send arbitrary HTTP requests, asking for content from a different domain, e.g., wikipedia.org, that is otherwise blocked:
GET /wiki/Вторжение_России_на_Украину_(2022) HTTP/1.1
Host: ru.wikipedia.org
...
If the server supports domain fronting, it will respond with the content specified in the HTTP request, circumventing censorship.
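As a quick illustration, the same idea can be expressed with curl: the TLS handshake (and thus the SNI and the certificate check) uses the domain in the URL, while the Host header names the fronted domain. The two domains below are placeholders; this only works if the server actually hosts both.
$ curl https://benign.example/ -H "Host: blocked.example"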
In practice with Nginx
Let’s see how to support this with an Nginx server. The two websites howlargeisthelhc.com and vinogreets.com happen to be hosted on the same machine using a single instance of Nginx.
$ cat <<EOF | gnutls-cli vinogreets.com -p 443
GET / HTTP/1.1
Host: howlargeisthelhc.com

EOF
The server will respond with
HTTP/1.1 200 OK
Server: nginx
Date: Tue, 22 Mar 2022 21:31:32 GMT
Content-Type: text/html
Content-Length: 4665
Last-Modified: Wed, 24 Jun 2020 21:09:43 GMT
Connection: keep-alive
ETag: "5ef3c117-1239"
Strict-Transport-Security: max-age=31536000
Accept-Ranges: bytes
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<title>How large is the LHC?</title>
...
clearly sending you the content for howlargeisthelhc.com even though the encrypted channel was established for vinogreets.com. Domain fronting is enabled by default in Nginx; if you do not want it, you have to disable it explicitly.
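One way to do that (a sketch, not the actual configuration of these sites; server names and certificate paths are placeholders) is to compare the Host header with the SNI in every server block and reject mismatches with a 421:
server {
    listen 443 ssl;
    server_name howlargeisthelhc.com;

    # Placeholder certificate paths.
    ssl_certificate     /etc/ssl/howlargeisthelhc.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/howlargeisthelhc.com/privkey.pem;

    # $ssl_server_name holds the SNI from the TLS handshake,
    # $host the Host header of the HTTP request.
    if ($host != $ssl_server_name) {
        return 421;  # Misdirected Request
    }

    # ... rest of the site configuration ...
}
Note that $ssl_server_name is empty for clients that send no SNI at all, so you may want to handle that case separately.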
What about client verification?
What happens if a domain uses TLS client verification? With client verification, clients need to present a valid certificate themselves in order to establish the communication channel. This can be seen as an additional layer of authentication and authorization.
Suppose we have secure.example.com protected using TLS client verification in Nginx. The same instance of Nginx also powers public.example.com.
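Such a setup could look roughly like the following sketch; the certificate, key, and CA paths are placeholders:
server {
    listen 443 ssl;
    server_name secure.example.com;

    ssl_certificate     /etc/ssl/secure.example.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/secure.example.com/privkey.pem;

    # Only clients presenting a certificate signed by this CA are allowed.
    ssl_client_certificate /etc/ssl/client-ca.pem;
    ssl_verify_client on;

    # ...
}

server {
    listen 443 ssl;
    server_name public.example.com;

    ssl_certificate     /etc/ssl/public.example.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/public.example.com/privkey.pem;

    # No client verification here.

    # ...
}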
What happens if we establish the TLS connection with public.example.com but then request content from the secured endpoint?
Let’s run
$ cat <<EOF | gnutls-cli public.example.com -p 443
GET / HTTP/1.1
Host: secure.example.com

EOF
The server will respond with
HTTP/1.1 421 Misdirected Request
Server: nginx
Date: Tue, 22 Mar 2022 21:35:44 GMT
Content-Type: text/html
Content-Length: 166
Connection: close
<html>
<head><title>421 Misdirected Request</title></head>
<body>
<center><h1>421 Misdirected Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
...
denying our attempt to access the secured endpoint. A dedicated check in the Nginx source code is responsible for stopping these kinds of attacks.