Network basics: About DNS and how it makes our filter work
In our natural habitat, we developers discuss our work in computing acronyms and abbreviations, often generously enough to replace the original terms completely. While this sounds perfectly clear to us, it doesn't always work when we try to explain in everyday language what we've done in our latest update.
Today we thought it would be interesting to take a closer look at how humans and machines read website addresses differently: DNS and IP.
DN… what?
DNS stands for Domain Name System, a substantial yet invisible part of the internet and of the way we use it every day. It translates human-readable website names like www.example.com into machine-readable numerical IP addresses like 203.0.113.51. Before your browser can fetch the content of any website, this domain name translation has to happen.
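The translation step is easy to see for yourself. This minimal sketch uses Python's standard library resolver; the hostnames are only examples and the result depends on your system's DNS configuration:

```python
import socket

def resolve(hostname):
    """Translate a human-readable hostname into an IPv4 address,
    exactly the step a browser performs before fetching any page."""
    return socket.gethostbyname(hostname)

# "localhost" resolves from the local hosts file, typically to 127.0.0.1;
# a call like resolve("www.example.com") would query your DNS server.
print(resolve("localhost"))
```

Every page load starts with a lookup like this, which is what makes DNS such an effective point for filtering.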
This is what helped us to build the latest filter feature we’re happy to introduce: the brand new DNS based web filter.
Wait… what happened to the content filter?
Our current content filter is based on transparent HTTP proxy filtering. It works smoothly with unencrypted web traffic. Over the last couple of years, however, as you've very probably noticed, more and more websites have switched to the secure version of the Hypertext Transfer Protocol, HTTPS. This means that all communication between your browser and the website is encrypted via the TLS protocol. An HTTP proxy is therefore unable to identify unwanted content in encrypted connections – and this is where our new DNS filter comes to the rescue.
In addition to the general trend towards a better-secured web, this shift has been reinforced by the Let's Encrypt initiative, which provides SSL/TLS certificates free of charge. These certificates are needed for the encryption part of HTTPS. Let's Encrypt has already issued millions of certificates; as of January 2018, about 48 million of them are online.
This chart shows the percentage of page loads that are encrypted among Firefox users. The blue line for all users currently stands at about 70%. A sound share of these high figures is, of course, due to major platforms like Google and Facebook being fully encrypted.
Great, can I use the DNS filter right now?
The new feature is available on every up-to-date IACBOX with version 17.2, provided the system has at least 4 GB of memory. This is needed because the filter lists are loaded entirely into local memory to guarantee the best possible performance.
How does it work?
Instead of filtering the HTTP connection and its content, the filter blocks DNS queries for domains on the blacklist. DNS filtering always works, no matter which protocol is used to send or receive data afterwards – including encrypted HTTPS.
A normal unfiltered request cycle looks like this:
When a requested domain matches our blacklist, our DNS filter answers that the domain does not exist (NXDOMAIN):
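The core decision can be sketched in a few lines. This is a simplified illustration, not our implementation: the blacklist entries are hypothetical, and a real filter would pass allowed queries on to an upstream resolver instead of returning a placeholder string:

```python
# Hypothetical blacklist entries, for illustration only.
BLACKLIST = {"ads.example.net", "tracker.example.org"}

def filter_query(domain):
    """Answer a DNS query: blacklisted domains (and their subdomains)
    get NXDOMAIN, everything else would be resolved normally."""
    labels = domain.lower().rstrip(".").split(".")
    # Check the queried name and every parent zone against the blacklist,
    # so "video.ads.example.net" is caught by the entry "ads.example.net".
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLACKLIST:
            return "NXDOMAIN"
    return "RESOLVE"

print(filter_query("video.ads.example.net"))  # blocked: NXDOMAIN
print(filter_query("www.example.com"))        # allowed: RESOLVE
```

Because the browser never learns an IP address for a blocked domain, no connection is attempted at all – regardless of whether the site uses HTTP or HTTPS.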
End-user communication
When an end user tries to access a blacklisted page, the browser simply reports that the page is invalid or unknown – in the case of HTTPS, no explanation page can be shown at all. This is certainly a lack of transparency and also makes issues hard to debug. To overcome this disadvantage, we provide an option to list the last 5-30 blocked domains on the login page, reachable via http://logon.now. This enables anyone to check which domains were blocked. How many entries are shown and how long these domains remain stored can be defined in the WebAdmin settings. This feature is available for standard landing pages as well as for the Login-API plugin.
Anything else?
A good filter also needs a whitelist, allowing access to pages that were accidentally blacklisted or that have to remain reachable for any other reason the operator might have.
Our blacklists are updated weekly on every system with valid software maintenance.
Are you an entrepreneur looking for a solution that meets these requirements? Or are you a service provider advising companies on wireless or wired network solutions?
Let's start a project together