We promise our users that everything they do on the Internet from Tails goes through the Tor network.

Here is how we interpret and implement this promise.


DNS

Tor does not support UDP, so we cannot simply redirect DNS queries to Tor's transparent proxy.

Most DNS leaks are avoided by having the system resolver query the Tor network using the DNSPort configured in torrc.
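The torrc side of this setup can be sketched as follows; the port number is illustrative, not necessarily the one Tails actually uses:

```
# Illustrative torrc fragment: make Tor answer DNS queries on a local
# UDP port, so the system resolver can be pointed at it.
DNSPort 127.0.0.1:5353
```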

There is a concern that an application could attempt to do its own DNS resolution without using the system resolver; outgoing UDP datagrams are therefore blocked to prevent such leaks. Another solution would be to use the Linux network filter to forward outgoing UDP datagrams to the local DNS proxy.
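The forwarding alternative mentioned above could be sketched, in the ferm syntax that Tails' firewall uses, roughly as follows (the DNS proxy port is an assumed example value):

```
# Illustrative ferm fragment: instead of dropping outgoing DNS
# datagrams, redirect them to a local DNS proxy listening on 5353.
domain ip table nat chain OUTPUT {
    proto udp dport 53 REDIRECT to-ports 5353;
}
```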

Tails also forbids DNS queries to RFC1918 addresses: those might allow the system to learn the local network's public IP address.

resolv.conf is configured to point to the Tor DNS resolver, and NetworkManager and dhclient are configured not to manage resolv.conf at all:
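Concretely, this amounts to something like the following; the resolv.conf line reflects what the text above describes, while the NetworkManager setting is one common way to achieve the "hands off" behavior, not a quote from Tails' configuration:

```
# /etc/resolv.conf: send all system resolver queries to the local
# Tor DNS resolver.
nameserver 127.0.0.1

# /etc/NetworkManager/NetworkManager.conf: one way to keep
# NetworkManager from rewriting resolv.conf (illustrative).
[main]
dns=none
```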

Some applications need to be able to do clearnet DNS resolutions, so we save the DNS configuration obtained by NetworkManager:

The following is the complete list of the applications allowed to use the clearnet DNS configuration:

Network filter

One serious security issue is that we don't know what software will attempt to contact the network, nor whether its proxy settings are correctly set up to use Tor's SOCKS proxies. This is solved by blocking all outbound Internet traffic except Tor's, and explicitly configuring all applications to use one of these proxies.

The default is to block all outbound network traffic; let us now document all exceptions to this rule, along with some clarifications.

Tor user

Tor itself obviously has to connect to the Internet without going through the Tor network. This is achieved by special-casing connections originating from the debian-tor Unix user.
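In netfilter terms this is an owner match; a ferm-style sketch (not the exact rule from Tails' ferm.conf):

```
# Illustrative ferm fragment: let Tor itself (running as the
# debian-tor user) reach the Internet directly.
domain ip table filter chain OUTPUT {
    mod owner uid-owner debian-tor ACCEPT;
}
```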

Unsafe Browser and the clearnet user

The clearnet user used to run the Unsafe Browser is granted full network access (but no loopback access) in order to deal with captive portals.

Local Area Network (LAN)

Tails' short description promises that outgoing connections to the Internet are sent through Tor. Accordingly, only Internet traffic is affected: traffic to the local LAN (RFC1918 addresses) is wide open, as is, obviously, loopback traffic.

As explained above, DNS queries to LAN resolvers are nevertheless forbidden, to protect against certain attacks.
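A ferm-style sketch of the LAN policy described above (illustrative, not the exact rules from Tails' ferm.conf):

```
# Illustrative ferm fragment: allow loopback and LAN traffic,
# except DNS queries to LAN resolvers.
domain ip table filter chain OUTPUT {
    outerface lo ACCEPT;
    daddr (10.0.0.0/8 172.16.0.0/12 192.168.0.0/16) {
        proto (tcp udp) dport 53 REJECT;
        ACCEPT;
    }
}
```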

Local services whitelist

The Tails firewall uses a whitelist that grants access to each local service only to the users that actually need it. This blocks potential leaks due to misconfigurations or bugs, as well as deanonymization attacks by compromised processes. For specifics, see the firewall configuration, which is well commented: config/chroot_local-includes/etc/ferm/ferm.conf

Automapped addresses

AutomapHostsOnResolve is enabled in Tor's configuration, and a firewall rule transparently redirects connections targeted at the virtual mapped address space to Tor's transparent proxy port.

Only the amnesia user is granted access to the Tor transparent proxy port, so in practice only this user can use this hostname-to-address mapping facility.
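The two pieces can be sketched as follows; the virtual address range and the TransPort number are example values, not necessarily those used by Tails:

```
# Illustrative torrc fragment: map resolved hostnames to addresses
# in a reserved virtual range, and open a transparent proxy port.
AutomapHostsOnResolve 1
VirtualAddrNetworkIPv4 10.192.0.0/10
TransPort 127.0.0.1:9040

# Illustrative ferm fragment: send connections to the virtual range
# to Tor's transparent proxy port, for the amnesia user only.
domain ip table nat chain OUTPUT {
    daddr 10.192.0.0/10 proto tcp
        mod owner uid-owner amnesia REDIRECT to-ports 9040;
}
```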


IPv6

Tor does not support IPv6 yet, so IPv6 communication is blocked.
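A ferm-style sketch of such a blanket block (illustrative):

```
# Illustrative ferm fragment: drop all IPv6 traffic.
domain ip6 table filter {
    chain (INPUT FORWARD OUTPUT) policy DROP;
}
```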

UDP, ICMP and other non-TCP protocols

Tor only supports TCP. Non-TCP traffic to the Internet, such as UDP datagrams and ICMP packets, is dropped.

An unfortunate consequence of fully blocking ICMP is that Path MTU Discovery is broken. We work around this problem by enabling Packetization Layer Path MTU Discovery. For details, see:
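Enabling PLPMTUD boils down to a single sysctl; the exact value Tails sets is not specified here, so treat this as a sketch:

```
# Illustrative sysctl fragment (e.g. in /etc/sysctl.d/):
# 0 = disabled, 1 = probe when an ICMP black hole is detected,
# 2 = always probe (RFC 4821 Packetization Layer PMTU Discovery).
net.ipv4.tcp_mtu_probing = 1
```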

RELATED packets

As a general rule, the Tails firewall does not accept RELATED packets: accepting them enables quite a lot of kernel code that we don't need, so we prefer to reduce the attack surface a bit by blocking them. See the "[Tails-dev] Reducing attack surface of kernel and tightening firewall/sysctls" thread for details.

However, RELATED ICMP packets to the loopback interface are let through, to smooth the user experience whenever a program's local network connection is blocked: the TCP stack generates ICMP packets (e.g. with TYPE=3 CODE=3, i.e. destination port unreachable) to let the program know what is going on immediately, instead of making it wait for a timeout.
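A ferm-style sketch of this exception (illustrative, not the exact rule from Tails' ferm.conf):

```
# Illustrative ferm fragment: accept RELATED ICMP on loopback so
# local programs get an immediate "port unreachable" error instead
# of waiting for a timeout.
domain ip table filter chain INPUT {
    interface lo proto icmp mod state state RELATED ACCEPT;
}
```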

netfilter's connection tracking helpers

We disable netfilter's automatic conntrack helper assignment (nf_conntrack_helper), which would otherwise enable a lot of code in the Linux kernel that parses incoming network packets; this seems potentially unsafe compared to the little gain it brings in Tails' use case:

config/chroot_local-includes/etc/modprobe.d/no-conntrack-helper.conf
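The content of that file can be sketched as follows; the module parameter name is real, but whether Tails sets it exactly this way is an assumption:

```
# Illustrative modprobe.d fragment: disable automatic conntrack
# helper assignment when the nf_conntrack module is loaded.
options nf_conntrack nf_conntrack_helper=0
```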

Other non-Tor traffic

When the user allows Tails to connect to Tor automatically, they consent to Tails initiating Internet activity without going through Tor, in order to help them connect to Tor.

This can be useful, for example, to set the clock correctly or detect captive portals.


Here are the criteria we are taking into account when we design and implement such non-Tor Internet activity:

  • Do our best even when we can't be perfect: take into account less advanced adversaries too

    We should do our best to make the use of Tor less obvious to less advanced adversaries, for example home or work surveillance (such as an abusive partner or parents), even when we can't protect against more advanced ones (such as the ISP).

  • Try to blend in: make the "anonymity set" as large as possible

    For example, if the best we can do is to look somewhat like a Fedora/Ubuntu user who has tor installed, it's already useful against some adversaries.

  • Keep it simple to avoid maintenance churn

    Whenever we try to emulate the behavior of another piece of software than Tails in order to blend in, we should emulate software that we can understand easily and that does not change its behavior too often.

  • Take into account connection patterns

    Two or more connections in a row to the same organization or hosting service may be more identifiable than a single connection, both by said organization and by the user's ISP. So we should try to get as much information as we can from every such connection.

  • Avoid services in a position to aggregate lots of data

    Omnipresent Internet actors, such as the NSA, Google, AWS, or Cloudflare, can correlate Tails' non-Tor activity with other data they aggregate. We should make it non-trivial for such adversaries to tell that a connection is coming from a Tails user, so as to make such correlation more difficult.

  • Use reliable services

    We're making non-Tor connections in order to improve UX. If these connections rely on unreliable services, then we either have to risk confusing users, or spend more time on software design, development, and maintenance to deal with the unreliability. Both outcomes are problematic, so we should try to connect to reliable services.

    In passing, note that falling back to another service when the first attempted one is unreliable can itself be used as an identifiable pattern by both passive and active adversaries.

  • Focus on users who chose to configure Tor automatically

    Keep in mind that this whole reasoning only makes sense in the context of a user who chose to configure Tor automatically, and allowed Tails to initiate non-torified Internet connections to facilitate this. The situations in which the user instead chose to hide Tor are out of scope here.

    To illustrate with our personas:

    • Kim definitely has to hide Tor from home or school ⇒ out of scope
    • Riou and Cris might choose autoconfig or hide Tor depending on the context:
      • Cris should autoconnect when connecting from a domestic airport.
      • Riou might hide Tor when connecting from home.

    And then, when reasoning about such non-torified connections, we should focus on the cases when we recommend autoconfig: public networks or popular Tor usage for circumvention.

  • Take into account subjective perception by users

    Some options may trigger bad feelings in users, even when a security analysis shows they are technically the better choice to protect those users. How safe users feel when they use Tails, or consider doing so, does matter.

Case studies

Here we evaluate a few candidate services that we might want to connect to, to see how they perform against the above guidelines.

Spoiler alert: no solution perfectly follows all of our guidelines, so the decision will primarily depend on which threat models we want to optimize for, that is, on our product strategy.

Time synchronization


  • Google captive portal detection: only fails the "aggregation" guideline

    ⇒ perfect except when Google is a relevant adversary in the user's threat model

  • Fedora's NetworkManager captive portal detection

    Run by Red Hat, some of it at AWS: only fails the "aggregation" guideline to some degree (Amazon can get the same data, though it does not see the HTTP implementation details, and is probably less motivated to exploit it than Google).

    ⇒ On the one hand it's somewhat better than Google wrt. aggregation. On the other hand, the anonymity set is smaller when the ISP or home/work surveillance are relevant adversaries, since the DNS request is Fedora-specific.

  • Debian's NetworkManager captive portal detection (network-manager-config-connectivity-debian): only fails the "blend in" guideline: has very few users (0.5% of Debian NetworkManager users according to popcon)

    ⇒ Bad if the ISP or home/work surveillance are relevant adversaries. Better than Riseup on all counts.

Dismissed options

We determined that each of these candidate options is strictly worse than another one:

  • Get the time from Riseup via an HTTPS request to https://riseup.net/canary (HTP)

    • fails the "connection pattern" guideline: short-lived connection, not fetching related web resources ⇒ we don't blend into any existing anonymity set, be it from the ISP's perspective or from Riseup's
    • reliability: good but not spotless; would perhaps not be a blocker if it were the only issue

    ⇒ Very bad if the ISP or home/work surveillance are relevant adversaries.

    Dismissed: worse than Debian's NetworkManager captive portal detection on all counts

  • Ubuntu's NetworkManager captive portal detection: connectivity-check.ubuntu.com points to 3 Google IP addresses, so it leaks more info to the ISP, home/work surveillance, and Google than using Google's captive portal detection page directly would

    Dismissed: worse than Google on all counts

  • Firefox' captive portal detection

    • harder to track (an internal implementation detail of Firefox, rather than a public API'ish setup like NM's) → fails "Keep it simple". But at the moment it's just a document containing the string "success"
    • detectportal.firefox.com → AWS and Google IP addresses → fails the "aggregation" guideline
    • Firefox uses NSS so emulating it is not a curl/wget call away, but probably not too hard either.

    Dismissed: worse than using either Google directly (DNS requests yield a smaller anonymity set) or Fedora@AWS (harder to implement and maintain)

  • Ubuntu/Fedora NTP pool

    • cleartext, no authentication ⇒ the ISP can replay an old, bad Tor consensus
    • only gives us the time ⇒ if we have to do another connection to detect captive portals anyway, we can get the time from that other connection (via HTP), and then there's no reason to do NTP on top. This could become a candidate again if we decided to implement time synchronization without captive portal detection.

Currently implemented

At the time of writing, no such background non-torified Internet connection is implemented in Tails.