Introduction

The automated test suite is essentially glue between:

  • OpenCV and dogtail, used for simulating user interaction,

  • libvirt, used for running a specific build of Tails in a virtual machine,

  • cucumber, for defining features and scenarios that test them using the other components, and

  • chutney, for orchestrating a local Tor network.

Its goal is to automate the development and release testing processes with a Continuous Integration server.

Setup and usage

See setup and usage.

For particularities of automated tests run on our Jenkins infrastructure, see automated tests in Jenkins.

Features

With this tool, it is possible to:

  • Create, modify and destroy virtual machines that can run Tails from a variety of media (primarily DVD and USB drives).

  • Create different kinds of virtual storage (IDE, USB, DVD...), and either cold or hot plug/unplug them into/from the virtual machine.

  • Modify the virtual machine's hardware, e.g. its amount of RAM, its processor features (PAE), etc.

  • Simulate user interaction with the system under testing and check its state.

  • Run arbitrary shell commands inside the virtual machine.

  • Take screenshots from the display of the virtual machine, at particular events, like when a scenario fails. Complete test sessions can also be captured into a video.

  • Capture and analyze the network traffic of the system under testing.

source vs. product cucumber features

Fundamentally speaking there are two types of tests:

  • Tests that make sure that the Tails sources behave correctly at build time (example: features/build.feature); these are aptly tagged @source.

  • Tests that make sure that the product built from the Tails sources, i.e. a Tails ISO image, behaves correctly at run time; these are tagged @product.

The requirements of these are quite different; for instance, a @product test must have access to a Tails ISO image to test, whereas a @source test doesn't. Therefore their environments are set up separately, using our custom BeforeFeature hook with the respective tag.
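
To make this concrete, here is a rough sketch of how tag-dispatched setup with such a hook could look (BeforeFeature is the custom hook provided via the ExtraHooks formatter mentioned below; the helper names are hypothetical):

BeforeFeature('@product') do |feature|
  # Hypothetical helper: @product features need a Tails ISO image and the
  # libvirt-based virtual machine infrastructure.
  setup_product_environment
end

BeforeFeature('@source') do |feature|
  # Hypothetical helper: @source features only need the Tails sources.
  setup_source_environment
end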

Implementation

Running cucumber in the right environment

The run_test_suite script is a wrapper around cucumber that sets up the correct environment:

  • It uses Xvfb so that the DISPLAY environment variable points to an unused X display, with a screen size and color depth matching what the images used for matching are optimized for (that is, 1024x768 / 24-bit).

  • It runs unclutter on $DISPLAY to prevent the mouse pointer from masking GUI elements looked for through image matching.

  • It passes --format ExtraHooks::Pretty to cucumber calls, to get access to our custom {After,Before}Feature hooks.

Remote shell

This started out as a hack, and while it has evolved it largely remains so. A proper, reliable, established replacement would be welcome, but seems unlikely given the requirements.

Requirements on the host (the remote shell client):

  • can execute a command, blocking until it completes on the server, and get back the return code, stdout and stderr separately.

  • can spawn a command without blocking (except perhaps for an ACK that the server has started the command).

  • usable by an unprivileged user.

Requirements on the guest (the remote shell server):

  • has to work without a network connection.

  • should interfere minimally with the surrounding system (e.g. no firewall exceptions; actually we don't want any network traffic at all from it, but that more or less follows from the previous requirement anyway).

  • must start before the Welcome Screen. Since that's the first point of user interaction in a Tails system (if we ignore the boot menu), it seems like a good place to be able to assume that the remote shell is running.

Using the remote shell with non-test suite Tails VMs

The remote shell could potentially be useful for debugging a Tails VM that was not started as part of the automated test suite. To do that, add a virtio channel device to the libvirt config of the VM:

<channel type='unix'>
  <source mode='bind' path='/path/to/remote-shell.socket'/>
  <target type='virtio' name='org.tails.remote_shell.0'/>
</channel>

When the VM has started you can connect to the socket and execute commands via the remote shell. Here is an example of how to do that in Python:

import json
import socket

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect("/path/to/remote-shell.socket")

id = 1
_type = "sh_call"
env = {}
user = "root"
cmd = "echo 'huhu?'"
data = [id, _type, user, env, cmd]
client.sendall(json.dumps(data).encode() + b"\n")

answer = json.loads(client.recv(1024).decode())
print(answer)
# e.g. [1, 'success', 0, 'huhu?\n', '']
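
Judging from this example, a request is a newline-terminated JSON array of the form [id, type, user, env, cmd], and the response echoes the id followed by a status, the command's return code, its stdout and its stderr; check the remote shell's sources for the authoritative protocol.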

Chutney

Chutney is an orchestration tool for quickly setting up a complete Tor network: directory authorities, entry/middle/exit nodes, bridges, etc. In Tails' test suite we use it to locally set up a Tor network that is under our control and used by the system under testing, which gives us greatly improved performance and robustness compared to using the real Tor network.

The Chutney sources live in submodules/chutney/, and the Tor network we set up is defined in features/chutney/test-network.

Expected, reference images

Expected images, aka. reference images, live in features/images.

When adding or updating one such image, run the compress-image.sh script on it before committing your changes.

The art of writing new product test cases

Writing new @source features, scenarios or steps should be pretty straightforward for anyone with experience with cucumber, but the same can't be said about @product tests, so below we give some pointers for the latter.

Resources

In addition to the libvirt and Cucumber documentation linked to in the introduction:

  • ruby-libvirt API

  • The sniff application, which is installed in Tails, is quite useful for navigating the GUI element hierarchy. However, accerciser (not installed) is even better, thanks to its ability to highlight elements. Another useful tool is ipython (not installed) with its TAB-completion.

Writing new features and scenarios

First, one should have a good look at features/step_definitions/common_steps.rb in particular, and at the other features, to get a general idea of what is already possible. There is a good chance that implementations already exist for many of the steps necessary to reach the desired state right before the stuff specific to the intended feature begins.

Common steps example

To learn some basic step dependencies, as well as concepts and features of the automated test suite used in features and scenarios, let's walk through a few typical steps, in order:

Given a computer

This is how each scenario (or background) should start. This step destroys any residual VM from a previous scenario, and sets up a completely fresh one, with all defaults. The defaults are defined in features/domains/default.xml, but some highlights are:

  • One virtual x86_64 CPU with one core
  • A reasonable amount of RAM
  • ACPI, APIC and PAE enabled
  • UTC hardware clock
  • A DVD drive loaded with the Tails ISO image
  • No other storage plugged
  • USB 2.0 controller
  • Ethernet interface, plugged into a network bridged with the host

However, most of the time we do not set up a computer from scratch using this step, but restore from a snapshot (also called a checkpoint) using one of the Given Tails has booted ... steps generated in features/step_definitions/snapshots.rb. An example of such a step, and indeed one of the most common ones, is:

Given Tails has booted from DVD and logged in and the network is connected

These steps will actually run multiple steps, saving one or more snapshots along the way. See the next section for details about this.

Returning to what we'd do after the Given a computer step: there are a number of steps that reconfigure the computer...

And I create a 10 GiB disk named "some_disk"

The identifier (some_disk) is later used if we want to plug the disk or otherwise act on it. Note that all media created this way are backed by qcow2 images, which only grow as data is written to them. Such media can be destroyed automatically after the feature ends by using "I create a temporary…" instead.

This step does not necessarily have to be run this early, but it does if we want to plug it as a non-removable drive...

And I plug ide drive "some_disk"

Note that this is where we decide which type of drive some_disk is. If we pick a non-removable medium, as in this case, it has to be plugged before we start the virtual machine. Removable media, such as USB drives, can be plugged into a live system at any later point.
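
Under the hood, these two steps drive libvirt through the test suite's storage and VM helpers. A rough sketch of what they do (the method names follow the helpers, but verify the exact signatures in features/support/helpers/):

$vm.storage.create_new_disk('some_disk', size: 10, unit: 'GiB')
$vm.plug_drive('some_disk', 'ide')  # cold-plug: must happen before boot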

And the network is plugged

This plugs the network interface into the host-bridged network (which is the default). There's also the "unplugged" version, which is generally preferred for features that don't rely on the network (mostly so we won't hammer the Tor network unnecessarily). These steps can also be performed on an already running system under test.

And I start the computer

This is where we actually boot the virtual machine.

And the computer boots Tails

This step:

  • verifies that we see the boot menu
  • adds any boot options added via I set Tails to boot with options ...
  • makes sure that the Welcome Screen starts
  • makes sure that the remote shell is up and running

Note that the "I set sudo password ..." step has to be run before the other option steps of the Welcome Screen as it relies on keyboard navigation.

And I set sudo password "asdf"

Beyond the obvious, after this step all steps requiring the administrative password have access to it.

And I log in to a new session
And the Tails desktop is ready
And I have a network connection
And Tor is ready

All these should be pretty obvious. It could be mentioned that the last two steps, like many others, depend on the remote shell working.

And all notifications have disappeared

Notifications can block GUI elements that we're looking for later with OpenCV's image matching, so it's important that they are gone in essentially all tests of GUI applications. If we have a network connection, so that time syncing starts and shows its notifications, this step should be run after the previous step; otherwise it only depends on the GNOME has started step.

And available upgrades have been checked

Since Tor is working, the check for upgrades will be run. We have to wait for it to complete, because otherwise it will generally break things later if we use snapshots.

Snapshots

To speed up the test suite and get consistent results when setting up state, we make heavy use of virtual machine snapshots. We encourage contributors to read the snapshot definitions in features/step_definitions/snapshots.rb carefully. We generate steps from these definitions, and the snapshots are created lazily on first use (and then reused by subsequent scenarios, across features). Some things to make note of (a sketch of a snapshot definition follows this list):

  • Snapshots may have parents, which means that they start by running the parent's step, generating its snapshot, and so on recursively up to a "root" snapshot without a parent.

  • Snapshots can be made "temporary". If the snapshot definition's :temporary field is set to true, the snapshot will be cleared after the feature it was created in finishes. This is a way to reduce the disk space needed for running the test suite, and its use is encouraged for features that set up a very particular state that isn't reused in any other feature.

  • Debugging snapshot creation is made a lot easier by enabling the debug formatter, which will print the steps as they are run.
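
For orientation, a snapshot definition could look roughly like this (an illustrative sketch using the fields discussed above; the checkpoint names are made up, and the real definitions live in features/step_definitions/snapshots.rb):

'foo-configured' => {
  description:       'Tails has booted and foo is configured',
  parent_checkpoint: 'tails-has-booted', # that snapshot is generated first
  temporary:         true,               # cleared when the feature finishes
  steps: [
    'I configure foo',
  ],
},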

Scenarios involving the Internet

Each such new scenario should sniff network traffic, and check that all Internet traffic has only flowed through Tor. See the apt feature for a simple example of how to do it.

The interactive debugging REPL

Running the test suite with the --interactive-debugging option can be really useful when investigating problems in the test suite (or developing new steps/scenarios, which usually means encountering a lot of problems), since it lets you invoke a debugging REPL in the context where a failure occurred, so you can rapidly iterate on code and find solutions.

The debugging REPL is not only accessible when failures occur: you can also manually set a "breakpoint", either directly in the code using pause(), or by adding the step "I pause" at a suitable place in a scenario.
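
For example, a breakpoint placed directly in Ruby code might look like this sketch (pause is the helper mentioned above; the step and helper names are made up):

When /^I do something tricky$/ do
  pause                # drops into the debugging REPL when this step runs
  do_the_tricky_thing  # hypothetical helper under development
end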

While copy-pasting code between your editor and the debugging REPL works quite well, we can do better: you can edit the code in the usual files in your editor, and then hot-reload it in the debugging REPL using something like load 'features/step_definitions/common_steps.rb' (see the "Limitations of code reloading" section below for what is and isn't possible with this).

Interactive debugging example

Note: if the code you need to modify is located directly in a step definition, please see the "Limitations of code reloading" section below!

Let's say we have a scenario:

Scenario: test
  Given some context
  When I do foo
  Then foo was successful
  And maybe more stuff

where the core of the I do foo step is in a top-level function foo() that it calls, and let's say there is a bug in foo() that makes it raise some exception. So we insert a breakpoint before I do foo:

Scenario: test
  Given some context
  And I pause
  When I do foo
  Then foo was successful
  And maybe more stuff

(Instead of injecting a breakpoint we could also rely on --interactive-debugging, but this way we can fix the bug and let the full scenario run with the fix when we quit the REPL.)

We start the test suite for the problematic scenario, and it stops at the breakpoint we injected:

Paused

Return/q: Continue; d: Debugging REPL

We press d to enter the debugging REPL, which is a normal pry session (very similar to irb if you are familiar with it):

[1] pry(#<Object>)>

We test foo() and see that it indeed fails just as when Cucumber runs the I do foo step:

[1] pry(#<Object>)> foo()
RuntimeError: something went wrong
from features/step_definitions/test.rb:36:in `foo'

Then I start fixing foo() in features/step_definitions/test.rb using my editor, and if I want to try my modification I reload the code and call foo() again:

[2] pry(#<Object>)> load 'features/step_definitions/test.rb'
=> true
[3] pry(#<Object>)> foo()
RuntimeError: something else went wrong this time
from features/step_definitions/test.rb:36:in `foo'

We're not quite there yet! Also, something to keep in mind is that if foo() changes some state, that state might have to be manually reverted in the REPL before calling the modified foo() again (so the easiest situation is if foo() is idempotent).

So we hack, hack, hack and eventually:

[...]
[9] pry(#<Object>)> load 'features/step_definitions/test.rb'
=> true
[10] pry(#<Object>)> foo()
foo() didn't fail!
=> nil

Woohoo! Are we done? Not necessarily: let's say that while our modifications indeed made foo() stop raising an exception, there is still an issue that won't be detected until we run the foo was successful step. So we might want to incorporate that step into our test loop (and again, if foo was successful changes state, it might have to be manually reverted afterwards):

[11] pry(#<Object>)> step 'foo was successful'
RuntimeError: foo() did something wrong earlier
from features/step_definitions/test.rb:40:in `/^foo was successful$/'

Again we hack, hack, hack and eventually:

[...]
[21] pry(#<Object>)> load 'features/step_definitions/test.rb'
=> true
[22] pry(#<Object>)> foo()
foo() did exactly what it should!
=> nil
[23] pry(#<Object>)> step 'foo was successful'
foo() did exactly what it should earlier!
=> nil

Now we can just quit the REPL (exit or Ctrl+D), and I do foo will call the redefined foo(), so the scenario should pass (unless there is an issue in the maybe more stuff step, in which case we rinse and repeat)!

Limitations of code reloading

One issue when reloading code with load (as in the example above) arises if there are step definitions in that file. Cucumber doesn't replace existing step definitions even when a step is defined again with the exact same pattern; it just happily keeps all of them, and the next time the step is run, all of them match and Cucumber fails because they are ambiguous.

Cucumber has the --guess argument, which makes it pick the "best match" among several ambiguous steps, but it doesn't work for completely identical patterns as in our case. So what we do is paste this monkeypatch into the debugging REPL, which simply makes Cucumber pick the most recent definition (i.e. the most recent load):

module Cucumber
  module StepMatchSearch
    class AttemptToGuessAmbiguousMatch
      def best_matches(step_name, step_matches)
        [step_matches.last]
      end
    end
  end
end

So, with the above patch, and --guess passed as an argument to cucumber, i.e. something like:

./run_test_suite --iso tails-amd64-6.0.iso -- --guess features/test.feature

then we can load a redefined step and run it manually in the debugging REPL:

[24] pry(#<Object>)> load 'features/step_definitions/test.rb'
=> true
[25] pry(#<Object>)> step 'foo was successful'
now this step does something different
=> nil

However, due to how Cucumber is implemented, only manually calling step will invoke the redefined step; if you quit the debugging REPL and let Cucumber continue running the scenario, it will still run the original code for any redefined steps. Note that the difference compared to the example above is where the modified code lives: in the step definition itself (then Cucumber won't run the modified code) vs. in a function called by the step (then Cucumber will run the modified code, as in the example).

This might be fixable by monkeypatching Cucumber further, but until that is figured out you are limited to calling the relevant steps manually using step in the debugging REPL, which isn't too bad.

Writing new steps

The tools and some guidelines for their usage

In essence, the tools we have for interacting with the Tails instance are:

  • the VM helper class,
  • the Screen helper class,
  • dogtail, and
  • the remote shell.

It should be fairly obvious when to use the VM helper class (stuff relating to the virtual hardware), but there is some overlap between image matching, dogtail and the remote shell.

The Screen class and dogtail (a short sketch follows this list):

  • They should be used in all instances where we want to simulate user interaction with the Tails system. For instance, when we start an application we usually want to click our way through the GNOME applications menu to stay true to what an actual user would do.

  • They come in handy when we want to verify some state pertaining to the GUI.

  • In general we prefer dogtail, since it is more precise thanks to its direct access to GUI elements and their state, and since reference images carry quite a heavy maintenance burden. However, dogtail cannot be used before GNOME has started, so we rely on image matching for anything before or after that. Also, for some applications it's simply hard to "navigate" to the right element in the GUI's hierarchy of elements, due to ambiguity and/or lack of labels.
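
To give a flavor of the two approaches, here is a hedged sketch (the method names follow the suite's Screen and Dogtail helpers, but verify them against features/support/helpers/; the image and application names are for illustration):

# Image matching: wait until a reference image shows up on the display.
@screen.wait('TailsGreeter.png', 30)

# dogtail: address a GUI element directly via the accessibility tree.
Dogtail::Application.new('gnome-terminal-server')
  .child('Terminal', roleName: 'terminal')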

The remote shell (a sketch follows this list):

  • Useful for testing non-GUI state.

  • Useful for reaching some state required before we test stuff depending on user interaction (which should use the Screen class or dogtail!).

  • It is acceptable (but not encouraged) to use the remote shell for simulating user terminal interaction when we need to analyze a command's output. Ideally we would use OCR, but we lack support for it at the moment, and it might be too unreliable in practice.
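
For instance, checking non-GUI state via the remote shell could look like this sketch (assuming the VM helper's execute methods; the service name is just an illustration):

# Run a command in the guest and inspect the result.
result = $vm.execute('systemctl is-active tor@default.service')
assert(result.success?, "Tor is not active: #{result.stdout}")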

Limitations and issues

These things are good to know when developing new features, scenarios or steps.

The remote shell

Different behaviour compared to "real" shell

The remote shell has a few surprises in store:

  • pgrep -f detects itself, which can have potentially serious consequences if not dealt with (a common workaround is sketched below).

  • Supplementary groups are not set, so e.g. the groups command may not do what you expect, but groups $USER does.

These are the currently known oddities of the remote shell, but there may be more, so beware, and make sure to verify that any commands sent through it do what you want them to do.
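
As for the pgrep -f quirk, a classic workaround is to write the pattern so that it cannot match pgrep's own command line, e.g. by bracketing one character (a sketch; the process name is made up):

# '[f]oobar' matches "foobar" in other processes' command lines, but not
# the literal string "[f]oobar" in pgrep's own command line.
$vm.execute("pgrep -f '[f]oobar'")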

Remote shell instability

Although very rare, the remote shell can get into a state where it stops responding, resulting in the test suite waiting for a response forever.

Interacting with the system before other services start

Sometimes it is necessary to interact with the system before a certain service gets started. The remote shell's systemd unit is of type notify, which means the remote shell can tell systemd when it is ready. So we just need to make sure that the remote shell is ordered to start before the other service.

How to achieve that?

In the remote shell's systemd service file, we list the service we want to hook into in Before=. Then you need to add autotest_wait_for_remote_shell to the boot command line; in Cucumber you can set @wait_for_remote_shell = true before the step that boots the computer. After that step you can interact with the remote shell. When you are done modifying the state, trigger the READY signal (in Cucumber, by executing RemoteShell::SignalReady.new($vm)). The system will then continue booting.
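
Put together, the Cucumber side could look roughly like this sketch (based on the names above; the preparation command is made up):

@wait_for_remote_shell = true
step 'I start the computer'
step 'the computer boots Tails'    # boot now waits for our READY signal
$vm.execute_successfully('touch /run/made-up-preparation')
RemoteShell::SignalReady.new($vm)  # let the boot continue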

As an example, you can look at how the Tails detects disk read failures step hooks into the system before gdm.service is started.

Plugging SATA drives

When creating a disk (at least one backed by a raw image) via the storage helper and then plugging it into a Tails instance as a SATA drive, GNOME will report inside Tails that the drive is failing, and indeed several SMART tests fail. For now, plug hard disks as IDE only (or USB, of course).

Passing state between steps in snapshots

When creating snapshots, anything stored in a variable in any of those steps will only be available to subsequent steps in that scenario, not to other scenarios restoring from that snapshot. The exception is global variables, which are an acceptable workaround, but one that is very hard to get right and to follow. Please try to avoid this until #5847 is solved.
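
To illustrate the pitfall (hypothetical step and variable names):

Given /^I note the secret$/ do
  @secret = '1234'  # lost in scenarios that restore from the snapshot
  $secret = '1234'  # survives restoring within the same test suite run,
                    # but makes the data flow hard to follow
end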

Disabling scenarios that are known to fail

When a scenario is broken for an extended period of time (e.g. when rebasing Tails on a new Debian version, which usually breaks tons of stuff) the current method of temporarily disabling affected tests is simply to remove them in Tails' Git, making sure that we won't forget about them.

Note that such removals should be isolated, one per commit, so that they are easy to revert once whatever was broken is fixed. Hence, such a commit should do one of the following:

  • remove a single scenario if something unique to that scenario is broken;

  • remove multiple scenarios possibly spanning multiple features, if they're broken for the same reason, e.g. a step they all use is broken;

  • remove a complete feature, if the complete feature is broken for the same reason.

Each such commit must have an issue created, referencing the commit to revert.

Cucumber tags

We use the following Cucumber tags on scenarios and features:

  • @fragile: this test lacks robustness and sometimes yields false positive failures.

  • @doc: the copy of our website included in the system-under-test affects the outcome of this test.

  • @check_tor_leaks: this instructs the test suite machinery to check that all outgoing connections initiated during this test go through Tor.

  • @slow: this test takes a long time to run (rule of thumb: more than 10 minutes on lizard).

  • @not_release_blocker: we don't abort the release process if this test fails; instead, we create a GitLab issue to fix it in the next release.

Our Jenkins setup uses these tags to decide which tests to run: automated tests in Jenkins.

Tips and tricks

Iterate quickly with --late-patch

Let's say that the test suite is reporting an error. You want to make a fix and see if it makes the test suite happy again. Usually, you would have to rebuild the image, which is slow.

You can do better than this using --late-patch:

  • create a file called, for example, late-patch-fix-foobar.txt;

  • in it, every line contains two filenames: a source one (in the Tails source tree) and a destination one (inside the Tails system);

  • run the test suite using --late-patch late-patch-fix-foobar.txt.

Here is an example of a late-patch file that is well suited for iterating on Tor Connection:

# python files
config/chroot_local-includes/usr/lib/python3/dist-packages/tca/application.py   /usr/lib/python3/dist-packages/tca/application.py
config/chroot_local-includes/usr/lib/python3/dist-packages/tca/ui/main_window.py    /usr/lib/python3/dist-packages/tca/ui/main_window.py
# UI
config/chroot_local-includes/usr/share/tails/tca/main.ui.in /usr/share/tails/tca/main.ui
config/chroot_local-includes/usr/share/tails/tca/time-dialog.ui.in  /usr/share/tails/tca/time-dialog.ui
config/chroot_local-includes/usr/share/tails/tca/tca.css    /usr/share/tails/tca/tca.css
# helpers
config/chroot_local-includes/usr/local/lib/tca-portal   /usr/local/lib/tca-portal
config/chroot_local-includes/usr/local/lib/tails-get-network-time   /usr/local/lib/tails-get-network-time