Testing in Django

I’ve been working with a team of developers on the BOS2 project, a new version of our successful Bristol Online Surveys (BOS) service. This completely new codebase is written from the ground up using a variety of tools and technologies, many of which are new to the developers on the project. For myself, I’d had no experience of Cassandra before, and aside from some light-touch Plone work in the past, this is the first real Python (Django) project I’ve worked on.

While I’m enjoying the experience, coming from a Java background I’ve grown to rely on an excellent testing ecosystem, and this is one of the areas where I feel Django falls short. It’s not that you can’t do what you want with the Django testing framework, but (as this blog post shows) it makes you work harder to get there, and it’s easy to fall into traps as you build your tests. I don’t feel we’ve achieved nirvana with our test framework, but we’re getting there.

The development team’s first few scrums were focused on the survey rendering component. We needed to test that the different question types display correctly, that survey pages and navigation work as expected, and that the system accepts and stores valid data from a respondent while rejecting invalid data.

The first thing we set up was a test structure with three different types of tests.

  1. We have a unit_tests directory which attempts to test various low-level methods, such as numbering questions correctly or generating validation rules. We fell into the trap of believing the documentation and based our unit tests on Django’s TestCase. Although TestCase does a bunch of neat things, it is really not suitable for unit testing (more on this later).
  2. Second is an integration_tests directory which contains a bunch of scripts to test the rendering of surveys. These scripts all inherit from a simple HTTP client (extended from Django’s test Client) which handles common survey actions such as filling in forms or navigating between the pages of a survey (sketched just after this list).
  3. Finally, an acceptance_tests directory contains Selenium scripts which are used to test browser-specific interactions (such as JavaScript form validation or the behaviour of the browser’s back button). These acceptance tests also cover some of the user stories that form our sprints.
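To make that second layer concrete, here’s a minimal sketch of what such a shared client wrapper might look like. SurveyClient, its helper methods and the URLs are hypothetical names for illustration, not our actual code:

```python
# A hedged sketch of the shared integration-test client; names and URLs
# below are placeholders, not the real BOS2 code.
from django.test import Client, TestCase


class SurveyClient(Client):
    """Django test Client with helpers for common survey actions."""

    def fill_page(self, url, answers):
        # POST a dict of {field_name: value} answers to a survey page
        return self.post(url, data=answers, follow=True)

    def next_page(self, url):
        # Submit a page with only the navigation button pressed
        return self.post(url, data={'next': 'Next'}, follow=True)


class RenderingTests(TestCase):
    def test_text_question_is_rendered(self):
        client = SurveyClient()
        response = client.get('/survey/example/page/1/')  # hypothetical URL
        self.assertContains(response, '<input type="text"')
```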

We quickly discovered the excellent nose (and django-nose), which greatly improves the standard Django test runner and allows developers to run individual tests rather than having to run the entire test suite.
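For anyone wiring this up, the setup is small; a minimal sketch following the django-nose docs (the test names in the comment are hypothetical):

```python
# settings.py — minimal django-nose wiring
INSTALLED_APPS = (
    # ... project apps ...
    'django_nose',
)
TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'

# Individual tests can then be run with nose's module:Class.method syntax:
#   python manage.py test unit_tests.test_numbering:NumberingTests.test_sections
```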

Next, our continuous integration server (Jenkins) was set up to run the complete test suite hourly whenever code changes have been pushed to the central repo. Acceptance tests are run using Selenium WebDriver against Google Chrome. Code coverage reports are generated (again thanks to nose) for each successful build, and any errors trigger an email to the developers.
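The coverage reports come from nose’s bundled coverage plugin; a sketch of the relevant settings, where the package name is a placeholder for our apps:

```python
# settings.py — nose arguments so CI builds emit coverage reports
# ('surveys' is a placeholder package name, not our real layout)
NOSE_ARGS = [
    '--with-coverage',
    '--cover-package=surveys',
    '--cover-xml',  # machine-readable output for Jenkins coverage plugins
]
```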

After we got all this working, I wanted to find a way to unit test our JavaScript code. The answer came in the form of QUnit. To tie this into our testing environment we used Django-QUnit (which I needed to extend) and a simple Selenium test which iterates over all the subdirectory links on the QUnit webpage, checking that all the tests pass.
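In outline, that Selenium test looks something like the following. The /qunit/ URL and the wait strategy are assumptions, and #qunit-testresult is the summary element QUnit fills in once a suite finishes:

```python
# A hedged sketch of the QUnit-checking Selenium test; URL and timings
# are assumptions rather than our exact script.
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait

BASE_URL = 'http://localhost:8000'  # placeholder for the test server

browser = webdriver.Chrome()
try:
    browser.get(BASE_URL + '/qunit/')
    suite_urls = [link.get_attribute('href')
                  for link in browser.find_elements_by_tag_name('a')]
    for url in suite_urls:
        browser.get(url)
        # Wait for QUnit to finish and write its results summary
        result = WebDriverWait(browser, 10).until(
            lambda driver: driver.find_element_by_id('qunit-testresult'))
        failed = result.find_element_by_class_name('failed').text
        assert failed == '0', 'QUnit failures at %s' % url
finally:
    browser.quit()
```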

This setup has worked reasonably well for us, although there are still problems. Using nose to run our tests is really useful as it helps speed up the development cycle (we only need to run the specific tests for the code we’re working on, saving a full run for just before we commit/push), but the darker side is that it makes integration tests more attractive, so we probably spend less time on unit tests than we should (it’s becoming a real problem now that running all our tests takes northwards of 30 minutes). Secondly, every time we run our unit tests, Django fires up an in-memory database for us. This is handy as it means our test code can easily generate dummy ORM objects, but it makes running individual tests really slow and, more importantly, defeats the purpose of unit tests: they should run in complete isolation from everything, including the database. Relying on the ORM leads to the trap of writing larger unit tests than necessary.

The solution to these problems is first to politely ask Django not to start up a database when running our unit tests, and second to bring in the Mock framework, which we can use to replace our ORM layer with stubs. With these two amendments in place we can start writing quicker and more effective unit tests.
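A minimal sketch of both amendments, assuming django-nose’s runner and the standalone mock package; the module, function and model names are placeholders, not our real code:

```python
# test_runner.py — a runner that skips database creation, so unit test
# runs never touch a database (subclassing the nose runner we already use)
from django_nose import NoseTestSuiteRunner

class NoDatabaseTestRunner(NoseTestSuiteRunner):
    def setup_databases(self, **kwargs):
        pass  # politely decline Django's offer of a test database

    def teardown_databases(self, old_config, **kwargs):
        pass


# A unit test that stubs the ORM with mock instead of hitting the database.
# 'surveys.numbering' and its contents are hypothetical names.
import unittest
from mock import Mock, patch

from surveys import numbering

class NumberingTests(unittest.TestCase):
    @patch('surveys.numbering.Question')
    def test_questions_numbered_sequentially(self, Question):
        # Replace the ORM query with stub objects; no database required
        Question.objects.filter.return_value = [Mock(), Mock(), Mock()]
        self.assertEqual(numbering.number_questions(page=1), [1, 2, 3])
```

Note the test above deliberately extends plain unittest.TestCase rather than Django’s TestCase, which is the point made earlier: with the ORM stubbed out, there’s nothing for Django’s machinery to do.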

I hope this post might prove useful to other teams starting out with Django. With my Java mindset it’s been a more frustrating journey than I’d have liked, but I now feel we have a much better handle on our testing infrastructure. Our journey is by no means over, as we have plenty more avenues to explore.

I thought I’d end with a few videos that I’ve found useful when learning how to improve the testing experience in Django: