How we prepare for effective Django testing

Super well-known companies such as Facebook, Instagram, Google, and Apple serve the globe with their services every second of every day. Behind software platforms like these is a rock-solid assurance process that ensures the quality they provide. They test their products thoroughly and frequently to protect their brand value and quality. It’s similar to the product guarantee you get when you buy something from Amazon or eBay.

Usually, testing is a nightmare for developers. It can haunt a developer and even make them feel like they’re hallucinating when a bug jumps out of a sidebar HTML Django template filter and suddenly surprises them with a 500 HTTP error on the next page. To make things worse, here is my favorite software engineering adage: the more bugs you fix, the more bugs you will find. (Believe it or not, this holds up in practice.) Even with Django’s awesome testing framework, testing is still difficult to organize and to get quality results from. Here are some of our Django testing practices.

App-level testing (Duh!)

At Tivix, we are opinionated about creating test Python files at the Django app level. Just like the models.py, views.py, and admin.py files in each Django app folder, we create a tests.py file containing a set of Python classes that exercise the app’s features. When we run this command:

./manage.py test

All Python files whose names match the ‘test*’ pattern, anywhere on Python’s visible path, will be discovered and executed. Therefore, by having a tests.py in each app, Django runs every app’s test cases. If we’re using Django REST Framework or any other supplementary framework that provides its own testing features, we usually create a separate Django app folder, or a sub-folder in each Django app folder, to organize that code, including its test files. This keeps test cases organized and gives us clean visibility for code coverage later on.
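For example, a project following this layout might look like the following sketch (the app and file names are illustrative, not from a real project):

```
myproject/
    library/
        models.py
        views.py
        admin.py
        tests.py          # picked up by ./manage.py test
    api/                  # e.g. DRF-specific code kept separate
        tests/
            __init__.py
            test_serializers.py
            test_views.py
```

Keeping test files next to the code they test makes it obvious which app a failing test belongs to.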

Ye shall have a base

Moreover, we believe in having our own base test class, built on Django’s base test class, from which app-level test classes inherit. This gives us the following benefits:

  • Helper and utility methods that are used throughout testing
  • Test data generation
  • Python inheritance, so each test case can add the specific test data it needs

If a feature needs a complex assertion or has meticulous edge cases, it is cleaner and more efficient to write testing helpers that execute the right test logic; with a base test class, every child class can leverage them. For testing, Django temporarily creates a separate database instance to prevent production and test data from mixing; when testing is done, that test database is destroyed. When executing test cases, we need to generate some fake model objects.

Most of the time, the Django features we create are coupled with multiple Django models. Django’s TestCase class provides a setUp method that runs custom logic before each test case executes. I prefer generating fake data at the Python level rather than using Django fixtures, as I can leverage the test utilities and make the fake data more dynamic and flexible. Therefore, we override setUp in our base test class to create the fake model data that is most commonly and heavily used. For testing specific models and related logic, a test class subclassing our base test class can generate additional data after calling the super class’s setUp method.

class LibraryTestCase(BaseTestCase):
    def setUp(self):
        # Run the base class's common data generation first.
        super().setUp()
        # Then add data specific to this app's test cases.
        self.userA = self._create_user("User", "User", "user@best.com")
        self._create_profile(user=self.userA)
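A base class along these lines might look roughly like the sketch below. It is shown with the stdlib unittest so it runs anywhere; in a Django project you would subclass django.test.TestCase and create real model objects instead. All helper and field names here are illustrative assumptions, not taken from our actual codebase.

```python
import unittest


class BaseTestCase(unittest.TestCase):
    """Shared parent for all app-level test classes (sketch)."""

    def setUp(self):
        # Fake data that most test cases share.
        self.user = self._create_user("User", "User", "user@best.com")

    def _create_user(self, first, last, email):
        # With Django this would be e.g. User.objects.create(...);
        # here we return a plain dict to keep the sketch self-contained.
        return {"first": first, "last": last, "email": email}

    def assertHasEmail(self, user):
        # A reusable assertion helper, so the check lives in one place.
        self.assertIn("@", user["email"])


class LibraryTestCase(BaseTestCase):
    def setUp(self):
        super().setUp()
        # Extra data this app's tests need, on top of the shared data.
        self.profile = {"user": self.user, "books_borrowed": 0}

    def test_user_has_valid_email(self):
        self.assertHasEmail(self.user)
        self.assertEqual(self.profile["books_borrowed"], 0)
```

The point of the pattern is that every child class gets the shared data and helpers for free, and only pays for the extra setup it actually needs.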

Think you wrote lots of test cases? Check your code coverage

Suppose you have authored a fancy new feature in Django: super-slick, fast, tight, intuitive, you name it. You have also written a nice set of thorough test cases to verify it, and, most importantly, all of them give you a green light every time you run them. My dear friend, you and I know that testing never ends. Even if you wrote a thousand test cases for this feature, it is not fully tested unless those test cases executed all of the code and dependencies the feature touches during testing; that is the true metric of how well your feature is tested.

We love the Python library called ‘coverage’: it measures how many lines of code are actually executed during your testing. Using coverage is easy. Install it via pip, then run this command:

coverage run manage.py test

The coverage command observes the entire run of the manage.py test command and records which lines of code were executed. After testing, you can run this command to get the code coverage results:

coverage report

The general consensus is that code coverage of 80% or higher indicates well-covered testing. Code coverage is useful both for finding new test cases to write and as an overall metric of your application’s quality assurance.
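Coverage numbers are easier to interpret when generated or boilerplate files are excluded from measurement. A hedged example .coveragerc (the paths are illustrative; source and omit under [run], and show_missing under [report], are standard coverage.py options):

```ini
[run]
source = .
omit =
    */migrations/*
    */settings.py
    manage.py

[report]
show_missing = True
```

With show_missing enabled, `coverage report` lists the exact line numbers your tests never reached, which is a handy to-do list for new test cases.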

Testing is how we give our customers a warranty.

You have now seen some of our Django testing practices. Software is not bound by physics or real-world rules; it is just a set of logical instructions performing actions. To suppress the chaos that users may encounter and get grumpy about, we need to provide a warranty for what we built. Sometimes testing can be more expensive than developing the feature itself; however, it is something we must offer. By keeping our tests organized and mindfully squeezing a bit more juice out of our tested features, we can establish an effective testing method and proudly say: this is how we do it.