RequestFactory

The RequestFactory shares the same API as the test client. However, instead of behaving like a browser, the RequestFactory provides a way to generate a request instance that can be used as the first argument to any view. This means you can test a view function the same way as you would test any other function – as a black box, with exactly known inputs, testing for specific outputs.

The API for the RequestFactory is a slightly restricted subset of the test client API:

- It only has access to the HTTP methods get(), post(), put(), delete(), head() and options().
- These methods accept all the same arguments as the test client, with the exception of follow. Since this is just a factory for producing requests, it's up to you to handle the response.
- It does not support middleware. Session and authentication attributes must be supplied by the test itself if required for the view to function properly.

The following is a simple unit test using the request factory:
from django.contrib.auth.models import User
from django.test import TestCase
from django.test.client import RequestFactory

class SimpleTest(TestCase):
    def setUp(self):
        # Every test needs access to the request factory.
        self.factory = RequestFactory()
        self.user = User.objects.create_user(
            username='jacob', email='jacob@…', password='top_secret')

    def test_details(self):
        # Create an instance of a GET request.
        request = self.factory.get('/customer/details')

        # Recall that middleware are not supported. You can simulate a
        # logged-in user by setting request.user manually.
        request.user = self.user

        # Test my_view() as if it were deployed at /customer/details
        response = my_view(request)
        self.assertEqual(response.status_code, 200)
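The same approach works for class-based views: the request produced by the factory is passed to the view callable returned by as_view(). The following is a rough sketch, not part of the example above; MyView is a hypothetical class-based view, and the test method would live in the SimpleTest class alongside test_details():

from django.http import HttpResponse
from django.views.generic import View

class MyView(View):
    def get(self, request):
        return HttpResponse('hello')

# Inside SimpleTest:
def test_details_class_based(self):
    request = self.factory.get('/customer/details')
    request.user = self.user
    # as_view() returns a plain view function that accepts the request instance.
    response = MyView.as_view()(request)
    self.assertEqual(response.status_code, 200)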
If you’re testing a multiple database configuration with master/slave replication, this strategy of creating test databases poses a problem. When the test databases are created, there won’t be any replication, and as a result, data created on the master won’t be seen on the slave.
To compensate for this, Django allows you to define that a database is a test mirror. Consider the following (simplified) example database configuration:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'myproject',
        'HOST': 'dbmaster',
        # ... plus some other settings
    },
    'slave': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'myproject',
        'HOST': 'dbslave',
        'TEST_MIRROR': 'default',
        # ... plus some other settings
    },
}
In this setup, we have two database servers: dbmaster, described by the database alias default, and dbslave, described by the alias slave. As you might expect, dbslave has been configured by the database administrator as a read slave of dbmaster, so in normal activity, any write to default will appear on slave.

If Django created two independent test databases, this would break any tests that expected replication to occur. However, the slave database has been configured as a test mirror (using the TEST_MIRROR setting), indicating that under testing, slave should be treated as a mirror of default.

When the test environment is configured, a test version of slave will not be created. Instead the connection to slave will be redirected to point at default. As a result, writes to default will appear on slave – but because they are actually the same database, not because there is data replication between the two databases.
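With the mirror in place, a test can write through the default alias and read the same row back through slave, because both aliases refer to the same test database. A minimal sketch (the Author model is hypothetical):

from django.test import TestCase
from myapp.models import Author  # hypothetical model

class ReplicationTest(TestCase):
    multi_db = True  # consider all configured databases in this test

    def test_slave_sees_master_writes(self):
        Author.objects.create(name='Jane')  # written via the 'default' alias
        self.assertTrue(
            Author.objects.using('slave').filter(name='Jane').exists())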
By default, Django will assume all databases depend on the default database and therefore always create the default database first. However, no guarantees are made on the creation order of any other databases in your test setup.

If your database configuration requires a specific creation order, you can specify the dependencies that exist using the TEST_DEPENDENCIES setting. Consider the following (simplified) example database configuration:
DATABASES = {
    'default': {
        # ... db settings
        'TEST_DEPENDENCIES': ['diamonds'],
    },
    'diamonds': {
        # ... db settings
        'TEST_DEPENDENCIES': [],
    },
    'clubs': {
        # ... db settings
        'TEST_DEPENDENCIES': ['diamonds'],
    },
    'spades': {
        # ... db settings
        'TEST_DEPENDENCIES': ['diamonds', 'hearts'],
    },
    'hearts': {
        # ... db settings
        'TEST_DEPENDENCIES': ['diamonds', 'clubs'],
    },
}
Under this configuration, the diamonds database will be created first, as it is the only database alias without dependencies. The default and clubs aliases will be created next (although the order of creation of this pair is not guaranteed); then hearts; and finally spades.

If there are any circular dependencies in the TEST_DEPENDENCIES definition, an ImproperlyConfigured exception will be raised.
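For instance, a configuration like the following (simplified and purely illustrative) would be rejected, because default and diamonds each list the other as a dependency:

DATABASES = {
    'default': {
        # ... db settings
        'TEST_DEPENDENCIES': ['diamonds'],
    },
    'diamonds': {
        # ... db settings
        'TEST_DEPENDENCIES': ['default'],
    },
}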
If you want to run tests outside of ./manage.py test
– for example,
from a shell prompt – you will need to set up the test
environment first. Django provides a convenience method to do this:
>>> from django.test.utils import setup_test_environment
>>> setup_test_environment()
setup_test_environment()
puts several Django features
into modes that allow for repeatable testing, but does not create the test
databases; django.test.simple.DjangoTestSuiteRunner.setup_databases()
takes care of that.
The call to setup_test_environment() is made automatically as part of the setup of ./manage.py test. You only need to manually invoke this method if you're not running your tests via Django's test runner.
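For example, with the environment set up as above, you can exercise views with the test client from a normal Python shell. This is only a sketch: the URL and expected status code are placeholders, and since no test database is created, queries run against the database named in your settings:

>>> from django.test.client import Client
>>> client = Client()
>>> response = client.get('/')
>>> response.status_code
200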
Clearly, doctest
and unittest
are not the only Python testing
frameworks. While Django doesn’t provide explicit support for alternative
frameworks, it does provide a way to invoke tests constructed for an
alternative framework as if they were normal Django tests.
When you run ./manage.py test, Django looks at the TEST_RUNNER setting to determine what to do. By default, TEST_RUNNER points to 'django.test.simple.DjangoTestSuiteRunner'. This class defines the default Django testing behavior. This behavior involves:

1. Performing global pre-test setup.
2. Looking for unit tests and doctests in the models.py and tests.py files in each installed application.
3. Creating the test databases.
4. Running syncdb to install models and initial data into the test databases.
5. Running the unit tests and doctests that are found.
6. Performing global post-test teardown.
7. Destroying the test databases.

If you define your own test runner class and point TEST_RUNNER at
at
that class, Django will execute your test runner whenever you run
./manage.py test
. In this way, it is possible to use any test framework
that can be executed from Python code, or to modify the Django test execution
process to satisfy whatever testing requirements you may have.
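For example, pointing Django at a custom runner is a one-line change in settings.py (the dotted path below is hypothetical):

TEST_RUNNER = 'myproject.test_runner.CustomTestSuiteRunner'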
A test runner is a class defining a run_tests()
method. Django ships
with a DjangoTestSuiteRunner
class that defines the default Django
testing behavior. This class defines the run_tests()
entry point,
plus a selection of other methods that are used by run_tests() to set up, execute and tear down the test suite.
class DjangoTestSuiteRunner(verbosity=1, interactive=True, failfast=True, **kwargs)

verbosity determines the amount of notification and debug information that will be printed to the console; 0 is no output, 1 is normal output, and 2 is verbose output.
If interactive
is True
, the test suite has permission to ask the
user for instructions when the test suite is executed. An example of this
behavior would be asking for permission to delete an existing test
database. If interactive
is False
, the test suite must be able to
run without any manual intervention.
If failfast
is True
, the test suite will stop running after the
first test failure is detected.
Django will, from time to time, extend the capabilities of
the test runner by adding new arguments. The **kwargs
declaration
allows for this expansion. If you subclass DjangoTestSuiteRunner
or
write your own test runner, ensure that you accept and handle the **kwargs parameter.
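As a rough sketch (class and module names are hypothetical), a subclass that forwards **kwargs so that future arguments are still accepted might look like this:

from django.test.simple import DjangoTestSuiteRunner

class CustomTestSuiteRunner(DjangoTestSuiteRunner):
    def __init__(self, verbosity=1, interactive=True, failfast=True, **kwargs):
        # Pass any extra keyword arguments through to the parent class.
        super(CustomTestSuiteRunner, self).__init__(
            verbosity=verbosity, interactive=interactive,
            failfast=failfast, **kwargs)

    def setup_test_environment(self, **kwargs):
        super(CustomTestSuiteRunner, self).setup_test_environment(**kwargs)
        # ... project-specific setup could go here ...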
Your test runner may also define additional command-line options.
If you add an option_list
attribute to a subclassed test runner,
those options will be added to the list of command-line options that
the test
command can use.
DjangoTestSuiteRunner.option_list

This is the tuple of optparse options which will be fed into the management command's OptionParser for parsing arguments. See the documentation for Python's optparse module for more details.
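For example, a subclass might extend option_list like this. This is only a sketch; the --exclude option is made up for illustration:

from optparse import make_option
from django.test.simple import DjangoTestSuiteRunner

class CustomTestSuiteRunner(DjangoTestSuiteRunner):
    option_list = (
        make_option('--exclude', action='append', dest='exclude', default=[],
                    help='Hypothetical option: test labels to skip.'),
    )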
DjangoTestSuiteRunner.run_tests(test_labels, extra_tests=None, **kwargs)

Run the test suite.

test_labels is a list of strings describing the tests to be run. A test label can take one of three forms:

- app.TestCase.test_method – Run a single test method in a test case.
- app.TestCase – Run all the test methods in a test case.
- app – Search for and run all tests in the named application.

If test_labels has a value of None, the test runner should search for tests in all the applications in INSTALLED_APPS.

extra_tests is a list of extra TestCase instances to add to the suite that is executed by the test runner. These extra tests are run in addition to those discovered in the modules listed in test_labels.

This method should return the number of tests that failed.
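For instance, the three label forms correspond to calls like the following (a minimal sketch; myapp and the test names are placeholders):

from django.test.simple import DjangoTestSuiteRunner

runner = DjangoTestSuiteRunner(verbosity=1, interactive=False, failfast=False)
failures = runner.run_tests(['myapp'])                             # every test in the application
# failures = runner.run_tests(['myapp.MyTestCase'])                # one test case
# failures = runner.run_tests(['myapp.MyTestCase.test_something']) # one test method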
DjangoTestSuiteRunner.setup_test_environment(**kwargs)

Sets up the test environment by calling setup_test_environment() and setting DEBUG to False.
DjangoTestSuiteRunner.build_suite(test_labels, extra_tests=None, **kwargs)

Constructs a test suite that matches the test labels provided.

test_labels is a list of strings describing the tests to be run. A test label can take one of three forms:

- app.TestCase.test_method – Run a single test method in a test case.
- app.TestCase – Run all the test methods in a test case.
- app – Search for and run all tests in the named application.

If test_labels has a value of None, the test runner should search for tests in all the applications in INSTALLED_APPS.

extra_tests is a list of extra TestCase instances to add to the suite that is executed by the test runner. These extra tests are run in addition to those discovered in the modules listed in test_labels.

Returns a TestSuite instance ready to be run.
DjangoTestSuiteRunner.setup_databases(**kwargs)

Creates the test databases.

Returns a data structure that provides enough detail to undo the changes that have been made. This data will be provided to the teardown_databases() function at the conclusion of testing.
DjangoTestSuiteRunner.run_suite(suite, **kwargs)

Runs the test suite.

Returns the result produced by running the test suite.
DjangoTestSuiteRunner.teardown_databases(old_config, **kwargs)

Destroys the test databases, restoring pre-test conditions.

old_config is a data structure defining the changes in the database configuration that need to be reversed. It is the return value of the setup_databases() method.
DjangoTestSuiteRunner.teardown_test_environment(**kwargs)

Restores the pre-test environment.
DjangoTestSuiteRunner.suite_result(suite, result, **kwargs)

Computes and returns a return code based on a test suite, and the result from that test suite.
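Putting these methods together, the default run_tests() implementation is roughly equivalent to the following sketch (simplified; the real method lives in django.test.simple):

def run_tests(self, test_labels, extra_tests=None, **kwargs):
    self.setup_test_environment()
    suite = self.build_suite(test_labels, extra_tests)
    old_config = self.setup_databases()
    result = self.run_suite(suite)
    self.teardown_databases(old_config)
    self.teardown_test_environment()
    return self.suite_result(suite, result)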
To assist in the creation of your own test runner, Django provides a number of
utility methods in the django.test.utils
module.
setup_test_environment()

Performs any global pre-test setup, such as installing the instrumentation of the template rendering system and setting up the dummy email outbox.
teardown_test_environment()

Performs any global post-test teardown, such as removing the black magic hooks into the template system and restoring normal email services.
The creation module of the database backend also provides some utilities that can be useful during testing.
create_test_db([verbosity=1, autoclobber=False])

Creates a new test database and runs syncdb against it.

verbosity has the same behavior as in run_tests().

autoclobber describes the behavior that will occur if a database with the same name as the test database is discovered:

- If autoclobber is False, the user will be asked to approve destroying the existing database. sys.exit is called if the user does not approve.
- If autoclobber is True, the database will be destroyed without consulting the user.

Returns the name of the test database that it created.

create_test_db() has the side effect of modifying the value of NAME in DATABASES to match the name of the test database.
destroy_test_db(old_database_name[, verbosity=1])

Destroys the database whose name is the value of NAME in DATABASES, and sets NAME to the value of old_database_name.

The verbosity argument has the same behavior as for DjangoTestSuiteRunner.
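As a rough illustration of how these utilities fit together outside the normal test runner (a sketch, not a complete recipe):

from django.db import connection
from django.test.utils import setup_test_environment, teardown_test_environment

setup_test_environment()
# Remember the real database name; destroy_test_db() restores it afterwards.
old_name = connection.settings_dict['NAME']
connection.creation.create_test_db(verbosity=1, autoclobber=False)

# ... exercise code against the freshly created test database here ...

connection.creation.destroy_test_db(old_name, verbosity=1)
teardown_test_environment()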
Code coverage describes how much source code has been tested. It shows which parts of your code are being exercised by tests and which are not. It’s an important part of testing applications, so it’s strongly recommended to check the coverage of your tests.
Django can be easily integrated with coverage.py, a tool for measuring code
coverage of Python programs. First, install coverage.py. Next, run the
following from your project folder containing manage.py
:
coverage run --source='.' manage.py test myapp
This runs your tests and collects coverage data of the executed files in your project. You can see a report of this data by typing the following command:
coverage report
Note that some Django code was executed while running tests, but it is not
listed here because of the source
flag passed to the previous command.
For more options like annotated HTML listings detailing missed lines, see the coverage.py docs.