Commit 2e2136d3 authored by Christine Lytwynec

Deprecated unit tests from rake to paver

parent e4d0b4d8
Showing 903 additions and 349 deletions
@@ -93,7 +93,7 @@ because the `capa` package handles problem XML.

You can run all of the unit-level tests using the command

-    rake test
+    paver test

This includes python, javascript, and documentation tests. It does not, however,
run any acceptance tests.
@@ -104,44 +104,54 @@ We use [nose](https://nose.readthedocs.org/en/latest/) through
the [django-nose plugin](https://pypi.python.org/pypi/django-nose)
to run the test suite.

-You can run all the python tests using `rake` commands. For example,
+You can run all the python tests using `paver` commands. For example,

-    rake test:python
+    paver test_python

runs all the tests. It also runs `collectstatic`, which prepares the static files used by the site (for example, compiling Coffeescript to Javascript).

You can re-run all failed python tests by running: (see note at end of section)

-    rake test:python[--failed]
+    paver test_python --failed

-You can also run the tests without `collectstatic`, which tends to be faster:
-
-    rake fasttest_lms
-
-or
-
-    rake fasttest_cms
-
-xmodule can be tested independently, with this:
-
-    rake test_common/lib/xmodule
-
-other module level tests include
-
-* `rake test_common/lib/capa`
-* `rake test_common/lib/calc`
+To test lms or cms python, use:
+
+    paver test_system -s lms
+
+or
+
+    paver test_system -s cms
+
+You can also run these tests without `collectstatic`, which is faster:
+
+    paver test_system -s lms --fasttest
+
+or
+
+    paver test_system -s cms --fasttest

To run a single django test class:

-    rake test_lms[lms/djangoapps/courseware/tests/tests.py:ActivateLoginTest]
+    paver test_system -t lms/djangoapps/courseware/tests/tests.py:ActivateLoginTest

To run a single django test:

-    rake test_lms[lms/djangoapps/courseware/tests/tests.py:ActivateLoginTest.test_activate_login]
+    paver test_system -t lms/djangoapps/courseware/tests/tests.py:ActivateLoginTest.test_activate_login

-To re-run all failing django tests from lms or cms: (see note at end of section)
-
-    rake test_lms[--failed]
+To re-run all failing django tests from lms or cms, use the `--failed` (`-f`) flag: (see note at end of section)
+
+    paver test_system -s lms --failed
+    paver test_system -s cms --failed
+
+There is also a `--fail_fast` (`-x`) option that will stop nosetests after the first failure.
+
+common/lib tests are tested with the `test_lib` task, which also accepts the `--failed` and `--fail_fast` options. For example:
+
+    paver test_lib -l common/lib/calc
+
+or
+
+    paver test_lib -l common/lib/xmodule --failed

To run a single nose test file:
@@ -174,7 +184,7 @@ To run tests for stub servers, for example for
[YouTube stub server](https://github.com/edx/edx-platform/blob/master/common/djangoapps/terrain/stubs/tests/test_youtube_stub.py),
you can do one of:

-    rake fasttest_cms[common/djangoapps/terrain/stubs/tests/test_youtube_stub.py]
+    paver test_system -s cms -t common/djangoapps/terrain/stubs/tests/test_youtube_stub.py

    python -m coverage run --rcfile=cms/.coveragerc `which ./manage.py` cms --settings test test --traceback common/djangoapps/terrain/stubs/tests/test_youtube_stub.py
@@ -183,28 +193,31 @@ Very handy: if you uncomment the `pdb=1` line in `setup.cfg`, it will drop you into pdb on failure.

Note: More on the `--failed` functionality

* In order to use this, you must run the tests first. If you haven't already run the tests, or if no tests failed in the previous run, then using the `--failed` switch will result in **all** of the tests being run. See more about this in the [nose documentation](http://nose.readthedocs.org/en/latest/plugins/testid.html#looping-over-failed-tests).
-* Note that `rake test:python` calls nosetests separately for cms and lms. This means that if tests failed only in lms on the previous run, then calling `rake test:python[--failed]` will run **all of the tests for cms** in addition to the previously failing lms tests. If you want it to run only the failing tests for lms or cms, use the `rake test_lms[--failed]` or `rake test_cms[--failed]` commands.
+* Note that `paver test_python` calls nosetests separately for cms and lms. This means that if tests failed only in lms on the previous run, then calling `paver test_python --failed` will run **all of the tests for cms** in addition to the previously failing lms tests. If you want it to run only the failing tests for lms or cms, use the `paver test_system -s lms --failed` or `paver test_system -s cms --failed` commands.

### Running Javascript Unit Tests

We use Jasmine to run JavaScript unit tests. To run all the JavaScript tests:

-    rake test:js
+    paver test_js

To run a specific set of JavaScript tests and print the results to the console:

-    rake test:js:run[lms]
-    rake test:js:run[cms]
-    rake test:js:run[xmodule]
-    rake test:js:run[common]
+    paver test_js_run -s lms
+    paver test_js_run -s cms
+    paver test_js_run -s cms-squire
+    paver test_js_run -s xmodule
+    paver test_js_run -s common

To run JavaScript tests in your default browser:

-    rake test:js:dev[lms]
-    rake test:js:dev[cms]
-    rake test:js:dev[xmodule]
-    rake test:js:dev[common]
+    paver test_js_dev -s lms
+    paver test_js_dev -s cms
+    paver test_js_dev -s cms-squire
+    paver test_js_dev -s xmodule
+    paver test_js_dev -s common

These rake commands call through to a custom test runner. For more info, see [js-test-tool](https://github.com/edx/js-test-tool).
@@ -334,11 +347,11 @@ To view test coverage:

1. Run the test suite:

-       rake test
+       paver test

2. Generate reports:

-       rake coverage
+       paver coverage

3. Reports are located in the `reports` folder. The command
   generates HTML and XML (Cobertura format) reports.
"""
paver commands
"""
__all__ = ["assets", "servers", "docs", "prereqs", "quality"]
-from . import assets, servers, docs, prereqs, quality
+from . import assets, servers, docs, prereqs, quality, tests, js_test
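Nothing registers these tasks by itself; paver discovers them through the `pavement.py` at the repository root, which is not part of this diff. A minimal sketch of that hook, assuming edx-platform uses paver's conventional entry point:

    # pavement.py (sketch) -- paver imports this file from the repo root.
    # The star import pulls in pavelib, whose `from . import ...` line above
    # imports each task module and thereby registers every @task in it.
    from pavelib import *  # noqa: F401,F403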
"""
Javascript test tasks
"""
import sys
from paver.easy import task, cmdopts, needs
from pavelib.utils.test.suites import JsTestSuite
from pavelib.utils.envs import Env
__test__ = False # do not collect
@task
@needs(
'pavelib.prereqs.install_node_prereqs',
'pavelib.utils.test.utils.clean_reports_dir',
)
@cmdopts([
("suite=", "s", "Test suite to run"),
("mode=", "m", "dev or run"),
("coverage", "c", "Run test under coverage"),
])
def test_js(options):
"""
Run the JavaScript tests
"""
mode = getattr(options, 'mode', 'run')
if mode == 'run':
suite = getattr(options, 'suite', 'all')
coverage = getattr(options, 'coverage', False)
elif mode == 'dev':
suite = getattr(options, 'suite', None)
coverage = False
else:
sys.stderr.write("Invalid mode. Please choose 'dev' or 'run'.")
return
if (suite != 'all') and (suite not in Env.JS_TEST_ID_KEYS):
sys.stderr.write(
"Unknown test suite. Please choose from ({suites})\n".format(
suites=", ".join(Env.JS_TEST_ID_KEYS)
)
)
return
test_suite = JsTestSuite(suite, mode=mode, with_coverage=coverage)
test_suite.run()
@task
@cmdopts([
("suite=", "s", "Test suite to run"),
("coverage", "c", "Run test under coverage"),
])
def test_js_run(options):
"""
Run the JavaScript tests and print results to the console
"""
setattr(options, 'mode', 'run')
test_js(options)
@task
@cmdopts([
("suite=", "s", "Test suite to run"),
])
def test_js_dev(options):
"""
    Run the JavaScript tests in your default browser
"""
setattr(options, 'mode', 'dev')
test_js(options)
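`test_js_run` and `test_js_dev` are thin wrappers that pin `mode` and delegate to `test_js`; the same delegation works from any other paver task via `call_task`, assuming paver's documented `options` support. A sketch with a hypothetical suite choice:

    from paver.easy import call_task

    # Roughly equivalent to `paver test_js --suite=lms --mode=dev`
    call_task('pavelib.js_test.test_js', options={'suite': 'lms', 'mode': 'dev'})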
@@ -6,22 +6,6 @@ import os
import errno
from .utils.envs import Env
-def get_or_make_dir(directory_path):
-    """
-    Ensure that a directory exists, and return its path
-    """
-    try:
-        os.makedirs(directory_path)
-    except OSError as err:
-        if err.errno != errno.EEXIST:
-            # If we get an error other than one that says
-            # that the file already exists
-            raise
-    return directory_path
@task
@needs('pavelib.prereqs.install_python_prereqs')
@cmdopts([
@@ -38,7 +22,7 @@ def run_pylint(options):
for system in systems:
# Directory to put the pylint report in.
# This makes the folder if it doesn't already exist.
-        report_dir = get_or_make_dir(os.path.join(Env.REPORT_DIR, system))
+        report_dir = (Env.REPORT_DIR / system).makedirs_p()
flags = '-E' if errors else ''
@@ -82,7 +66,7 @@ def run_pep8(options):
for system in systems:
# Directory to put the pep8 report in.
# This makes the folder if it doesn't already exist.
-        report_dir = get_or_make_dir(os.path.join(Env.REPORT_DIR, system))
+        report_dir = (Env.REPORT_DIR / system).makedirs_p()
sh('pep8 {system} | tee {report_dir}/pep8.report'.format(system=system, report_dir=report_dir))
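The `/` join and `makedirs_p()` above come from the path.py library (the same one the `Env` class uses; note the `path(...)` calls in its diff below), which is why `get_or_make_dir` could be deleted. A minimal standalone sketch with hypothetical directory names:

    from path import path  # path.py; newer releases spell the class Path

    # '/' joins path segments; makedirs_p() behaves like `mkdir -p`: it creates
    # the whole tree and silently succeeds if it already exists, so no EEXIST
    # handling is needed.
    report_dir = (path('reports') / 'lms').makedirs_p()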
@@ -96,7 +80,7 @@ def run_quality():
# Directory to put the diff reports in.
# This makes the folder if it doesn't already exist.
-    dquality_dir = get_or_make_dir(os.path.join(Env.REPORT_DIR, "diff_quality"))
+    dquality_dir = (Env.REPORT_DIR / "diff_quality").makedirs_p()
    # Generate diff-quality html report for pep8, and print to console
# If pep8 reports exist, use those
"""
Unit test tasks
"""
import os
import sys
from paver.easy import sh, task, cmdopts, needs
from pavelib.utils.test import suites
from pavelib.utils.envs import Env
try:
from pygments.console import colorize
except ImportError:
colorize = lambda color, text: text # pylint: disable-msg=invalid-name
__test__ = False # do not collect
@task
@needs(
'pavelib.prereqs.install_prereqs',
'pavelib.utils.test.utils.clean_reports_dir',
)
@cmdopts([
("system=", "s", "System to act on"),
("test_id=", "t", "Test id"),
("failed", "f", "Run only failed tests"),
("fail_fast", "x", "Run only failed tests"),
("fasttest", "a", "Run without collectstatic")
])
def test_system(options):
"""
Run tests on our djangoapps for lms and cms
"""
system = getattr(options, 'system', None)
test_id = getattr(options, 'test_id', None)
opts = {
'failed_only': getattr(options, 'failed', None),
'fail_fast': getattr(options, 'fail_fast', None),
'fasttest': getattr(options, 'fasttest', None),
}
if test_id:
if not system:
system = test_id.split('/')[0]
opts['test_id'] = test_id
if test_id or system:
system_tests = [suites.SystemTestSuite(system, **opts)]
else:
system_tests = []
for syst in ('cms', 'lms'):
system_tests.append(suites.SystemTestSuite(syst, **opts))
test_suite = suites.PythonTestSuite('python tests', subsuites=system_tests, **opts)
test_suite.run()
@task
@needs(
'pavelib.prereqs.install_prereqs',
'pavelib.utils.test.utils.clean_reports_dir',
)
@cmdopts([
("lib=", "l", "lib to test"),
("test_id=", "t", "Test id"),
("failed", "f", "Run only failed tests"),
("fail_fast", "x", "Run only failed tests"),
])
def test_lib(options):
"""
Run tests for common/lib/
"""
lib = getattr(options, 'lib', None)
test_id = getattr(options, 'test_id', lib)
opts = {
'failed_only': getattr(options, 'failed', None),
'fail_fast': getattr(options, 'fail_fast', None),
}
if test_id:
lib = '/'.join(test_id.split('/')[0:3])
opts['test_id'] = test_id
lib_tests = [suites.LibTestSuite(lib, **opts)]
else:
lib_tests = [suites.LibTestSuite(d, **opts) for d in Env.LIB_TEST_DIRS]
test_suite = suites.PythonTestSuite('python tests', subsuites=lib_tests, **opts)
test_suite.run()
@task
@needs(
'pavelib.prereqs.install_prereqs',
'pavelib.utils.test.utils.clean_reports_dir',
)
@cmdopts([
("failed", "f", "Run only failed tests"),
("fail_fast", "x", "Run only failed tests"),
])
def test_python(options):
"""
Run all python tests
"""
opts = {
'failed_only': getattr(options, 'failed', None),
'fail_fast': getattr(options, 'fail_fast', None),
}
python_suite = suites.PythonTestSuite('Python Tests', **opts)
python_suite.run()
@task
@needs(
'pavelib.prereqs.install_python_prereqs',
'pavelib.utils.test.utils.clean_reports_dir',
)
def test_i18n():
"""
Run all i18n tests
"""
i18n_suite = suites.I18nTestSuite('i18n')
i18n_suite.run()
@task
@needs(
'pavelib.prereqs.install_prereqs',
'pavelib.utils.test.utils.clean_reports_dir',
)
def test():
"""
Run all tests
"""
# Subsuites to be added to the main suite
python_suite = suites.PythonTestSuite('Python Tests')
i18n_suite = suites.I18nTestSuite('i18n')
js_suite = suites.JsTestSuite('JS Tests', mode='run', with_coverage=True)
# Main suite to be run
all_unittests_suite = suites.TestSuite('All Tests', subsuites=[i18n_suite, js_suite, python_suite])
all_unittests_suite.run()
@task
@needs('pavelib.prereqs.install_prereqs')
def coverage():
"""
Build the html, xml, and diff coverage reports
"""
for directory in Env.LIB_TEST_DIRS + ['cms', 'lms']:
report_dir = Env.REPORT_DIR / directory
if (report_dir / '.coverage').isfile():
# Generate the coverage.py HTML report
sh("coverage html --rcfile={dir}/.coveragerc".format(dir=directory))
# Generate the coverage.py XML report
sh("coverage xml -o {report_dir}/coverage.xml --rcfile={dir}/.coveragerc".format(
report_dir=report_dir,
dir=directory
))
# Find all coverage XML files (both Python and JavaScript)
xml_reports = []
for filepath in Env.REPORT_DIR.walk():
if filepath.basename() == 'coverage.xml':
xml_reports.append(filepath)
if not xml_reports:
err_msg = colorize(
'red',
"No coverage info found. Run `paver test` before running `paver coverage`.\n"
)
sys.stderr.write(err_msg)
else:
xml_report_str = ' '.join(xml_reports)
diff_html_path = os.path.join(Env.REPORT_DIR, 'diff_coverage_combined.html')
# Generate the diff coverage reports (HTML and console)
sh("diff-cover {xml_report_str} --html-report {diff_html_path}".format(
xml_report_str=xml_report_str, diff_html_path=diff_html_path))
sh("diff-cover {xml_report_str}".format(xml_report_str=xml_report_str))
print("\n")
@@ -20,6 +20,39 @@ class Env(object):
# Reports Directory
REPORT_DIR = REPO_ROOT / 'reports'
# Test Ids Directory
TEST_DIR = REPO_ROOT / ".testids"
# Files used to run each of the js test suites
# TODO: Store this as a dict. Order seems to matter for some
# reason. See issue TE-415.
JS_TEST_ID_FILES = [
REPO_ROOT / 'lms/static/js_test.yml',
REPO_ROOT / 'cms/static/js_test.yml',
REPO_ROOT / 'cms/static/js_test_squire.yml',
REPO_ROOT / 'common/lib/xmodule/xmodule/js/js_test.yml',
REPO_ROOT / 'common/static/js_test.yml',
]
JS_TEST_ID_KEYS = [
'lms',
'cms',
'cms-squire',
'xmodule',
'common',
]
JS_REPORT_DIR = REPORT_DIR / 'javascript'
# Directories used for common/lib/ tests
LIB_TEST_DIRS = []
for item in (REPO_ROOT / "common/lib").listdir():
if (REPO_ROOT / 'common/lib' / item).isdir():
LIB_TEST_DIRS.append(path("common/lib") / item.basename())
# Directory for i18n test reports
I18N_REPORT_DIR = REPORT_DIR / 'i18n'
# Service variant (lms, cms, etc.) configured with an environment variable
# We use this to determine which envs.json file to load.
SERVICE_VARIANT = os.environ.get('SERVICE_VARIANT', None)
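`JS_TEST_ID_KEYS` and `JS_TEST_ID_FILES` are parallel lists, so a suite key maps to its yml file by index; this is the lookup `JsTestSuite` performs below. A quick sketch:

    from pavelib.utils.envs import Env

    index = Env.JS_TEST_ID_KEYS.index('cms-squire')   # 2
    print(Env.JS_TEST_ID_FILES[index].basename())     # js_test_squire.yml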
"""
TestSuite class and subclasses
"""
from .suite import TestSuite
from .nose_suite import NoseTestSuite, SystemTestSuite, LibTestSuite
from .python_suite import PythonTestSuite
from .js_suite import JsTestSuite
from .i18n_suite import I18nTestSuite
"""
Classes used for defining and running i18n test suites
"""
from pavelib.utils.test.suites import TestSuite
from pavelib.utils.envs import Env
__test__ = False # do not collect
class I18nTestSuite(TestSuite):
"""
Run tests for the internationalization library
"""
def __init__(self, *args, **kwargs):
super(I18nTestSuite, self).__init__(*args, **kwargs)
self.report_dir = Env.I18N_REPORT_DIR
self.xunit_report = self.report_dir / 'nosetests.xml'
def __enter__(self):
super(I18nTestSuite, self).__enter__()
self.report_dir.makedirs_p()
@property
def cmd(self):
pythonpath_prefix = (
"PYTHONPATH={repo_root}/i18n:$PYTHONPATH".format(
repo_root=Env.REPO_ROOT
)
)
cmd = (
"{pythonpath_prefix} nosetests {repo_root}/i18n/tests "
"--with-xunit --xunit-file={xunit_report}".format(
pythonpath_prefix=pythonpath_prefix,
repo_root=Env.REPO_ROOT,
xunit_report=self.xunit_report,
)
)
return cmd
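For illustration, the command this property assembles can be inspected without running anything; the printed line should look roughly like the comment below (paths abbreviated):

    from pavelib.utils.test.suites import I18nTestSuite

    # PYTHONPATH=<repo>/i18n:$PYTHONPATH nosetests <repo>/i18n/tests \
    #     --with-xunit --xunit-file=<repo>/reports/i18n/nosetests.xml
    print(I18nTestSuite('i18n').cmd)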
"""
Javascript test tasks
"""
from pavelib import assets
from pavelib.utils.test import utils as test_utils
from pavelib.utils.test.suites import TestSuite
from pavelib.utils.envs import Env
__test__ = False # do not collect
class JsTestSuite(TestSuite):
"""
A class for running JavaScript tests.
"""
def __init__(self, *args, **kwargs):
super(JsTestSuite, self).__init__(*args, **kwargs)
self.run_under_coverage = kwargs.get('with_coverage', True)
self.mode = kwargs.get('mode', 'run')
try:
self.test_id = (Env.JS_TEST_ID_FILES[Env.JS_TEST_ID_KEYS.index(self.root)])
except ValueError:
self.test_id = ' '.join(Env.JS_TEST_ID_FILES)
self.root = self.root + ' javascript'
self.report_dir = Env.JS_REPORT_DIR
self.coverage_report = self.report_dir / 'coverage.xml'
self.xunit_report = self.report_dir / 'javascript_xunit.xml'
def __enter__(self):
super(JsTestSuite, self).__enter__()
self.report_dir.makedirs_p()
test_utils.clean_test_files()
if self.mode == 'run' and not self.run_under_coverage:
test_utils.clean_dir(self.report_dir)
assets.compile_coffeescript("`find lms cms common -type f -name \"*.coffee\"`")
@property
def cmd(self):
"""
Run the tests using js-test-tool. See js-test-tool docs for
description of different command line arguments.
"""
cmd = (
"js-test-tool {mode} {test_id} --use-firefox --timeout-sec "
"600 --xunit-report {xunit_report}".format(
mode=self.mode,
test_id=self.test_id,
xunit_report=self.xunit_report,
)
)
if self.run_under_coverage:
cmd += " --coverage-xml {report_dir}".format(
report_dir=self.coverage_report
)
return cmd
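Likewise, the suite behind `paver test_js_run -s lms` can be built directly; with coverage on, `cmd` resolves to a `js-test-tool run` line pointing at `lms/static/js_test.yml` (per the `Env` lists above) plus the xunit and coverage flags:

    from pavelib.utils.test.suites import JsTestSuite

    suite = JsTestSuite('lms', mode='run', with_coverage=True)
    print(suite.cmd)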
"""
Classes used for defining and running nose test suites
"""
import os
from paver.easy import call_task
from pavelib.utils.test import utils as test_utils
from pavelib.utils.test.suites import TestSuite
from pavelib.utils.envs import Env
__test__ = False # do not collect
class NoseTestSuite(TestSuite):
"""
A subclass of TestSuite with extra methods that are specific
to nose tests
"""
def __init__(self, *args, **kwargs):
super(NoseTestSuite, self).__init__(*args, **kwargs)
self.failed_only = kwargs.get('failed_only', False)
self.fail_fast = kwargs.get('fail_fast', False)
self.run_under_coverage = kwargs.get('with_coverage', True)
self.report_dir = Env.REPORT_DIR / self.root
self.test_id_dir = Env.TEST_DIR / self.root
self.test_ids = self.test_id_dir / 'noseids'
def __enter__(self):
super(NoseTestSuite, self).__enter__()
self.report_dir.makedirs_p()
self.test_id_dir.makedirs_p()
def __exit__(self, exc_type, exc_value, traceback):
"""
        Cleans mongo after the tests run.
"""
super(NoseTestSuite, self).__exit__(exc_type, exc_value, traceback)
test_utils.clean_mongo()
def _under_coverage_cmd(self, cmd):
"""
If self.run_under_coverage is True, it returns the arg 'cmd'
altered to be run under coverage. It returns the command
unaltered otherwise.
"""
if self.run_under_coverage:
cmd0, cmd_rest = cmd.split(" ", 1)
# We use "python -m coverage" so that the proper python
# will run the importable coverage rather than the
# coverage that OS path finds.
cmd = (
"python -m coverage run --rcfile={root}/.coveragerc "
"`which {cmd0}` {cmd_rest}".format(
root=self.root,
cmd0=cmd0,
cmd_rest=cmd_rest,
)
)
return cmd
@property
def test_options_flags(self):
"""
Takes the test options and returns the appropriate flags
for the command.
"""
opts = " "
# Handle "--failed" as a special case: we want to re-run only
# the tests that failed within our Django apps
# This sets the --failed flag for the nosetests command, so this
# functionality is the same as described in the nose documentation
if self.failed_only:
opts += "--failed"
# This makes it so we use nose's fail-fast feature in two cases.
# Case 1: --fail_fast is passed as an arg in the paver command
# Case 2: The environment variable TESTS_FAIL_FAST is set as True
        env_fail_fast_set = (
            'TESTS_FAIL_FAST' in os.environ and os.environ['TESTS_FAIL_FAST']
        )
if self.fail_fast or env_fail_fast_set:
opts += " --stop"
return opts
class SystemTestSuite(NoseTestSuite):
"""
TestSuite for lms and cms nosetests
"""
def __init__(self, *args, **kwargs):
super(SystemTestSuite, self).__init__(*args, **kwargs)
self.test_id = kwargs.get('test_id', self._default_test_id)
self.fasttest = kwargs.get('fasttest', False)
def __enter__(self):
super(SystemTestSuite, self).__enter__()
args = [self.root, '--settings=test']
        if self.fasttest:
            # TODO: Fix the tests so that collectstatic isn't needed ever.
            # Until then, --skip-collect makes fasttest skip it.
            args.append('--skip-collect')
call_task('pavelib.assets.update_assets', args=args)
@property
def cmd(self):
cmd = (
'./manage.py {system} test {test_id} {test_opts} '
'--traceback --settings=test'.format(
system=self.root,
test_id=self.test_id,
test_opts=self.test_options_flags,
)
)
return self._under_coverage_cmd(cmd)
@property
def _default_test_id(self):
"""
If no test id is provided, we need to limit the test runner
to the Djangoapps we want to test. Otherwise, it will
run tests on all installed packages. We do this by
using a default test id.
"""
# We need to use $DIR/*, rather than just $DIR so that
# django-nose will import them early in the test process,
# thereby making sure that we load any django models that are
# only defined in test files.
default_test_id = "{system}/djangoapps/* common/djangoapps/*".format(
system=self.root
)
if self.root in ('lms', 'cms'):
default_test_id += " {system}/lib/*".format(system=self.root)
if self.root == 'lms':
default_test_id += " {system}/tests.py".format(system=self.root)
return default_test_id
class LibTestSuite(NoseTestSuite):
"""
TestSuite for edx-platform/common/lib nosetests
"""
def __init__(self, *args, **kwargs):
super(LibTestSuite, self).__init__(*args, **kwargs)
self.test_id = kwargs.get('test_id', self.root)
self.xunit_report = self.report_dir / "nosetests.xml"
@property
def cmd(self):
cmd = (
"nosetests --id-file={test_ids} {test_id} {test_opts} "
"--with-xunit --xunit-file={xunit_report}".format(
test_ids=self.test_ids,
test_id=self.test_id,
test_opts=self.test_options_flags,
xunit_report=self.xunit_report,
)
)
return self._under_coverage_cmd(cmd)
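A sketch of the suite behind `paver test_system -s lms --fasttest --failed`, showing how the pieces above compose (the comment paraphrases the resulting command rather than quoting it exactly):

    from pavelib.utils.test.suites import SystemTestSuite

    suite = SystemTestSuite('lms', fasttest=True, failed_only=True)
    # cmd is "./manage.py lms test <default test ids> --failed --traceback
    # --settings=test", which _under_coverage_cmd wraps in
    # "python -m coverage run --rcfile=lms/.coveragerc ...".
    print(suite.cmd)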
"""
Classes used for defining and running python test suites
"""
from pavelib.utils.test import utils as test_utils
from pavelib.utils.test.suites import TestSuite, LibTestSuite, SystemTestSuite
from pavelib.utils.envs import Env
__test__ = False # do not collect
class PythonTestSuite(TestSuite):
"""
A subclass of TestSuite with extra setup for python tests
"""
def __init__(self, *args, **kwargs):
super(PythonTestSuite, self).__init__(*args, **kwargs)
self.fasttest = kwargs.get('fasttest', False)
self.failed_only = kwargs.get('failed_only', None)
self.fail_fast = kwargs.get('fail_fast', None)
self.subsuites = kwargs.get('subsuites', self._default_subsuites)
def __enter__(self):
super(PythonTestSuite, self).__enter__()
if not self.fasttest:
test_utils.clean_test_files()
@property
def _default_subsuites(self):
"""
The default subsuites to be run. They include lms, cms,
and all of the libraries in common/lib.
"""
opts = {
'failed_only': self.failed_only,
'fail_fast': self.fail_fast,
'fasttest': self.fasttest,
}
lib_suites = [
LibTestSuite(d, **opts) for d in Env.LIB_TEST_DIRS
]
system_suites = [
SystemTestSuite('cms', **opts),
SystemTestSuite('lms', **opts),
]
return system_suites + lib_suites
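So `paver test_python` is just the default composition: one `SystemTestSuite` each for cms and lms, plus one `LibTestSuite` per directory in `common/lib` via `Env.LIB_TEST_DIRS`:

    from pavelib.utils.test.suites import PythonTestSuite

    # The same composition the test_python task runs
    PythonTestSuite('Python Tests').run()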
"""
A class used for defining and running test suites
"""
import sys
import subprocess
from pavelib.utils.process import kill_process
try:
from pygments.console import colorize
except ImportError:
colorize = lambda color, text: text # pylint: disable-msg=invalid-name
__test__ = False # do not collect
class TestSuite(object):
"""
TestSuite is a class that defines how groups of tests run.
"""
def __init__(self, *args, **kwargs):
self.root = args[0]
self.subsuites = kwargs.get('subsuites', [])
self.failed_suites = []
def __enter__(self):
"""
This will run before the test suite is run with the run_suite_tests method.
If self.run_test is called directly, it should be run in a 'with' block to
ensure that the proper context is created.
        Specific setup tasks should be defined in each subsuite,
        e.g. checking for and creating required directories.
"""
print("\nSetting up for {suite_name}".format(suite_name=self.root))
self.failed_suites = []
def __exit__(self, exc_type, exc_value, traceback):
"""
        This runs after the tests started by the run_suite_tests method finish.
        Specific clean-up tasks should be defined in each subsuite,
        e.g. cleaning mongo after the lms tests run.
        If self.run_test is called directly, it should be run in a 'with' block
        to ensure that clean-up happens properly.
"""
print("\nCleaning up after {suite_name}".format(suite_name=self.root))
@property
def cmd(self):
"""
The command to run tests (as a string). For this base class there is none.
"""
return None
def run_test(self):
"""
        Runs self.cmd in a subprocess and waits for it to finish.
It returns False if errors or failures occur. Otherwise, it
returns True.
"""
cmd = self.cmd
sys.stdout.write(cmd)
msg = colorize(
'green',
'\n{bar}\n Running tests for {suite_name} \n{bar}\n'.format(suite_name=self.root, bar='=' * 40),
)
sys.stdout.write(msg)
sys.stdout.flush()
kwargs = {'shell': True, 'cwd': None}
process = None
try:
process = subprocess.Popen(cmd, **kwargs)
process.communicate()
except KeyboardInterrupt:
kill_process(process)
sys.exit(1)
else:
return (process.returncode == 0)
def run_suite_tests(self):
"""
Runs each of the suites in self.subsuites while tracking failures
"""
# Uses __enter__ and __exit__ for context
with self:
# run the tests for this class, and for all subsuites
if self.cmd:
passed = self.run_test()
if not passed:
self.failed_suites.append(self)
for suite in self.subsuites:
suite.run_suite_tests()
if len(suite.failed_suites) > 0:
self.failed_suites.extend(suite.failed_suites)
def report_test_results(self):
"""
Writes a list of failed_suites to sys.stderr
"""
if len(self.failed_suites) > 0:
msg = colorize('red', "\n\n{bar}\nTests failed in the following suites:\n* ".format(bar="=" * 48))
msg += colorize('red', '\n* '.join([s.root for s in self.failed_suites]) + '\n\n')
else:
msg = colorize('green', "\n\n{bar}\nNo test failures ".format(bar="=" * 48))
print(msg)
def run(self):
"""
Runs the tests in the suite while tracking and reporting failures.
"""
self.run_suite_tests()
self.report_test_results()
if len(self.failed_suites) > 0:
sys.exit(1)
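A minimal composite, with hypothetical suite names: a bare `TestSuite` has no `cmd` of its own, so `run()` only fans out to the subsuites (each inside its own `with` block, ensuring setup and clean-up always happen), aggregates their failures, and exits non-zero if anything failed:

    from pavelib.utils.test.suites import TestSuite

    nested = TestSuite('all tests', subsuites=[TestSuite('lms'), TestSuite('cms')])
    nested.run()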
"""
Helper functions for test tasks
"""
from paver.easy import sh, task
from pavelib.utils.envs import Env
__test__ = False # do not collect
@task
def clean_test_files():
"""
Clean fixture files used by tests and .pyc files
"""
sh("git clean -fqdx test_root/logs test_root/data test_root/staticfiles test_root/uploads")
sh("find . -type f -name \"*.pyc\" -delete")
sh("rm -rf test_root/log/auto_screenshots/*")
def clean_dir(directory):
"""
Clean coverage files, to ensure that we don't use stale data to generate reports.
"""
# We delete the files but preserve the directory structure
# so that coverage.py has a place to put the reports.
sh('find {dir} -type f -delete'.format(dir=directory))
@task
def clean_reports_dir():
"""
Clean coverage files, to ensure that we don't use stale data to generate reports.
"""
# We delete the files but preserve the directory structure
# so that coverage.py has a place to put the reports.
reports_dir = Env.REPORT_DIR.makedirs_p()
clean_dir(reports_dir)
@task
def clean_mongo():
"""
Clean mongo test databases
"""
sh("mongo {repo_root}/scripts/delete-mongo-test-dbs.js".format(repo_root=Env.REPO_ROOT))
@@ -73,12 +73,6 @@ namespace :i18n do
end
end
desc "Run tests for the internationalization library"
task :test => [:install_python_prereqs, I18N_REPORT_DIR, :clean_reports_dir] do
pythonpath_prefix = "PYTHONPATH=#{REPO_ROOT}/i18n:$PYTHONPATH"
test_sh("i18n", "#{pythonpath_prefix} nosetests #{REPO_ROOT}/i18n/tests --with-xunit --xunit-file=#{I18N_XUNIT_REPORT}")
end
# Commands for automating the process of including translations in edx-platform.
# Will eventually be run by jenkins.
namespace :robot do
@@ -95,7 +89,3 @@
end
end
-# Add i18n tests to the main test command
-task :test => :'i18n:test'
JS_TEST_SUITES = {
'lms' => 'lms/static/js_test.yml',
'cms' => 'cms/static/js_test.yml',
'cms-squire' => 'cms/static/js_test_squire.yml',
'xmodule' => 'common/lib/xmodule/xmodule/js/js_test.yml',
'common' => 'common/static/js_test.yml',
}
# Turn relative paths to absolute paths from the repo root.
JS_TEST_SUITES.each do |key, val|
JS_TEST_SUITES[key] = File.join(REPO_ROOT, val)
end
# Define the directory for coverage reports
JS_REPORT_DIR = report_dir_path('javascript')
directory JS_REPORT_DIR
# Given an environment (a key in `JS_TEST_SUITES`)
# return the path to the JavaScript test suite description
# If `env` is nil, return a string containing
# all available descriptions.
def suite_for_env(env)
if env.nil?
return JS_TEST_SUITES.map{|key, val| val}.join(' ')
else
return JS_TEST_SUITES[env]
end
end
# Run the tests using js-test-tool
# See js-test-tool docs for description of different
# command line arguments
def js_test_tool(env, command, do_coverage)
suite = suite_for_env(env)
xunit_report = File.join(JS_REPORT_DIR, 'javascript_xunit.xml')
cmd = "js-test-tool #{command} #{suite} --use-firefox --timeout-sec 600 --xunit-report #{xunit_report}"
if do_coverage
report_dir = File.join(JS_REPORT_DIR, 'coverage.xml')
cmd += " --coverage-xml #{report_dir}"
end
test_sh("javascript", cmd)
end
# Print a list of js_test commands for
# all available environments
def print_js_test_cmds(mode)
JS_TEST_SUITES.each do |key, val|
puts " rake test:js:#{mode}[#{key}]"
end
end
# Paver migration hack: because the CoffeeScript-specific asset command has been deprecated,
# we compile CoffeeScript ourselves
def compile_coffeescript()
sh("node_modules/.bin/coffee --compile `find lms cms common -type f -name \"*.coffee\"`")
end
namespace :'test:js' do
desc "Run the JavaScript tests and print results to the console"
task :run, [:env] => [:clean_test_files, JS_REPORT_DIR, :install_node_prereqs] do |t, args|
compile_coffeescript()
if args[:env].nil?
puts "Running all test suites. To run a specific test suite, try:"
print_js_test_cmds('run')
end
js_test_tool(args[:env], 'run', false)
end
desc "Run the JavaScript tests in your default browser"
task :dev, [:env] => [:clean_test_files, :install_node_prereqs] do |t, args|
compile_coffeescript()
if args[:env].nil?
puts "Error: No test suite specified. Try one of these instead:"
print_js_test_cmds('dev')
else
js_test_tool(args[:env], 'dev', false)
end
end
desc "Run all JavaScript tests and collect coverage information"
task :coverage => [:clean_reports_dir, :clean_test_files, JS_REPORT_DIR, :install_node_prereqs] do
compile_coffeescript()
js_test_tool(nil, 'run', true)
end
end
# Default js_test is js_test:run
desc "Run all JavaScript tests and print results the the console"
task :'test:js' => :'test:js:run'
# Add the JS tests to the main test command
task :test => :'test:js:coverage'
# js_test tasks deprecated to paver
require 'colorize'
def deprecated(deprecated, deprecated_by, *args)
task deprecated, [:env] do |t,args|
args.with_defaults(:env => nil)
new_cmd = "#{deprecated_by}"
if !args.env.nil?
new_cmd = "#{new_cmd} --suite=#{args.env}"
end
puts("Task #{deprecated} has been deprecated. Using #{new_cmd} instead.".red)
sh(new_cmd)
end
end
# deprecates all js_test.rake tasks
deprecated('test:js', 'paver test_js')
deprecated('test:js:coverage', 'paver test_js -c')
deprecated('test:js:dev', 'paver test_js_dev')
deprecated('test:js:run', 'paver test_js_run')
# Set up the clean and clobber tasks
CLOBBER.include(REPORT_DIR, 'test_root/*_repo', 'test_root/staticfiles')
# Create the directory to hold coverage reports, if it doesn't already exist.
directory REPORT_DIR
def test_id_dir(path)
return File.join(".testids", path.to_s)
end
def run_under_coverage(cmd, root)
cmd0, cmd_rest = cmd.split(" ", 2)
# We use "python -m coverage" so that the proper python will run the importable coverage
# rather than the coverage that OS path finds.
cmd = "python -m coverage run --rcfile=#{root}/.coveragerc `which #{cmd0}` #{cmd_rest}"
return cmd
end
def run_tests(system, report_dir, test_id=nil, stop_on_failure=true)
# If no test id is provided, we need to limit the test runner
# to the Djangoapps we want to test. Otherwise, it will
# run tests on all installed packages.
# We need to use $DIR/*, rather than just $DIR so that
# django-nose will import them early in the test process,
# thereby making sure that we load any django models that are
# only defined in test files.
default_test_id = "#{system}/djangoapps/* common/djangoapps/*"
if system == :lms || system == :cms
default_test_id += " #{system}/lib/*"
end
if system == :lms
default_test_id += " #{system}/tests.py"
end
if test_id.nil?
test_id = default_test_id
# Handle "--failed" as a special case: we want to re-run only
# the tests that failed within our Django apps
elsif test_id == '--failed'
test_id = "#{default_test_id} --failed"
end
cmd = django_admin(system, :test, 'test', test_id)
test_sh(system, run_under_coverage(cmd, system))
end
task :clean_test_files do
desc "Clean fixture files used by tests, .pyc files, and automatic screenshots"
sh("git clean -fqdx test_root/logs test_root/data test_root/staticfiles test_root/uploads")
sh("find . -type f -name \"*.pyc\" -delete")
sh("rm -rf test_root/log/auto_screenshots/*")
end
task :clean_reports_dir => REPORT_DIR do
desc "Clean coverage files, to ensure that we don't use stale data to generate reports."
# We delete the files but preserve the directory structure
# so that coverage.py has a place to put the reports.
sh("find #{REPORT_DIR} -type f -delete")
end
TEST_TASK_DIRS = []
[:lms, :cms].each do |system|
report_dir = report_dir_path(system)
test_id_dir = test_id_dir(system)
directory test_id_dir
# Per System tasks/
desc "Run all django tests on our djangoapps for the #{system}"
task "test_#{system}", [:test_id] => [
:clean_test_files, :install_prereqs,
"#{system}:gather_assets:test", "fasttest_#{system}"
]
# Have a way to run the tests without running collectstatic -- useful when debugging without
# messing with static files.
task "fasttest_#{system}", [:test_id] => [test_id_dir, report_dir, :clean_reports_dir] do |t, args|
args.with_defaults(:test_id => nil)
begin
run_tests(system, report_dir, args.test_id)
ensure
Rake::Task[:'test:clean_mongo'].reenable
Rake::Task[:'test:clean_mongo'].invoke
end
end
task :fasttest => "fasttest_#{system}"
TEST_TASK_DIRS << system
end
Dir["common/lib/*"].select{|lib| File.directory?(lib)}.each do |lib|
report_dir = report_dir_path(lib)
test_id_dir = test_id_dir(lib)
test_ids = File.join(test_id_dir(lib), '.noseids')
directory test_id_dir
desc "Run tests for common lib #{lib}"
task "test_#{lib}", [:test_id] => [
test_id_dir, report_dir, :clean_test_files, :clean_reports_dir, :install_prereqs
] do |t, args|
args.with_defaults(:test_id => lib)
ENV['NOSE_XUNIT_FILE'] = File.join(report_dir, "nosetests.xml")
cmd = "nosetests --id-file=#{test_ids} #{args.test_id}"
begin
test_sh(lib, run_under_coverage(cmd, lib))
ensure
Rake::Task[:'test:clean_mongo'].reenable
Rake::Task[:'test:clean_mongo'].invoke
end
end
TEST_TASK_DIRS << lib
# There used to be a fasttest_#{lib} command that ran without coverage.
# However, this is an inconsistent usage of "fast":
# When running tests for lms and cms, "fast" means skipping
# staticfiles collection, but still running under coverage.
# We keep the fasttest_#{lib} command for backwards compatibility,
# but make it an alias to the normal test command.
task "fasttest_#{lib}" => "test_#{lib}"
end
task :report_dirs
TEST_TASK_DIRS.each do |dir|
report_dir = report_dir_path(dir)
directory report_dir
task :report_dirs => [REPORT_DIR, report_dir]
task 'test:python' => "test_#{dir}"
end
namespace :test do
desc "Run all python tests"
task :python, [:test_id]
desc "Drop Mongo databases created by the test suite"
task :clean_mongo do
sh("mongo #{REPO_ROOT}/scripts/delete-mongo-test-dbs.js")
end
end
desc "Build the html, xml, and diff coverage reports"
task :coverage => :report_dirs do
# Generate coverage for Python sources
TEST_TASK_DIRS.each do |dir|
report_dir = report_dir_path(dir)
if File.file?("#{report_dir}/.coverage")
# Generate the coverage.py HTML report
sh("coverage html --rcfile=#{dir}/.coveragerc")
# Generate the coverage.py XML report
sh("coverage xml -o #{report_dir}/coverage.xml --rcfile=#{dir}/.coveragerc")
end
end
# Find all coverage XML files (both Python and JavaScript)
xml_reports = FileList[File.join(REPORT_DIR, '**/coverage.xml')]
if xml_reports.length < 1
puts "No coverage info found. Run `rake test` before running `rake coverage`."
else
xml_report_str = xml_reports.join(' ')
diff_html_path = report_dir_path('diff_coverage_combined.html')
# Generate the diff coverage reports (HTML and console)
sh("diff-cover #{xml_report_str} --html-report #{diff_html_path}")
sh("diff-cover #{xml_report_str}")
puts "\n"
end
end
# Other Rake files append additional tests to the main test command.
desc "Run all unit tests"
task :test, [:test_id] => 'test:python'
# test tasks deprecated to paver
require 'colorize'
def deprecated(deprecated, deprecated_by, use_id, *args)
task deprecated, [:test_id] do |t,args|
args.with_defaults(:test_id => nil)
new_cmd = "#{deprecated_by}"
if !args.test_id.nil? && use_id
new_cmd = "#{new_cmd} --test_id=#{args.test_id}"
end
puts("Task #{deprecated} has been deprecated. Using #{new_cmd} instead.".red)
sh(new_cmd)
end
end
# deprecates all test.rake tasks
deprecated("test", "paver test", false)
deprecated('test:python', 'paver test_python', false)
deprecated("test_cms", "paver test_system -s cms", true)
deprecated("test_lms", "paver test_system -s lms", true)
deprecated("fasttest_cms", "paver test_system -s cms --fasttest", true)
deprecated("fasttest_lms", "paver test_system -s lms --fasttest", true)
Dir["common/lib/*"].select{|lib| File.directory?(lib)}.each do |lib|
deprecated("test_#{lib}", "paver test_lib --lib=#{lib}", true)
deprecated("fasttest_#{lib}", "paver test_lib --lib=#{lib}", true)
end
deprecated("coverage", "paver coverage", false)
# deprecates i18n:test from i18n.rake
deprecated("i18n:test", 'paver test_i18n', false)
deprecated("clean_reports_dir", "paver clean_reports_dir", false)
deprecated("clean_test_files", "paver clean_test_files", false)
deprecated("test:clean_mongo", "paver clean_mongo", false)