Testing Guide
Unit tests help ensure code quality and enable developers to take advantage of TDD. Unit tests should be written for every compute, transition, merge, and final function implemented in a *.cpp file. The following rules must be adhered to:
Assume that $MADLIB is the top-level MADlib directory.
- You must already have installed Boost version >= 1.46 and have built the Boost.Test unit test framework library (i.e. `libboost_unit_test_framework.so`). When configuring or calling cmake for MADlib, make sure to set `BOOST_ROOT` to this Boost directory if it is not included in the cmake module search path (see the example configuration after this list).
- All unit test source files will be located in `$MADLIB/src/tests`
- Every newly-developed compute, transition, merge, and final function should have a unit test associated with it.**
- Unit tests associated with one module should all be contained in the same source file (e.g. svm_tests.cpp)
- All unit test source files should have a .cpp extension
- Do NOT edit the file `runtest.cpp` unless you intend to change the behavior of Boost.Test execution and output
**The extensiveness of unit testing is ultimately left up to the developer; there is no generic rule for how many unit tests should be written or what they should cover. However, we encourage all developers to test their code aggressively, and the Boost.Test framework is simply a tool to facilitate test creation.
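For example, if Boost is not on the default cmake search path, one way to point the build at it is to pass `BOOST_ROOT` on the cmake command line. This is a minimal sketch assuming an out-of-source build in `$MADLIB/build`; the Boost installation path shown is only a placeholder:

```
$ cd $MADLIB/build
$ cmake -DBOOST_ROOT=/usr/local/boost_1_49_0 ..
$ make
```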
To create a unit test file, take a look at `dummy_tests.cpp` and `t_test_tests.cpp` (which includes an example of using an AnyType fixture) in `$MADLIB/src/tests`, both of which were developed as examples. Ultimately, to create a unit test:
- Add `#include <boost/test/unit_test.hpp>` in the source file
- Follow the Boost.Test API to create test cases and test suites (Boost Test Library: The Unit Test Framework); a minimal sketch is given at the end of this section
- Once you are done editing your unit test source file, save it and run `make` and then `make test` in the build directory. This will create and run a test suite encompassing all unit tests in `$MADLIB/src/tests` for each port version. You should see output like the following (e.g. if you have Postgres 9.2.x and GPDB 4.2.x installed):

```
$ make test
Running tests...
Test project /data/home/gpdbchina/geeg/git_madlib/madlib/build
    Start 1: UnitTests-Postgres_9.2
1/2 Test #1: UnitTests-Postgres_9.2 ...........   Passed    0.00 sec
    Start 2: UnitTests-GPDB_4.2
2/2 Test #2: UnitTests-GPDB_4.2 ...............   Passed    0.00 sec

100% tests passed, 0 tests failed out of 2

Total Test time (real) =   0.01 sec
```
After calling `make test` you will also see some XML output files in the main build directory, named UnitTests-*.xml, that have more detailed information about which Boost checks passed for each individual test. Search these files to see which check failed if the test suite failed.
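For orientation, a new test file might look like the following minimal sketch. The suite name, test case, and the transition function being exercised are hypothetical placeholders rather than real MADlib code, and the test runner's main() is assumed to be supplied by `runtest.cpp` as described above:

```cpp
#include <boost/test/unit_test.hpp>

// Hypothetical transition function under test; a real test would include
// the header of the module being tested instead of defining this locally.
static double example_transition(double state, double x) {
    return state + x;
}

BOOST_AUTO_TEST_SUITE(example_module_tests)

// Each BOOST_AUTO_TEST_CASE is registered automatically with the runner.
BOOST_AUTO_TEST_CASE(transition_accumulates_values) {
    double state = 0.0;
    state = example_transition(state, 2.0);
    state = example_transition(state, 3.0);
    // BOOST_CHECK_CLOSE takes a relative tolerance expressed in percent.
    BOOST_CHECK_CLOSE(state, 5.0, 1e-8);
}

BOOST_AUTO_TEST_SUITE_END()
```

Saving a file like this (e.g. as example_module_tests.cpp) in `$MADLIB/src/tests` and rerunning `make` and `make test` would include its checks in the UnitTests-* suites shown above.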
Install-check test scripts are executed by the `madpack ... install-check` command. They should perform a series of tests on some of the functions of a MADlib module. These scripts should be located in a specific directory and adhere to the following rules:
- Default location: `./modules/<module_name>/test/`
- The number of test scripts is not limited, but don't go crazy...
- For each module, a temporary schema is created and dropped at the end.
- All test files are executed in alphabetical order in the module-specific temporary schema.
- File names are irrelevant, but the extension should be sql_in, e.g. `any_name.sql_in`.
- Use MADLIB_SCHEMA as the placeholder for the schema name of MADlib objects (as in other SQL_IN files); see the sketch after this list.
- Any DB objects should be created without a schema prefix, since this file is executed in a separate schema context.
- When comparing test results to known values, there may be a need for a user-defined error condition. This can be achieved by using the `RAISE EXCEPTION` clause in a PL/pgSQL block. Example:

```sql
CREATE OR REPLACE FUNCTION test_madlib_module() RETURNS void AS $$
DECLARE
    c INT;
BEGIN
    SELECT INTO c count(*) FROM pg_class;
    IF c > 0 THEN
        RAISE EXCEPTION 'Invalid record count (%)', c;
    END IF;
END;
$$ LANGUAGE plpgsql;
```
- Test execution results:
  - If any ERROR condition is encountered, the run will stop and the result will be considered a 'FAIL'.
  - If no ERROR condition occurs and the log file exists, the result will be 'PASS'.
  - In any other case the result will be 'ERROR'.
- Make sure your test scripts are comprehensive and test each and every UDF/UDA in your module.
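To make these conventions concrete, an install-check script might be structured roughly as follows. The input table, the check function, and the `MADLIB_SCHEMA.example_avg()` aggregate are hypothetical placeholders used only for illustration; they are not part of any actual module:

```sql
-- Hypothetical contents of modules/<module_name>/test/any_name.sql_in.
-- Objects are created without a schema prefix because madpack runs the
-- script inside a module-specific temporary schema.
CREATE TABLE test_input (id INT, val FLOAT8);
INSERT INTO test_input VALUES (1, 1.0), (2, 2.0), (3, 3.0);

CREATE OR REPLACE FUNCTION check_example_module() RETURNS void AS $$
DECLARE
    result FLOAT8;
BEGIN
    -- MADLIB_SCHEMA is the placeholder for the actual MADlib schema name;
    -- example_avg() is a made-up aggregate standing in for a module UDA.
    SELECT INTO result MADLIB_SCHEMA.example_avg(val) FROM test_input;
    IF abs(result - 2.0) > 1e-6 THEN
        RAISE EXCEPTION 'example_avg returned % instead of 2.0', result;
    END IF;
END;
$$ LANGUAGE plpgsql;

SELECT check_example_module();
```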