
# Desktop Client QA Workflow


This page describes the QA efforts we do for ownCloud Desktop Client development.

## Manual QA

The Desktop Team currently has one dedicated QA engineer. This is how we collaborate.

### Bugfixes

Once a reported bug is fixed by one of the developers, the developer who fixed it sets the Ready to Test label and mentions the SHA sum of the fixing commit. The bug is not closed. The QA engineer picks up the list of bugs with that label and verifies the fix. If the bug is really fixed, the QA engineer closes the bug report. If not, the QA engineer removes the Ready to Test label and comments on the bug. The developer who worked on the fix then picks the bug up again and continues working on it.

### Releases

For releases, the QA engineer works from a test plan. There is a generic test plan and, where applicable, release-specific additions to it.

To set up the test plans, the QA engineer needs detailed information on what new functionality and bug fixes are available in the new release.

## Automated QA

### Unit Tests

We have two collections of unit tests, one originating from csync (here) and one based on the Qt Testing Framework (here). To include the unit tests in the build, call cmake with the parameter -DUNIT_TESTING=ON.

The tests can be run manually by calling make test on Linux, but not (yet) on Windows or Mac.
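For illustration, a minimal build-and-test sequence on Linux could look like the sketch below; the checkout directory name and build layout are assumptions and may differ locally.

```sh
# Minimal sketch: out-of-source build with unit tests enabled (Linux).
# The source directory name ("client") is an assumption.
mkdir -p build && cd build
cmake -DUNIT_TESTING=ON ../client
make
make test    # runs both unit test collections through CTest
```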

State: Both unit test collections are part of the continuous integration server. They run on every check-in.

### tx.pl Scripts

We maintain a set of Perl-based scripts to perform integration tests between client and server. The scripts wrap around the command line client owncloudcmd. The basic idea is that the client directory and a test server are filled with a defined set of files through the server's WebDAV interface, completely without involving the sync client. After that, the script performs one or more sync runs. Once a sync run has completed, the script asserts that the local and remote file trees are equal, or that they meet the expected result.

The scripts can be run manually on a developer's machine on Linux, but not (yet) on Windows or Mac, by calling them directly from the csync/tests/ownCloud directory of the git checkout. The prerequisite is to create a t1.cfg file based on the template in that directory.
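A rough sketch of a manual run is shown below; the concrete script name and template file name are assumptions, the actual files live in that directory.

```sh
# Minimal sketch of running one integration test script on Linux.
cd csync/tests/ownCloud
cp t1.cfg.in t1.cfg    # assumed template name; adjust server URL and credentials in t1.cfg
perl t1.pl             # assumed script name; runs one test against the configured server
```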

State: The tx.pl scripts are part of the continuous integration server. They run on every check-in.

### Smashbox

There is a test suite called smashbox which performs a whole variety of tests between server and multiple clients.

It would be great if this test suite could run on a regular basis, or even as part of the CI process, depending on the time needed to run the whole suite. Some reporting would also be useful.

State: Not yet running automatically but manually driven by Jakub Moscicki. Automation is WIP.

### Performance Tests

Performance tests should be run on a regular basis to be able to detect changes in either direction; only performance improvements should be accepted. For that, a defined set of data could be uploaded every night from one client to a server and synced down to another client. The server can run on the same machine as the clients, which minimizes network latency problems.
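A nightly run could, for example, be a small shell script along the following lines; the paths, the test data set and the credentials-in-URL style are placeholders, not the actual setup:

```sh
#!/bin/sh
# Hypothetical nightly performance run: upload a fixed data set with one
# client directory, then sync it down into a second, empty directory.
set -e
SERVER="https://user:secret@localhost/owncloud/remote.php/webdav/"

rm -rf /tmp/perf-up /tmp/perf-down
cp -r /srv/testdata/perf-set /tmp/perf-up    # defined set of test data
mkdir -p /tmp/perf-down

/usr/bin/time -o upload.time   owncloudcmd /tmp/perf-up   "$SERVER"
/usr/bin/time -o download.time owncloudcmd /tmp/perf-down "$SERVER"
```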

Timing information can be computed from the server's access_log, as it contains proper time stamps and can also log the server-side processing time of each request, see http://www.ducea.com/2008/02/06/apache-logs-how-long-does-it-take-to-serve-a-request/ . Some characteristic duration values should be stored so that every run can be compared with previous ones and a trend can be detected.
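For example, with Apache's %D format specifier (time taken to serve the request, in microseconds) appended to the log format, a characteristic value such as the average PUT duration of a run can be extracted with a one-liner like this sketch; the log path and format are assumptions:

```sh
# Assumed LogFormat, with %D appended as the last field:
#   LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
#   CustomLog /var/log/apache2/access_log timed
#
# Average serving time of PUT requests (last field is microseconds):
awk '/"PUT / { sum += $NF; n++ } END { if (n) printf "avg PUT: %.1f ms over %d requests\n", sum/n/1000, n }' \
    /var/log/apache2/access_log
```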

To create and handle a reproducible, potentially huge file tree, we created a script set called dav torture. It can be found here and is described in a blog entry.

State: There is currently no automated performance testing.

## Algorithm Validation

The sync algorithm is very complex and needs to be validated, especially for edge cases. Edge cases happen rarely, but given the huge number of files we potentially deal with, it is very likely that even the edge cases occur.

### Documentation

A first step is proper documentation of the algorithm, so that people can read and understand the idea and join the discussion.

The sync algorithm is documented in the ownCloud Client Documentation.

State: The documentation has been started but could be improved.

### Quick Check

Based on a conversation we had with Prof. Benjamin Pierce at CERN in Geneva, we think that validating the sync algorithm would make a lot of sense. That can be achieved using a tool called QuickCheck, which performs sophisticated model-based testing.

State: We are trying to arrange a phone call with Benjamin to discuss the pros and cons.