This repository was created as part of Many Webcams, a ManyBabies2 spinoff.
The code in this repo is used to run an online pilot study that aims to further test the reliability of webcam-based eye tracking in infant research. Due to pandemic restrictions, conducting in-lab studies with conventional eye trackers is considerably more difficult than usual. We therefore employ an in-browser solution to determine whether webcam-based eye tracking can help partially answer some of our questions regarding implicit ToM measures.
(TODO: link the publication once it has gone through the review process)
As this repository contains video files stored with Git LFS, the git-lfs extension needs to be installed and activated.
Docker and Docker Compose are needed for both deployment and development (you can deploy this app without Docker, but we highly recommend running separate containers for different experiments).
If you want to use the visualization tool, you will need ffmpeg and Python 3. You will also need to run
pip install -r requirements.txt
in the data-processing directory to install the necessary dependencies. This tool will be dockerized at a later date.
This project was developed and deployed on macOS and Ubuntu systems. The setup on Microsoft Windows may deviate from the steps described here.
To make your installation available to the internet for testing you will need two things:
- A (Linux) server that is reachable via the public internet and that you can install software on.
- A (sub-)domain that you can point to the server in order to generate links for the participants.
If you do not have access to a server, you can instead run the development/testing setup described below on your own computer or laptop. The program will run on a local server and provide the same data as the remote setup, but participants need to be present in the lab with that computer (this has only been tested on macOS and Unix-based systems so far, so some workarounds might be needed on Windows machines).
If you are participating in a ManyBabies project and need a domain or assistance with setting up this experiment, contact adriansteffan via email.
If you want to provide a link where participants can upload their trial data in case the upload to the server fails, you need to specify that link in config.js.
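As a minimal sketch (the actual field name is defined in config.js itself and may differ from this assumption), such a configuration could look like:

```javascript
// config.js - hypothetical sketch; check the real config.js for the actual field name
const config = {
    // fallback page where participants can manually upload their trial data
    // if the automatic upload to the server fails (assumed name: uploadFallbackLink)
    uploadFallbackLink: 'https://yoururl.com/upload',
};
```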
After cloning the repository, you can build the project by running
./build-container.sh
in the prod_mb2-webcam-eyetracking directory. This will automatically start the webserver serving the app. You can stop it with
docker-compose down
and later restart it with
docker-compose up -d
in the prod_mb2-webcam-eyetracking directory.
To make the container reachable from the internet, refer to these instructions on how to set up your Apache reverse proxy. Depending on your setup, you might want to change the IP mapping in prod_mb2-webcam-eyetracking/docker-compose.yml.
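As a purely hypothetical illustration (these values are assumptions, not the repository's actual defaults), binding the container to localhost so that only the local reverse proxy can reach it could look like this in prod_mb2-webcam-eyetracking/docker-compose.yml:

```yaml
# hypothetical excerpt - compare with the actual docker-compose.yml before changing it
services:
  web:
    ports:
      - "127.0.0.1:8080:80"  # host IP:host port -> container port
```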
Alternatively, you can use a webserver (such as Apache) that is running and configured directly on your machine. This server needs to be reachable via HTTPS and support PHP.
You can refer to this repository for a preconfigured dockerless setup.
On a fresh install, this would be achieved by running:
apt-get install -y apache2 php && a2enmod ssl
After cloning the repository, run
./build.sh
in the root directory. Afterwards, copy the contents of local-server/webroot to the webroot of your webserver by running
cp local-server/webroot/* /var/www/html
Finally, the folder for the experiment data needs to be created by running
mkdir /var/www/data && chown -R www-data:www-data /var/www/data
(adjust the file paths in the commands above and in writedata.php to suit your particular setup)
As webgazer.js requires HTTPS, you will need a local server for development. This project comes with a docker-compose.yml file that takes care of the setup and configuration.
For HTTPS to work, we need an OpenSSL installation to create and sign an SSL certificate for localhost.
After installing OpenSSL, run the following commands in the local-server/config directory:
openssl genrsa -des3 -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem
openssl req -new -sha256 -nodes -out server.csr -newkey rsa:2048 -keyout server.key -config <( cat server.csr.cnf )
openssl x509 -req -in server.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out server.crt -days 500 -sha256 -extfile v3.ext
Afterwards, you need to add your newly created CA (rootCA.pem) to your operating system's list of trusted CAs. Instructions for that can be found here.
As a final step, build the docker container for the webserver by running
docker-compose build
in the root directory of the repository.
In order to run the development server, run
docker-compose up -d
in the root directory. You can stop the server with
docker-compose down
When initially cloning the project and after making changes, update the files in the local webroot by running
./build.sh
in the root directory.
After deploying the container, there are a few options for the execution of the online experiment that can be configured using URL parameters.
For example, if you want the output data to be linked to a participant with the id "participant1" and want to choose the stimulus order "Trial order A", you use the following link:
yoururl.com?id=participant1&trial_order=A
The following table gives you an overview of all available parameters:
url parameter | possible values | default value | description |
---|---|---|---|
lang | string ("de", "en", or "ko") | "en" | the language in which the instructions will be displayed |
id | string | a randomly generated uuid | the id that is attached to the output data, used to identify a participant |
trial_order | character ("A" or "B") | a random choice of either "A" or "B" | the choice and order of stimuli as specified by the proposal paper |
key | string | null | A key in the ManyKeys format, used to encrypt the data before being transmitted to the server. If not present, the data will be stored on the server without encryption |
show_aoi | true/false | false | a flag to indicate whether the AOIs should be overlaid on the stimuli (for debugging purposes) |
download_data | true/false | false | a flag to indicate whether the browser should download the generated data after the trial finishes |
prevent_upload | true/false | false | a flag to indicate if the upload of trial data to the server should be prevented |
print_data | true/false | false | a flag to indicate if the trial data json should be displayed in the browser after the trial finishes |
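For instance, combining several of the parameters above, a debugging link that fixes the participant id and trial order, sets the instruction language to German, and overlays the AOIs would look like this:
yoururl.com?id=participant1&trial_order=B&lang=de&show_aoi=true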
The data uploaded by the participants' browsers can be found in prod_mb2-webcam-eyetracking/data (or /var/www/data if you run the dockerless setup). Two types of files are generated for every participant:
- A JSON data file containing all of the experiment data generated by jsPsych, named [id]_[trial_order]_data.json
- Video files containing the webcam recording for each of the individual trials, named [id]_[trial_order]_[trialname].webm
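For the example participant from above (id "participant1", trial order "A"), this would yield participant1_A_data.json plus one participant1_A_[trialname].webm file per trial.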
Note: If you are running ManyKeys, each user will have a separate folder filled with encrypted .enc files. Archive this folder and send it to the respective user for decryption.
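For example, assuming the folder is named after the participant id (an assumption; check your data directory for the actual naming), it could be archived by running
tar -czf participant1.tar.gz participant1
before sending it off.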
To preprocess and visualize the eye tracking data of each participant and to facilitate the pre-screening of participant videos, you can use the script provided in the data-processing directory. The script expects the participant data to be located in prod_mb2-webcam-eyetracking/data, so if you are running a development setup, you will need to move your data there first.
As the preprocessing script grew along with the study requirements in a multi-lab setting, both the script and its usage are messy. The original version used for the study is kept for documentation purposes; however, an improved version of the preprocessing pipeline (including exclusions) can be found in this repository.
If you just want to visualize all the data without manually excluding participants, run
python main.py
This will create an output folder next to the script, with separate folders for each participant. There you can find video files that overlay the eye tracking results on the stimulus videos and add the synchronized webcam video, as well as beeswarm plots for all stimuli.
If manual inspection and exclusion of certain trials are required, start by running
python main.py t
This will create an output folder next to the script, with separate folders for each participant. There you can find video files that overlay the eye tracking results on the stimulus videos and add the synchronized webcam video.
Next, rename example_excluded_trials.csv to excluded_trials.csv. For every participant, look through the newly rendered videos and specify which trials to include or exclude by putting 'yes' or 'no' in the corresponding fields (make sure to order the trials in each row in the order they appeared for the participant).
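As a sketch of the expected layout (the real header is given by example_excluded_trials.csv; the column names below are placeholders, not the actual ones), a file with one row per participant and one yes/no field per trial could look like this:

```
id,trial1,trial2,trial3,trial4
participant1,yes,yes,no,yes
```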
TODO: specify exact exclusion criteria / specify multi-rater system
After deciding which trials to include/exclude, run
python main.py p
to generate the files suitable for analysis in R or SPSS.
Finally,
python main.py b
will create the beeswarm plots for all the stimulus videos, excluding all participant trials specified in excluded_trials.csv.
To reuse parts of this study for your own experiments, programming experience in both JavaScript and Python is required.
To adapt the experiment presentation, you will need to be familiar with jsPsych 6.3. We have extended the functionality of the library (see below) to provide additional features important for infant eye tracking studies.
Once you are familiar with the library, you can edit the timeline in src/experiment.html. There you will also find examples of how we used our newly implemented features in the context of jsPsych (webcam video recording, AOI definition, ManyKeys encryption, etc.). Depending on what data you upload, you might also need to adapt the PHP script found at src/write_data.php.
Deployment details (dockerfiles, folder structure, etc.) are independent of the experiment itself and can stay the same for any kind of experiment.
The preprocessing script is very closely tied to the study at hand. If you want to preprocess the output in a more general way for other stimuli and paradigms, refer to this repository. The code will still need modification (regarding AOIs, stimulus naming and files, and exclusion criteria), but the changes should be substantially easier to make than in the code found in this repository.
- jsPsych - A modified version of jspsych-6.3.1 is used for the general trial structure and the webgazer integration
- webgazer.js - The eye tracking library used to capture gaze coordinates via a webcam
- Changed the webgazer extension in jspsych-6.3.1/extensions/jspsych-ext-webgazer.js. It now has additional parameters for defining areas of interest (AOIs). These AOIs tag all datapoints of the webgazer output that fall into the corresponding AOI. With a URL parameter flag, they can also be displayed for debugging purposes.
- Added a webcam-recorder extension in jspsych-6.3.1/extensions/jspsych-ext-webcam-recorder.js. It can be used to record the participant's webcam on a trial-by-trial basis. (Until a better solution is found, the video blobs get saved to window.webcamVideoBlobs for further processing.)
- Added a background-audio extension in jspsych-6.3.1/extensions/jspsych-ext-background-audio.js. It can be used to add a looping audio file (currently hardcoded) to the background of any trial.
- Changed jspsych-6.3.1/plugins/jspsych-webgazer-calibrate.js and jspsych-6.3.1/plugins/jspsych-webgazer-validate.js to optionally replace the dot with something more attention-grabbing and to add background audio, making them better suited for infant research.
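To illustrate how these pieces fit together, here is a minimal, hypothetical timeline entry in the style of jsPsych 6.3; the AOI parameter names and extension type strings are assumptions, so check src/experiment.html and the extension files for the actual API:

```javascript
// hypothetical sketch of a stimulus trial using the modified extensions
var exampleTrial = {
    type: 'video-keyboard-response',             // a standard jsPsych 6.3 video plugin
    stimulus: ['media/video/example_trial.mp4'], // placeholder stimulus path
    extensions: [
        {
            type: 'webgazer', // modified extension: tags gaze samples that fall into an AOI
            params: {
                // assumed parameter name and format for the AOI definition
                aois: [{ name: 'target_left', x: 10, y: 60, width: 25, height: 25 }]
            }
        },
        { type: 'webcam-recorder' } // records the webcam for this trial;
                                    // blobs end up in window.webcamVideoBlobs (see above)
    ]
};
```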
- Adrian Steffan (adriansteffan)
This project is licensed under the GNU GPLv3 - see the LICENSE.md file for details.