Commit
improved experiment description
musslick committed Oct 19, 2024
1 parent c89d720 commit c371d7f
Showing 5 changed files with 33 additions and 9 deletions.
2 changes: 2 additions & 0 deletions docs/examples/closed-loop-basic/experiment.md
@@ -9,6 +9,8 @@ In this part of the example, we will code up two functions, one function ``trial

## Experiment Overview

![stimulus.png](img/stimulus.png)

### Independent and Dependent Variables

The experiment has two independent variables: the number of dots in the first set and the number of dots in the second set. The dependent variable is the participant's response, i.e., whether they correctly identified which set has more dots.
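The design space and scoring rule described above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the tutorial; ``make_conditions`` and ``correct_response`` are hypothetical helpers:

```python
import itertools

def make_conditions(max_dots):
    """All pairs (dots_set_1, dots_set_2) of candidate dot counts: the two IVs."""
    return list(itertools.product(range(1, max_dots + 1), repeat=2))

def correct_response(dots_set_1, dots_set_2):
    """The correct answer to 'which set has more dots?' (ties excluded here)."""
    return "first" if dots_set_1 > dots_set_2 else "second"

conditions = make_conditions(3)  # 9 pairs for dot counts 1..3
```

The DV is then whether a participant's actual response matches ``correct_response`` for the presented condition.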
Binary file added docs/examples/closed-loop-basic/img/stimulus.png
17 changes: 16 additions & 1 deletion docs/examples/closed-loop-basic/index.md
@@ -17,7 +17,22 @@ This example provides a hands-on approach to understanding closed-loop behaviora
- **Minimal JavaScript knowledge**: Since the behavioral experiments are implemented in JavaScript (via jsPsych), SweetBean will handle much of the complexity for you. The code is generated in Python and converted into JavaScript, so only a minimal understanding of JavaScript is required.
- **A Google account**: You will need a Google account to use Google Firebase and Firestore.

## Overview
## Study Overview

In this example study, we are interested in quantifying participants' ability to differentiate between two visual stimuli. Specifically, we will ask participants to indicate whether the number of dots in a stimulus on the left is the same as the number of dots in a stimulus on the right.

![stimulus.png](img/stimulus.png)

Our goal is to predict the participant's response based on the number of dots in the left and right stimuli. We will use two methods of predicting the response:
- a simple logistic regression model
- an equation discovery algorithm ([Bayesian Machine Scientist](https://autoresearch.github.io/autora/user-guide/theorists/bms/))

After each data collection phase, we will fit the logistic regression model and the Bayesian Machine Scientist from the ``autora[theorist-bms]`` package to the collected data. We will then use both models to determine the next set of experimental conditions worth testing. Specifically, we will identify experimental conditions for which the [models disagree the most](https://autoresearch.github.io/autora/user-guide/experimentalists/model-disagreement/), using the ``autora[experimentalist-model-disagreement]`` package.
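The disagreement-based sampling step can be illustrated with a minimal sketch. Note that the two models below are toy stand-ins for the fitted logistic regression and the BMS-discovered equation; the real workflow uses the ``autora`` packages linked above:

```python
import math

def model_a(dots_left, dots_right):
    """Toy stand-in for the logistic regression: P('same') via a sigmoid of the dot difference."""
    return 1.0 / (1.0 + math.exp(abs(dots_left - dots_right) - 1.0))

def model_b(dots_left, dots_right):
    """Toy stand-in for the BMS-discovered equation: a hard threshold on the difference."""
    return 1.0 if dots_left == dots_right else 0.1

# Candidate pool: all dot-count pairs from 1 to 10
pool = [(i, j) for i in range(1, 11) for j in range(1, 11)]

# Disagreement = squared difference between the two models' predictions
disagreement = {c: (model_a(*c) - model_b(*c)) ** 2 for c in pool}

# Select the 5 conditions where the models disagree most; these are tested next
next_conditions = sorted(pool, key=lambda c: disagreement[c], reverse=True)[:5]
```

With these toy models, the largest disagreement falls on pairs that differ by exactly one dot, which is exactly the kind of informative condition the experimentalist is meant to surface.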

Critically, we will leverage AutoRA to embed the entire research process into a closed-loop system. This system will automatically generate new experimental conditions, collect data from the web experiment, and update the models based on the collected data.
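Conceptually, the closed loop alternates between an experimentalist, an experiment runner, and a theorist. Below is a stripped-down sketch of that cycle using plain Python stand-ins, not the actual AutoRA components:

```python
import random

def experimentalist(state):
    """Propose new conditions (here random dot pairs; AutoRA would use model disagreement)."""
    return [(random.randint(1, 10), random.randint(1, 10)) for _ in range(3)]

def experiment_runner(conditions):
    """Simulate data collection (AutoRA would launch the web experiment on Firebase)."""
    return [(c, c[0] == c[1]) for c in conditions]

def theorist(data):
    """Fit a 'model' (here just the observed rate of 'same' responses)."""
    return sum(resp for _, resp in data) / len(data)

state = {"data": [], "model": None}
for cycle in range(3):  # three closed-loop cycles
    conditions = experimentalist(state)
    state["data"] += experiment_runner(conditions)
    state["model"] = theorist(state["data"])
```

After three cycles of three conditions each, ``state["data"]`` holds nine observations; AutoRA's ``State`` object plays the role of the ``state`` dictionary here.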


## System Overview

Our closed-loop system consists of several interacting components. Here is a high-level overview of the system:
![System Overview](../img/system_overview.png)
2 changes: 1 addition & 1 deletion docs/examples/closed-loop-basic/prolific.md
@@ -80,7 +80,7 @@ study_completion_time = 5
# Prolific Token: You can generate a token on your Prolific account
prolific_token = 'my prolific token'

# Completion code: The code a participant gets to prove they participated. If you are using the standard project set up (with cookiecutter), please make sure this is the same code that you have provided in the .env file of the testing zone.
# Completion code: The code a participant receives to prove they participated. If you are using the standard project setup (with cookiecutter), please make sure this is the same code that you provided in the .env file of the testing zone. The code itself can be anything you want.
completion_code = 'my completion code'

experiment_runner = firebase_prolific_runner(
21 changes: 14 additions & 7 deletions docs/examples/closed-loop-basic/workflow.md
@@ -168,12 +168,12 @@ firebase_credentials = {

# Simple experiment runner that runs the experiment on firebase
# The runner defines a timeout of 100 seconds, which means that a participant
# has 100 seconds to complete an experiment. Afterward, it will be freed for another participant.
# The sleep time is set to 5 seconds, which means that the runner will check every 5 seconds for data.
# has 5 *minutes* to complete an experiment. Afterward, it will be freed for another participant.
# The sleep time is set to 3 *seconds*, which means that the runner will check every 3 seconds for data.
experiment_runner = firebase_runner(
firebase_credentials=firebase_credentials,
time_out=100,
sleep_time=5)
time_out=5,
sleep_time=3)

# Again, we need to wrap the experiment runner to use it on the state.
# Specifically, the runner compiles the identified conditions (i.e., number of tested dots)
@@ -300,6 +300,13 @@ python autora_workflow.py

- Try to run the workflow for three cycles. Once completed, you should see the plot (also stored in the file ``model_comparison.png``) that compares the logistic regression model with the Bayesian Machine Scientist model.

!!! hint
    Note that you need to wait until the experiment is finished, i.e., until you see a page with a white background. If you end the experiment early, the ``firebase_runner`` will wait the number of minutes specified in ``time_out`` before the slot becomes available for the next participant, i.e., the next run. If no more slots are currently available, you should see something like "We are sorry, there has been an unexpected technical issue. Thank you for your understanding."

!!! hint
    You can check which experiments were successfully completed by looking into the Firestore database. In your project on the [Firebase Console](https://console.firebase.google.com/), simply navigate to ``Firestore Database``. The fields in ``autora`` > ``autora_out`` > ``observations`` list all the conditions; "null" means that no data has been collected for that condition yet.

- **Congratulations**, you just set up and ran a closed-loop behavioral study!

Below, we provide some more detailed explanations for the code above.
@@ -427,11 +434,11 @@ Here, we define the Firebase credentials required to run the experiment on Fireb
```python
experiment_runner = firebase_runner(
firebase_credentials=firebase_credentials,
time_out=100,
sleep_time=5)
time_out=5,
sleep_time=3)
```

We then define the experiment runner that runs the experiment on Firebase. The runner is wrapped with the ``on_state`` decorator, allowing it to update the state object. The runner takes the Firebase credentials, the timeout, and the sleep time as input and returns a ``Delta`` object that updates the state with the experiment data. The ``time_out`` determines the amount of time a participant has available to complete the experiment, while the ``sleep_time`` determines how often the runner checks for experimental data.
We then define the experiment runner that runs the experiment on Firebase. The runner is wrapped with the ``on_state`` decorator, allowing it to update the state object. The runner takes the Firebase credentials, the timeout, and the sleep time as input and returns a ``Delta`` object that updates the state with the experiment data. The ``time_out`` determines the amount of time **in minutes** a participant has available to complete the experiment, while the ``sleep_time`` determines how many **seconds** the runner waits before running another check for experimental data.
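The mixed units are easy to trip over, so the relationship can be made explicit with a quick back-of-the-envelope calculation (``max_polls`` is an illustrative helper, not part of the AutoRA API):

```python
def max_polls(time_out_minutes, sleep_time_seconds):
    """How many times the runner can poll for data before a participant slot times out."""
    return int(time_out_minutes * 60 // sleep_time_seconds)

# With time_out=5 and sleep_time=3 (the values above), the runner
# checks Firebase up to 100 times within the 5-minute window.
print(max_polls(5, 3))  # 100
```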

```python
@on_state()
