This is an ACT-R model that shows how reinforcement learning parameters (in particular, sensitivity to negative feedback) shape the solving of Raven's Advanced Progressive Matrices (RAPM), a common test of fluid intelligence.
The model was tested against behavioral and fMRI data collected by Stocco, Prat & Graham. The raw behavioral data from Experiments 1-3 is saved in three folders: `experiment1`, `experiment2`, and `experiment3`. Comparison accuracy and response time data across problem difficulty come from a different experiment from our lab and are saved in the `firestorm.txt` text file.
The model is developed in ACT-R 7.5, with the "old-style" devices written in Common Lisp. All of the model code is contained in four different Lisp files:

- `rapm-model.lisp`. Contains the ACT-R model code.
- `rapm-device.lisp`. The model's device, which encodes the RAPM problems and interacts with the model.
- `rapm-problems.lisp`. Contains a Lisp-like definition of a RAPM problem, together with functions to analyze them.
- `rapm-simulations.lisp`. Contains a set of functions to run large model simulations across parameter space.
To run the model, follow these steps:
- Load ACT-R 7.5.x.
- Load the `rapm-device.lisp` file. This will automatically load the `rapm-problems.lisp` file as well.
- Load the `rapm-model.lisp` file. This will load the ACT-R model code.
- Before running the model, initialize both model and device by calling the `(rapm-reload)` function. The function will properly initialize the device, reset the model, and connect the model's visual system to the device's interface.
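The steps above can be sketched as a single Lisp session. This assumes ACT-R 7.5 is already loaded into the Lisp image; `run` is the standard ACT-R command for running a model for a given amount of simulated time, and the duration shown here is only illustrative:

```lisp
;; Assumes ACT-R 7.5 has already been loaded into the Lisp image.
(load "rapm-device.lisp")   ; automatically loads rapm-problems.lisp too
(load "rapm-model.lisp")    ; loads the ACT-R model code
(rapm-reload)               ; init device, reset model, connect vision
(run 1000)                  ; standard ACT-R command; time is illustrative
```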
The `rapm-simulations.lisp` file contains many handy functions for running simulations and saving the results to a file. When saving results, each run of a simulated experiment (by default, 16 four-feature RAPM problems) is saved as a single line.
Large-scale simulations across parameter space are handled by generating multiple Lisp files that run simulations on different portions of the parameter space. The Python script `generate-test.py` will generate such files across a modifiable list of parameters (you might need to change the script's specific paths to fit your own system). The script generates one file per region into which the parameter space is partitioned (by default, 64 different scripts). Each script is named `test-<N>.lisp`, with `N` being a counter from 1 to the number of partitions.
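The partitioning logic can be sketched as follows. This is not the actual `generate-test.py`; the parameter names (`egs`, `alpha`) and grid values are hypothetical stand-ins for the script's own modifiable parameter list:

```python
from itertools import product

# Hypothetical parameter grid; generate-test.py uses its own, modifiable
# list of parameters. The names egs and alpha are illustrative only.
egs_values = [round(0.1 * i, 1) for i in range(1, 9)]    # e.g., utility noise
alpha_values = [round(0.1 * i, 1) for i in range(1, 9)]  # e.g., learning rate

# Full grid: every combination of parameter values (8 x 8 = 64 here).
combos = list(product(egs_values, alpha_values))

# Split the grid into 64 regions, one per generated Lisp script.
n_partitions = 64
partitions = [combos[i::n_partitions] for i in range(n_partitions)]

# One test-<N>.lisp file per region, with N counting from 1.
filenames = [f"test-{n}.lisp" for n in range(1, n_partitions + 1)]
```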
A series of four shell scripts manages the various Lisp files:
- `run-sims.sh` will launch a new instance of SBCL (by default; modify the Python script to use a different Lisp interpreter) on each Lisp test file. Each process' PID will be saved to a `pids.txt` file.
- `kill-sims.sh` will abort all the SBCL processes spawned by `run-sims.sh`. The script will kill, in series, all the processes with a PID listed in `pids.txt` (you should run this as `sudo`).
- `merge.sh` will merge all the generated files into a single text file, and then zip it.
- `partial.sh` will produce a file named `partial.txt`, which is like the file produced by `merge.sh` but carefully handles partially completed simulations. This is useful for inspecting results before all the simulations are complete.
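The launch-and-kill pattern behind `run-sims.sh` and `kill-sims.sh` can be sketched as below. This is not the scripts themselves: `sleep 60` is a placeholder for an `sbcl --load test-<N>.lisp` invocation, and the loop bound is illustrative:

```shell
# Launch phase (the run-sims.sh pattern): start each job in the
# background and record its PID, one per line, in pids.txt.
rm -f pids.txt
for i in 1 2 3; do
  sleep 60 &                 # placeholder for: sbcl --load test-$i.lisp
  echo $! >> pids.txt        # $! is the PID of the last background job
done

# Teardown phase (the kill-sims.sh pattern): kill, in series, every
# process whose PID is listed in pids.txt.
while read -r pid; do
  kill "$pid" 2>/dev/null
done < pids.txt
```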
There is no unit testing yet, but testing functions and test problems are scattered throughout the code.
The following publications are based on the model:
- Stocco, A., Prat, C. S., & Graham, L. K. (2021). Individual Differences in Reward‐Based Learning Predict Fluid Reasoning Abilities. Cognitive Science, 45(2), e12941, https://doi.org/10.1111/cogs.12941.