Files that you need to change (look at the *.example files!):
- .env (define the password for the DB user; use only letters and numbers; see the sketch after this list)
- beacon2-ri-api/training-ui-files/secret.py (use the script in the same directory to generate a key)
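If you need a password that satisfies the letters-and-numbers rule, you can generate one like this (it prints to the terminal; copy it into .env under the key defined in .env.example):

# Print a random 24-character alphanumeric string to use as the DB password
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24; echo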
You need to run reindex.py every time you recreate your DB (you can run it on the beacon container or on the host):
- To run on container: docker compose exec beacon python3 beacon/reindex.py
- To run on host: python3 reindex.py (you need to install the required Python modules first; see the sketch below)
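A minimal sketch of installing those modules for the host run, assuming the repository ships a requirements.txt (the pymongo fallback is an assumption about what the script needs):

# Assumes a requirements.txt in your checkout; adjust the path as needed
pip3 install -r requirements.txt
# or, at minimum, the MongoDB driver (assumption about the script's dependencies)
pip3 install pymongo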
Additional configurations:
- Nginx:
- When the containers for the API and UI are up and running, you may want to make them available for external use over HTTPS. To do that, install nginx and use whichever configuration in the nginx_confs directory best suits your setup.
- Use the simple.conf file if you only intend to run a Beaconv2 instance. beacon_w_test is for setups that also have a test environment running on another instance; in that case, the other instance needs to use the beacon_test_instance config.
- You'll need to change the server name and the paths to the certificates.
- Careful with the favicon.ico path (it is in beacon2-ri-api/deploy by default). The nginx user (www-data) needs read permission on the file and execute permission on ALL the directories in its path; see the sketch below.
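A quick way to audit and fix those permissions (the paths below are illustrative; substitute your actual checkout location):

# Show the permission bits of every component along the path
namei -m /path/to/beacon2-ri-api/deploy/favicon.ico
# Grant traversal on each directory and read on the file, as needed
sudo chmod o+x /path/to/beacon2-ri-api /path/to/beacon2-ri-api/deploy
sudo chmod o+r /path/to/beacon2-ri-api/deploy/favicon.ico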
You should have installed:
- Docker
- Docker Compose
- MongoDB Database Tools (specifically mongoimport, to add the dummy data to the database)
- Python 3
All of the commands should be executed from the deploy directory.
cd deploy
docker-compose up -d --build
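You can check that all services came up correctly before moving on:

docker-compose ps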
With mongo-express we can see the contents of the database at http://localhost:8081.
To load the database we execute the following commands:
docker cp /path/to/analyses.json deploy_db_1:/tmp/analyses.json
docker cp /path/to/biosamples.json deploy_db_1:/tmp/biosamples.json
docker cp /path/to/cohorts.json deploy_db_1:/tmp/cohorts.json
docker cp /path/to/datasets.json deploy_db_1:/tmp/datasets.json
docker cp /path/to/genomicVariationsVcf.json deploy_db_1:/tmp/genomicVariations.json
docker cp /path/to/individuals.json deploy_db_1:/tmp/individuals.json
docker cp /path/to/runs.json deploy_db_1:/tmp/runs.json
docker exec deploy_db_1 mongoimport --jsonArray --uri "mongodb://root:[email protected]:27017/beacon?authSource=admin" --file /tmp/datasets.json --collection datasets
docker exec deploy_db_1 mongoimport --jsonArray --uri "mongodb://root:[email protected]:27017/beacon?authSource=admin" --file /tmp/analyses.json --collection analyses
docker exec deploy_db_1 mongoimport --jsonArray --uri "mongodb://root:[email protected]:27017/beacon?authSource=admin" --file /tmp/biosamples.json --collection biosamples
docker exec deploy_db_1 mongoimport --jsonArray --uri "mongodb://root:[email protected]:27017/beacon?authSource=admin" --file /tmp/cohorts.json --collection cohorts
docker exec deploy_db_1 mongoimport --jsonArray --uri "mongodb://root:[email protected]:27017/beacon?authSource=admin" --file /tmp/genomicVariations.json --collection genomicVariations
docker exec deploy_db_1 mongoimport --jsonArray --uri "mongodb://root:[email protected]:27017/beacon?authSource=admin" --file /tmp/individuals.json --collection individuals
docker exec deploy_db_1 mongoimport --jsonArray --uri "mongodb://root:[email protected]:27017/beacon?authSource=admin" --file /tmp/runs.json --collection runs
This loads the JSON files inside the data folder into the MongoDB database container.
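If you prefer not to repeat the import seven times, the same commands can be scripted as a loop (same container name, credentials, and paths as above):

for c in analyses biosamples cohorts datasets genomicVariations individuals runs; do
  docker exec deploy_db_1 mongoimport --jsonArray \
    --uri "mongodb://root:[email protected]:27017/beacon?authSource=admin" \
    --file "/tmp/${c}.json" --collection "$c"
done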
You can create the necessary indexes by running the following Python script:
docker exec beacon python beacon/reindex.py
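To confirm the indexes were created, you can list them from the Mongo shell (older Mongo images ship mongo instead of mongosh):

docker exec deploy_db_1 mongosh "mongodb://root:[email protected]:27017/beacon?authSource=admin" \
  --eval 'db.individuals.getIndexes()'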
This step analyzes all the collections of the Mongo database, first extracting the ontology OBO files and then populating the filtering terms endpoint with information from the data loaded in the database.
You can automatically fetch the ontologies and extract the filtering terms by running the following script:
docker exec beacon python beacon/db/extract_filtering_terms.py
If you have the ontologies loaded and the filtering terms extracted, you can automatically get their descendant and semantic similarity terms by running the following script:
docker exec beacon python beacon/db/get_descendants.py
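As a sanity check, you can then query the filtering terms endpoint (assuming the standard Beacon v2 path under the API root):

http GET http://localhost:5050/api/filtering_terms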
Check the logs until the beacon is ready to be queried:
docker-compose logs -f beacon
You can query the beacon using GET or POST. Below, you can find some examples of usage:
For simplicity (and readability), we will be using HTTPie.
Querying this endpoint should return the 13 variants of the beacon (paginated):
http GET http://localhost:5050/api/g_variants
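Pagination can also be controlled from the query string, using the standard Beacon v2 skip and limit parameters:

http GET "http://localhost:5050/api/g_variants?limit=5&skip=0"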
You can also add request parameters to the query, like so:
http GET http://localhost:5050/api/individuals?filters=NCIT:C16576,NCIT:C42331
You can also query via POST. With a request.json file like this one:
{
  "meta": {
    "apiVersion": "2.0"
  },
  "query": {
    "requestParameters": {
      "alternateBases": "G",
      "referenceBases": "A",
      "start": [ 16050074 ],
      "end": [ 16050568 ],
      "variantType": "SNP"
    },
    "filters": [],
    "includeResultsetResponses": "HIT",
    "pagination": {
      "skip": 0,
      "limit": 10
    },
    "testMode": false,
    "requestedGranularity": "record"
  }
}
You can execute:
curl \
  -H 'Content-Type: application/json' \
  -X POST \
  -d '{
    "meta": {
      "apiVersion": "2.0"
    },
    "query": {
      "requestParameters": {
        "alternateBases": "G",
        "referenceBases": "A",
        "start": [ 16050074 ],
        "end": [ 16050568 ],
        "variantType": "SNP"
      },
      "filters": [],
      "includeResultsetResponses": "HIT",
      "pagination": {
        "skip": 0,
        "limit": 10
      },
      "testMode": false,
      "requestedGranularity": "record"
    }
  }' \
  http://localhost:5050/api/g_variants
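Equivalently, with HTTPie and the request.json file from above:

http POST http://localhost:5050/api/g_variants --json < request.json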
But you can also use complex filters:
{
  "meta": {
    "apiVersion": "2.0"
  },
  "query": {
    "filters": [
      {
        "id": "UBERON:0001256",
        "scope": "biosamples",
        "includeDescendantTerms": false
      }
    ],
    "includeResultsetResponses": "HIT",
    "pagination": {
      "skip": 0,
      "limit": 10
    },
    "testMode": false,
    "requestedGranularity": "count"
  }
}
You can execute:
http POST http://localhost:5050/api/biosamples --json < request.json
And it will use the ontology filter to filter the results.