Leaf term not used for library_preparation_protocol.library_construction_method.ontology #334
We agreed we should make these updates. I believe we should be able to make them through the UI, but we should time-box the work: there is a risk, as these are likely to be DCP1 projects, many of which are on old schema versions and have never had updates applied to them.
We need to break this issue up into 16 "manual update in UI --> export" sub-tasks (one for each project in the table).
These DCP1 datasets need to be manually re-exported since they are already in the Complete state; they will no longer go back to the submittable (Valid) state once updates are done, whether via the UI or the API. For doing the updates, we could simply create a script for convenience.
@clairerye the oldest library_preparation_protocol schema version we have in the prod data is https://schema.humancellatlas.org/type/protocol/sequencing/6.1.0/library_preparation_protocol, and it has library_construction_method. If this is urgent, we could create a script to update the schema version to https://schema.humancellatlas.org/type/protocol/sequencing/6.2.0/library_preparation_protocol and set library_construction_method's new value, then trigger the re-export. Or we could wait for bulk-update support: ebi-ait/dcp-ingest-central#147
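The patch step described above could be sketched roughly as follows. This is only an illustration, not the actual script from the PR: the `patch_protocol` helper and the replacement term values are hypothetical, though the schema URLs and the `describedBy`/`library_construction_method` field names follow the HCA metadata schema convention.

```python
# Hypothetical sketch of patching a library_preparation_protocol document:
# bump the schema version and swap the parent ontology term for a leaf term.
OLD_SCHEMA = "https://schema.humancellatlas.org/type/protocol/sequencing/6.1.0/library_preparation_protocol"
NEW_SCHEMA = "https://schema.humancellatlas.org/type/protocol/sequencing/6.2.0/library_preparation_protocol"


def patch_protocol(doc: dict, new_term: dict) -> dict:
    """Return a patched copy of the metadata document: schema bumped to
    6.2.0 and library_construction_method replaced by the given leaf term."""
    patched = dict(doc)  # shallow copy; original document is left untouched
    if patched.get("describedBy") == OLD_SCHEMA:
        patched["describedBy"] = NEW_SCHEMA
    patched["library_construction_method"] = dict(new_term)
    return patched


doc = {
    "describedBy": OLD_SCHEMA,
    "library_construction_method": {"text": "old parent term", "ontology": "EFO:0000000"},
}
leaf = {"text": "specific leaf term", "ontology": "EFO:0000001"}  # placeholder values
patched = patch_protocol(doc, leaf)
print(patched["describedBy"])
```

In the real workflow, the patched document would then be submitted back through the Ingest API and the re-export triggered, as discussed above.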
Converted this to an epic to capture all DCP1 updates.
Best to work together with a dev to do this via a script!
@jacobwindsor to have a meeting with the wranglers today, including discussion about this. @Wkt8 to send an invite
Had a chat with Jacob about using different ontology terms depending on the combination of 'method' and 'end bias' - hopefully unblocked!
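A minimal sketch of how such a method-plus-end-bias lookup might work. The mapping keys mirror the HCA 'method' and 'end bias' fields, but the term texts and EFO IDs below are placeholders, not the agreed replacement terms:

```python
# Hypothetical (method, end bias) -> leaf ontology term mapping.
# Term IDs are placeholders; the real leaf terms come from EFO.
LEAF_TERMS = {
    ("10X v2 sequencing", "3 prime tag"): {"text": "10X 3' v2 sequencing", "ontology": "EFO:0000001"},
    ("10X v2 sequencing", "5 prime tag"): {"text": "10X 5' v2 sequencing", "ontology": "EFO:0000002"},
}


def leaf_term(method: str, end_bias: str):
    """Return the more specific leaf term for this combination, or None
    if no child term exists and the original term should be kept."""
    return LEAF_TERMS.get((method, end_bias))


print(leaf_term("10X v2 sequencing", "3 prime tag"))
```

Returning None for unmapped combinations keeps the script conservative: protocols without an agreed child term are left unchanged.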
This PR contains the scripts for updating projects. It is broken into two stages. I have run the first stage on the first submission and it is currently exporting: https://contribute.data.humancellatlas.org/submissions/detail?uuid=85e72912-9f91-4489-8169-3b43cc65a16a
The above submission is now exported and I have removed the appropriate files from the staging area. Waiting to confirm that this is correct and everything looks okay with this project before proceeding with the others. @Wkt8 @aaclan-ebi Project UUID:
@Wkt8 confirmed that the new metadata is correct. @ESapenaVentura confirmed that the terra bucket is correct.
Please find a CSV below of the applied patches to each protocol.
The following 15 submissions are now exporting:
I have cleaned up the terra staging area for all but 5 submissions. Three submissions (…). The below two projects are not DCP1 projects, so no terra cleanup is needed AFAIK.
All finished except https://contribute.data.humancellatlas.org/submissions/detail?uuid=d1610c4a-76c6-4b69-af63-c74af869fa75, which is still waiting to finish exporting.
@jacobwindsor to look at the project that's still exporting - it might be stuck.
Wrangler action: send project import forms for the 15 update tickets.
Only 2616 of the 2618 assays were exported by this job. I will force this job and the submission to EXPORTED, then retry later.
All submissions have now been exported, and the DCP1 ones have been cleaned up.
Worth checking with Alegria and Jeff whether there is a faster way of doing the import forms for the updated projects.
All of the import forms have now been sent!
Child terms that specify the end bias explicitly were added to these ontology terms after the data was ingested.
It is now possible to use the more specific term for the ontology_ids below.
For context see:
HumanCellAtlas/dcp2#13 and
https://docs.google.com/spreadsheets/d/1Wk7SGxEz00AkNokYv3YlFHJVF9U6THKcrvgHARsnau8/edit#gid=0
0882881d-bc39-4b85-b557-e874b93124eb
2945bb1f-90de-42a3-afa1-f57a62c853f0
58df9607-ab66-48e0-a47b-1f897baae139
6f399c41-797f-4f69-8719-cbd468478e68
71efd7ce-0ec0-4423-9eb2-9bd42f40a33f
8b4f9b9b-a1c1-40ec-aab9-ebb61918c01c
910266c3-64b1-4a3d-a4fe-844be494ffd1
952603f3-cf07-46cf-a439-299f0e71dbca
ac399b75-3cc1-4bf0-8d19-8c29c2545402
c2dacc52-da61-49a6-ac4f-6a684ae45d4f
dc19bb22-ae7b-431b-9b8b-7b49799a8fcd
e953a093-f5ab-46df-9223-a492f4775d44
f151bb74-b149-4992-9728-923f1943968f
fa99959f-faa2-4d69-a092-48333e59f5f3
88bb0331-4a61-4268-b17b-2310fb47bcb8
ab819eae-9eb3-4f12-8b0e-cd4204702512