Reporting of tests done
It would be really nice if the GUI would report what tests have been run on a resource, similar to what YummyData does, so that we can learn how resources can be made more FAIR.
I currently disabled the commits, because the reports were changing at each run and some could get quite big; there are so many invalid RDF files out there! But the system works well.
This could easily be updated to report other metadata.
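One way to keep the committed reports both small and stable between runs would be to summarize rather than dump every error, and to serialize deterministically. A minimal sketch, assuming a hypothetical report shape (`error_count`, `error_examples` are illustrative field names, not from any existing tool):

```python
import json


def stable_report(resource, errors, max_examples=5):
    """Summarize validation errors into a small, diff-stable report:
    keep a total count plus a few sorted examples instead of the
    full (possibly huge, order-varying) error list."""
    report = {
        "resource": resource,
        "error_count": len(errors),
        "error_examples": sorted(errors)[:max_examples],
    }
    # sort_keys makes the serialized report byte-identical across runs
    # as long as the underlying results have not changed
    return json.dumps(report, sort_keys=True, indent=2)


print(stable_report(
    "https://example.org/dump.rdf",  # hypothetical resource
    ["undefined prefix ex:", "bad literal at line 12"],
))
```

With this shape, a re-run over an unchanged resource produces an identical file, so committing the reports would no longer generate noisy diffs.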
Ideally we would reuse an existing library to generate this report (Python would be perfect; Java or any other language would work if we can package it as an executable).
If you have some direct pointers to existing tools, or even just a direct link to the code files where some services implemented it (e.g. YummyData), I could take a look.
Otherwise we would need to implement one.
The problem is that there are already multiple "FAIR validator" tools, but, as far as I know, most of them are not reusable, or even accessible (disclaimer: I host one of them, but cannot reuse it).
Which is ironic, as the core of the FAIR ecosystem does not seem FAIR itself, and cannot even be peer reviewed.
Honestly, in my opinion it could be implemented as a simple Python script/library (one file, most probably under 500 lines), which the community could then improve incrementally to make the process of validating FAIRness more FAIR itself.
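To make the idea concrete, here is a minimal sketch of what such a single-file script could look like. The check names, the report shape, and the example URI are all hypothetical, and which FAIR principles each check maps to is only a rough suggestion, not an established test suite:

```python
import json
from urllib.request import Request, urlopen

# Common RDF serializations to request via content negotiation
RDF_MIME_TYPES = ("text/turtle", "application/rdf+xml", "application/ld+json")


def check_resolvable(uri, timeout=10):
    """Roughly F1/A1: the resource identifier resolves over HTTP."""
    try:
        with urlopen(Request(uri, method="HEAD"), timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except Exception:
        return False


def check_rdf_negotiation(uri, timeout=10):
    """Roughly I1: content negotiation returns an RDF serialization."""
    try:
        req = Request(uri, headers={"Accept": ", ".join(RDF_MIME_TYPES)})
        with urlopen(req, timeout=timeout) as resp:
            ctype = resp.headers.get("Content-Type", "")
            return any(mime in ctype for mime in RDF_MIME_TYPES)
    except Exception:
        return False


def build_report(uri, results):
    """Aggregate individual test outcomes into one report, so a GUI
    can show exactly which tests were run and which ones passed."""
    passed = sum(1 for ok in results.values() if ok)
    return {
        "resource": uri,
        "tests": results,
        "score": f"{passed}/{len(results)}",
    }


if __name__ == "__main__":
    uri = "https://example.org/dataset"  # hypothetical resource
    results = {
        "resolvable": check_resolvable(uri),
        "rdf_negotiation": check_rdf_negotiation(uri),
    }
    print(json.dumps(build_report(uri, results), indent=2))
```

Adding a new test would just mean adding one function and one entry to the `results` dict, which is what would make it easy for the community to extend incrementally.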