
Criteria used to assess openness of projects/tools/platforms? #13

Closed
bmkramer opened this issue May 9, 2018 · 15 comments
Labels
  • Create: Make something new
  • Curate: Update or otherwise improve some piece of information
  • mozsprint: To remain compatible with other Mozsprint repos
  • question: Further information is requested
  • Review: Share your thoughts on something existing

Comments

@bmkramer

bmkramer commented May 9, 2018

Has it already been decided what criteria will be used to assess openness of projects/tools/platforms to be included in the roadmap? (and perhaps also to identify areas where improvement is recommended)?

For instance, the [Principles of Open Scholarly Infrastructure](http://dx.doi.org/10.6084/m9.figshare.1314859) might be useful for assessing the infrastructure of tools and platforms. Perhaps in combination with aspects of the FORCE11 principles of the scholarly commons (see also preprint) for both the openness of resulting research objects (Open, FAIR and citable) and openness to participation.

Full disclosure: this ties in closely to some of the work we are ourselves planning to do over the next months*, and it would be great to see if we can collaborate!

*See the proposals for FORCE2018 that @JeroenBosman and I put in:

@Daniel-Mietchen Daniel-Mietchen added Create Make something new question Further information is requested Review Share your thoughts on something existing Curate Update or otherwise improve some piece of information labels May 9, 2018
@Daniel-Mietchen
Member

@bmkramer I'm certainly looking forward to interaction along these lines!

@Daniel-Mietchen Daniel-Mietchen added the mozsprint To remain compatible with other Mozsprint repos label May 9, 2018
@HeidiSeibold

I think this is a very important question which we should probably discuss very early on. For me, important aspects are:

  • Open source and with an open license. If the basis of the project isn't open, it's not particularly good for open practices. This is maybe a bit strict.
  • Freely available (at least to those who are unable to pay the cost). In a stricter sense this may include not having to "pay" with personal information or having to look at ads.
  • FAIR. I feel like the FAIR principles can be valuable for tools. They also need to be Findable, Accessible, Interoperable, and Reusable.

@Daniel-Mietchen
Member

@HeidiSeibold
Those are good starting points. Some things I'd add would be the Open Definition as well as some measures for transitioning from or interfacing with non-open tools or infrastructure.

The question of defining what is meant by "open" also popped up at drdvo/OWLTEH#8, and there are related questions on Ask Open Science (pinging our issue #6):

@drdvo

drdvo commented May 10, 2018

You might also find the 5R permissions useful; they were formulated in relation to the notion of 'Open Educational Resources' (OER) but are relevant more generally to definitions of open content.

@Daniel-Mietchen
Member

@bmkramer In a chat around our Mozsprint, @felliott just brought up your graph at https://user-images.githubusercontent.com/16495178/39816678-e8c85458-5369-11e8-9e09-b09ad6b1fae8.png . Do you have an editable version of that, so we could go and try to filter that by whatever openness criteria we come up with?

@bmkramer
Author

bmkramer commented May 10, 2018

Yes, I have; it was made in PowerPoint (ahem). I'll look it up on my hard drive and figure out how to upload/share it in a moment.

Some background on what is currently in that figure (and what isn't); see also here:

The logos were the tools and platforms (not necessarily all open!) that we asked about in our 2015-2016 survey on tool/platform usage, ordered by the different phases of the research workflow that we also used in the survey.

The grey lines connect tools/platforms that people had indicated using together, either within or between 'adjacent' workflow phases. This co-use represents raw data; later on we did a more thorough analysis to identify tools/platforms preferentially used together - those results are shown here in an interactive table.

Finally, the green highlight depicts a hypothetical workflow formed by tools and platforms across the workflow that had been identified as mostly compliant with the (then current) version of the principles of the scholarly commons, during a workshop held in San Diego in September 2016. (@Daniel-Mietchen - you may remember :-)

Enough talk, off to search my hard drive!

[edited to add: this is taking a bit longer, apologies. Hope to have it for you by tomorrow, to use on the second sprint day. The first day is well and truly over in my timezone....]

@bmkramer
Author

bmkramer commented May 11, 2018

OK, here's the editable PowerPoint. Contrary to what you might hope/expect, the circles surrounding the tools, together with the grey lines connecting them, are a static image, because they were generated as a network image in Gephi (based on our survey results).

101innovations_workflow_puzzle.pptx

An additional suggestion: if you will be scoring tools against openness criteria, it could be useful to add that info to our database of 400+ tools/platforms, e.g. in column L of the 'Data' tab. (We are behind in updating this list, so feel free to add tool suggestions.)
http://bit.ly/innoscholcomm-list
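
For illustration only, here's a minimal sketch of how such openness info could be added to a CSV export of the list; the column names below are placeholders I made up, not the actual spreadsheet columns:

```python
# Minimal sketch, assuming a CSV export of the tools list and made-up column
# names ("open_source", "free_to_use"); the real spreadsheet columns will differ.
import pandas as pd

df = pd.read_csv("innoscholcomm_tools_export.csv")  # hypothetical export of the Google Sheet

def openness_note(row):
    notes = []
    if str(row.get("open_source", "")).strip().lower() == "yes":
        notes.append("open source")
    if str(row.get("free_to_use", "")).strip().lower() == "yes":
        notes.append("free to use")
    return "; ".join(notes)

# Write the notes into a new column (this would correspond to column L of the 'Data' tab).
df["openness"] = df.apply(openness_note, axis=1)
df.to_csv("innoscholcomm_tools_with_openness.csv", index=False)
```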

Hope this is helpful!

@xolotl
Member

xolotl commented May 11, 2018

Just a note to tie this issue together with two others that focus on visualizing the open science tool ecosystem (#11) and locating data about it in Wikidata (#7).

@Daniel-Mietchen
Member

I've just browsed the spreadsheet for a while and think we could do a number of things with it:

  • add some columns in the Data sheet to cover some dimensions of openness, e.g.
    • open source
    • code location
    • code license
    • open data
    • data location
    • data license
  • add a row (perhaps initially just under the metadata rows 2 and 3) with the corresponding Wikidata properties
  • add a column (perhaps initially to the right of column B) with the Wikidata item for the respective tool or website
  • translate concepts in the free-form text (e.g. in columns F, H or J) once into the corresponding Wikidata items

On that basis, we could then filter for those that meet any of the openness criteria, convert the table into a format compatible with QuickStatements, and edit/create the corresponding Wikidata entries.
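
As a very rough illustration (not an agreed mapping; the column names and the exact properties used here are just assumptions), the QuickStatements conversion could look something like this:

```python
# Rough sketch only: read a CSV export of the tools list and emit QuickStatements V1
# commands for tools that already have a Wikidata item and are marked as open source.
# The column names ("wikidata_item", "open_source", "website", "code_location") and the
# chosen properties (P856: official website, P1324: source code repository URL) are
# assumptions for illustration.
import csv

def rows_to_quickstatements(csv_path):
    commands = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            qid = (row.get("wikidata_item") or "").strip()
            if not qid:
                continue  # not yet matched to a Wikidata item
            if (row.get("open_source") or "").strip().lower() != "yes":
                continue  # only emit statements for tools meeting this openness criterion
            website = (row.get("website") or "").strip()
            if website:
                commands.append(f'{qid}\tP856\t"{website}"')
            repo = (row.get("code_location") or "").strip()
            if repo:
                commands.append(f'{qid}\tP1324\t"{repo}"')
    return commands

if __name__ == "__main__":
    # The printed lines can be pasted into the QuickStatements batch input.
    for command in rows_to_quickstatements("innoscholcomm_tools.csv"):
        print(command)
```

Tools without a Wikidata item would be skipped here; those could be matched first, e.g. via Mix'n'match.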

@felliott
Contributor

I'll start looking at converting it to QuickStatements.

@Daniel-Mietchen
Member

In cases where there is uncertainty about matches between terms and Wikidata concepts, we could use the Mix'n'match tool, which allows users to do such matching.

For an example, see https://tools.wmflabs.org/mix-n-match/#/list/662/unmatched (Medical Subject Headings for Disciplines and Occupations)

For the manual, see https://meta.wikimedia.org/wiki/Mix%27n%27match/Manual.

@wesm

wesm commented Jul 8, 2018

I don't see "governance" mentioned in this thread so far. Projects may be open source and with permissive licenses, but if they do not have open governance (in the style of Apache Software Foundation projects, for example; with structures in place to thwart "corporate takeovers"), it can be problematic longer term.

@dwhly

dwhly commented Jul 10, 2018

Good thread.

Has it already been decided what criteria will be used to assess openness of projects/tools/platforms to be included in the roadmap?

The only decision made initially was that projects be science-focused, open source and non-profit. These points (and whether to add others) are certainly open for discussion. We might consider two categories: hard requirements and best practices. The ecosystem is still relatively small, with most projects being either the only representative of a certain function or perhaps one of two. We don't want to exclude folks beyond what we agree are the absolutes.

(and perhaps also to identify areas where improvement is recommended)?

Perhaps these might be the best practices?

I can see that it might be a useful way of calibrating or scoring projects -- maybe as much for their own benefit as anyone else's: i.e. what are the best practices, what should you strive for, and what are the trade-offs?

@bmkramer
Author

Hi Dan, thanks for the reply!

Thinking ahead towards the August workshop, it might be a good opportunity to do some sort of assessment or scoring exercise, both for the discussion it might generate, as you indicate, and as a test case for how to operationalize these kinds of principles and practices. (We started something like that at the scholarly commons workshop in San Diego two years ago.)

One approach I really like is the one taken by Zenodo in outlining to what extent they comply with the FAIR principles (http://about.zenodo.org/principles/). Such a 'self-assessment' might be a way to make this less about judging and more about being transparent and accountable (and, as you say, maybe as much for the benefit of providers themselves as for anyone else).

Anyway, just thinking out loud...

@xolotl
Member

xolotl commented Sep 18, 2018

This issue was moved to OpenScienceRoadmap/roadmap#1

@xolotl xolotl closed this as completed Sep 18, 2018