Discussion: Impact-based thresholds for OA campaigns are confusing #10

Closed
CooperSmout opened this issue Jul 28, 2020 · 3 comments
Labels
discussion Discussion topic about project strategy, direction, etc

Comments

@CooperSmout
Member

CooperSmout commented Jul 28, 2020

The problem
Multiple people have commented that the impact-based thresholds we're currently using for the three flagship open access campaigns (#5, #6 and #7) are too confusing and are holding back pledges, so I thought we could discuss possible alternatives here.

Background
I went with this method (described in detail here) because a much simpler metric -- based on the number of pledges -- might fail to capture the value (citations) that maintains the legacy publishing system (note that there was some previous discussion on this topic in the Pubreform forum and the platform repository). Specifically, imagine that the threshold triggers almost exclusively due to pledges by researchers who aren't very active or publishing in high-impact journals. This would mean that high-impact journals could carry on as usual (because we would have failed to extract a critical mass of 'value' from legacy journals), and the few high-impact researchers who joined the campaign would be forced to change their behaviour without sufficient protection (i.e., without a critical mass of value going into the OA journals). So to address this, I thought it better to use a citation-based metric to quantify 'support' -- i.e., the proportion of citations that reference articles (or other research outputs) produced by pledgers in the last X years, controlling for time since publication (see the About page for details). Because the metric is complex, I thought we could clarify it with a short animated video (but we haven't won any funding to pay for this yet).
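
To make this concrete, here is a minimal sketch of how such a metric might be computed. Everything in it -- the data shape, the age normalisation, the ten-year window -- is an illustrative assumption, not the platform's actual implementation:

```python
from datetime import date

def pledger_impact_share(pledger_outputs, field_outputs, window_years=10):
    """Illustrative sketch: estimate the share of a field's citation 'value'
    covered by pledgers, controlling for time since publication.

    Each output is assumed to be a dict with 'year' and 'citations' keys.
    Dividing citations by years since publication is just one possible way
    to control for publication age.
    """
    current_year = date.today().year

    def age_normalised_total(outputs):
        total = 0.0
        for output in outputs:
            age = max(current_year - output["year"], 1)
            if age <= window_years:  # only count outputs from the last X years
                total += output["citations"] / age
        return total

    field_total = age_normalised_total(field_outputs)
    return age_normalised_total(pledger_outputs) / field_total if field_total else 0.0
```

Under the current design, each pledge would then activate once this share reaches the percentage that the pledger chose on the slider (see the textboxes below).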

Given that the OA campaigns haven't been overly successful, I think it's worth considering alternatives that might be simpler and reduce barriers to entry.

Current pledge settings
The following text boxes pop up when someone clicks 'Pledge' on the campaign page:

Pledge textbox 1
I pledge to uphold the above Criteria if and when the signatories to this campaign equate to the following proportion of impact in my research field (see the 'About' page for calculation details). You can adjust this setting at any time during beta development. Note that selecting 0% means your pledge will activate immediately.
Chosen value: X (slider from 0 to 100)

Pledge textbox 2
This pledge will apply to research outputs for which I am an author in the following position/s (only select positions for which you are sure you will be able to comply with the pledge). You can change this setting at any time during beta development.
First / Middle / Last

Possible solutions
Maybe we could use a simple eligibility criterion to ensure that pledgers are somewhat influential in their field? Some options:

  • Minimum one publication in a journal with an impact factor of X or more (this could be difficult, as impact factors vary wildly across disciplines)
  • Minimum one publication in a journal with a Scimago quartile rank of Q2 or better
  • Minimum H-index

This would then give us a simple headcount of pledges, which we could use to activate the campaign.
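
To show how much this would simplify the activation logic, here is a rough sketch. All attribute names and cut-off values are hypothetical placeholders, not settled proposals:

```python
def is_eligible(pledger):
    """Hypothetical eligibility check combining the options above;
    attribute names and cut-offs are placeholders for illustration."""
    return (
        pledger.get("max_journal_impact_factor", 0) >= 5   # option 1
        or pledger.get("best_scimago_quartile", 4) <= 2    # option 2 (Q1 or Q2)
        or pledger.get("h_index", 0) >= 10                 # option 3
    )

def campaign_activates(pledgers, threshold=100):
    """Simple headcount: activate once enough eligible pledgers have signed."""
    return sum(1 for p in pledgers if is_eligible(p)) >= threshold
```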

Additional considerations
If we move to one of the above solutions, how will we deal with pledgers who don't want their pledge to apply to all of their papers (e.g. only those that are first-author)? Do we simply remove this option and make pledges apply to all papers uniformly? An analysis of the Cost of Knowledge boycott showed that many authors reneged on their pledges when it came to middle-author papers, so the idea here was to allow people the flexibility to decide where and how their pledge would apply.

Alternatively, we could just leave the campaigns as is, and see what happens over time. The new campaigns might bring additional advertising/interest, and with it additional pledges to the OA campaigns. All thoughts welcome :)

@CooperSmout CooperSmout changed the title Pledge textboxes: Open Access Campaigns Impact-based thresholds for Open Access campaigns Jul 28, 2020
@CooperSmout CooperSmout changed the title Impact-based thresholds for Open Access campaigns Threshold calculation for open access campaigns Jul 29, 2020
@CooperSmout CooperSmout added live discussion Discussion topic about project strategy, direction, etc labels Jul 29, 2020
@CooperSmout CooperSmout changed the title Threshold calculation for open access campaigns Issue: Impact-based thresholds for OA campaigns are confusing Sep 16, 2020
@MalikaIhle

to keep things simple:

Pledge textbox 1
I pledge to uphold the above Criteria if and when the signatories to this campaign equate to the following proportion of impact in my research field (see the 'About' page for calculation details). You can adjust this setting at any time during beta development. Note that selecting 0% means your pledge will activate immediately.
Chosen value: X (slider from 0 to 100)

I would say pick a number in the pledge text (say 100). As it says 'from my research field', I would have the pledger select their field (say, tier 2 from a list such as this one: https://docs.google.com/spreadsheets/d/1APu3Hi5UtexvISTGkk1D1uZK-7Efjpj9_sQBq_dH38I/edit#gid=222728450) so that the pledge activates when 100 people from their field have made the same pledge.
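
If it helps, here is a minimal sketch of what that rule would boil down to (the data shape and the fixed threshold of 100 are assumptions for illustration):

```python
from collections import Counter

def fields_reaching_threshold(pledges, threshold=100):
    """Count pledges per self-selected field and return the fields whose
    campaigns would activate under a fixed per-field threshold."""
    counts = Counter(pledge["field"] for pledge in pledges)
    return {field for field, count in counts.items() if count >= threshold}
```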

Pledge textbox 2
This pledge will apply to research outputs for which I am an author in the following position/s (only select positions for which you are sure you will be able to comply with the pledge). You can change this setting at any time during beta development.
First / Middle / Last

perhaps just say first or last author in the pledge text and remove the option to choose?

As for a criterion related to people's influence or career stage (considering the high turnover of ECRs), survivorship of people once the pledge activates, etc. -- I tried to read many of the posts here: https://gitlab.com/publishing-reform/discussion/-/issues/78 but I don't have any good suggestion.

I do see that the main problem here is the trade-off between keeping things simple for people to pledge and making the pledge reflect the actual complexity of the system, but I'm still very new to all these considerations.
Would perhaps asking people to select whether their contract is permanent or fixed-term be helpful?

@CooperSmout
Member Author

CooperSmout commented Sep 28, 2020

I would say pick a number in the pledge text (say 100). As it says 'from my research field', I would have the pledger select their field

This would certainly be much simpler than classifying people using their previous publications, as per the original idea. And picking a single number of pledges for their activation (e.g. 100) would be simpler again, and more akin to the new generation of campaigns under design right now.

perhaps just say first or last author in the pledge text and remove the option to choose?

This would also greatly simplify things. The motivation for letting people choose was that circumstances differ, and people should be able to choose which publications count under their pledge. This is the real benefit of using an impact-based threshold (compared to a simple headcount) -- we can calculate impact using only those publications that people have pledged to withhold in the future (based on their past publications).

The Cost of Knowledge (CoK) boycott failed in the sense that quite a few of the signatories broke their pledges by publishing in Elsevier journals. It seems likely that many of these cases were for middle-author papers, because middle authors have the least say in where a paper gets published, so I thought that if we gave people the option to choose, they would be more likely to uphold their pledges (more discussion on CoK in the pubreform forum, in which @dmitriz also notes the inflexibility of the pledge settings as a possible problem). But if we get rid of the impact-based thresholds and just use a simple headcount instead, then it also makes sense to get rid of the option to choose authorship positions (because we won't be measuring at the article level). And if we do that, I like your idea to just remove middle-authorship from the pledge altogether.

Would perhaps asking people to select whether their contract is permanent or fixed-term be helpful?

It would be useful to know, as this speaks to the turnover problem raised elsewhere. But I can't think of a simple way to use it without returning to complicated metrics.

the main problem here is the trade-off between keeping things simple for people to pledge and making the pledge reflect the actual complexity of the system

Exactly. I'm still confident that the impact-based thresholds are a 'technically correct' solution to the collective action problem (or as close as possible given the constraints), but none of that counts for anything if it's preventing people from pledging. Of course, there are countless other reasons that people might not be signing right now, so it might be too early to give up on them just yet. Unfortunately we don't have enough data to figure it out, but hopefully the new campaigns will help there, e.g. if they bring in additional pledges to the OA campaigns, it might suggest that visibility is a limiting factor. We could also post new OA campaigns with these simpler settings, and compare hit rates to the original campaigns? I'm thinking we should do this for #7 anyway, because I think it would be good to run a campaign specifically for sharing pre/postprints, which is the first of two criteria in that campaign.

@CooperSmout
Member Author

perhaps just say first or last author in the pledge text and remove the option to choose?

This would also greatly simplify things. The motivation for letting people choose was that circumstances differ, and people should be able to choose which publications count under their pledge.

Thinking more about this. I just took a look at the pledge data for each of the OA campaigns, which shows that 'first-author only' is the most popular option for all three campaigns. Here's the tally of different pledge settings at time of writing:

| Pledge setting | Platinum #5 | Gold #6 | Green #7 |
| --- | --- | --- | --- |
| First only | 27 | 25 | 37 |
| First + last | 8 | 9 | 25 |
| First + middle + last | 3 | 4 | 8 |
| First + middle | 0 | 0 | 1 |

What's striking is that nobody made a pledge that didn't include first-author papers, so it would make sense to go with 'first-author' as the default setting for future campaigns. The green OA campaign seems to have more support for alternative authorship positions (especially last author), presumably reflecting the fact that more journals are compliant with this pledge than with the other campaigns' pledges -- i.e. people can be more confident about their influence on co-authors down the track. I suppose for that campaign we could give people the option to 'upgrade' their pledge from first-author to include other positions? Just as long as doing so doesn't confuse people and stop them from pledging. Ideally, I'd like to split-test this (and other features) as we go, e.g. give half of visitors the option to upgrade and see whether it increases or decreases the conversion rate from visits to pledges, but of course that will require lots of work on the platform development side of things.
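
For what it's worth, the analysis side of such a split test would be straightforward; a standard two-proportion z-test on visit and pledge counts would do. A minimal sketch, with purely illustrative numbers:

```python
from math import sqrt

def conversion_z(visits_a, pledges_a, visits_b, pledges_b):
    """Two-proportion z-test comparing pledge conversion rates between two
    campaign variants (A = current pledge text, B = simplified variant)."""
    rate_a = pledges_a / visits_a
    rate_b = pledges_b / visits_b
    pooled = (pledges_a + pledges_b) / (visits_a + visits_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    return (rate_a - rate_b) / se

# Illustrative numbers only: |z| > 1.96 suggests a real difference in
# conversion rate at roughly the 95% confidence level.
z = conversion_z(visits_a=500, pledges_a=20, visits_b=500, pledges_b=35)
```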

@CooperSmout CooperSmout changed the title Issue: Impact-based thresholds for OA campaigns are confusing Discussion: Impact-based thresholds for OA campaigns are confusing Dec 2, 2020
@CooperSmout CooperSmout removed the live label Dec 2, 2020
@FreeOurKnowledge FreeOurKnowledge locked and limited conversation to collaborators Aug 31, 2021

This issue was moved to a discussion.

