This repository has been archived by the owner on Oct 4, 2022. It is now read-only.

Score aggregation on SEO and readability analysis could be simplified and/or improved. #2185

Open
hansjovis opened this issue Mar 4, 2019 · 1 comment

Comments


hansjovis commented Mar 4, 2019

How the overall readability score is computed can be improved. It now works like this:

  1. Translate each assessment score to a category:
    either "error", "feedback", "good", "ok" or "bad".
  2. Give each assessment a penalty score, based on the category.
    • "good", "error" and "feedback" get no penalty.
    • "ok" scores get a penalty of 2.
    • "bad" scores get a penalty of 3 when the user's language is fully supported, or 4 if not.
  3. Base the overall analysis score on the sum of the individual assessment penalty scores.
    • The available scores are either "needs improvement" (30), "ok" (60) or "good" (90).
    • The thresholds change based on whether the language is supported.
      • If it is fully supported, the thresholds are more lenient.
      • If not, the thresholds are more stringent, giving lower scores at lower penalty thresholds.

It could be worthwhile to simplify this a bit, or to document why the score is computed this way.
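The steps above can be sketched as follows. This is a minimal illustration of the described aggregation, not the actual implementation; all function names and the exact penalty thresholds are hypothetical.

```javascript
// Step 2: penalty per assessment rating (hypothetical helper).
function penaltyFor( rating, languageIsFullySupported ) {
	switch ( rating ) {
		case "ok":
			return 2;
		case "bad":
			// Harsher penalty when the language is not fully supported.
			return languageIsFullySupported ? 3 : 4;
		// "good", "error" and "feedback" carry no penalty.
		default:
			return 0;
	}
}

// Step 3: map the summed penalty to an overall score of
// 30 ("needs improvement"), 60 ("ok") or 90 ("good").
// The threshold values here are illustrative only; per the issue,
// they are stricter when the language is not fully supported.
function overallScore( totalPenalty, languageIsFullySupported ) {
	const [ badThreshold, okThreshold ] = languageIsFullySupported
		? [ 6, 4 ]
		: [ 4, 2 ];

	if ( totalPenalty > badThreshold ) {
		return 30;
	}
	if ( totalPenalty > okThreshold ) {
		return 60;
	}
	return 90;
}

// Example: three assessments rated "good", "ok" and "bad"
// in a fully supported language give a total penalty of 5,
// which lands in the "ok" (60) band.
const ratings = [ "good", "ok", "bad" ];
const total = ratings.reduce(
	( sum, rating ) => sum + penaltyFor( rating, true ),
	0
);
console.log( overallScore( total, true ) ); // → 60
```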

@igorschoester
Member

I would like to broaden the scope of this issue to include the SEO scores too.

For an earlier scaling idea, see issue #401.

@hansjovis hansjovis changed the title Readability score aggregation could be simplified and/or improved. Score aggregation on SEO and readability analysis could be simplified and/or improved. Mar 5, 2019