App Security Score inconsistencies #1940
👋 @mat42290
Thanks for bringing this up. Do you have any thoughts on alternative implementations?
Honestly, I thought the previous implementation wasn't so bad, that is to say: 100 - X * number_of_highs - Y * number_of_mediums + Z * number_of_secures, with some arbitrary X, Y, and Z values. You then capped the score to 10 if it went below that. What about lowering that cap to 0 or 1? I think using the whole range between 0 and 100 would be beneficial and make it easier to compare apps with low scores.

I guess you were not satisfied with this implementation since you changed it, but in my opinion, even if it is pretty basic, that's the way to go. I think any weighting based on severity ratios will always result in inconsistencies between apps with just a few findings and apps with a lot of findings.

Actually, a big selling point of this implementation (imo) is that it is very easy to give a goal to the developers of an app. For example, their app got a 20/100. Let's say they aim at getting 70/100 before releasing any app. If they know that fixing one High will grant them X points, they can easily plan and prioritize what needs to be done. Following the same logic, I think it would be great to advertise how the score is calculated somewhere in the documentation (or even on the webpage with the scan results) to make it clear for everyone.

EDIT: There might also be something to do with CVSS scores here. Maybe you could adapt the X and Y values based on the average CVSS scores of the findings of the corresponding severity, where A is the maximum CVSS score possible for High findings (should be 10) and B is the maximum CVSS score possible for Medium findings (depends on where you placed the barrier between High and Medium; I think it's 6.9 but haven't checked). But that might make the formula more complex for no real gain.
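A minimal sketch of the CVSS-adjusted variant suggested above. Everything here is an assumption for illustration: the function name, the weights `x`, `y`, `z`, and the scaling rule (each severity's weight scaled by average CVSS over the maximum possible CVSS for that severity) are placeholders, not an agreed design or MobSF's actual constants:

```python
def cvss_adjusted_score(high_cvss, medium_cvss, secures,
                        x=15, y=10, z=5, a=10.0, b=6.9):
    """Sketch: linear score where High/Medium penalties are scaled
    by (average CVSS of the findings) / (max possible CVSS for that
    severity). high_cvss and medium_cvss are lists of CVSS scores;
    x, y, z, a, b are illustrative constants, not MobSF values."""
    def avg(scores):
        return sum(scores) / len(scores) if scores else 0.0

    penalty = x * (avg(high_cvss) / a) * len(high_cvss)
    penalty += y * (avg(medium_cvss) / b) * len(medium_cvss)
    score = 100 - penalty + z * secures
    return max(0, min(100, int(score)))
```

With this shape, a High at CVSS 10 costs the full `x` points while a High at CVSS 7.5 costs proportionally less, so the score stays linear in the number of findings but still reflects how severe they are.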
Let me take a look at this.
May I know, is this issue solved in the latest version?
@michaelkyawsan no, unfortunately the scoring system is really just a weighted average. E.g. an app with a single high finding scores "worse" than an app with any number of high findings plus some additional medium findings. @ajinabraham please advise. Would you accept a PR that reverts the scoring to pre-#1881?
```diff
 high = len(findings.get('high'))
 warn = len(findings.get('warning'))
 sec = len(findings.get('secure'))
-total = high + warn + sec
-score = 0
-if total > 0:
-    score = int(100 - (
-        ((high * 1) + (warn * .5) - (sec * .2)) / total) * 100)
+score = 100 - (high * 15) - (warn * 10) + (sec * 5)
 if score > 100:
     score = 100
+elif score < 0:
+    score = 10
```
How about this algorithm?
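Pulled out of the diff above into a standalone function, the suggested scoring would behave roughly like this (the function name and the shape of the `findings` dict are assumptions for illustration):

```python
def proposed_score(findings):
    """Suggested scoring from the diff: fixed per-finding weights
    instead of a weighted average, clamped to 100 on top and
    floored at 10 when the raw score goes negative."""
    high = len(findings.get('high', []))
    warn = len(findings.get('warning', []))
    sec = len(findings.get('secure', []))
    score = 100 - (high * 15) - (warn * 10) + (sec * 5)
    if score > 100:
        score = 100
    elif score < 0:
        score = 10  # floor at 10, as written in the diff
    return score
```

Because the weights are fixed, fixing one High always recovers 15 points regardless of what else is in the report, which is the predictability argument made earlier in the thread.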
Do you want to send a PR?
Thanks, I will test this out.
ENVIRONMENT
EXPLANATION OF THE ISSUE
I'm noticing some inconsistencies in security scores since the release of 3.5.0.
I ran a static analysis on 2 applications and got these results:
With this new security_score formula,
the security scores obtained are 39/100 and 36/100. So basically the first one, which has way more findings, receives a better score?
I understand the intention to weight the score based on the ratio of the findings' severities, but that case seems a bit extreme.
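The inconsistency is easy to reproduce with the weighted-average formula. The finding counts below are made up for illustration, not the actual scan results from the two apps:

```python
def weighted_avg_score(high, warn, sec):
    """The 3.5.0-era formula: the score depends on the *ratio*
    of severities, not on the absolute number of findings."""
    total = high + warn + sec
    if total == 0:
        return 0
    return int(100 - ((high * 1 + warn * .5 - sec * .2) / total) * 100)

# An app with many findings can outscore one with only a couple,
# because extra warnings and secure checks dilute the high ratio:
many = weighted_avg_score(10, 20, 5)  # 45
few = weighted_avg_score(2, 1, 0)     # 16
```

Here the app with 35 findings scores 45/100 while the app with only 3 findings scores 16/100, which is exactly the kind of inversion reported above.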
EDIT:
Another example that shows the pitfalls of this formula: