Improvements and tweaks to the package score #2591
-
Link to score calculation: https://github.com/SwiftPackageIndex/SwiftPackageIndex-Server/blob/main/Sources/App/Core/Score.swift

I want to preface this entire post by saying that I don't think the scoring system is fundamentally flawed, and I don't hold these opinions strongly. But the abstract above from the thread emphasises that there is disparity in what each person would describe as a "good package", so what SPI ranks highly and what I might rank highly can sometimes differ. Choosing dependencies is an inherently personal decision; the scoring needs to represent the most balanced decision for the average user and project. And again, this is just my opinion.

License Compatibility

We currently give 10 points to any package with a recognised license that we deem suitable for the App Store, 3 points for a recognised license that is incompatible with the App Store, 0 points for a custom or altered license, and 0 points for no license at all. Personally, I don't think this balance is right. Simply look at the results of this search (https://swiftpackageindex.com/search?query=license%3Aother) and see just how many popular, well-used packages have no recognised license and are being kneecapped in our rankings. Licensing is important, and I think most people need a lesson in that, but the current scoring lacks nuance. Not everybody is releasing to the App Store, and not every missing license is a complete red flag (sometimes it just needs an extra check). Personally, I'd remove the license check.

Platform Compatibility

Apple continues to encourage products that support multiple platforms across its ecosystem. Finding dependencies that allow this can be a challenge, and I think SPI could benefit from giving a boost to packages which support more than one platform.
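To make the idea concrete, here is a minimal sketch of how a multi-platform bonus could be scored. The `Platform` enum and the point values are my assumptions for illustration, not the actual types or weights in SPI's Score.swift:

```swift
// Hypothetical sketch of a platform-coverage bonus.
// The Platform enum and point values are assumptions, not SPI's actual model.
enum Platform: CaseIterable {
    case iOS, macOS, watchOS, tvOS, visionOS
}

func platformScore(supported: Set<Platform>) -> Int {
    // No bonus for single-platform (or unknown-platform) packages.
    guard supported.count > 1 else { return 0 }
    // Full bonus for covering every Apple platform,
    // a smaller bonus for partial multi-platform support.
    return supported.count == Platform.allCases.count ? 10 : 5
}
```

For example, `platformScore(supported: [.iOS, .macOS])` would award the partial bonus, while a package declaring all five platforms would get the full one.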
When I'm searching for logging packages and see one that supports every Apple platform except watchOS, I begin to question what unusual APIs it is using that prevent a simple logging capability. To me, for that particular search, it's a code smell. In fact, I regularly use Linux support as a way of identifying packages without complex dependencies (which usually cause me fewer headaches). I don't think this is an easy one to solve, but I would love to see bonus points for supporting all Apple platforms. Support for Linux/Android/Windows is a very specific use case and so (while valuable to me) doesn't fit the "balanced user" I described above.

Other Tidbits

We currently add 10 points if a package has more than 5 releases, and 0 points below that threshold. I think there's a big difference between a package with 0 releases (which actually makes it harder to add to your project) and a package with only one or two. I think we should deduct 3 points if you haven't tagged a release.

We currently dock points for having more than 5 dependencies, but this count includes test dependencies. I understand it's hard to tell the difference right now, but should I be penalised in the rankings because I use SnapshotTesting or Quick/Nimble in my unit tests?

I love the open PR which looks to give extra points to packages with tests; it shows an effort towards maintaining a level of quality. Though, as with many other checks, it doesn't always pass the sniff test: just because I have a test target doesn't actually mean I have any tests. The same is true for documentation, which I love and value, but is it good documentation? Is it complete documentation? How can we prove this? Is it even possible?

Is there a way to automatically review or evaluate the quality of the README? Does it include helpful information to make an informed decision, and to ease the installation process? Could we penalise packages that only have the SwiftPM template README, or packages with no README at all?

This isn't something we'll get right overnight, but I hope this brain dump adds some value to the conversation. Feel free to add counterpoints and discuss below! Again, this is only an opinion.
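As a rough sketch, the release and dependency tweaks suggested above might look something like this. All names and the dependency-penalty value are assumptions for illustration, not SPI's actual Score.swift code:

```swift
// Hypothetical sketch of the release and dependency tweaks proposed above.
// Names and the -5 penalty value are assumptions, not SPI's actual code.
struct PackageInfo {
    var releaseCount: Int
    var runtimeDependencyCount: Int  // test-only dependencies excluded
}

func releaseScore(for package: PackageInfo) -> Int {
    switch package.releaseCount {
    case 0: return -3        // proposed: penalise packages with no tagged release
    case 1...5: return 0
    default: return 10       // current behaviour: bonus above five releases
    }
}

func dependencyPenalty(for package: PackageInfo) -> Int {
    // Only dock points when *runtime* dependencies exceed five, so that
    // SnapshotTesting or Quick/Nimble in test targets don't count against you.
    package.runtimeDependencyCount > 5 ? -5 : 0
}
```

The key design point is that `runtimeDependencyCount` would need to be derived from the manifest's non-test target dependencies, which (as noted above) is hard to determine today.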
-
I think this is a great step for the Swift Package Index to make this score more visible, and I agree with most of @Sherlouk's comments above. Drawing parallels with other ecosystems, I find the scoring system of pub.dev to be pretty good. It encompasses several key checks, some of which could serve as inspiration here.
Not all of these would be feasible for SPI, of course. Adding lint/formatting checks would be a great idea, though choosing between SwiftLint and SwiftFormat could be tricky.
-
I just added a package to the index for the first time, so these are my thoughts on encountering the package scoring system for the first time. For each category:
For the package as a whole:
And then some questions about a couple of the categories:

For the Contributors category, it doesn't seem that a package with many contributors is inherently better than one with a small number of contributors. I think a single person or two could create an excellent, focused package where they stay on top of bug fixes and enhancements, but it ends up with a lower score because it simply doesn't need other contributors. It's also not clear how the Dependencies category works. Does the score go up or down as the number of dependencies goes up?
-
Enhancing the package score comes up every now and then, most recently in this thread, where @Sherlouk talks about the license being an indicator:
I'd love to gather more thoughts on this, and a thread here seems like a good place to do that, keeping the forums thread focused as much as possible on the Packages page.
Also, @cyndichin is working in this area over the next few weeks as part of the Swift Mentorship Programme, and this discussion will be helpful to that project too.