Merge confidence value for OSS maintainers #9
Nice work on the merge confidence feature! This is very interesting, and it feels like there's lots that could be done with this data, as you show in your blog post.

From the perspective of a package maintainer, I think I would be very interested in having access to the merge confidence data for my packages' updates. Is this something you've considered? For example, as a maintainer of draftjs_exporter, when I make a release I would want to know whether users' test suites are passing with the new release, whether people have managed to merge the upgrade, and so on. Projects like Prettier, and linters generally, are also ones where it would be very interesting to know how much breakage each release causes, since some breakage is inevitable.

Currently there isn't really a good way to do this at scale in the package management ecosystem. Some package repositories publish quantitative adoption data in the form of download statistics, but that's about it. Some projects publish alpha/beta/RC releases in the hope of collecting feedback from users, but that process is ad hoc and qualitative rather than quantitative. Merge confidence feels like it could automate this feedback loop.
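For concreteness, here is a minimal sketch of the kind of per-release signal being described above. Everything in it is hypothetical: the UpgradeOutcome records, the per_release_signals helper, and the numbers are invented for illustration, not a real Renovate or merge confidence API.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class UpgradeOutcome:
    """One downstream repository's attempt to take a given release (hypothetical telemetry)."""
    release: str
    tests_passed: bool
    merged: bool

def per_release_signals(outcomes: list[UpgradeOutcome]) -> dict[str, dict[str, float]]:
    """Aggregate the test pass rate and merge rate for each release."""
    by_release: dict[str, Counter] = {}
    for o in outcomes:
        stats = by_release.setdefault(o.release, Counter())
        stats["attempts"] += 1
        stats["passed"] += o.tests_passed  # bool counts as 0 or 1
        stats["merged"] += o.merged
    return {
        release: {
            "attempts": s["attempts"],
            "test_pass_rate": s["passed"] / s["attempts"],
            "merge_rate": s["merged"] / s["attempts"],
        }
        for release, s in by_release.items()
    }

# Invented outcomes for two releases of draftjs_exporter across downstream repos.
outcomes = [
    UpgradeOutcome("5.0.0", tests_passed=True, merged=True),
    UpgradeOutcome("5.0.0", tests_passed=True, merged=True),
    UpgradeOutcome("5.1.0", tests_passed=False, merged=False),
    UpgradeOutcome("5.1.0", tests_passed=True, merged=True),
]
print(per_release_signals(outcomes))
```

Download statistics would only show that 5.1.0 is being installed; pass and merge rates like these are what would tell a maintainer whether it is landing safely.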
Hi @thibaudcolas, thanks very much for your feedback! We would definitely love to weaponize this capability to help open source maintainers, and I think you're already thinking along the same lines:

Re: lint packages, a related challenge is identifying packages where a low confidence score doesn't mean "don't merge". For example, there are many cases where, if a release has broken a lot of people but not you, you should still be cautious. But there are cases like …
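To make that concrete, here is a hypothetical sketch of confidence-aware handling that interprets a low score differently for lint-style packages, where some downstream breakage is expected, than for runtime dependencies. The numeric score, the LINT_PACKAGES list, and recommended_action are all invented for illustration and don't reflect any real Renovate behaviour.

```python
# Hypothetical policy: the same low confidence score warrants different
# actions depending on the kind of package being updated.
LINT_PACKAGES = {"prettier", "eslint", "flake8"}  # illustrative list

def recommended_action(package: str, confidence: float) -> str:
    """Map a (package, confidence score) pair to a suggested way to handle the update."""
    if confidence >= 0.9:
        return "automerge"
    if package in LINT_PACKAGES:
        # A low score on a linter often means "new rules flagged other
        # people's code", not "this upgrade will break your build".
        return "merge after a quick review"
    return "hold and investigate the breakage reports"

print(recommended_action("prettier", 0.4))          # merge after a quick review
print(recommended_action("some-runtime-lib", 0.4))  # hold and investigate the breakage reports
```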
Well, that's all very exciting; I'm looking forward to seeing what you'll build from there!