as a long-time NeurIPS reviewer: the problem is that most submissions are kind of "meh", and everyone knows it. They're usually well-done papers (in the sense that the math checks out and the empirical evaluation is done well enough), but it's pretty obvious that the paper isn't going to be influential. Mostly, these are papers of the "this is a cute idea that'll give you 0.1-2% improvements on some benchmarks people usually look at" variety. So depending on how strongly a reviewer believes that this might actually become influential if someone REALLY, REALLY digs down on this idea, you'll give it a "barely passing" or a "barely rejecting" grade. That's what most of these 60% usually are.
I don't think the problem here is actually a "broken peer review system". I think this is a fairly natural development: if your PhD advisor insists that you have a few "good papers" before they'll agree to let you graduate, and "publication venue" is used as a proxy for "is a good paper" (which is very reasonable, because it might take years before better proxies such as "citation count" give a good signal), then you'll submit your paper to top-tier venues and hope for the best. I don't really know how to fix this. Establishing things like TMLR (Transactions on Machine Learning Research -- a rolling-release journal meant to have roughly the quality of NeurIPS) might be a good way forward: students get their stamp of approval by publishing there, and NeurIPS can significantly raise its acceptance threshold. But once you do that, you risk that TMLR no longer counts as a "good enough venue"...