
As a bright-eyed science undergraduate, I went to my first conference thinking how amazing it would be to have all these accomplished and intelligent people in my field all coming together to share their knowledge and make the world a better place.

And my expectations were exceeded by the first speaker. I couldn't wait for 3 full days of this! Then the second speaker got up and spent his entire presentation explaining why the first speaker was an idiot and totally wrong, and why his research was garbage because his own was better. That's how I found out my field of study was broken into two warring factions, who spent the rest of the conference arguing with each other.

I left the conference somewhat disillusioned, having learned the important life lesson that just because you're a scientist doesn't mean you aren't also a human, with all the lovely human characteristics that entails. And compared to this fellow, the amount of money and fame at stake in my tiny field was minuscule. I can only imagine the kinds of egos you see at play among the scientists in this article.



You might be surprised. There’s an old saying that the fighting is so intense because the stakes are so low.


I worked for a university for many years and I can confirm this. I have never seen such negativity, scheming, and infighting in all my professional years since. In the end they were fighting over nothing, and they knew it, but they needed to feel that what they were doing was important.




This is why industry > academia, imo.

At the end of the day, all of the noise of negativity and bad press is being drowned out by incredible demos. I don't know what to chalk this up to if not jealousy. Most people in the ML-o-sphere are ignoring it.

At the end of the day, all that matters is: are users using what you built?


I mean.... That matters to someone in the industry, yes. But not necessarily someone in academia.

You chose industry over academia, and that's fine. It lines up with your values. But realize that not everyone shares those values and beliefs. To some, the act of discovering a new thing is much more important than the users using said discovery. And that lines up with academia more so than the industry.

Both are different. Both are valid.


It's fine that you like industry better than academia, so do I, but you'd better count your lucky stars that scientists exist.

> At the end of the day, all that matters is: are users using what you built?

How would you measure Isaac Newton's advances in calculus and mechanics or Einstein's general theory of relativity, against say, a web app with a billion users?


The two examples you picked are of incredibly and unusually useful advances in science. I have a friend in grad school who told me he deliberately didn't want to work on anything useful!

If you want to steelman the GP's argument, you should compare the web app with, e.g., some niche in pure math. There the trade-off between novelty/interest and usefulness to people today is much clearer.

I think the two are incomparable and both useful, but it's disingenuous to strawman the GP as saying web apps are more useful than relativity.


As a former student of CS I could pick hundreds of examples of algorithms and data structures, which are now baked into the standard libraries of all programming languages, and therefore in web apps, which were invented in universities. Same with AI - industry is now collecting the fruits, but the groundwork research was absolutely indispensable and very few in industry were doing it until FB or Google set up their research institutes (and we could have another debate on whether those are academic, industrial or somewhere in-between).

Yes, I picked those two examples for effect, as a reductio ad absurdum (not a strawman), because going only by the immediate or tangible value of what one "builds" (science isn't even built, but rather discovered) is not a good way to dismiss academia.


People balked at "imaginary" numbers as a "niche math toy" for something like a hundred years, until they became super useful in physics.
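A tiny illustration of that payoff (my addition, not the commenter's): Euler's formula, e^(iθ) = cos θ + i sin θ, is the identity that makes complex numbers the natural language for waves and oscillations in physics, and it's checkable in a few lines with Python's built-in complex type:

```python
import cmath
import math

# Euler's formula: e^(i*theta) == cos(theta) + i*sin(theta)
theta = math.pi / 3
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))

# The two sides agree up to floating-point rounding.
assert cmath.isclose(lhs, rhs)

# A classic special case: e^(i*pi) + 1 is (numerically) zero.
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12
```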


One of the most intense and fun user bases I had was in HPC at an academic healthcare research institute. I've also worked in high energy research.

When most folks think of academia they think of faculty, but staff vastly outnumber them. Contrary to popular belief, there are legions of cold, level-headed engineers who get shit done.

A lot of the research isn't some random study of something that may or may not be useful in half a century or more, it's often immediately applicable and winds up in products or shaping government policy on a global scale. Especially the well funded ones.

But we don't hear about that stuff. We hear what the media and tech companies are currently trying to cram down our throats.


> At the end of the day, all that matters is: are users using what you built?

Ah yes, the Kardashian model of success


Absolutely. That's why PredPol was such a great success.


None of it would be possible without academia though. Industry just applies academic research.


I have no insight into the natural sciences, but I've spent a couple of years in computer science academia. With that in mind:

> None of it would be possible without academia though. Industry just applies academic research.

Meh, that vastly oversells academic research. Very little of academic research in computer science is actually used in the industry. It's not that the industry is ignorant, but rather that the majority of academic work is useless: They create artificial problems [1] and solve them in shoddy ways, with hand-picked benchmark results, and frequently without even publishing the source code.

It's probably not surprising, given that the typical incentive is to get a PhD. So you need a "problem" that can reliably be solved in 3-5 years and which allows you to produce 5-10 conference papers with your name on it.

[1] I'm not talking about theoretical fields – my comment is purely about supposedly practical research.


Sadly you are so correct.

I was once watching a VC interview a snooty machine vision scientist at Johns Hopkins, who was talking up how good his research was at recognizing 3D objects. So the VC pulled out his cellphone and took a photo of a box on the table. He asked the professor to have the software highlight the rectangular solid. Whoop. He never heard back. The software in the lab that was supposedly so great couldn't do a very basic task that wasn't from its preapproved set of tasks.

I do think that academia can be the source of some great ideas, but they often end up believing their own BS.


I would say that the Ads model at Google represents a truly non-academic set of discoveries. https://static.googleusercontent.com/media/research.google.c... and https://static.googleusercontent.com/media/research.google.c... are two of the most significant papers published in ML (and truly underappreciated, IMHO) and represent decades of people-time developing new ideas in industry.

I worked at Google and there's just tons of stuff that never actually existed in academia and was created, launched, and then replaced by something better entirely within the company without any publications!


Are you counting research departments in industry as academia?



