hdivider's comments

Let me articulate the thing which I believe is on many people's minds:

What is the chance the president will order a nuclear strike on Iran as this war proceeds?

We would hope the odds are vanishingly small, because doing so would be profoundly disadvantageous. But the same was true for initiating this war in the first place. The logic -- such as it is -- of some people in power may lead them to conclude once more that shock and awe can succeed. We've already struck the country with powerful conventional weapons at scale and it has not led to a weakening of Iranian resolve.

All the above said, my personal hope of course is this will never happen. I'm curious what other folks think however.


No chance. A nuclear strike on Iran won't achieve anything that a large number of conventional strikes couldn't.

My real question is not whether it would achieve anything meaningful, but what would be the side effects of such a strike on allies in the region.

I don't have a remotely decent mental model of fallout etc. from modern nuclear weapons - my assumption is they're still toxic enough to be a bloody terrible idea anywhere near someone you like.


I think the main concern would be escalation, e.g. Netanyahu feeling emboldened to use his weapons too. And of course Putin, to try to shock Ukrainian forces and population (good luck).

Alliances might get reshuffled as everyone realizes they need to reassess their nuclear defense and deterrence. It would fundamentally change the nature of modern warfare, not for the better. Let us hope this never happens.


You're assuming the current president operates rationally. He simply would love to be the guy who uses a tactical nuke.

How much would you wager? It's easy to say what you're saying because it's popular.

If you watch actions rather than social media BS, the probability is close to 0%.


The shock and awe from a nuclear strike is unmatched

Yeah, that "shock and awe" would probably destroy any remaining US alliances.

Wouldn't*

Even if clearly one side is correct without any doubt whatsoever, beyond any question? Such as 2+2=4 -- we should accept a situation where some people insist this is not true? It seems irrational.

Arithmetic is only true axiomatically, which is a fancy way of saying that 2+2=4 is merely an opinion.
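To be pedantic, "axiomatically true" doesn't mean "opinion": it means formally derivable from the axioms. In a proof assistant like Lean, for instance, 2 + 2 = 4 checks by pure computation from the definition of the natural numbers (a standard one-liner, shown here for illustration):

```lean
-- 2 + 2 = 4 follows from the Peano-style definitions of Nat and addition;
-- `rfl` closes the goal because both sides reduce to the same numeral.
theorem two_plus_two : 2 + 2 = 4 := rfl
```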

I agree entirely. HN tends to be incredibly nitpicky and dystopian. I think it's because so many HNers work in dystopian software-only companies, not doing much in the physical world, away from the algorithms.

Incredible technological innovation is on the horizon. That's why we are not doomed this century. We can make it.

*hits 'reply', knowing there will be nitpicky comments because of course on HN these days, no positive point shall be left standing.


This is far bigger than people think.

So much advanced equipment is just sitting there in labs, waiting for humans to finally go and run experiments. Which they eventually get round to, sort of, when they can secure funding and when the grad student isn't ill or making mistakes or framing the problem the wrong way.

AI-driven labs can iterate 'good enough' hypotheses way faster than human R&D systems. Automated labs are going to be a major source of discovery.
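The loop such systems run can be sketched as "propose, measure, update, repeat" - essentially active learning with a robot in place of the grad student. A toy sketch (the compound library, assay, and surrogate model here are all invented for illustration; real systems use proper Bayesian optimization):

```python
# Toy closed-loop "AI-driven lab": an active-learning loop that picks which
# compound to assay next using a crude nearest-neighbour surrogate model.
import random

random.seed(0)

# Hypothetical compound library: one feature x in [0, 1]; the (unknown to
# the model) assay response peaks near x = 0.7.
library = [i / 99 for i in range(100)]

def run_assay(x):
    """Stand-in for a robot actually performing an experiment."""
    return -(x - 0.7) ** 2 + random.gauss(0, 0.01)

def predict(tested, x):
    """1-nearest-neighbour surrogate: predict from the closest tested point."""
    nearest = min(tested, key=lambda t: abs(t[0] - x))
    return nearest[1]

# Seed with a few random measurements, then alternate exploit/explore.
tested = [(x, run_assay(x)) for x in random.sample(library, 3)]
for step in range(20):
    seen = {t[0] for t in tested}
    untested = [x for x in library if x not in seen]
    if step % 2 == 0:  # exploit: highest predicted response
        x = max(untested, key=lambda x: predict(tested, x))
    else:              # explore: farthest from any tested point
        x = max(untested, key=lambda x: min(abs(x - t[0]) for t in tested))
    tested.append((x, run_assay(x)))

best_x, best_y = max(tested, key=lambda t: t[1])
print(f"best compound x={best_x:.2f}, measured response {best_y:.4f}")
```

After 23 measurements out of 100 candidates, the loop steers toward the peak near x = 0.7 - the point being that each iteration is limited by assay throughput, not by a human's calendar.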


> Eve independently screened some 1,600 chemicals and modelled how their structure related to their activity to predict which ones were worth testing. King and his group armed the robot with background knowledge and a machine-learning framework for developing hypotheses. Eve then used those elements to design experiments to test these hypotheses and, crucially, performed them itself.

> King plans to use the system — which occupies one-fifth of the floor space that Eve does — to model how genes, proteins and small molecules interact in cells. Part of that will involve taking around 10,000 mass-spectrometry measurements each day.

The throughput here is astounding, especially when driven by researchers who really know how to chart a path. I feel every time a critical feedback loop is made both faster and cheaper, it makes everyone participating better. I wonder whether we will see many more "whiz kid" scientific researchers than we have today.


> So much advanced equipment is just sitting there in labs, waiting for humans to finally go and run experiments. Which they eventually get round to, sort of, when they can secure funding and when the grad student isn't ill or making mistakes or framing the problem the wrong way.

That's not really what the article is about though. Short of staffing them with humanoid robots, existing labs and their equipment will continue to go unused.


There are groups that are actively working on automating conventional labs like this. Most of the efforts I know about use non-humanoid mobile robots, or even just a six-axis arm on a rail plus some lab-space reconfiguration.

I don't really see why most existing equipment would be usable in this way. When you automate a thing you often have to rethink the entire problem. But more generally, automation is for _repeatable_ things and a lot of research is... not that.

The expensive equipment is usually a small (but crucial!) part of research activity, which involves things like talking to a lot of people, getting permission to do weird or new things, going out into the environment and collecting things in very specific ways, storing and transporting them carefully, observing, etc. Building or modifying existing lab instruments, doing various things with animals that are not co-operative ... and CLEANING. Who does all the cleaning?

There are definitely use cases where you have a specific protocol you want to scale, but I'm also not sure how safe I would feel around an AI with a license to experiment and access to dangerous reagents, high temperatures, etc. Or, god help us, an oligonucleotide synthesizer. Which is definitely going to happen (if it has not already).


>Who does all the cleaning?

In some cases that would be the same person that does the most advanced innovative and/or creative work.

The idea behind the fully automated system is that fewer hired hands are needed for efforts that are routine enough. But not zero: you still need at least one person who can do everything, if called upon for mission-critical operation.

As for the creative work and planning that is out of AI's league: those things always need to be done too, but they are not exactly "routine".

Once most of the tedious routine tasks are well automated, though, the human brain behind the lab can finally relax a bit, with eurekas flowing at the same rate without needing an additional 40 or 50 hours at the bench, while even more results are generated than they could produce single-handedly.

Which also gives them the time to do the cleaning; otherwise they would need two humans to serve a single automated system.


Probably something like an Atlas or a Unitree robot. They are beginning to get very good.

> AI-driven labs can iterate 'good enough' hypotheses way faster than human R&D systems.

Is there evidence of that?


I really hope you're right. The challenge with Linux still seems to be practicalities -- like in particular, does Zoom run well on most distributions?

Reports seem to be of system crashes and degraded performance. I imagine there are lots of 'it works for me' stories, but think: for Linux to eat into Windows user market share (which I would greatly support), critical things like Zoom have to work at least as reliably as on Windows. For nontechnical users who would never figure out which incantations to type into the terminal to fix it -- because they have their next meeting in 15 minutes.


I installed PopOS (22) and zoom worked fine right off the bat. So did steam and all my steam games. Heck even my printer worked. (It has since become more temperamental and now only works with one of the 3 print dialogues on my Linux box...)

My game controller worked, my BT headset, the media keys on my keyboard even worked.

Lots of stuff was mildly broken but no more so than it was on Windows. It is just differently broken.


How many hours has Zoom put into making the client stable on Windows and Mac?

How many hours have they put into the Linux client?

My guess is the answers to these questions explain how it got there more than anything the distros or upstream components can do.


> How many hours has Zoom put into making the client stable on Windows and Mac?

Users don't really care, do they?


I'm not talking about that. I was replying to a claim that Zoom is less stable on a platform, as if that somehow happened for free and not as a result of a team tracking and fixing bugs on the application side, likely over the course of years.


> like in particular, does Zoom run well on most distributions?

It works fine (tested on Arch), but at the very least you should run that kind of malware as a separate user, or better yet, in a VM.


Limiting it to a browser tab is sufficient :)


Even my Starbook, so... literally made for Linux, doesn't do things like go to sleep when I close the lid. It made me switch back to my Mac because, even though I could fix it, I have a life and little time for my main work device to randomly decide not to work.

Linux is never, and I mean never going to be a legitimate alternative to Windows or MacOS on the desktop under the current paradigm. "Switch to X desktop or distro" means less than zero to 99.9% of computer users (probably a few more nines in there too).

"Oh but the Steam Machine!" Essentially no one who uses that will actually care what the OS is; it's a shell, and a very specific one built to do a single task. No one is buying it as a general-purpose machine they can do their taxes on.


Yes, precisely. And then as I anticipated, the "it works for me" stories, even here in this thread. Wish we could get past this steady-state in the Linux ecosystem.

Imagine a Linux distro largely displaced Windows and Mac simply due to usability, security, reliability, and the fact that there's no monstrous corporation pulling the strings. That would be awesome.


Works fine on recent Ubuntu and Fedora, both Wayland and X.

"Fine" and not amazing because occasionally I have screen sharing issues, but that's like once in a blue moon? Could be down to my specific configuration, but it's allegedly more stable than my coworker's zoom on Mac.


Zoom works fine for me on Ubuntu. Or at least, it's no more flaky than it is on Mac.


I mean... Windows legitimately doesn't work. I work at one of the mag7, and it's a running joke that while using Windows, everyone's microphone suddenly quits. We then have to restart. This has been going on for years. Our colleagues on Linux don't have such problems.

It's just that we accept Windows issues as "that's how computers are", while Linux is expected to just work.


I haven't used Zoom in years, but Teams in the browser on Linux runs better than Teams natively on Windows. Which is odd, since I understand it is just an electron app on Windows, so it is effectively running in the browser anyway. Still, those of us on Linux have way fewer audio and connectivity issues.


Here's a standard-structure, VC-funded, exit-oriented startup to consider: make video calls reliable. As in, you provide a guarantee and pay the customer if the call didn't work.

Wired headphones could be one part of the solution. They're just far more reliable (if they don't break, which they will). But if the reliability of video calls can be improved so that it's literally as reliable as talking to someone next to you in a quiet room, I bet lots of people would pay for it. There is so much latent frustration about unreliable calls: even with the best setup, even at NASA, the DoD, and large corporations, Zoom and other platforms fail to perform reliably in so many cases.


Funny you mention it; I actually have been thinking of this as a startup/solution for ages (especially since covid). I realized that it's likely a fair bit more difficult than it sounds (you'd need significant control of both the software and hardware stacks).

If you or anyone's seriously interested in pursuing it, feel free to reach out to the email address in my profile page.


> make video calls reliable. As in, you provide a guarantee and pay the customer if the call didn't work.

Microsoft would be ruined, haha. Over the past week, I had about a 30% chance of the call not working and an 80% chance of the screenshare not working.


I saw "cleanroom as a service" and thought great! Don't need to build a facility to do materials science or photonics or certain aerospace R&D...but nope, not that kind of cleanroom. :)


It's curious to me why we have no theory of intelligence. By which I mean an actual hard and verified theory, as in physics for gravity, electromagnetism, quantum mechanics.

Intelligence is simply not well-understood at a mathematical level. Like medieval engineers, we rely so heavily on experimentation in AI. We have no idea how far away from the human level we actually are. Or how far above the human level we can get. Or what, if anything, the limits of intelligence are.


By now you would have to say it’s because “intelligence” is no more well defined than “consciousness” or “the soul”.

A more concrete idea like "learning" has been rigorously defined and is quantifiable, which is maybe why progress on a theory of learning is so much further along than a theory of "intelligence".


Who is more intelligent: a twenty-something influencer making money from her bedroom, or a grad student barely making ends meet?

Who is more intelligent: a politician, or a high school teacher?

What is intelligence, anyway?


We have a pretty good answer to your questions: they are called IQ tests. It's not like measuring intelligence is uncharted territory.

https://www.scientificamerican.com/article/i-gave-chatgpt-an...

https://www.reddit.com/r/singularity/comments/1p5f0b1/gemini...

Gemini 3 Pro has an IQ of 130 now but we keep moving the goalposts and being like “not THAT intelligence, we mean this other intelligence”. I suspect, and history shows us this will be the case, that humans will judge AIs as not human and not intelligent and not needing rights way past the point where they should have rights, even when vastly superior to human intelligence.


IQ tests only measure the ability to pass IQ tests, they say very little about intelligence. MMA fighters might be among the most intelligent people on the planet, playing 4D bullet chess with each part of their body at light speed, while scoring a flat 100 at IQ tests (the average).


IQ tests are nonsense. The more IQ tests you take the better at them you get. And who is "we", you pretentious dirtbag.


https://g.co/gemini/share/2c358c58555f

That doesn’t make them invalid. Who takes IQ tests over and over?


You're missing my point. A test that you score better and better on each time isn't measuring intelligence, is it?



I think this is the equivalent of a non-nuclear physicist asking, "why do we have no theory of nuclear physics?" in the late 1930s. Some people do, they're just not sharing it.


This is a good counter in my view to the singularity argument:

https://timdettmers.com/2025/12/10/why-agi-will-not-happen/

I think if we obtain relevant-scale quantum computers, and/or other compute paradigms, we might get a limited intelligence explosion -- for a while. Because computation is physical, with all the limits thereof. Pushing electrons through wires no longer yields the nonlinear gains it used to. Getting this across to people who only think in terms of the abstract digital world, and not the non-digital world of actual physics, is always challenging, however.


Wait till the Chinese land on the Moon first in this new space race. There will be a Sputnik moment, massive additional investment, and this will inevitably impact sci-fi. Just like in the previous space race, we'll have to fall quite a bit behind before we wake up -- and then we'll go all-out.

I also don't agree with the general dystopian or cynical view quite prevalent here on HN these days, frankly. It's always been so, but it seems to have gotten darker, such that I think a lot of old-timers like me pretty much avoid HN these days. It's not all bleak, especially when you get away from these screens and out into the real world. Looking outward, rather than inward, can lead to the kind of desire for discovery and progress which underpinned the Apollo era. The world out there is in extreme disarray too -- but to an optimist, it presents opportunity to do good.


> It's not all bleak, especially when you get away from these screens and out into the real world.

The real world seems to be getting bleaker, especially for the young ones coming up today.

