it also has stealth. This is a complete disaster. The only purpose of this stealth ship is to steal leaders and/or go inside cave lairs and blow them up.
I wonder if Iran would have gone differently if we had captured the Ayatollah instead of killing him. A stealth drop ship like this would have allowed that to happen. The reason regimes are more likely to negotiate when you capture their leaders is that you might release them. (Not a good day for the usurper.)
I don't think whatever is negotiated with Iran's current regime would actually be honored by them. They may commit to something to get their leader back, but they won't keep the promises.
Their self-stated goal is the destruction of Israel and the US. They could have chosen peace and not funded proxies across the Middle East. Their choice of aggression by whatever means they have at their disposal shows what their long-term strategy would be.
They have shown the intent. They just didn't have the capacity to follow through. Once they gain the capacity, they could go to extreme lengths. Just see how they attacked their neighbors who were not party to the war.
A very good response to the parent comment and summary of the current situation.
AIUI the Iranian attacks on Arab countries are strategic: increasing energy costs pressures the US to stop military action. However, the US and its allies were prepared, with set-aside oil reserves, increased supplies from other sources, and a reduced Iranian ability to interfere with shipping.
Major warfare always has tragic effects, but against regimes actively pursuing destruction of other nations, return of fire is a rational response.
The limiting factor for quantum computers is keeping them cold. Is this triple superconductor high-temperature too? If not, it's not going to change things much.
I did a bunch of research on similar-Tc superconductors back during my PhD.
7K is considered “warm” from a cryogenics point-of-view because you can just dunk your sample into a dewar of liquid helium at 4.2K. You can even get it cooler, down to about 1K, using evaporative cooling techniques. [1]
It’s when you need temperatures lower than this that things start getting complicated. E.g., a closed-cycle evaporative He-3 system can get you down to 200 mK, or you can bite the bullet and use a dilution fridge to get down to around 10 mK.
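The cooling ladder described above can be summarized in a short sketch. The technique names and temperature figures come straight from the comment; the `reachable` helper and its selection logic are my own illustrative framing:

```python
# Approximate base temperatures for the cryogenic techniques named above.
# Values (in kelvin) are the rough figures quoted in the comment.
cooling_stages = [
    ("Liquid helium-4 bath (dewar)", 4.2),
    ("Evaporative cooling on He-4", 1.0),
    ("Closed-cycle He-3 evaporation", 0.2),    # 200 mK
    ("Dilution refrigerator (He-3/He-4)", 0.01),  # ~10 mK
]

def reachable(tc_kelvin):
    """Return the simplest listed technique whose base temperature is below Tc."""
    for name, base_temp in cooling_stages:
        if base_temp < tc_kelvin:
            return name
    return None  # colder than anything on this list

# A 7 K superconductor only needs the first rung of the ladder:
print(reachable(7.0))
```

This is why 7 K counts as "warm": the required hardware is just a helium dewar, not a dilution fridge.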
That's no solution at all, and the tech is barely even viable as a passing option. It requires detailed speed and time measurement, and having a GPS throttle governor active 100% of the time is subject to all kinds of new issues wherever GPS isn't fully functioning, e.g., in dense trees, tunnels, cities...
And as to passing: the closer the two vehicles are in speed, the loooonger it takes one to pass the other. Unless you get down to an absurd accuracy, one driver will notice he's got a little bit of pace on the other guy and will try to pass. And even with 0.0001% accuracy and 100% uptime (not going to happen), there will still be passing issues, since some trucks may not be quite up to speed, just a few km/h under, and you're right back to the long passes.
Either outlaw passing, or allow it to happen at a reasonable pace.
It looks like they do. https://simonwillison.net/2025/May/25/claude-4-system-prompt...
They patch it in the prompt and eventually address it in the reinforcement training. It seems the eventual goal is to patch all of these tiny "glitches" so as to hide the lack of cognition.
Thanks, excellent catch! Everyone is saying this is a "brain teaser." However, this reminded me of the LLM that thought it was the Golden Gate Bridge. I hadn't been able to say it (or think it) succinctly. From Claude: "when we turn up the strength of the 'Golden Gate Bridge' feature, Claude's responses begin to focus on the Golden Gate Bridge. Its replies to most queries start to mention the Golden Gate Bridge, even if it's not directly relevant." Here's the link for those interested. https://www.anthropic.com/news/golden-gate-claude
The magic of LLMs is that one LLM can learn everything and then we can clone it. However, if we don't know ahead of time which one will be the best, then we should probably keep a lot of versions with real (mathematically calculated) diversity. Ironically, the DEI peeps were right all along.
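One way "mathematically calculated diversity" could be made concrete: summarize each model version as a feature vector (say, per-benchmark scores) and greedily keep the versions that are most mutually dissimilar. Everything here, including the greedy max-min strategy and the toy score vectors, is a hypothetical sketch, not anything from the comment:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two nonzero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def pick_diverse(versions, k):
    """Greedy max-min selection: keep k versions maximizing mutual distance."""
    kept = [0]  # seed with the first version (arbitrary choice)
    while len(kept) < k:
        best = max(
            (i for i in range(len(versions)) if i not in kept),
            key=lambda i: min(cosine_distance(versions[i], versions[j]) for j in kept),
        )
        kept.append(best)
    return kept

# Two near-clones plus one behaviorally different version:
scores = [[0.9, 0.1], [0.88, 0.12], [0.1, 0.9]]
print(pick_diverse(scores, 2))  # keeps the outlier over the near-clone
```

The point of the greedy max-min rule is exactly the commenter's: a roster of near-identical "best" checkpoints buys you nothing, while a measurably diverse set hedges against not knowing which one will win.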
Producing fewer "compiler errors" and more "broken code errors" is a fundamental failure. The cost of detecting compiler errors is lower than the cost of detecting broken code. If the cost of detecting and fixing broken code increases at the same rate as LLMs "improve," then their net benefit will remain fixed. I asked my five-year-old the above "brain teaser" and he got it right. As a follow-up, I asked what he should wash at a car wash if he walked there; he said, "my hands." Chat answered with more gibberish.
I agree it is a fundamental failure of the current state of models. I believe it is solvable. The nuance is just that "solving" the problem might not look like what we think of as a solution. Hence the asymptote.