Correct. It depends. For example, it might depend on what the collaboration is likely to result in. Perhaps it would be more likely to be moral if there were some boundaries in place, like "no mass domestic surveillance" or "no fully autonomous weapons".
Because the US government currently believes it is legal to blow up civilian drug traffickers and wage war without congressional approval. So at some point, yes, collaboration is immoral.
The US military has deployed fully autonomous weapons since at least 1979, and potential adversaries are now doing the same. For better or worse that ship has sailed.
Look, by that standard a dumb bomb is a fully autonomous weapon once it's launched. Let's be real: an LLM making decisions about who to target and when and where to launch munitions represents a meaningful change in our concept of autonomous weapons.
So we are wrong to express any opposition, or any desire to maybe raise the bar here? Aren’t we supposed to be “the good guys”? Or should we just accept a role as the menace of the world, wildly throwing our weight around whenever we have an unscrupulous president?
Those questions are moot. There are situations where it's simply impossible to have a human in the loop because reaction time is too slow, the environment is too dangerous, or communication links are unreliable. Russia is deploying fully autonomous weapons to attack Ukraine today, and it will be selling those weapons (or licensing the technology) to its allies. There is no option to stop. And let's please not have any nonsense suggestions that we can somehow convince Russia / China / Iran / North Korea to sign a binding, enforceable treaty banning such weapons: that's never going to happen.
There's always an option to stop. We can choose civility over barbarity, stop trying to kill people over 1000+ year-old dick-waving contests, and stop threatening each other with doomsday weapons because your grandpa shot my grandpa. Just because our leaders are too stupid and cowardly doesn't mean there's no option.
Not sure you're aware, but the joke may be on you. It's apparently Putin who's convinced Trump and the Mullahs (not the band) to choose civility over barbarity by allowing a superyacht of one of his cronies to pass through the Strait of Hormuz.[0]
Russian trolling at its finest, truly. This timeline keeps raising the bar on the absurdity quotient.
I wasn't aware that the US was throwing away its moral compass for the just cause of frustrating Putin's expansionism. The new story seems to be that Putin gets to do what he wants, and so do we.
If you think there's something wrong with giving our warfighters the most effective weapons to carry out their assigned missions with minimum casualties then your moral compass is completely broken. Personally I favor a less interventionist foreign policy but that has to be addressed through the political process. Not by unaccountable individual defense contractor employees making arbitrary policy decisions.
You should know that every single veteran I know ruthlessly mocks Hegseth for trying to use the term “warfighter” non-comedically. It’s a synonym for someone who takes their service way too seriously and makes it their whole identity. It’s almost exclusively used to mock people.
We aren’t Russian and Putin is not our leader. We can choose how we behave and operate. This is like saying we should use chemical weapons if someone else deploys one. You’re speaking as if it’s all so binary. “Do what they do or you lose.”
It's cheap and easy for someone sitting safely behind a computer to pretend to be morally superior when you're not the one who has to make hard decisions, or deal with the consequences. Chemical weapons have seen minimal use since WWI largely because they're not very militarily effective. Autonomous kinetic weapons actually work. Right now Ukrainians are building autonomous weapons to defend themselves against Russian autonomous weapons. For Ukrainians it is binary: do what they do or you lose. Would you prefer that they lose? And don't presume to tell us that the Russians can be persuaded to stop by non-violent means; that would be completely delusional.
>It's cheap and easy for someone sitting safely behind a computer to pretend to be morally superior when you're not the one who has to make hard decisions, or deal with the consequences.
This is a deeply flawed argument with an obvious application back at you, but either way, if you’re going to stoop to personal attacks, I think we’re done here.
Who said otherwise? Clearly it’s about facilitating specific acts by the government. Why are y’all acting like it was so wildly broad? No one said “working with the government is inherently immoral.”
No. Their comment was:
“Any AI researcher who continues to work here is morally compromised.”
But “…doing this kind of work with the federal government” is added context that was not there; it's based on your own interpretation.
The language of the parent comment charges that simply working at a company engaged in this makes one complicit in an immoral act, and that the complicity itself is immoral. I disagree with all of that.
Yes. Working at a company that explicitly profits from clearly immoral acts is wrong. It doesn’t mean working for a company contracted with the federal government is always wrong.
In a logical or mathematical sense, sure, but when it's the US government and a huge surveillance-tech company, it's pretty necessarily immoral (at least in an American context, where harming liberty is immoral; other cultures disagree).
Like the guy in an old clip saying "What is the charge? Eating a meal? A succulent Chinese meal?" while being arrested for trying to pay with a stolen credit card. The succulence of the meal has nothing to do with it, and that it's your own government has nothing to do with it. It's just a sad way to try to distract from what's actually wrong with helping build tools for mass surveillance and autonomous murder.
I don't think that was intentional. But invading countries while trying to distract them with negotiations, randomly assassinating leaders and hoping everything just turns out well, threatening to "destroy civilizations", targeting bridges, and more, all while aiding and abetting Israel as it intentionally destroys pharmaceutical, educational, and other civilian institutions? That is all 100% intentional.
In some ways worse than bombing the school was the effort to implicitly deny it. The school was near a military facility, and had itself been a military facility in the past. US intelligence screwed up. They should have simply acknowledged what happened and why. Their response just reeked of cowardice and malice at the highest level.
You'll have to live with it somewhere else. Neither HN's administrators nor its readership will tolerate that kind of behavior. If you intend to participate on Hacker News over the long term, please take up the suggestion by the other poster to review the guidelines and adhere to them.
Of course it doesn't! I acknowledge that I have no First Amendment right to speak in this forum, none at all. I merely observe that the people who run the forum are themselves champions of free speech, within limits of course.
That’s insane. There should be a big team of people at AMD whose whole job is just to dogfood their stuff for training like this. Speaking of which, Amazon is in the same boat; I’m constantly surprised that Amazon is not treating improving Inferentia/Trainium software as an uber-priority. (I work at Amazon.)
> “Are we afraid of our competitors? No, we’re completely unafraid of our competitors,” said Taylor. “For the most part, because—in the case of Nvidia—they don’t appear to care that much about VR. And in the case of the dollars spent on R&D, they seem to be very happy doing stuff in the car industry, and long may that continue—good luck to them.”
Where's the scope for an L7 promo in "Fixed a bunch of tiny issues that were making it hard to use Trainium/Inferentia with PyTorch"?
Amazon's compensation strategy, in which you primarily get a raise years in the future for tricking your management chain into promoting you, is definitely bearing its rotten fruit.
Excellent link.
So the best solution is to take the author's observation and add the average seasonal lag to arrive at the "real" observed spring, summer, fall, and winter.
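Roughly, something like this (a minimal sketch; the 2024 equinox/solstice dates are the standard northern-hemisphere ones, but the 25-day average lag is a made-up placeholder, since the real lag varies by location and by season):

    from datetime import date, timedelta

    # Assumed average lag between the astronomical start of a season and
    # when that season is actually "felt" (placeholder value; the real
    # figure varies by location and by season).
    AVG_SEASONAL_LAG = timedelta(days=25)

    # Northern-hemisphere astronomical season starts for 2024.
    astronomical_starts = {
        "spring": date(2024, 3, 20),
        "summer": date(2024, 6, 20),
        "fall": date(2024, 9, 22),
        "winter": date(2024, 12, 21),
    }

    # Shift each astronomical start by the average lag to estimate the
    # "real" observed season starts.
    observed_starts = {
        season: start + AVG_SEASONAL_LAG
        for season, start in astronomical_starts.items()
    }

    for season, start in observed_starts.items():
        print(f"{season}: {start.isoformat()}")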
I don’t think that is a good example. No one is debating whether LLMs can generate completely new sequences of tokens that have never appeared in any training dataset. We are interested not only in novel output but also in that output being correct, useful, insightful, etc. Copying a sequence from the user’s prompt is not really a good demonstration of that, especially given how autoregression/attention basically gives you that for free.
> That means the group of characters it outputs must have been quite common in the past. It won't add a new group of characters it has never seen before on its own.
My only claim is that precisely this is incorrect.
You couldn’t design a better system for incentivizing leaks if you were trying. Hell, the CEO literally said as much. Not sure how you can conclude the markets aren’t the problem.
Yeah, I had to reread that part... I was like, no way the CEO of Polymarket publicly said, on the record, that it incentivizes leaks. Had to check to make sure I wasn't on The Onion.
Wow, it took some time for me to dig the interview out [0]. I think it's stupid that The Atlantic did not link to it, and that they misrepresented the context.
I agree that company info being leaked is whatever. No one is hurt by knowing that Apple is working on a foldable phone; maybe an exec loses his million-dollar bonus and can't upgrade his yacht this year, and the market can operate off of that knowledge.
But the flip side is that there's no way to distinguish between leaked company info and leaked government info. Up until this era of history, there was rarely a financial incentive for anyone to leak government info, and even when there was, it was almost impossible to do so completely anonymously.
I'm not necessarily agreeing with the article. Who knows if that actually happened? But the incentives make it more plausible than ever.
Rhetorical question: why do non-insiders still bet in these markets? Surely, after all of the focus on insiders, people will begin to realize that betting without insider knowledge is a fool’s gambit.
Because the way these companies make money is by incentivizing the behavior of gambling addicts. It's just like asking why people continue taking drugs when they're known to be harmful.
You're assuming they all act rationally with full information.
Kids are growing up with the culture of sports betting, meme stocks, Robinhood for easy investing (even if you can’t afford a single share of a stock), virtual items and loot boxes, “blind box” products, etc. The entire economy runs on taking advantage of people with gambling compulsions/addictions.
And to answer your rhetorical-but-not-really question, not all people know they are “the fish” (referring to the quote from the movie Rounders).
Are we automatically discarding everything that might have been written or assisted by an LLM? I get it when the articles are the type of meaningless self-improvement word soup or similar. However, if an author hypothetically uses LLM assistance to polish their style to their liking, I see nothing wrong with that as long as the core message comes through.
I've seen so many LLM-generated articles by this point that obviously had no human editing beforehand (just prompt and slap it onto the web) that it makes me wonder every time: if I read this article, will I actually learn the truth? Or are key parts of it actually false because the LLM hallucinated them and the human involved didn't bother to double-check before publishing?
If someone was using the LLM only for style, that's fine. But if they were using it for content, I can't trust that it's accurate. The time cost of reading the article isn't worth it if there's a chance it's wrong in important ways, so when I see obvious signs of LLM use, I skip it and move on.
Now, if someone acknowledged their LLM use up front and said "only used for style, facts have been verified by a human" or whatever, then I'd have enough confidence in the article to spend the time to read it. But unacknowledged LLM use? Too great a risk of uncorrected hallucinations, in my experience, so I'll skip it.