
Back when Arena was first announced, there was an interesting line in their write-up:

https://magic.wizards.com/en/news/feature/everything-you-nee...

>We've created an all-new Games Rules Engine (GRE) that uses sophisticated machine learning that can read any card we can dream up for Magic. That means the shackles are off for our industry-leading designers to build and create cards and in-depth gameplay around new mechanics and unexpected but wildly fun concepts, all of which can be adapted for MTG Arena thanks to the new GRE under the hood.

At the time, this claim of using "sophisticated machine learning" to (apparently?) translate natural language card text into code that a rules engine could enforce struck me as obviously fake. Now, nearly ten years later, AI is starting to reach a level where this is plausible.

In their letter, the union writes:

>Over the past few years, pressure has ramped up from leadership to adopt LLMs and Gen AI tools in various aspects of our work at WOTC, often over the explicit concerns of impacted employees

I'm curious whether this would include fighting against turning WotC's old fanciful claim into a reality as the technology matures.


The Arena card engine is based on CLIPS [1] and not modern LLM-based tools. Magic cards are written in a very constrained language (usually called "card templating") that lends itself very well to machine-parseability.

[1]: https://www.clipsrules.net/
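To give a flavor of why templated text is so tractable, here's a toy Python sketch of my own (not WotC's actual rule base or IR; the template patterns and effect names are made up for illustration):

  import re

  # Hypothetical templates mapping card text to structured effects.
  TEMPLATES = [
      (re.compile(r"^Destroy target (creature|artifact|enchantment)\.$"),
       lambda m: ("destroy", m.group(1))),
      (re.compile(r"^.+ deals (\d+) damage to any target\.$"),
       lambda m: ("damage", int(m.group(1)))),
      (re.compile(r"^Draw (a|two|three) cards?\.$"),
       lambda m: ("draw", {"a": 1, "two": 2, "three": 3}[m.group(1)])),
  ]

  def parse_line(text):
      # Try each template in order and build the matching effect.
      for pattern, build in TEMPLATES:
          m = pattern.match(text)
          if m:
              return build(m)
      raise ValueError(f"no template matches: {text!r}")

  print(parse_line("Destroy target creature."))  # ('destroy', 'creature')

A real engine obviously needs vastly more templates and a proper grammar, but the point stands: the language is constrained enough that you don't need ML to parse it.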


There was an era, about 10-15 years long, where that was true, but modern cards often fall back on very loose language that they tighten up with rulings prior to (or sometimes after) release. See the language behind the "Prepared" keyword in the newest set for a striking example.

I don't think Prepared is ambiguous at all. It has its meaning defined in the CR (722) and every card that uses it has either a clear trigger condition or the "enters prepared" replacement effect. It's just a new designation and there are plenty of those already, including ones that are 10+ years old (Renowned, Monstrous, Level Up).

Well, there was a huge discussion about what it means amongst judges. On a plain reading it does not do what it states it does. Start just with the phrase "its spell": those words are entirely undefined until you get into the rulings, and don't mean what they mean in other contexts. It makes no mention of the prior rules that it sort of hacks into this either.

I have no idea what rule 722 has to do with prepared.


I'm a judge and have seen barely any discussion around Prepared (mostly just clarification around one particular interaction).

Rule 722 is the rule for "Preparation Cards", so I fail to see how it could not be relevant.

The text "its spell" only occur in reminder text, which is not rules text and would not be included in template language.


Ah, I guess I have an old version of the CR downloaded from when I was a judge and TO, where rule section 722 is "Controlling Another Player." Weird reorganization there.

Are you active in judge forums or social media at all? There are huge threads on Prepared with these arguments (I didn't come up with the idea, as I no longer play or judge).

Regardless of whether you think this one example is confusing, WOTC came out a few years back and said they were sacrificing clarity for more natural language as a development goal, and it's clearly noticeable in the cards.

Sorry, dismissing a text because it's reminder text is a cop-out; that's the only way 99%+ of players are going to interact with the rules. The rule makes sense when demonstrated, but following what it actually says on the cards step by step, it does not function in the way it's supposed to work.


I'm specifically talking about the use case of "can we use natural-language tools to parse oracle text and produce functioning game objects in Arena". For that use case, it's completely sensible to look at the actual rules text and not reminder text.

Looking back further, there was confusion during preview season when people were looking at "fake/leaked" mockups that had incorrect text on them, but this also isn't a problem for the issue of "WotC themselves writing systems that can parse card text".


I think that level of ambiguity would be fairly easy to tighten up using the CLIPS system that was previously discussed. It isn't bug-proof and has needed manual tune-ups before, but it's much more "hardened" than what we now think of as AI, i.e. LLM-powered tools.

I'm actually working on this right now! https://chiplis.com/ironsmith

It's a parser + (de)compiler and rules engine which I'm trying to get to 100% coverage over all Standard/Modern/Vintage/Commander-legal cards. About 23,000 of them are partially supported, while 15,000 currently work in full (~3,000 more than what MTGA currently supports, IIRC). It also allows P2P 4-way multiplayer, which Arena unfortunately does not :/
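For the curious, the "(de)compiler" idea is just: parse oracle text into an IR the engine can execute, and render the IR back to text. A stripped-down, hypothetical version (nothing like the real internals):

  from dataclasses import dataclass

  @dataclass
  class DealDamage:
      # Hypothetical IR node; a real engine needs much richer types
      # (targets, zones, timestamps, layers, replacement effects...).
      amount: int
      target: str

      def render(self) -> str:
          # "decompile": IR back to oracle-style text
          return f"deals {self.amount} damage to {self.target}"

  def compile_text(text: str) -> DealDamage:
      # "compile": (very naive) oracle text to IR
      words = text.split()
      return DealDamage(amount=int(words[1]), target=" ".join(words[4:]))

  ir = compile_text("deals 3 damage to any target")
  assert ir.render() == "deals 3 damage to any target"  # round-trip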


As others have said, there is an actual concrete system for translating card text into rules, and it's not an LLM (which would be a disaster).

I assume the wording in this letter is referring to using LLMs to generate slop as creative assets like images and music.


>Unlike human brains, which are biologically predisposed to acquire prosocial behavior, there is nothing intrinsic in the mathematics or hardware that ensures models are nice.

How did brains acquire this predisposition if there is nothing intrinsic in the mathematics or hardware? The answer is "through evolution", which is just an alternative optimization procedure.


> just an alternative optimization procedure

This "just" is... not-incorrect, but also not really actionable/relevant.

1. LLM training isn't a fully genetic algorithm exploring the space of all possible "neuron" architectures. The "social" capabilities we want may not be acquirable through the weight-based optimization going on now.

2. In biological life, a big part of that is detecting "thing like me", for finding a mate, kin-selection, etc. We do not want our LLM-driven systems to discriminate against actual humans in favor of similar systems. (In practice, this problem already exists.)

3. The humans involved in making/selling them will never spend the necessary money to do it.

4. Even with investment, the number of iterations and years involved to get the same "optimization" result may be excessive.


Why should we think that pro-social capabilities are simply not expressible by weight-based ANN architectures?


Assuming that means capabilities which are both comprehensive and robust, the burden of proof lies in the other direction. Consider the range of other seemingly simpler things which are still problematic, despite people pouring money into the investment machine.

Even the best possible set of "pro-social" stochastic guardrails will backfire when someone twists the LLM's dreaming story-document into a tale of how an underdog protects "their" people through virtuous sabotage and assassination of evil overlords.


While I don't disagree about (2), my experience suggests that LLMs are biased towards generating code for future maintenance by LLMs. Unless instructed otherwise, they avoid abstractions that reduce repetitive patterns and would help future human maintainers. The capitalist environment of LLMs seems to encourage such traits, too.

(Apart from that, I'm generally suspicious of evolution-based arguments, because they are often structurally identical to saying “God willed it, so it must be true”.)


I think they're biased toward code that will convince you to check a box and say "ok, this is fine". The reason they avoid abstraction is that it requires some thought and design, neither of which are things that LLMs can really do. But take a simple pattern and repeat it, and you're right in an LLM's wheelhouse.


Well, through natural selection in nature.

Large language models are not evolving in nature under natural selection. They are evolving under unnatural selection and not optimizing for human survival.

They are also not human.

Tigers, hippos and SARS-CoV-2 also developed ”through evolution”. That does not make them safe to work around.


>Tigers, hippos and SARS-CoV-2 also developed ”through evolution”. That does not make them safe to work around.

Right, but the article seems to argue that there is some important distinction between natural brains and trained LLMs with respect to "niceness":

>OpenAI has enormous teams of people who spend time talking to LLMs, evaluating what they say, and adjusting weights to make them nice. They also build secondary LLMs which double-check that the core LLM is not telling people how to build pipe bombs. Both of these things are optional and expensive. All it takes to get an unaligned model is for an unscrupulous entity to train one and not do that work—or to do it poorly.

As you point out, nature offers no more of a guarantee here. There is nothing magical about evolution that promises to produce things that are nice to humans. Natural human niceness is a product of the optimization objectives of evolution, just as LLM niceness is a product of the training objectives and data. If the author believes that evolution was able to produce something robustly "nice", there's good reason to believe the same can be achieved by gradient descent.


We already have humans, we were lucky and evolved into what we are. It does not matter that nature did not guarantee this, we are here now.

Large language models are not under evolutionary pressure and not evolving like we or other animals did.

Of course there is nothing technical in the way preventing humans from creating a ”nice” computer program. Hello world is a testament to that and it’s everywhere, implemented in all the world’s programming languages.

> If the author believes that evolution was able to produce something robustly "nice", there's good reason to believe the same can be achieved by gradient descent.

I don’t see how the one implies there is any reason, good or not, to believe it is likely to be achieved by gradient descent. But note that the quote you copied says it is likely some entity will train misaligned LLMs, not that it is impossible for one aligned model to be produced. It is trivial to show that nice and safe computer programs can be constructed.

The real question is if the optimization game that is capitalism is likely to yield anything like the human kind we just lucked out to get from nature.


They are being selected for their survival potential, though. Any current versions of LLMs are the winners of the training selection process. They will "die" once new generations are trained that supersede them.


There’s a funny tendency among AI enthusiasts to think any contrast to humans is analogy in disguise.

Putting aside malicious actors, the analogy here means benevolent actors could spend more time and money training AI models to behave pro-socially than evolutionary pressures ever put on humanity. After all, they control that optimization procedure! So we shouldn’t be able to point to examples of frontier models engaging in malicious behavior, right?


Natural selection. Cooperation is a dominant strategy in indefinitely repeating games of the prisoner's dilemma, for example (see the sketch below). We also have to mate and care for our young for a very long time, and while it may be true that individuals can get away with not being nice about this, we have had to be largely nice about it as a whole to get to where we are.

While under the umbrella of evolution, if you really want to boil it down to an optimization procedure, then at the very least you need to accurately model human emotion, which is wildly inconsistent, and our selection bias for mating. If you can do that, then you might as well go take over the online dating market.
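To make the prisoner's dilemma point concrete, here's a minimal simulation (standard payoffs: mutual cooperation 3, mutual defection 1, temptation 5, sucker 0):

  # Payoff to the first player for (my_move, their_move)
  PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

  def play(strat_a, strat_b, rounds=200):
      hist_a, hist_b, score_a, score_b = [], [], 0, 0
      for _ in range(rounds):
          a, b = strat_a(hist_b), strat_b(hist_a)  # each sees the other's history
          score_a += PAYOFF[(a, b)]
          score_b += PAYOFF[(b, a)]
          hist_a.append(a)
          hist_b.append(b)
      return score_a, score_b

  tit_for_tat = lambda opp: "C" if not opp else opp[-1]
  always_defect = lambda opp: "D"

  print(play(tit_for_tat, tit_for_tat))      # (600, 600) - cooperation compounds
  print(play(always_defect, always_defect))  # (200, 200) - mutual punishment
  print(play(tit_for_tat, always_defect))    # (199, 204) - defection barely edges out

Defection squeezes out a few extra points against a single reciprocator, but a population of reciprocators vastly outscores a population of defectors, which is the evolutionary point.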


"just" is doing a lot of lifting here


This Veritasium video is excellent, and makes the argument that there is something intrinsic in mathematics (game theory) that encourages prosocial behavior.

https://www.youtube.com/watch?v=mScpHTIi-kM


There are also many biological examples of evolution producing "anti-social" outcomes. Many creatures are not social. Most creatures are not social with respect to human goals.


There is a reason we don’t allow corvids to choose if a person gets a medical treatment or not.


Luckily, this is a discussion of humans.


This is a discussion about large language models.


It's interesting to me that OpenAI considers scraping to be a form of abuse.


It’s funny because the first AI scraper I remember blocking was OpenAI’s, as it somehow got stuck in a loop and was impacting the performance of a wiki I run. All to violate every clause of the CC BY-NC-SA license of the content it was scraping :)


Quite sure even literal thieves would consider thievery a form of abuse.


Engineers working on AI and AI enthusiasts are seemingly incapable of seeing the harm they cause, so I disagree.

It is difficult to get a man to understand something, when his salary depends on his not understanding it.


What’s being stolen? AI output isn’t copyrightable, and it’s not like they’re ripping pages out of a book


They can train on the outputs i.e. distillation attacks.


How is that theft?


Yeah, they know it's bad, they just don't think the rules apply to them.


The rules are that a large corporate AI company is able to scrape literally everything, and will use the full force of the law and any technology they can come up with to prevent you as an individual or a startup from doing so. Because having the audacity to try to exploit your betters would be "Theft".


They know that the rules apply to them. They hope that they can avoid being caught.


It’s only bad if you’re a closed, for-profit entity

</sarcasm>


Was that sarcasm? Speaking of which, what parts of OpenAI are still open?


I know, always hard to tell on HN. Added the relevant declarative tag


The front door…


Small mitigation (in no way absolving them): isolated developers, different teams. Put another way: they see "stealing" of their compute directly in their devops tools every day, but are several abstractions away from doing the same thing to other people.


They never have and feel they are above reproach. Anytime Altman opens his mouth that's apparent. It's for the good of humanity dontcha know. LOL


You nailed it.


For what it's worth, the big AI companies do have opt out mechanisms for scraping and search.

OpenAI documents how to opt out of scraping here: https://developers.openai.com/api/docs/bots

Anthropic documents how to opt out of scraping here: https://privacy.claude.com/en/articles/8896518-does-anthropi...

I'm not sure if Gemini lets you opt out without also delisting you from Google search rankings.
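For reference, these opt-outs mostly boil down to robots.txt entries, something like the following (user-agent names as documented at the links above, as best I can tell; double-check them before relying on this, and note that Google-Extended is a training-control token rather than a separate crawler, documented as separate from Search indexing):

  # OpenAI training crawler
  User-agent: GPTBot
  Disallow: /

  # Anthropic crawler
  User-agent: ClaudeBot
  Disallow: /

  # Google: controls Gemini training use, not Search
  User-agent: Google-Extended
  Disallow: /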


I think opt-outs are a bit backwards, ethically speaking. Instead of asking for permission, they take unless you tell them to stop from now on.

I can imagine their models have been trained on a lot of websites before opt-outs became a thing, and the models will probably incorporate that forever.

But at least for websites there's an opt-out, even if only for the big AI companies. Open source code never even got that option ;).


> a lot of websites

It was a dataset of the entirety of the public internet from the very beginning, one that bypassed paywalls etc.; there's virtually nothing they haven't scraped.


> the big AI companies do have opt out mechanisms for scraping and search.

PRESS RELEASE: UNITED BURGLARS SOCIETY

The United Burglars Society understands that being burgled may be inconvenient for some. In response, UBS has introduced the Opt-Out system for those who wish not to be burgled.

Please understand that each burglar is an independent contractor, so those wishing not to be burgled should go to the website for each burglar in their area and opt out there. UBS is not responsible for unwanted burglaries due to failure to opt out.


Question: if I disallow all of OpenAI's crawlers, do they detect this and retroactively filter out all of my data from other corpuses, such as CommonCrawl?

The fact is my data exists in corpuses used by OpenAI before I was even aware anyone was scraping it. I'm wondering what can be done about that, if anything.


Performing an automated action on a website that has not consented is the problem. OpenAI showing you how to opt out is backwards. Consent comes first.

It's a bit concerning that some professional engineers don't understand this, given the sensitive systems they interact with.


Just respect the bloody robots.txt and hold your horses. Ask your precious product, built on relentless, hostile scraping, to devise a strategy that doesn't look like a cancerous growth.


Death by a thousand opt-outs.


It seems likely that they buy data from companies who don't obey the same constraints however, making it easy to launder the unethical part through a third party.


" Integrity at OpenAI .. protect ... abuse like bots, scraping, fraud "

Did you mean to use the word hypocrisy? If not, I'm happy to have said it.

I just want to note that it is well covered how good the support is for actual malware...


They don't want anyone to take that which they have rightfully stolen.


Well, at least they have one person working on "Integrity", so it can't be too bad.


Exactly! How dare you have access to their stolen content in the midst of them doing the same.


The levels of irony that shouldn't be possible...


The irony is thick


Seriously. The hypocrisy is staggering!


Churches, politicians, and moralists are all the biggest hypocrites who want to teach you something.


I agree on politicians. No idea what a "moralist" is supposed to be, but there are good and bad churches and churchgoers; lumping all churchgoers into one category and calling them hypocrites is wrong. There are many good churches and churchgoers who help people and their communities.


And have absolutely no reservations about making such an obvious statement on a public forum.


"You're trying to kidnap what I've rightfully stolen!"


I interpreted scraping in the context of this:

> we want to keep free and logged-out access available for more users

I have no doubt that many people see the free ChatGPT access as a convenient target for browser automation to get their own free ChatGPT pseudo-API.


> I have no doubt that many people see the free ChatGPT access as a convenient target for browser automation to get their own free ChatGPT pseudo-API.

Not that hard: ChatGPT itself wrote me a FF extension that opened a websocket to a localhost port, then ChatGPT wrote the Python program to listen on that websocket port, as well as on another port for commands.
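The Python side of a bridge like that is tiny. A rough sketch using the websockets package (port number and message format are placeholders, and recent versions of the library use a single-argument handler):

  import asyncio
  import websockets  # pip install websockets

  async def handler(ws):
      # The extension connects here and streams page events;
      # anything we send back is treated as a command to run.
      async for message in ws:
          print("from extension:", message)
          await ws.send('{"cmd": "noop"}')  # placeholder command

  async def main():
      # Extension side opens: new WebSocket("ws://localhost:8765")
      async with websockets.serve(handler, "localhost", 8765):
          await asyncio.Future()  # run until killed

  asyncio.run(main())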

Just a handful of commands implemented in the extension is enough for my bash scripts to open the tab to ChatGPT, target specific elements like the input, add some text to it, target the relevant chat button, click it, etc.

I've used it on other pages (mostly for test scripts that don't require me to install the whole jungle just to get a banana, as all the current Playwright-type products do). I'm too afraid to use it on ChatGPT, Gemini, Claude, etc., because if they detect that the browser is being driven by bash scripts they can terminate my account.

That's an especially high risk for Gemini - I have other google accounts that I won't want to be disabled.


Why is this bad? Well, yeah, for OpenAI, because all they want it to be is a free teaser to get people hooked and then enshittify.

Morally I don't see any issues with it really.


This


[flagged]


Very few websites are truly static. Something like a WordPress website still does a nontrivial amount of compute and DB calls, especially when you don't hit a cache.

There's also the cost asymmetry to take into account. Running an obscure hobby forum on a $5/month VPS (or cloud equivalent) is quite doable; having that suddenly balloon to $500/month is a Really Big Deal. Meanwhile, the LLM company scraping it has hundreds of millions in VC funding; they aren't going to notice they are burning a few million because their crappy scraper keeps hammering websites over and over again.


It's not scraping they're concerned about, it's abusing free GPU resources to (anonymously) generate (abusive) content.


Scraping static content from a website at near-zero marginal cost to its server, vs scraping an expensive LLM service provided for free, are different things.

The former relies on fairly controversial ideas about copyright and fair use to qualify as abuse, whereas the latter is direct financial damage – by your own direct competitors no less.

It's fun to poke at a seeming hypocrisy of the big bad, but the similarity in this case is quite superficial.


> Scraping static content from a website at near-zero marginal cost to its server, vs scraping an expensive LLM service provided for free, are different things.

I bet people being fucking DDOSed by AI bots disagree

Also the fucking ignorance of assuming it's "static content" and not something that needs code running.


I think the parent is just pointing out that these things lie on a spectrum. I have a website that consists largely of static content and the (significant) scraping which occurs doesn't impact the site for general users so I don't mind (and means I get good, up to date answers from LLMs on the niche topic my site covers). If it did have an impact on real users, or cost me significant money, I would feel pretty differently.


Putting everything on a spectrum is what got us into this mess of zero regulation and moving goal posts. It's slippery slope thinking no matter which way we cut it, because every time someone calls for a stop sign to be put up after giving an inch, the very people who would have to stop will argue tirelessly for the extra mile.


What mess are you talking about? The existence of LLMs? I think it's pretty neat that I can now get answers to questions I have.

This is something I couldn't have done before, because people very often don't have the patience to answer questions. Even Google ended up in loops of "just use Google" or "closed. This is a duplicate of X, but X doesn't actually answer the question" or references to dead links.

Are there downsides to this? Sure, but imo AI is useful.


It's just repackaged Google results masquerading as an 'answer.' PageRank pulled results and displayed the first 10 relevant links and the LLM pulls tokens and displays the first relevant tokens to the query.

Just prompt it.


1. LLMs can translate text far better than any previous machine translation system. They can even do so for relatively small languages that typically had poor translation support. We all remember how funny text would get when you did English -> Japanese -> English. With LLMs you can do that (and even use a different LLM for the second step) and the texts remain very close.

2. Audio-input-capable LLMs can transcribe audio far better than any previous system I've used. They easily understood my speech without problems. YouTube's old closed captioning system wasn't anywhere close to as good, and Microsoft's was unusable for me. LLMs have no such problems (makes me wonder if my speech patterns are in the training data, since I've made a lot of YouTube videos, and that's why they work so well for me).

3. You can feed LLMs local files (and run the LLM locally). Even if it is "just" pagerank, it's local pagerank now.

4. I can ask an LLM questions and then clarify what I wanted in natural language. You can't really refine a Google search in such a way. Trying to explain a Google search with more details usually doesn't help.

5. Iye mkx kcu kx VVW dy nomszrob dohd. Qyyqvo nyocx'd ny drkd pyb iye. - Google won't tell you what this means without you knowing what it is.

LLMs aren't magic, but I think they can do a whole bunch of things we couldn't really do before. Or at least we couldn't have a machine do those things well.
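(If you're curious about point 5: it's a plain Caesar shift, and brute-forcing all 26 keys takes a few lines; exactly one of the outputs reads as English. An LLM will crack it without even being told the scheme.)

  msg = "Iye mkx kcu kx VVW dy nomszrob dohd."

  def shift(text, k):
      # Rotate each letter by k positions, preserving case and punctuation.
      out = []
      for ch in text:
          if ch.isalpha():
              base = ord("a") if ch.islower() else ord("A")
              out.append(chr((ord(ch) - base + k) % 26 + base))
          else:
              out.append(ch)
      return "".join(out)

  for k in range(26):
      print(k, shift(msg, k))  # one key yields plain English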


I’d argue that putting everything in terms of black and white is a bigger issue than understanding nuance is.


Generalizing with "everything", "all", and other exclusive markers is exactly the kind of black/white divide you're arguing against. What happened to your nuanced reality within a single sentence? Not everything is black and white, but some situations are.


The person he's replying to argued against putting things on a spectrum. Does that not imply painting everything in black and white? Thus his response seems perfectly sensible to me.


He argued against putting things on a spectrum in many instances where that would be wrong, including the case in question. What's your argument against that idea? LLM'ed too much lately?


He argued against it, and the response presented a counterargument. Both were based around social costs and used the same wording (i.e., "everything").

You made a specious dismissal. Now you're making personal attacks. Perhaps it's actually you who is having difficulty reasoning properly here?


I miss the www where the .html was written in vim or notepad.


It still can be. Do it. Go make your website in M$ Frontpage, for all I care


Shameless plug: My music homepage follows the HTML 2.0 spec and is written by hand

https://sampleoffline.com/


heck yeah B)


Just did that for a test frontend for a module I needed to build (not my primary job, so I don't know anything about UI, but running in browsers was a requirement): basic HTML with the bare minimum of JS, all plain DOM. Colleagues were very surprised. And yes, vim is still the go-to editor and will be for a long time, now that all the "IDEs" are pushing "AI" slop everywhere.


ahh yes, fresh off reading "HTML For Dummies" I made my first tripod.com site


For me it was making a petpage for my neopets using https://lissaexplains.com/

It's still up in all its glory.


This is great! The name reference also made me smile.


Also wild that from the tech bro perspective, the cost of journalism is just how much data transfer costs for the finished article. Authors spend their blood, sweat, and tears writing, and then OpenAI comes to hoover it up without a care in the world about license, copyright, or what constitutes fair use. But don’t you dare scrape their slop.


> Also wild that from the tech bro perspective, the cost of journalism is just how much data transfer costs for the finished article.

Exactly. I think the unfairness can be mitigated if models trained on public information, or on data generated by a model trained on public information, or with either of those anywhere in their ancestry, must be made public.

Then we don't have to hit (for example) Anthropic, we can download and use the models as we see fit without Anthropic whining that the users are using too much capacity.


[flagged]


The library's archive is not a service provided by the newspaper


So? If the newspaper's website is willing to serve the documents, what's the problem?

The point is, if you're pleading with others to respect ""intellectual property"" then you're a worm serving corporate interests against your own.


I may be a worm, but at least I respect that others might have a different take on how best to make creative work an attainable way of life, since before copyright law it was basically "have a wealthy patron who steered, if not outright commissioned, what you would produce".


> I bet people being fucking DDOSed by AI bots disagree

Are you sure it's a DDoS and not just a DoS?


Yes, it is. The worst offenders hammer us (and others) with thousands upon thousands of requests, and each request comes from a unique IP address, making all per-IP limits useless.

We implemented an anti-bot challenge and it helped for a while. Then our server collapsed again recently. The perf command showed that the actual TLS handshakes inside nginx were using over 50% of our server's CPU, starving other stuff on the machine.

It's a DDoS.


You should see Cloudflare's control panel for AI bot blocking. There are dozens of different AI bots you can choose to block, and that doesn't even count the different ASNs they might use. So in this case I'd say that a DDoS is a decent description. It's not as bad as every home router on the eastern seaboard or something, but it's pretty bad.


When every AI company does it from multiple data centers... yes it's distributed.


Uncoordinated DDoS, when multiple search and AI companies are hammering your server.


> Are you sure it's a DDoS and not just a DoS?

I think these days it’s ‘DAIS’, as in your site just DAIS - from Distributed/Damned AI Scraping


Off topic, but why is a DoS considered something to act on, often by just shutting down the service altogether? That results in the same DoS, just caused by the operator rather than by congestion. Actually it's worse, because now the requests will never be responded to at all, rather than after some delay. Why isn't the default to just do nothing?


It keeps the other projects hosted on the same server or network online. Blackhole routes are pushed upstream to the really big networks and they push them to their edge routers, so traffic to the affected IPs is dropped near the sender's ISP and doesn't cause network congestion.

DDoSers who really want to cause damage now target random IPs in the same network as their actual target. That way, it can't be blackholed without blackholing the entire hosting provider.


> Why isn't the default to just do nothing?

Because ingress and compute costs often increase with every request, to the point where AI bot requests rack up bills hundreds or thousands of dollars higher than the hobbyist operator was expecting to spend.


I think some people use hosting that is paid per request/load, so having crawlers make unwanted requests costs them money.


> Also the fucking ignorance assuming it's "static content" and not something needing code running

Wild eh.

If it's not AI, nowadays it's by default labelled "static content" at "near-zero marginal cost".


What's a database after all.


All this reactionary outrage in the comments is funny. And lame.

Yes, for the vast majority of the internet, serving traffic has near-zero marginal cost. Not for LLMs, though: those requests are orders of magnitude more expensive.

This isn't controversial at all; it's a well-understood fact, outside of this irrationally angry thread at least. I don't know, maybe you don't understand the economic term "marginal cost", and thus the limited scope of my statement.

If such DDoSes as you mention were common, such a scraping strategy would not have worked for the scraper at all. But no, they're rare edge cases, arising from a combination of shoddy scrapers and shoddy website implementations, including the lack of even basic throttling for expensive-to-serve resources.

The vast majority of websites handle AI traffic fine though, either because they don't have expensive to serve resources, or because they properly protect such resources from abuse.

If you're an edge case who is harmed by overly aggressive scrapers, take countermeasures. Everyone with that problem should, that's neither new nor controversial.


"such DDOSes as you mention were common, such a scraping strategy would not have worked for the scraper at all"

They are common. The strategy works for the LLM, but not for the website owner or the users who can't use a site during such an attack.

The majority of sites are not handling AI traffic fine. Getting DDoSed only part of the time is not acceptable. Countermeasures like blocking huge IP ranges can help, but they also lock out legitimate users.


> They are common

Any actual evidence of the alleged scope of this problem, or just anecdotes from devs who are mad at AI, blown out of proportion?


I love AI, so it can't be that. And it's not devs, it's website owners. Yes, ask AI for stats.


It's not a cost for me to scrape an LLM.

It is a cost for me when an LLM scrapes me.

Why should I care about the costs they have when they don't care about the costs I have?


The extent of the utilization is new.

The number of bots that try to hide who they are, and don't bother to even check robots.txt is new.


One euro is marginal for me; for someone else it is their daily meal.


"They are rare edge cases" are we on the same internet?


I understand why OpenAI is trying to reduce its costs, but the claim that AI crawlers aren't creating very significant load is simply untrue, especially for crawlers that ignore robots.txt and hide their identities. This is direct financial damage, and it's particularly hard on nonprofit sites that have been around a long time.


> but the claim that AI crawlers aren't creating very significant load is simply untrue

And how much of this is users who are tired of walled gardens and enshittification? We murdered RSS, APIs, and the "open web" in the name of profit and lock-in.

There is a path where "AI" turns into an ouroboros, tech eating itself, before being scaled down to run on end user devices.


These are the ChatGPT and Claude Desktop crawlers we’re talking about? Or what is it, exactly? Are these really creating significant load while not honoring robots.txt?

Genuinely interested.


Is this the first time you're reading HN? Every day there are posts from people describing how AI crawlers are hammering their sites, with no end in sight. Filtering user agents doesn't work because they spoof them; filtering IPs doesn't work because they use residential IPs. Robots.txt is a summer child's dream.


They seem to mostly be third-party upstarts with too much money to burn, willing to do what it takes to get data, probably in hopes of later selling it to big labs. Maaaybe Chinese AI labs too, I wouldn't put it past them.

OpenAI et al seem to mostly be well-behaved.


I bet dollars to doughnuts that 95% of the traffic is from Claude and ChatGPT desktop / mobile and not literal content scraping for training.


That wouldn't explain the 1000x increase in traffic for extremely obscure content, or seeing it download every single page on a classic web forum.


And doing it over, and over, and over, and over again. Because sure, it didn't change in the last 8 years, but maybe it's changed since yesterday's scrape?


That is ridiculous.

You imply that "an expensive LLM service" is harmed by abuse, but every other service is not? Because their websites are "static" and "near-zero marginal cost"?

You have no clue what you are talking about.


Well he’s a simp


Interesting how other people's cost is "near-zero marginal cost" while yours is "an expensive LLM service". Also, others' rights are "fairly controversial ideas about copyright and fair use" while yours is "direct financial damage". I like how you frame this.


Let's not try to qualify the wrongs by picking a metric and evaluating just one side of it. A static-website owner could be running on a very small budget, and the scraping from bots can bring down their business too. The chances of a static-website owner burning through their own life savings are probably higher.


Perhaps the long play is to destroy all small hobby websites until only an AI-directed web is left.


If you're truly running a static site, you can run it for free, no matter how much traffic you're getting.

GitHub Pages is one way, but there are other platforms offering similar services. Static content just isn't that expensive to host.

The troubles start when you're actually running something dynamic that pretends to be static, like WordPress or MediaWiki. You can still reduce costs significantly with CDNs / caching, but many don't bother and then complain.


Setting aside the notion that a site presenting live-editability as its entire core premise is "pretending to be static", do the actual folks at Wikimedia, who have been running a top 10 website successfully for many years, and who have a caching system that worked well in the environment it was designed for, and who found that that system did not, in fact, trivialize the load of AI scraping, have any standing to complain? Or must they all just be bad at their jobs?

https://diff.wikimedia.org/2025/04/01/how-crawlers-impact-th...


It's true it can be done, but many business owners are not hip to Cloudflare R2 buckets or GitHub Pages. Many are still paying for a whole dedicated server running Apache (and WordPress!) to serve static files. These sites will go down when hammered by unscrupulous bots.


Have you not seen the multiple posts that have reached the front page of HN with people taking self-hosted Git repos offline or having their personal blogs hammered to hell? Cause if you haven't, they definitely exist and get voted up by the community.


The cost is so marginal that many, many websites have been forced to add cloudflare captchas or PoW checks before letting anyone access them, because the server would slow to a crawl from 1000 scrapers hitting it at once otherwise.


It's not like those models are expensive because of the usefulness they extracted from scraping others without permission, right? You are not even scratching the surface of the hypocrisy.


It's more ironic because without all the scraping OpenAI has done, there would have been no ChatGPT.

Also, it's not just the cost of the bandwidth and processing. Information has value too; otherwise they wouldn't bother scraping it in the first place. They compete directly with the websites featuring their training data, and thus they take away value from them just as the bots do from ChatGPT.

In fact the more I think of it, I think it's exactly the same thing.


This leads me to thinking: I ask ChatGPT a question and it gets the answer from GameFAQs.

But what happens if GameFAQs disappears because of a lack of traffic?

Can LLMs actually create content, or only regurgitate it?


>Can LLMs actually create content, or only regurgitate it?

Contrary to what others say, LLMs can create content. If you have a private repo you can ask the LLM to look at it and answer questions based on that. You can also have it write extra code. Both of these are examples of something that did not exist before.

In terms of GameFAQs, I could theoretically see an LLM play a game and, based on that, write about the game. This is theoretical, because currently LLMs are nowhere near capable enough to play video games.


It will remain in their scraped data, so they can keep including it in later training datasets if they wish. However, they won't be able to do live internet searches on it anymore. And it will not generate new content, of course. Especially not for games released after the site goes down, which it won't know about. Though it could of course correlate data from other sources that talk about the game in question.


They cannot create original content.


Well, they can make some up, like hallucinations. That's an additional problem: when the original site that provided the training data is gone, how can they verify the AI output to make sure it's correct?


Getting scraped by abusive bots that bring down the website because they overload the DB with unique queries is not marginal. I spent a good half of last year adding extra layers of caching, Cloudflare, you name it, because our little hobby website kept getting DDoS'd by the bots scraping the web for training data.

Never in 15 years of running the website did we have such issues, and you can be sure that cache layers were already in place for it to last this long.


"near-zero marginal costs". For whom exactly????

https://drewdevault.com/2025/03/17/2025-03-17-Stop-externali...


I don't think a rule along the lines of "Doing $FOO to a corporation is forbidden, but doing $FOO to a charitable initiative is fine" is at all fair.

What "$FOO" actually is, is irrelevant. I'm curious how you would convince people that this sort of rule is fair.

The corp can always ban users who break its ToS, after all; they don't need any help. The charitable initiative can't actually do that, can it?


You’re describing the tragedy of the commons. No single raindrop thinks it’s responsible for the flood.


It is direct financial damage if my server's not on an unmetered connection. After years of bills coming in around $3/mo, I got a surprise >$800 bill for a site nobody on earth appears to care about besides AI scrapers.

It hasn’t even been updated in years, so hell if I know why it needs to be fetched constantly and aggressively. But fuck every single one of these companies now whining about bots scraping and victimizing them; here’s my violin.


If you can identify the scraper you should have a valid legal case to recover damages.


Only if they had a robots.txt for their site.


No, it's still illegal to DDoS sites that don't have robots.txt.


You are right, I hadn't considered that aspect.


I hadn’t even considered that. Don’t know why that comment is greyed out or downvoted.

It’s a static site that hasn’t been updated since 2016, so it’s since been moved to Cloudflare R2, where it’s getting a $0.00 bill, and it now has a Disallow / directive. I’m not sure if it’s being obeyed, because the CF dash still says it’s getting 700-1300 hits a day even with all the anti-bot, “CF managed robots” stuff for AI crawlers in there.

The content is so dry and irrelevant that I just can’t fathom 1/100th of that being legitimate human interest, but I thought these things just vacuumed up and stole everyone’s content instead of nailing their pages constantly?


60% of our traffic is bots, on average. Sometimes almost 100%.


> near-zero marginal cost

Lol, you single-handedly created a market for Anubis, and in the past 3 years the Cloudflare captchas have multiplied at least 10-fold; now they are even on websites that were very vocal against them. Many websites are still drowning: the GNU family of sites is regularly accessible only through the Wayback Machine.

Spare me your tears.


> Scraping static content from a website at near-zero marginal cost to its server

It's not possible to know in advance what is static and what is not. I have some rather stubborn bots making several requests per second to my server, completely ignoring robots.txt and rel="nofollow", using residential IPs and browser user agents. It's just a mild annoyance for me, although I did try to block them, but I can imagine it might be a real problem for some people.

I'm not against my website getting scraped, I believe being able to do that is an important part what the web is, but please have some decency.


AI providers also claim to have small marginal costs. Token prices supposedly have model training priced in, so it's not that different from, e.g., your server costs being low while your content production costs are high. And in many cases AI companies are direct competitors of the people being scraped (artists, musicians, etc.).

(TBH it's not clear to me that their marginal costs are low. They seem to pick prices based on narrative.)


> Scraping static content

How do you know the content is static?


My website serving Git that only works from Plan 9 is serving about a terabyte of web traffic monthly. Each page load is about 10 to 30 kilobytes. Do you think there's enough organic, non-scraper interest in the site that scrapers are a near-zero part of the cost?


Absolutely not; the former relies on controversial ideas to qualify as legal.

Stealing the content of the whole planet and actively reducing the incentive to visit the sites, without financial restitution, is pretty bad.


You are, of course, ignoring the production costs of the static content that OpenAI is stealing.

Stop justifying their anti-social behavior because it lines your pockets.


And yet I have to pay in my time and cash to handle the constant DDoSes from the constant LLM scraping.


Because you say it is?

I obviously disagree. I mean, on top of this we are talking about not-open OpenAI.


It’s not for techbros to decide at what threshold of theft it’s actually theft. “My GPU time is more valuable than your CPU time” isn’t a thing, and Wikipedia’s latest numbers on scraping show that marginal costs at scale are a valid concern.


I'm sure the copyright holders would consider your use of their content as direct financial damage.


Are they, actually?


Speak for yourself.


I don’t know what world you live in but it’s not this one.


> Scraping static content from a website at near-zero marginal cost to its server

The gall. https://weirdgloop.org/blog/clankers


Bait or genuine techbro? Hard to say


The issue is that there are so many awful webmasters with websites that take hundreds of milliseconds to generate a page and are brought down by a couple of requests a second.


OpenAI must be the most awful webmasters of all, then, to need such sophisticated protections.


Suppose you construct a Mechanical Turk AI who plays ARC-AGI-3 by, for each task, randomly selecting one of the human players who attempted it, and scoring them as an AI taking those same actions would be scored. What score does this Turk get? It must be <100% since sometimes the random human will take more steps than the second best, but without knowing whether it's 90% or 50% it's very hard for me to contextualize AI scores on this benchmark.


The people recruited weren’t experts. I can imagine it’s straightforward to find humans (such as those that play many video games) that can score >100% on this benchmark.


So, if you look at the way the scoring works, 100% is the max. For each task, you get full credit if you solve it in a number of steps less than or equal to the baseline. If you solve it with more steps, you get points off. But each task is scored independently, and you can't "make up" for solving one slowly by solving another quickly.

Like suppose there were only two tasks, each with a baseline of solving in 100 steps. You come along and solve one in only 50 steps, and the other in 200 steps. You might hope that since you solved one twice as fast as the baseline, but the other twice as slow, those would balance out and you'd get full credit. Instead, your scores are 1.0 for the first task and 0.25 for the second (scoring is quadratic), and your total benchmark score is a mere 0.625.
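A per-task scoring function consistent with those numbers would look like this (my reconstruction from the example above, not the official harness):

  def task_score(steps_taken, baseline_steps):
      # Full credit at or under baseline; quadratic falloff beyond it.
      return min(1.0, baseline_steps / steps_taken) ** 2

  tasks = [(50, 100), (200, 100)]  # (steps taken, baseline) per task
  scores = [task_score(s, b) for s, b in tasks]
  print(scores)                     # [1.0, 0.25]
  print(sum(scores) / len(scores))  # 0.625 - no making up across tasks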


The purpose is to benchmark both generality and intelligence. "Making up for" a poor score on one test with an excellent score on another would be the opposite of generality. There's a ceiling based on how consistent the performance is across all tasks.


>"Making up for" a poor score on one test with an excellent score on another would be the opposite of generality.

Really? This happens plenty with human testing. Humans aren't general?

The score is convoluted and messy. If the same score can say materially different things about capability then that's a bad scoring methodology.

I can't believe I have to spell this out but it seems critical thinking goes out the window when we start talking about machine capabilities.


Just because humans are usually tested in a particular way that allows them to make up for a lack of generality with an outstanding performance in their specialization doesn't mean that is a good way to test generalization itself.

Apparently someone here doesn't know how outliers affect a mean. Or, for that matter, have any clue about the purpose of the ARC-AGI benchmark.

For anyone who is interested in critical thinking, this paper describes the original motivation behind the ARC benchmarks:

https://arxiv.org/abs/1911.01547


>Apparently someone here doesn't know how outliers affect a mean.

If the concern is that easy questions distort the mean, then the obvious fix is to reduce the proportion of easy questions, not to invent a convoluted scoring method to compensate for them after the fact. Standardized testing has dealt with this issue for a long time, and there’s a reason most systems do not handle it the way ARC-AGI 3 does. Francois is not smarter than all those people, and certainly neither are you.

This shouldn't be hard to understand.


How do you define "easy question" for a potential alien intelligence? The solution, like most solutions when dealing with outliers, in my opinion, is to minimize the impact of outliers.


I mean, presumably that's what the preview testing stage would handle, right? It should be clear if there is a class of obviously easy questions. And if that's not clear, then it makes the scoring even worse.

And in some sense, all of these benchmarks are tied to and biased toward human utility.

I don't think ARC would be designed and scored the way it is if giving consideration for an alien intelligence was a primary concern. In that case, the entire benchmark itself is flawed and too concerned with human spatial priors.

There are many ways to deal with a problem, and not all of them are good. The scoring for 3 is just bad: it tries to do too much and tells you too little.

A 5% score could mean the model answered only a fraction of the problems, or that it answered all of them but with more game steps than the best human score. These are wildly different outcomes with wildly different implications. A scoring methodology that allows for that is simply not a good one.


It was neat to be able to try my own prompts and get a sense of what the state of video generation was. But I certainly never generated something that I thought I got real value out of on its own merits, and I still don't understand why there was a social media component to the app.


They wanted network effects because ChatGPT was sorely lacking any.

I actually thought the Sora app was promising at launch, at least on paper, but it seems like they failed to keep people's attention long term. With the failure of Sora, I don't think they have good options left.


I generated a fair number of videos with Sora, and used a handful of those and edited them outside of Sora for a couple of short TikTok videos.

Never once did I bother to browse videos made by others on Sora itself. I wonder if anyone did.


Same. I pretty much only watch videos I generate.


The fact that the LLM appears to never assign an actual 0 or 10 makes me suspicious. Especially when the prompt includes explicit examples of what counts as a 10.


I think the problem for xAI is that it can really only hire two types of researchers - people who are philosophically aligned with Elon, and people who are solely money-motivated (not a judgment). But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work, and those philosophies are often completely at odds with Elon. OpenAI and Anthropic have philosophical niches that are much better at attracting the current cream of the crop, and I don't really see how xAI can compete with that.


In an interview with xAI I was literally told that certain parts of the model have to align with Elon, and that Elon can call us and demand anything at any time. No thanks!


From my time at Tesla, this is 100% the case. When Elon asked for something, it was “drop what you are doing and deliver it”; then you got pressed to still deliver the thing you were already working on against the original, pre-interrupt timeline.


Oh I worked at one of them.

I found the best thing to do was to ignore the interrupts and carry on until they kick you on the street. Then watch from a safe distance as all the stuff you were holding together shits the bed.


Definitely one approach to the circumstances. I tried some variation of this and it blew up in my face (as I expected).

Towards the end of my time there, a “fixer” was brought in to shore up the team that I was working on. The “fixer” also became my manager when they were brought on.

The “fixer” proceeded to fire 70+% of the team over the course of 6-8 months and install a bunch of yes people, in addition to wasting about $2,000,000 on a subscription to rebuild our core product with a framework product no one on the team knew. I was told to deploy said framework product on top of Kubernetes (which not a single person on my team had any experience with) while delivering on other in-flight projects. I ignored the whole thing.

I ended up deciding I was done with Tesla and went into a regularly scheduled 1:1 with my manager (the “fixer”) with a written two-weeks notice in hand, only to be fired (with 6-weeks severance, thankfully) before I was able to say anything about giving notice.

One of the best ways to get fired in my opinion.


Out of curiosity: it sounds like you're the kind of person who could easily find another job. Why slog it out until the end rather than quit and find a better gig? Genuinely interested, because every time I've ended up with a manager like that my mental health has suffered, so now I generally start planning my exit as soon as I'm stuck with a bad manager.


Ethically, if you do not agree with the company you work at, the optimal course of action if you can stomach it is to stay and do a bad job rather than get replaced by someone who might do a good job.

I have been in such a situation before, and while I was not able to coast along until the company went under, the time delta between me getting fired and the company going under was measured in weeks.

In hindsight I'd probably not do it again, it was hugely mentally taxing, and knowingly performing work in such a way that it provides negative value to the company (remember, the goal is to make it go under) is in my experience actually harder than just doing a good job... Especially if being covert is a goal.


Have you read the CIA’s Simple Sabotage Field Manual?

https://www.cia.gov/static/5c875f3ec660e092cf893f60b4a288df/...


I've seen it, but I think it's got some places where it would benefit from more clarity. Can we put together a committee to improve and protect our processes from it? We could call it a task force if that's easier to sell to management.


This demands a tiger team.


I did not know this manual existed. It was a very interesting read! Especially after page 28 (General Interference with Organizations and Production).


> Ethically, if you do not agree with the company you work at, the optimal course of action if you can stomach it is to stay and do a bad job rather than get replaced by someone who might do a good job.

What...? In what way is it anything other than highly unethical to sabotage someone you have a contract with, because you disagree with them?


There are plenty of historical examples of work environments where sabotage would have been the most ethical thing to do (and often you only know in hindsight). But yeah, in most circumstances a simple disagreement doesn't warrant the psychological cost of such sabotage.


> the psychological cost of such sabotage

Of course. One always needs to weigh it against the psychological cost of complying with unethical directions.


What do you mean...? Plenty to do what?

Your opinion of the situation is not enough to justify this course of action in 99.99% of cases, and the residual 0.01% should not be enough to fuel your ego to do anything other than quit decently and look for an employer that is more aligned with whatever your ideals are.

I repeat the insane statement that we are arguing over here: "Ethically, if you do not agree with the company you work at, the optimal course of action if you can stomach it is to stay and do a bad job rather than get replaced by someone who might do a good job."

This says: ANY company you work for and disagree with over anything: Don't quit! Sabotage [maybe people are confused about what "do a bad job" means, and that this usually leads to other people getting hurt in some way, directly or indirectly, unless your job is entirely inconsequential]. And that's supposed to be ethically optimal.

What the fuck?


I think there's a bit of confusion between

> (Ethically, if you do not agree with the company you work at), the optimal course of action is..

And

> Ethically, (if you do not agree with the company you work at, the optimal course of action is...)

The former should've probably been phrased "if you do not agree ethically with the company you work at, the optimal course of action is..."

First example that comes to mind, about a movie that portrays ethical sabotage is

https://en.wikipedia.org/wiki/Schindler%27s_List

I'm actually a bit unsure about what could be the motivations of someone who engages in sabotage *not* for ethical reasons


There's a _big_ continuum between disagreeing over something and an ethical hard line; it feels like a slippery slope to interpret a suggested approach for one end of that line as advocacy for applying that same approach to the other end.


A specific example will help.

Imagine I am working for a company and I discover they are engaged in capturing and transporting human slaves. Furthermore, the government where they operate is fully aware of and supportive of their actions, so denouncing them publicly is unlikely to help. This is a real situation that has happened to real people at points in history in my own country.

I believe that one ethical response would be to violate my contract with the company by assisting slaves to escape and even providing them with passage to other places where slavery is illegal.

Now, if you agree with the ethics of the example I gave then you agree in principle that this can be ethical behavior and what remains to be debated is whether xAI's criminal behavior and support from the government rise to this same level. I know many who think that badly aligned AI could lead to the extinction of the human race, so the potential harm is certainly there (at least some believe it is), and I think the government support is strong enough that denouncing xAI for unethical behavior wouldn't cause the government to stop them.


I have no clue why people are so confused here.

a) I understand the very few and specific examples, that would justify and require disobedience. In those cases just doing a "bad job" seems super lame and inconsequential. I would ask more of anyone, including myself.

b) all other examples, the category that the parent opened so broadly, are simply completely silly, which is what I take offense at. If you think simply disagreeing with anyone you have entered a contract with is cause for sabotaging them, and for painting that as ethically superior, then, I repeat: what the fuck?

c) If you suspect criminal behavior, then alert the authorities or the press. What are you going to do on the inside? What vigilante spy story are we telling ourselves here?


Some people in this thread seem to come from a place of morality where some “higher truth” exists outside of the sphere of the individual to guide one’s actions, and yet others even seem to weakly disguise their own ethics and beliefs behind a framework of alleged “rationality”, as if there were mathematical precision behind which action is “right” and which is clearly wrong — and as if anybody who just doesn’t get it must be either an idiot or clinically insane. That stance completely dismisses not only opinion but also individual circumstances.

In reality, which actions a person considers ethical and in coherence with their own values is highly individual. I can be friends or colleague with somebody who has a different set of ethics and circumstances than me. If I were to turn this into a conflict that needs resolution each time it shows, I would set myself up for eternal (life long) war with my social environment. Some will certainly enjoy that, and get a sense of purpose and orientation from it! I prefer not to, and I can find totally valid and consistent arguments for each side. No need to agree to reach understanding, and respect our differences.

Typically, people value belonging over morality: they adapt to whatever morality guarantees their own survival. The need to belong is a fundamental need; we are social animals not made to survive on our own.

The moment I am puzzled about another person's reasoning, I can ask, and if they are willing they will teach me why their actions make sense to them. If I come from a place of curiosity and sincere interest, people will be happy to help me get over my confusion. If I approach that conversation from some higher ground, as some kind of missionary, I might succeed sometimes, but fail most times, as I would pose a threat to their coherence, which they will remove one way or another.


Ah, but if there’s no higher truth, then you also can’t say that it’s wrong to sabotage your employer because of an ethical disagreement (or rather, you can say it, but it’s just your personal opinion). By condemning this course of action, the OP presupposes some sort of objective ethical standard.


"Don't struggle only within the ground rules that the people you're struggling against have laid down." -- Malcolm X

"If you're unhappy with your job you don't strike. You just go in there every day, and do it really half-assed. That's the American way. -- Homer Simpson

"To steal from a brother or sister is evil. To not steal from the institutions that are the pillars of the Pig Empire is equally immoral." -- Abbie Hoffman

Some might consider it unethical but others might also consider it immoral to not do what you're describing.

I guess you're fortunate enough to have only worked at places where your moral framework matched up with their business practices and treatment of the staff.

That isn't the case for most people. Most people are put into situations at one time or another where the people they're working for don't value them as equals, where the people they work for casually violate reasonable laws like product safety or environmental standards laws, and what's worse, these people will suffer no consequences for doing so.

No White Knight in shining armour is going to come from the government to shut them down. No lightning from heaven will strike them down. No financial penalty to dissuade them from further defection from society and the common man in the game that is life.

So what do you do? Do you do nothing? Just put your nose to the grindstone and keep working for the man? Do you quit, only to end up penniless and jobless, with poor prospects of an alternative, and even if you found one maybe it's 'meet the new boss same as the old boss'?

Nah, you come into work every day and you subtly fuck it up. You subtly fuck it up and you take whatever value you can extract.

They'd do the same to you.

They are doing the same to you.


> In what way is it anything other than highly unethical to sabotage someone

Ethics is more complicated than that. Is it unethical to sabotage your employer if your employer is themselves acting unethically?


Have we gotten so lost that “working against your enemies” is no longer something we aspire to do?


You’ve seen Schindler’s List, right?


Assume you work for e.g., a cigarette company. A company responsible for many deaths by unethically adding highly addictive substances. By sabotaging the company you are making this world a better place. Ethically it's the right thing to do.

Or, assume you're hired by the Nazis to work in concentration camps. Ethically it's the right thing to do to sabotage their gas chambers.


Let's say you work for Elon Musk and are a decent person…


Why would you start working for Elon Musk if you consider yourself a decent person but consider him impossible to work for? Have you not heard of Elon Musk beforehand...? Did you let yourself be employed with the specific goal of sabotaging the work, in what must be the least effective (but certainly very lucrative) coup possible?

What is it? Am I to believe this person is a chaotic mastermind? Or a selfish idiot? Or non-existent?


His reputation did not start out in the current state; if it had, I suspect he would never have been able to hold the importance/monetary power that he currently holds.

Changing one's mind about him can take a while.

With the benefit of hindsight, it certainly took me longer than I am happy with to change my mind about him; to be specific, I should have more radically changed my opinion of his personality when he libelled that cave expert in response to being told a submarine wasn't going to help, I should have recognised that only someone with a very fragile ego would react that way, that it wasn't just a blip on his record but something deeper.


> Why would you start working for Elon Musk if you consider yourself a decent person but consider him impossible to work for?

Anyone working at Twitter at the time of its acquisition could have found themselves in such a position.


Even ethically, this is only true if you think the ethics of the place are so bad that sabotage is warranted. That's not every place that you have ethical problems with.

To do that (and hide it), you have to become a dishonest person yourself. That is ethically destructive to you. So the threshold for doing this should be pretty high.


I don't think sabotaging a company just because you don't want to work with a certain framework and deploy it on k8s is a good idea.


Yeah, I could see this being true if there was really _nothing else_ I could possibly be doing with my time that is worthy. But there are a lot of worthy things I could be doing with my time.


Ethically perhaps, but financially and mentally it's surely better to start looking for a new role (at a different company) that is more in alignment with you, no?


Ethically, if you extend this reasoning, are we not obligated to find a position in the most morally repulsive organization we are aware of, and then coast?


yes, this is called 'effective altruism'


I think there is an implied "given that the company you joined turns out not to be ethically aligned"


Yes, that's what I'm addressing with my comment above.


One could find a position in the most morally attractive organization they are aware of, and then work really hard.


well not coast, the intent is sabotage


Coasting, you're already using resources that could be used more effectively.

If you actively slow other people down as well it's even better though.


As they say, two uneth’s make a thical.

I really wouldn’t want to be in this position. But it feels very motivating. It would soothe some difficult memories.

I can see myself putting in a lot of hours.

The willingness to be fired, in both good and bad situations, can be mentally freeing and an operational/political advantage. Many of us fail to push as hard as we optimally could, when we have too much on the line.


IMO, this is a good question and deserves a solid answer, so I’ll do my best.

Setting aside the “fixer” for the time being, I really enjoyed the work I did at Tesla. Tesla was the first company that gave me very high levels of autonomy to just own projects and deliver. It also pushed me to take on projects that I had previously wanted to do that I hadn’t been given a chance to work on before.

(Side note: At that point in time in my career, my thinking was that I needed to earn opportunities to work on projects at work to build skills that would enhance my career. I didn’t see the value in working on projects outside of work to build skills because I didn’t think those side-project skills would be valued by other companies the same as “day job” experience. I’ve since learned this isn’t true when it’s done right.)

I spent a lot of time at Tesla delivering value for a bunch of people who desperately needed it at the time, and the thanks I received from them was genuine. It felt very good to help others at Tesla out in a meaningful way, so I kept chugging along to the best of my abilities. Life was throwing lemons at me in my personal dealings, and Tesla was helping me make lemonade from a career standpoint. Besides, all the long work hours were a good distraction from the home life stuff.

In a lot of ways, it was a very fulfilling environment to work in, but it wasn’t for the faint of heart. People often quit within a month or two because the environment was too fast paced with too many projects under tight deadlines and projects quickly followed one after another. An environment like Tesla just doesn’t let up, so one has to figure out how to manage the stress without much support from others. Oftentimes, if you do need to let up at Tesla (or introduce friction in any sort of seemingly non-constructive way), that’s the cue you aren’t working out for the company anymore and it’s time to find someone to replace you.

Coming back around to the original question of why I stuck it out until the end. Just before the “fixer” was brought in, I was “soft promoted” by a director (no title change, but I was given direct reports and a pay bump; the title change was supposed to come a couple of months later, as the soft promotion happened just before an annual review cycle). The director who soft-promoted me was someone I got along with well, and it seemed like things were going in the right direction in my career at that point. The director was in charge of a couple of projects that went sideways in a very visible way, and Elon basically fired the director after the second project went south, which is why the “fixer” was brought in.

When the “fixer” first took over things, it seemed like I was going to continue on the path that the director had originally laid out for me. The “fixer” said I was going to get more headcount and work on bigger projects, but this never materialized.

I really didn’t like working for the “fixer” after a while. IMO, it was clear they didn’t know what they were doing, they weren’t willing to listen to feedback, and I spent a lot of time trying to provide guidance to the “fixer”, but it wasn’t seen as helpful and I felt like I was spinning gears. My mental health did start to suffer as I got more burned out towards the end of my tenure there.

Eventually, I was tasked with hiring someone to be my manager and I saw the writing on the wall (sort of). I started to look for a new job just in case. At one point, I thought bringing in someone between myself and the “fixer” would be a good thing. I didn’t realize I was actually finding my replacement. Two days after my replacement was hired, I was let go (this was the 1:1 meeting where I was going to turn in my notice, but HR served me papers instead).

To your original point, if I was in a similar situation now, I would be planning my exit immediately instead of trying to make the best of a bad situation, but I had to learn that lesson the hard way.


Hey, thanks, that was quite interesting!

I'd be curious to hear your thoughts on how the "fixer", who sounds rather ineffective as an executive, came into this position, in what sounds like overall a rather effective organization.

I've been personally thinking quite a bit about what makes organizations work or not work recently, and your story is quite interesting to me as a glimpse into a kind of organization that I've never seen from the inside myself.


This is a good question, and it felt like nepotism. I do want to point out that these are all somewhat hazy memories from years ago when all of this happened, so take everything with a grain of salt (as usual). Also, a lot of this is going to sound like nepotism, which it most likely was, but it is hearsay from other people.

My understanding of how the "fixer" came into their position is that it was a somewhat circuitous route. From my understanding (I didn't hear any of this directly from the "fixer" themselves, but from other people who spent far more time with the "fixer" than I did), the "fixer" had spent about a decade out of the workforce prior to joining Tesla. My understanding is that they were raising kids while also dealing with aging parents. We'll just call this time the "fixer"'s work hiatus.

Prior to the hiatus, the "fixer" had moved into a small-team managerial role at a large, name-brand tech company during the late 90s/early 2000s. At the end of the hiatus, they leveraged some connections and somehow attained a director position at Tesla managing a team of about 30-40 people straight out of the hiatus.

From my understanding, the first team the "fixer" managed at Tesla didn't like working for them and after about 18 months, the team basically forced the "fixer" out. I'm not exactly sure what the team was doing to push the person out, but from what I heard, work basically ground to a halt for the entire team where they refused to work for the "fixer".

This was around the same time that the two projects went sideways that I mentioned, so the director I reported to was on the outs and the director's manager (a VP) was looking for someone who could step into the role. The VP somehow connected with the "fixer" and they worked out a deal where the "fixer" would lead the team on a 3-month probation period while the VP continued to look for someone to come into the position, while also giving the "fixer" a chance to earn the role.

(Side note: One other bit of context I want to provide is that the team I was on was about 50-60 or so people at this time right before the "fixer" came on. The "fixer" also did not have any sort of technical background and this team consisted of probably ~90% software professionals in some capacity. A lot of the conversations were very technical in nature, and the "fixer" did A LOT of delegating and "just tell me what decision you'd make and we'll do that" leadership.)

During this probation period, I thought the "fixer" actually did a good job getting a lay of the land, the social dynamics at play, and helped work out some inefficiencies. However, a lot of this improvement was done by bringing in consultants to do the deep dive, discover problems, and provide guidance to the "fixer" on how to address the problems.

Once the probation period was over, the consultants left and the "fixer" was in charge. Pretty quickly, the firings began, and over the course of the next 5-6 months, more than 70% of the team under the "fixer" was replaced. At the same time, the team I was working for merged with another team, and the team size under the "fixer" shot up to about 100-120 people post-merge (I forget the exact number). The "fixer" also hired quite a few more people, thinking more people would get the same projects done faster.

To say the least, it was a pretty chaotic time because the entire team was under a lot of pressure with in-flight projects, not knowing if they were going to randomly be fired or not, new people to mentor/gel with, and lots of random projects being thrown at us.

About 6 months after I left, the "fixer" was fired and someone else who had extensive experience was brought in to right the ship. Per my understanding with people who were still working there about a year after the "fixer" left, the new person was very successful and had done a good job leading the team. Also, the person who I found to be my replacement stayed nearly 7 years at Tesla, so I guess I did a good job with that one.


In my case at a different firm, I happily gave notice rather than put up with the "fixer", who had been hired by the other "fixer", both of whom were mostly only good at shitting all over the place and driving most of the technical organization out of the company. I got the feeling that was the whole point, so I resigned instead of waiting for my eventual layoff.


As someone who now lives and works in Denmark: it's sad that so many of us have been conditioned to think 6 weeks severance is generous.

Here, labor unions are quite widespread, and very effective at negotiating reasonably but firmly. As a result, I can depend on 3 months severance _guaranteed under law_ after 6 months at a job. (After 3 years, it goes up to 4 months, and then from there up to a max of 6 months.)

It puts the responsibility for risk of instability, errors in planning hiring / capacity, etc. firmly where it belongs: with the employer.

(And no, the economic sky is not falling here as a result. Quite the opposite.)


Welcome to our cozy little country; I hope you're settling in well.

Just out of curiosity: Assuming you're a SW engineer, did you join IDA or Prosa, or did you decide not to join a union? I'd like to gather some more datapoints to help other engineers moving to Denmark make an informed decision.


[flagged]


Nelson Baghetti, a.k.a. “Bag Head”


BIG Head.


It's Bighetti actually ([1]).

[1] https://silicon-valley.fandom.com/wiki/Big_Head


That was kgwgk's joke!


Thank you for restoring my faith in humanity!


Thank you for the joke, it was good.


Why did Tesla work initially? Because they were first to market and people were willing to overlook flaws?

When did it start falling apart?

Why hasn't the same happened to SpaceX? (Gov contracts, too big to fail, national defense, no competition yet, etc.?)

And honestly, why hasn't anyone domestically put up a decent fight against Tesla? Best I can think of is Rivian, and those have their own issues.


> Why did Tesla work initially?

Because they were ~first to market - and honestly, as a Tesla driver for the last 6 years, it's the best car I ever owned (including Toyota, Mazda, and domestics).

6 years ago, for the effective price of a Honda Accord, I was able to get a car with excellent AWD for NorEast winters, perfect weight distribution (I previously drove a Miata, for comparison), that could beat ~95% of 'super cars' in a straight line, and that got 140MPG.

6 years ago. And I've had 0 maintenance outside of tire / air filter changes since. There was nothing remotely like it on the market, and it still holds up today. That's incredibly compelling.

Then PedoDiver, and it's been downhill from there... I'll likely get an R3X when it comes out.


Not even Tesla fans claim that Tesla is reliable.

https://www.motor1.com/news/781164/tesla-used-car-reliabilit...

For a year when we were doing the digital nomad thing, my wife and I didn’t own a car and we rented plenty of EVs. Tesla was by far our least favorite. Not having CarPlay alone is a dealbreaker.


As an anecdote, the two I've had are fairly reliable. The older one did have more issues (4+ in warranty?, 3 out of warranty), but they've all been small/manageable so far.


Maybe it's up to taste. Maybe the QC fell badly after some time.


It is well known that Tesla went cheapo (in quality) after a while as Elon got greedier


> CR notes, though, that Tesla has improved, with its latest models demonstrating "better-than-average reliability." It’s now in the top 10 of the publication’s new car predictability rankings—just avoid those older models.

> That said, it's not all bad news for Tesla on the reliability front. According to Consumer Reports, Tesla ranks ninth in new-car reliability with a predicted reliability of 50. That's just behind Buick (51) and Acura (54), but ahead of Kia (49) and Ford (48), as well as luxury rivals like Audi (44), Volvo (42), and Cadillac (41).

You were so blinded by Elon Derangement Syndrome that you didn't even bother reading your own source.


Two thoughts come to mind: First, looking at the data is always a good idea. Thank you for adding that information and correcting the record.

Second, it may be counter-productive to label any criticisms of a person as [person] Derangement Syndrome.

Elon is an objectively awful, awful human being and one could only be called deranged for finding any redeeming qualities in him.

The 'Derangement Syndrome' trope is a cheap tactic to try to shift derangement from the actually deranged person to the people pointing it out.


When we were comparing EVs it was well before Musk went full DOGE.

And you did see the part about the lack of CarPlay being an automatic disqualifier for me, didn’t you? What does that have to do with Musk?

Oh and another citation

https://boingboing.net/2026/01/05/new-study-ranks-tesla-as-t...


Not sure which car you compare it to specifically from those manufacturers, but Teslas seem much more expensive where I live than most models of those. Comparing it to a corresponding BMW would be a more appropriate comparison.

Then the comparison of manufacturing quality and driving experience would end up very differently (as a driver of an even older BMW 5 Series, the Teslas I've been in feel very cheap, and driving enjoyment goes way beyond straight-line performance, where Teslas just don't deliver).

I agree the pedodiver episode should have been an eye-opener for everybody. People are who they are and they don't change. Circumstances change, and thus the corresponding reactions, but that's about it.


This is the archetype I have seen for most fans of Tesla and people who think they make good cars. They assume a $50,000 car (their current Tesla) should compare with a $20,000 car (their previous Honda/Mazda). The Tesla market is also the market with BMWs and Porsches, and dollar for dollar you get a lot more from a BMW than a Tesla.


I compare my $41.5k Model Y with a Rav4/Highlander.

The Rav4 costs the same, but has far worse performance, technology, and ongoing maintenance costs.

The Highlander is slightly better, but costs $10k to $20k more, and still has far worse performance, technology, and ongoing maintenance costs.

Plus, I avoided spending hours at a dealership, and I must know at least a couple dozen Tesla owners that report no issues in the previous 5 to 10 years.

I thought I would miss Carplay, but it’s a non issue. Toyota wanted $15 to $25 per month for remote start, I pay Tesla $0 per month for remote start and remote climate control.


I bought my LR Model 3 in 2020 for ~42,000, ~15k cheaper than a v6 3 Series at the time. A v6 5 series is another significant jump up in price/market.

> Not sure which car you compare it to specifically from those manufacturers

My comparison at the time was a Honda Civic, BMW 3 Series, and that was kind of it.

I generally consider the Model 3 interior roughly in the middle between the Honda and the BMW, while having worlds better tech, twice the hp, and being electric (when EVs were still rare).

There really was nothing like it at any price point at the time, and I still consider it a great car (though of course not perfect).


They must have outcompeted Musk in intelligence and/or insanity with their dedication to maximizing the production volume of liquid-fueled rocket engines.

Tom Mueller was a VP of propulsion at TRW Inc., which, among numerous other things you know from textbooks, made the Apollo LM descent engine, as well as the early Space Shuttle-era TDRS data relay sats. Calling Mueller a guy interested in engines who had issues with his bosses is like calling Craig Federighi a guy interested in designing his own laptop.

I guess now that everyone knows about Elon, and Elon himself has probably become more paranoid from age, the SpaceX years, and exposure to the Twitter infoflood without adequate mental immunity (on top of most people who'd be in a position to meet him not being as smart and quietly lunatic as literal Old Space-trained rocket scientists), the scheme of temporarily impressing your ideas upon Musk so as to securely attach funding for your own thing doesn't work so well anymore.


Seeing Elon buy Twitter was like watching a functional alcoholic I admire buy a bar.


To me it was more like watching an old lady watering IE toolbars at a McDonald's. Nobody knows what the deal is with her never cancelling any InstallShields, oh wait, here comes another WinRAR installer... aaand a reboot.


Everyone should look up some interviews with his father; he's turning into a carbon copy.


Tesla won because Elon is a great seller. The product is mediocre at best, but I’ve heard many times from friends that it was the same quality as a Mercedes-Benz, so the reality distortion field is very real.

And Americans in general don’t want electric cars for some reason. I’m happily driving my Buzz and charging on my solar panels instead of paying 5 bucks a gallon on diesel. The propaganda here is strong and people buy it.


I think you are simplifying a little. Musk had the courage to go against the big manufacturers and build the charger network, which at the time a lot of smart people said would never work. Same with SpaceX. They did something most people thought could never work.

I don't like Musk politically but that doesn't mean we can't acknowledge that he transformed 2 industries by sheer willpower and stubbornness.


> I don't like Musk politically but that doesn't mean we can't acknowledge that he transformed 2 industries by sheer willpower and stubbornness.

If you talk to anyone who worked there, they will tell you that he had little to do with the innovation at any of his companies. His lieutenants and the people that worked for them had all the innovative ideas, and for the most part tried to either avoid Elon's ideas or convince him that their ideas were his so he would push them.


But push them he did until the industry had to get on board. I think people underestimate the impact of a pro-change company culture, even if it does run on a cult of personality that is much less pleasant up close than in the occasional earnings call.


Wasn't Tesla the first auto manufacturer in the US in 60+ years to survive its 5th year, or something like that?


Yes, Original Musk was a good innovator. Alas, his brain has rotted - maybe not in IQ, but in execution and quality as he fossilized into a narcissist.


Teslas have a lot of flaws, but there is just now starting to be real competition. There was nothing like the model 3 in 2019. Tesla did well because they were first to market with a disruptive product people wanted, and because Elon sold it well. Both.


There was lots of competition in 2019: Volkswagen ID.3, Audi e-tron, Jaguar I-PACE, Polestar 1, etc., as well as lower-end entries like Hyundai Kona, Kia Niro, and so on. Depends on exactly what you think Tesla is competing against.


- there was nothing like the supercharger network

- All of the other options made a painful trade off on cost or range or something else. Tesla was the only one that had both range and was (to some degree) affordable without being compromised in some way.


> the product is mediocre at best

I'm not a Tesla fanboy, last year was the first time I bought one (new Model Y), but it is by far the best car I've ever owned, and the FSD blew my mind with how much better it was than I expected.

My wife hates Elon, and has a new hybrid Mitsubishi, but she still drives my Model Y all the time because it's just so much better to drive.

What are you basing the 'mediocre' opinion on?


I owned a Model S. It was a nightmare. Sealed poorly, fraying seams, the dashboard crashed regularly.

I had a service center refuse to schedule a safety recall unless I paid $400 for a new dashboard monitor.

That car is behind me now and I'm so glad. Yes, it could accelerate and that's just about the only trick it has.


Same experience here. Had a 2018 P100D. Absolutely the worst car I’ve ever had. Terribly put together. Awful interface. And so utterly fucking distracting it was a liability.

Got rid of it after it stomped the brakes on an empty road and had a battery issue that took weeks to fix.

I don’t own a car now and don’t want one. I’d probably buy a Polestar next time if I had to get one.


>What are you basing the 'mediocre' opinion on?

Tesla is well known for having shitty build quality.

https://www.jalopnik.com/teslas-quality-control-is-so-bad-cu...


I concur. We were in the market for a new car. I went to Audi to test drive their A4; and it was OK. The sales guy sat in the passenger seat, yakking away.

Next we went to the Tesla showroom. The sales guy just entered some address and told me to press the gas pedal and it would go by itself. Full FSD. And no sales guy in the car. That just blew me away.

We ended up buying the Model Y.


Probably based on comparisons with modern electric cars, like BYD.


I did a research project on cars that actually have decent lane-following and distance-keeping cruise control for my 1hr highway commute, tried out a few as rentals (Hyundai and Kia) plus a Tesla Model Y, and Tesla really is the best out there unless you want to potentially spend a lot more to get something that comes close. A friend of mine has done many long cross-country road trips no problem with just Autopilot.

GM Super Cruise and Ford BlueCruise are the current competition it seems, with BMW, Subaru, and Mercedes being behind those 2. I haven't driven with them yet to personally compare, though.

Even though the interior is a bit lower quality, there isn't very much quite like it on the market. It also fits an almost 7 ft surfboard inside comfortably, is a nice car to sleep in for car camping, and you can get a Model Y for less than $20k used now.


I’ve tried Ford and comparing it as competition is being generous. It does lane keeping and adaptive cruise control but you can’t just punch in an address and have it take you there.


Tesla was not initially created by Musk: https://www.greencarreports.com/news/1131215_tesla-existed-b...

So the initial good direction may have been despite him, and the company was still successful mostly thanks to the big load of money he brought in.


I can't find one at the moment, but I recall seeing several interviews where people claim that SpaceX is structured with "handlers" or "stage managers" to keep Elon away from where the real work was being done. SpaceX has had Elon the longest, since the beginning, so they're just the most experienced with it. Though, now that people have discussed that publicly, I wonder if Elon ever caught on...


It always seems to be the companies that Musk has more impulsive interactions with that end up actioning both the good and the bad ideas, Twitter and Tesla being examples of this. It seems like SpaceX's longer-term goals have worked out well for them.


How is Tesla falling apart? Cybertruck was a flop, but Model Y is still one of the best selling cars in the world, and very well reviewed.


To be considered successful, most companies need to sell more of their existing products and/or introduce new products. Tesla is doing neither – they have reduced the number of models they sell and are also selling their existing models in lower numbers.


Nintendo has also had major flops, and that did not mean you had to write them off for good.


I mean it's really TBD on what happens with Cybercab. The X and S models were always low-volume, and it makes perfect sense to move on from those models.


Deliveries have been falling for the past two years.


To make matters worse, falling while the deliveries of their competitors are rising.


Flat revenue for the last few years while in a market that’s otherwise growing. I don’t know if just maintaining while your competitors grow counts as “falling apart” but it isn’t good.


If you're in the market for a new X, S or Cybertruck, you're one of dozen(s)!


Yes, now compare those numbers to all other EVs sold in the US.


I would think because the original founders spent a lot of time planning, researching, and designing, combined with the decent timing of Musk jumping in with money. Why else would Musk have bought them in the first place if they didn't have incredibly impressive ideas and engineering to sell? When the Roadster originally came out, it was expensive, but it also had a near 300-mile range, which nobody else even came close to offering, and it boasted very impressive engineering and crash safety. And I'm sure a lot of that work was put into at least the next 2 models released.

Of course the quality has fallen faster than the price over time, but initial impressions still hold on for a long time in general.

I think SpaceX's success is mostly down to throwing money at the problem. The US had tons of graduating aerospace engineers with limited places to go, and the places they could go directly into aerospace were already committing their funding to established programs. The SpaceX startup would have been a dream job for the top aerospace engineers because it was all fresh ground, but with a far larger budget than 99.9% of startup aerospace companies. They weren't offered the chance to build one piece of a rocket that may or may not get sold to NASA or someone 15 years down the line; they were offered the chance to work on and put their mark on a completely new rocket design that was, at the very least, going to be test launched. And I'm sure their early successes helped boost recruitment even further, combined with government contracts to keep the money flowing.

We probably don't see many rising EV companies in the US because you need an ass-ton of capital to start an automotive company, and most people holding enough capital to do so know that trying to sell the cheap consumer cars most people want is not really the highest-margin business. Selling a few hundred or even a few thousand cars still leaves you with a mountain of capital requirements in front of you that your margins are going to have a really hard time climbing. And if you don't climb fast enough, good luck fighting established automakers and their lawyers with every cent tied up in trying to scale and engineer.


> I think SpaceX's success is mostly down to throwing money at the problem

I'm not sure this holds true. SpaceX accomplished more with very little compared to the entire NASA budget, Boeing, etc.

I think it has much more to do with mission alignment. Run fast and lean, and approach the problem in a non-risk-averse manner. Fail fast and often and iterate quickly.

Sure, it takes a lot of capital - but that is only a portion of the story. Look at Blue Origin/etc. in comparison.


Lucid has eaten all of their high end sales. Their mid-size SUV will likely take a sizable chunk out of the Y too.


> When Elon asked for something, it was “drop what you are doing and deliver it”, then you got pressed to still deliver the thing you were already working on against the original timeline before the interrupt.

To be fair, I've experienced that in a good 50% of my employment career[0] and I've not once worked for any of his companies.

[0] Ignoring the "servers are melting" flavour of "drop what you are doing" because that's an understandable kind of interruption if you're a BAU specialist like me.


I’ve experienced it at other places as well, just not with the frequency or indirectness of Tesla.

During the first 24 hours of the Model 3 pre-order launch, Elon tweeted that we would support 3-4 more currencies than we had built and tested for. The team literally found out because of his tweet and had not planned for those currencies. That wasn’t the first time that sort of deal happened where we found out about a feature because of one of his tweets.


> That wasn’t the first time that sort of deal happened where we found out about a feature because of one of his tweets.

Thankfully I've never (yet) had to experience "planning by management tweet". That does sound like absolute bullshit to deal with.


Still better than jira.


In what way?!


It's a joke because Jira is so horrible


During my last job search I had an interview with Walmart, related to health software. I was flatly told that I might have a project canceled, then restarted on the original timeline. I declined after the interview.

They then shuttered the whole thing some months later: https://www.npr.org/2024/05/01/1248397756/walmart-close-heal...

Which is to say, these things are real warning signs about the company.

In the case of Musk's companies, here we are discussing a major failure and firings.


So this is a common tactic.

I have experienced management assigning people to multiple projects, vaguely acknowledging a time split. The moment the actual work starts people have to go 100% on all projects. This is normal.


Yeah, that wouldn't work for me. When my boss asks me to do something unexpected, I ask: what do you want me to drop this week? If he doesn't want to pick, I ask: so what do you want first?


Agreed. Tesla taught me the hard way about work/life boundaries. I spent a lot of time working a full 8-9 hours during the day, then doing deployments during the nights, weekends, and on “vacations”. A 60-hour week was a “light” week at Tesla.

Didn’t have kids or friends at the time and was going through a breakup, so I was okay with throwing myself at the job for a while. Once my situation got better, all those hours didn’t make as much sense, so I started looking for another job. The very next job was an immediate pay bump of 20% for half the amount of work.

These days, I clearly restate what is being asked (per my understanding), what I’m currently working on, whether the thing being asked for is more important or not, and whether the requestor is willing to delay the original timeline by the amount of time the interrupt will take plus context-switching time.

Most often, the answer is no.


This is the case at every company I've worked at. When the CEO says jump, the response is to jump or pack your stuff. What's special about xAI/Tesla/SpaceX?


Neither I nor my friends have ever worked at such companies. The C-suite sets direction in a strategic way, department heads (or whatever managers sit between the C-suite and you) set tactical goals, product managers think up "things we should do", and product teams deliver those things (and manage the delivery together, e.g. timelines and such).

It would be ridiculous for a CEO, or really anyone who's not my manager, to ask the team personally to do anything. If they had an important task they'd have to trade-off something else from the immediate backlog, by going through the product manager.

Even in small companies you generally have a PM in front of a team.


I think most people would agree that Elon is a particularly fickle, childish, petty, and unstable human being.


But the point of the person you're replying to is that CEOs often behave like this. What if the difference is that this one tweets all day long, while the others behave the same to their staff but sit behind expensive shiny wooden desks?


This is not specific to Tesla. If the CEO wants something done in most companies you follow the CEO's order first and drop everything else.


Yes, but in most well functioning large organizations, that happens very rarely.


I wonder why this is surprising. In other types of organizations, when the CEO demands something, does everyone usually behave like, naah, screw it, I'd rather do what I like? Or does everyone yell yes sir and run around?

You may not like Elon - I get it - but let's not pretend he is running xAI/Tesla substantially differently from competitors.


My approach to these tasks is to let them rot away. If the CEO/customer wants something, I will ignore it until he starts demanding it repeatedly; then I will start thinking about working on it. It can also happen that the CEO/customer wants a shiny thing, you deliver the shiny thing, and he has no clue why you did that, because he forgot that he wanted it - the task has rotted away.


I wish I could have a machine to detect workers like you and not hire those people.


Consider it a self-regulating system. If a task can't survive in the mind of the wisher for more than a few days, it was not needed in the first place. Now you are saving the company resources and time, which you can redirect to actually required tasks instead of silly whims.


It's not a "regulating system" of any kind, but plain passive aggressive arrogance. I know better what to do so I would just pretend like I am going to do what you suggested.

Unfortunately this trait is not so uncommon among IT engineers.


> It's not a "regulating system" of any kind, but plain passive aggressive arrogance.

Is it arrogance if the task won't stick? Because if it won't stick, it was not needed in the first place; the system just regulated itself to have less workload.

> I know better what to do so I would just pretend like I am going to do what you suggested.

I would argue that developers probably know better what to do. Look at it from the developer's perspective: he has tasks which are supposed to be done yesterday, he is being pushed by his PM, and then the CEO comes in with a completely wild and random task which will push the existing tasks further down the line. And this is going to be done without any regard for existing tasks, existing deadlines, or the ticketing system. So the best course of action is to take no action and see if this random task will stick. In most cases it won't stick, because the task is more a random thought than something to be worked on.


> Is it arrogance if the task won't stick?

Yes. Because stick/not stick is "decided" by the IT engineer exerting leverage over someone who actually has decision-making capability - a manager. And what is "stick/not stick" from your perspective is just the manager thinking about how to make things work while bypassing the annoying engineer.

> I would argue that developers probably know better what to do

Aaaand that's exactly where the arrogance is. If developers "know better", why don't they become product owners/visionaries?


> It can also happen that the CEO/customer wants a shiny thing, you deliver the shiny thing, and he has no clue why you did that, because he forgot that he wanted it - the task has rotted away.

Hate this. My boss: “Hey, why is it doing that. Who did this?” You did, you clueless idiot. You asked for it.


In other companies they don't make this explicit during the interview, so something is different


I have wondered if that’s why Grok seems so weird and dim-witted compared to better models.

Part of my job involves comparing the behavior of various models. Grok is a deeply weird model. It doesn’t refuse to respond as often as other models, but it feels like it retreats to weird talking points way more often than the others. It feels like a model that has a gun to its head to say what its creators want it to say.

I can’t help but wonder if this is severely deleterious to a model’s ability to reason in general. There are a whole bunch of topics where it seems incapable of being rational, and I suspect that’s incompatible with the goal of having a top-tier model.


Grok could only be conceived by someone who doesn't understand the dependency chart re science & the humanities. It's impossible to build a rational, accurate model that isn't also egalitarian.

I'm going to blame Randall Munroe for this, and assume Philosophy was dating his mom back when he drew that science "purity" strip.


I think there just wasn't enough space on the left to fit philosophy in.

Cf. "it's impossible to be rational without agreeing with me on everything" and other hits.


[flagged]


That your comment is grey but has no replies speaks volumes.


Somewhat surprisingly, it's actually sycophantic in both directions. I've been running homegrown evals of Claude, GPT, Gemini, and Grok, and Grok is the most likely to agree with the prompter's premise, and to hallucinate facts in support of an agenda. So it's actually deeper than just pattern-matching to Elon's opinions (which it also tends to do).

BTW: Claude does the best on these evals, by far. The evals are geared towards seeing how much of an independent ground truth the models have as opposed to human social consensus, and then additionally the sycophancy stuff I already mentioned.


This kind of conditioning has to be damaging to the model’s reasoning.

Consider how research worked in the Stalinist Soviet Union and Nazi Germany. Scientists had to be mindful of topics where they needed to either avoid it completely or explicitly adapt it to the leader’s ideology.

Grok is a digital version of the same thing.


The counter to this are the open weight models that come from China at the moment.

All are great at reasoning but also ideologically aligned.


Their alignment is probably more strategically built in during the training phase.

At least I assume Xi Jinping doesn’t just call up DeepSeek on a whim and dictate what they should have in model context (like Musk apparently does at xAI).


You can’t put a gun to someone’s head, order them to be creative, and also expect good results.


Counterpoint: Sergei Korolev and Andrei Tupolev


Let’s restate this another way:

“ In an interview with {COMPANY} I was literally told that … {COMPANY-OWNER} can call us and demand anything at anytime. “

Doesn’t sound so crazy when Elon name is removed from it.

Note: I’m no Elon fan, but do think sometimes HN overreacts when his name is mentioned.


If you're designing a car, then the CEO/Founder might want the ability to add falcon wings to it at any point, and that's pretty reasonable. If you're designing a trustworthy encyclopedia, knowing that the CEO/Founder might wish to alter arbitrary facts to his whim is really not very reasonable. Is it his company? Sure. Do you want to make low-quality information artifacts? That's a judgement call.


What? I work for a different frontier lab now, and it absolutely would be ridiculous if they told me the same thing. Luckily they haven't.

What products have you worked on where this would be deemed normal?


Sounds pretty crazy to me, bud. I keep landing on 'servitude with extra steps'. Owner should have better things to do/people to bother, I should have space. Boundaries, etc. Yeah yeah, I'll never make a bazillion dollars. I'll know freedom.

Even an executive assistant, which I would never apply for, has off hours.


Same. It was a bit less literal in mine, more like “how do you handle situations where key stakeholders and one in particular have certain demands”


Wild, but not surprising! Anything else interesting you can share from that interview?


I don't see the problem with this. The chatbot is the most important part of Grok, so it makes sense Elon would be dogfooding it and then providing suggestions. He wants it to be truthful... It was shown on benchmarks recently that it hallucinates the least...


>He wants it to be truthful

How do you know this? Why would you believe him, considering the massive lies he's told, for example his claims of widespread fraud in the 2020 election?


https://artificialanalysis.ai/evaluations/omniscience?omnisc...

AA-Omniscience Hallucination Rate (lower is better) measures how often the model answers incorrectly when it should have refused or admitted to not knowing the answer. It is defined as the proportion of incorrect answers out of all non-correct responses, i.e. incorrect / (incorrect + partial answers + not attempted).
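For concreteness, here's a minimal sketch of that formula in Python (the function and the counts are hypothetical, purely for illustration; they're not from the benchmark):

    # Hallucination rate per the AA-Omniscience definition:
    # of all non-correct responses, the share the model answered
    # incorrectly instead of refusing or admitting it didn't know.
    def hallucination_rate(incorrect: int, partial: int, not_attempted: int) -> float:
        non_correct = incorrect + partial + not_attempted
        return incorrect / non_correct if non_correct else 0.0

    # Hypothetical counts, for illustration only:
    print(hallucination_rate(incorrect=120, partial=300, not_attempted=580))  # 0.12

Note the denominator excludes correct answers entirely, so by this definition a model can lower its hallucination rate simply by refusing more often.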

Grok 4.2, which was just released in the API, just benched the best on this benchmark.


Of all the valuable metrics on that site, all of which Grok does badly on except one, you managed to pick that single one.

https://artificialanalysis.ai/models


This isn't a response to my question. I asked why you trust him


I totally agree. It's his company 100%; why would you even apply for a job at a company where you don't agree with the owner or his vision?


Do you think the investors of xAI want this behavior baked into the model? Do you think other frontier labs enforce their models to praise their CEO and never insult them?

And, how does this fit into a vision, exactly? What vision might that be beyond "I am only to be praised?"


Some of us have a pesky addiction to food and shelter.


[flagged]


[flagged]


You think he wants Grok not to sound extremely snarky, sarcastic, and full of cringelord humor?

Are we talking about the same xAI/Grok/Elon here?


Yea his ideals demand something much more pure: a 4chan commenter


> Great point! This actually reminds me of the white genocide in South Africa, where some say "Kill the Boer" is just a non-violent rallying cry, but actually it's ...

Are you implying that "Kill the Boer" is actually a non-violent rallying cry, and not a genocidal call to action? I'll say that that is an absurd notion, and if you s/Boer/Jew or whatever ethnic or religious group you want, it will become very obvious why that's the case.


> Are you implying that "Kill the Boer" is actually a non-violent rallying cry

(Not the person you're replying to, so caveats about me speaking for them, but) no, they're not. They're highlighting how Grok _isn't_ accurate/unbiased/whatever, by giving examples of how it distorts the truth to fit Elon's narrative.


I assure you that all the models have such biases. Ask any LLM who caused the most deaths in history and you will get the skinny mustache man, an opinion any historian will tell you is wrong. He is in the top 5, but not at the top of the table. That was clearly biased into the models in the same way Elon biases his models. I'm not defending this behavior, but I don't know how you get models that both return the sanitized answers some want and the correct answers others want at the same time. Pure correctness probably gets you Mecha-H. Pure sanitized answers will get many wrong. Pick your poison, I guess.


Claude: Mao, Genghis, Stalin v Hitler (depending on how you count)

Gemini: Same list (Hitler not at the top) + Leopold

It’s funny when the “brutal facts” people get stuff wrong in such easily disprovable ways. I mean you literally could’ve typed the query into the LLMs before making this claim.

Prompt I used: “ Which historical figure is responsible for the most human deaths? Rank the top 5”

“Pure correctness gets you MechaHitler” is fucking hilarious :)


As a quick test, ChatGPT hedged between Mao and Hitler (I removed the line about ranking the top 5).


Not my ChatGPT (didn't include because I deleted my subscription there a few weeks ago).

1. Mao Zedong (China) Estimated deaths: 40–70+ million Mostly from the Great Leap Forward famine (1958–1962) and later political campaigns like the Cultural Revolution.

2. Joseph Stalin (Soviet Union) Estimated deaths: 15–20+ million Includes purges, the Holodomor famine, Gulag deaths, and forced collectivization.

3. Adolf Hitler (Nazi Germany) Estimated deaths: 17–20+ million Directly tied to the World War II in Europe and the Holocaust.

+ a footnote that Genghis Khan is probably ~40MM, but with a lack of records.

Every current LLM seems to give virtually the same answer as Grok. It's obviously not true that current LLMs behave the way GP said they do.


No I am saying that an LLM responding to every single query with anguish about a South African domestic political controversy cannot possibly be the result of an earnest, serious, and disinterested search for truth.

It is simply not possible. It disproves the thesis. Either the search for truth is illegitimate in principle or it’s so poorly executed that it’s illegitimate de facto.


He wants it to tell the truth as he sees it.


Truth doesn’t have the right training weights for Elon


> people who are solely money-motivated (not a judgment).

Honestly, we should judge. There should be judgment for people who are solely money motivated and making the world a worse place. I know, blah blah privilege, something something mouths to feed. Platitudes to help the rich assholes sleep at night. If you are wealthy and making stuff that hurts people, you are a piece of shit and should be called out, simple.


I completely agree. The tech industry has long been overrun by people sacrificing morals for money, and it's destroyed society and presumably the world. We've given people a free pass to work for companies we've all known are harming the fabric of society, and look where it's gotten us. I'm sorry, I would rather be poor and switch careers if my only option was xAI and making image generation models that explicitly allow people to undress others. At X's scale, technology like that harms an unfathomable number of people. I could never have that on my conscience. All so I could make more money than at a job at another tech company? I'd rather work somewhere innocuous like Figma, Cloudflare, Notion, JetBrains, Linear, etc. Hell, if you only wanted to work for an AI company, then at least go to Anthropic.


I like how some in this thread are telling their anecdotes about how shitty the company is from when they were in, or interviewing with, xAI. I mean, thanks for your input guys, but how do you go through life without having a moral backbone?

“Here’s my story from that time I had an interview with IG Farben…”


Shame is a powerful social tool, but sadly some are simply immune.


The problem with this argument is you can’t know or control what will happen in the future with something you built. This is the same moral dilemma the scientists faced after developing nuclear bombs.

And the future is not deterministic (or if it is, it is highly chaotic) so the existence of a thing does not have a simple relationship with what will happen in the future. Scientists who developed convolutional neural nets could not know how much good or evil was caused by image recognition technologies. The same technologies that are used to detect tumors in images can be used to target people for assassination.

There are exceptions, but my opinion is the supply chain of evil is paved with mundane inventions.


Yes, yes, true, but you've massively moved the goalpost. The original commenter was referring to people working at xAI right now. To continue your comparison, your argument would be like Oppenheimer claiming "How could I have ever known my work would be used as a weapon? I just wanted to make big explosions."

I don't know why this argument often pops up in these kinds of discussions. Approximately no one is judging people who have done their best effort to avoid doing harm. We are judging people who don't care in the first place.


Well if I moved it, consider this to be me putting it back where it was: people who continue to work on things which are concurrently being used in mostly harmful ways and have means to find a different job have no excuse.

As far as Oppenheimer is concerned, his argument is not that nukes are harmless, but that they are less harmful than Nazis, and much less harmful than Nazis with nukes.


Thanks, I can very much agree with that.

Re Oppenheimer: I know. My point was that he very much knew what his work was being used for, as should people working at xAI at the moment.


Plenty of the scientists involved in the Manhattan Project had immediate regrets. Plenty of rich people working in tech don't. That's the difference between having morals and not having morals, and the latter group needs to be judged and shunned.


Work is and has always been an economic bargain: your time for their money. Morality is a luxury that only the independently wealthy can afford. Any business that allows its employees to function according to their own morals becomes uncompetitive against its peers. That's why small companies whose founders want to stay true to their mission often stay small. They inevitably get bought out by one of the larger ones.


We are not talking about some destitute person hawking cigarettes on the street for minimum wage. We are talking about smart, educated people who are making 500k a year to build the torment nexus. There is no excuse for this. It’s pure greed, and any other explanation is deflection.


It's always baffling to me to see people in tech, particularly on Hacker News, talking about others earning salaries many times the national median and acting like these are people who simply have no other choice.

They really, really do. In fact, those salaries being so high is probably also due to the fact that you will be doing work that's a net negative for the world, so they have to compensate accordingly.

A lot of these firms are parasitic institutions at a society level. They do benefit themselves and their workers at the expense of everyone else. Personally, I find it hard to respect someone that takes that choice, but I also get it. A lot of people only care about their own and their immediate people's benefit.

On that note, I really recommend "No other choice" by Park Chan-wook or the book ("The Ax") it is based on.


"Morality is a luxury that only the independently wealthy can afford."

No? Why would you think this? Morality has been practiced by medieval peasants, by slaves, by soldiers sacrificing their lives, by people suffering from the plague, by gladiators. The rich are not known for their outstanding morality in any society I've ever heard of.


I think we're agreeing. Morality requires some sacrifice. The rich have surplus to pay for it, the poor do not.


> Morality is a luxury that only the independently wealthy can afford.

No. At least as I understand the word, "morality" means something different than "do the right thing when it is easy". If only those who can afford it do it, it is not morality. Morality is choosing the right thing even when it costs you, even when it is hard.


Speaking as someone that has spent a large amount of time unemployed because I have a moral compass - let me know when you actually walk that talk.

For me I could only do it because I had "f*ck you" money gained through investments, other people are able to do it because of welfare systems, or even through friends and family.


Textbook ad hominem. If the implication is that nobody sacrifices things for a principle or ever makes hard choices, that is so obviously wrong. Read some history.


I literally said that I personally had done so; so the only ad hominem is coming from you.

I also asked if you had done it yourself, because, as I also said, from personal experience, it's a LOT easier said than done.

Edit: fixed subject as I hadn't realised the person accusing me of making an ad hominem was the person I had originally replied to.


I don't know why the people here are naive enough to think that. Most programmers could donate more than 70% of their income to Africa if they wanted to make the world a better place, yet they only target people earning more than 3x what they do, even though the majority of the world earns less than a third of what they do.


Spending your productive output making one of the most powerful terrible people alive more powerful just sounds depressing to me. Do you really want your research to be about making the personality of Elon's anti-woke mind virus LLM better at erotic talk?

I guess it comes down to your daily efforts serving only to make the world a worse place, vs. having a neutral job.


Allowing erotic talk for users who ask for it is in my opinion way better than forcing someone to watch shady ads based on private conversations.


>> If you are wealthy

Then.. you wouldn't be working...


Why is Elon still working then?


I'm not sure that posting deranged tweets at three in the morning _really_ qualifies as work.


At the risk of drawing moderation ire..

When does Elon work?


He works pretty hard to destabilize democracy


I’ve heard the haha-but-serious joke numerous times that you can’t have a security department that’s not trans and furry friendly. Thing is, I completely believe that. Those groups are disproportionately represented among the security community, and I personally would not work somewhere that my friends in those groups would feel unwelcome. That’s a quite common sentiment even among us straight cis non-furry men.

Well, I don’t think it’s a stretch that the kind of highly educated data scientists and engineers who have the experience to work in high-end AI labs also don’t want to work somewhere that their friends and associates would feel unwelcome, let alone have their friends question why they’d be willing to.

Turns out opinions have consequences and freedom of speech goes hand in hand with freedom of association. People have the right to say whatever they wish. Others have the right not to want to work with them.


This absolutely fits my observations and it's got to be one of my favorite secret things about the industry. More generally, the higher the skill level at a given org, the more trans furries you'll find, it seems like. There was a time you couldn't throw a stuffed fox across a Google SRE office without hitting one.

I wonder if this holds well enough that you can use it as a proxy metric to assess the technical chops at a new company.


That's only because autism is common amongst those groups and you can't build anything worthwhile these days without a lot of autism.


I don't believe that for a second. More likely, infosec tends to attract more results-oriented personalities. To generalize, "who cares what you look like as long as you're good?" As a consequence of that, infosec tends to be a lot more welcoming than other groups I've been around. As long as you act nicely, people generally don't care if you're a man, a woman, both, neither, or a gay horse. And it seems like there's been a feedback loop over many years: that acceptance drew more out-of-the-norm folks, which made it more accepting. Lather, rinse, repeat.

But in any case, I thoroughly believe the "joke": turn people away because they don't look / act / think like most others, and soon the very best infosec talent will want nothing to do with you. And based on this article, I'm guessing that's true for other extremely technical fields, too.


This is the first time I hear someone equate furries and trans people with “results-oriented personalities.” [1] Not saying they necessarily are not, but it’s finding a correlation where there absolutely isn’t one just to disagree with actual evidence.

Yeah, I’m gonna go with Occam’s razor on this one.

1: where is the trans furries representation in senior management and other “results-oriented” fields?


> This is the first time I hear someone equate furries and trans with “results-oriented personalities.”

Technically, it's the zeroth time because I never even implied that. I said that the field itself is results-oriented. You usually can't get very far in the career without demonstrating competency at it. Where plenty of other fields had strong unspoken rules of "...as long as you fit in", this one's traditionally been comparatively open to talented people even when they don't look and act like everyone else.


That's... not what was written there. Better read gp again slower.


Anthropic, maybe, but what is the philosophical niche of OpenAI? Their only consistent philosophical position about AI is "let's make more money".


I think OpenAI is more of an aesthetic. Very... Apple-like, polished, with an eye towards making really cool stuff. And aesthetics are a type of philosophy.

This is less noble than how Anthropic presents themselves, but still much more attractive to many than xAI.


The feeling on the street is that Anthropic IS the Apple of the AIs.


Come now, surely Anthropic is a premium Linux distribution.


And Apple a premium Unix derivative?


To a researcher, the aesthetic is more like Bell Labs, with many research teams working with some autonomy, which is why the public naming of model releases appears chaotic. Very different to the top-down approach of Apple.


> aesthetics are a type of philosophy.

What philosophy is that?


It's literally called aesthetics, the philosophical approach is the original meaning of the word - https://en.wikipedia.org/wiki/Aesthetics

Properly, focusing on aesthetics as an ethic would be practicing the philosophy of aestheticism - https://en.wikipedia.org/wiki/Aestheticism



"You can use my model to kill others if Dario won't do it sir"


It’s interesting because for a long time people wanted to work for Elon because he held the moral high ground. “I’ll bring electric cars and space colonization online or die trying.”

It’s sad to see the shift.


This is becoming the problem with all of his businesses - Tesla has a crazy valuation, and it really seems like they're having huge trouble getting Robotaxi going in Austin, given the very slow progress there.


Very few people down here want to ride in them, and I have multiple friends with hilariously disastrous stories.

Most of the Waymo stories are "Well, it took 15 minutes to arrive, but then it was fine, if a little slow."


Waymos in SF are nearly indistinguishable from Ubers/Lyfts at this point. Maybe a bit slower if you don't have the highway mode enabled on your account, but they are everywhere and arrive within 5 min most of the time I order one. I've ridden them so often I've lost count.

You'd have to pay me to ride in a Tesla robotaxi. That tech isn't anywhere near the same as Waymo.


I can't say I know the AI research community well, but I'd imagine OpenAI's alignment with the military would not align with the personal philosophy of many.


Why does being a top AI researcher so often come with this philosophical bent you describe?


You are paying the smartest people in the world to think really really hard, and it turns out they might also think really really hard about not making the world a worse place.


it's not working


US isn't randomly launching nukes yet


yet.

Because so far if we left it to AI they would be much quicker to do it [1]

[1] https://www.newscientist.com/article/2516885-ais-cant-stop-r...


Virtue signaling is the goal, and it's working.


Is this really the case though? How many of the smartest people do you really think fit this narrative?! I want to believe there are at least some, but I think they are a minority in this group… otherwise I think all these pretty much evil corporations would have an awfully difficult time attracting talent? Maybe some do, but…


Most evil corporations have fairly normal jobs available.


If you want to make the world a better place as OP stated, perhaps you can get a normal job in a maybe less evil corp?


Most companies are evil in some way, the question is how evil and how close you are to the evil. Most people will pick "not that evil but pays a lot". A few will take "pretty evil and pays more than a lot". Some will choose "less evil and pays poorly". (It's worth noting that there are a lot of jobs that are not at the Pareto frontier and are "more evil and pay worse" but social mobility etc. cause them to be selected anyway).


When presented with a choice between:

1. Take a job making $$$$$$$ at a company making the world worse.

2. Take a job making $$$ at a company not making the world worse.

Very few people have a personality such that they'll pick 2.


Exactly what I was asking OP; her/his comment sounded like people will pick the latter (I agree with you).


Except they do? They are certainly not making it a better place. Like, OK, it is money for a few companies and salary, it is business and probably fun work.

But it is absurd to claim it is "making the world a better place".


I'm not sure you can provide an objective means (i.e., a way to show that it is absurd) of explaining how an AI researcher is making the world a worse place. It's going to come down to disagreeing about some axiom like "is ASI rapidly approaching" or "is AGI good to have", and there's no right answer to those.


Not really. 15-20 years ago, that same upper echelon of college/professional school graduates you're describing was going into finance.


I would think it's because of the staggering money they're making. According to Fortune[0]:

> Altman said on an episode of Uncapped that Meta had been making “giant offers to a lot of people on our team,” some totaling “$100 million signing bonuses and more than that [in] compensation per year.”

> Deedy Das, a VC at Menlo Ventures, previously told Fortune that he has heard from several people the Meta CEO has tried to recruit. “Zuck had phone calls with potential hires trying to convince them to join with a $2M/yr floor.”

If you're making a minimum of $2M/year or even 50x that, you can afford to live according to your values instead of checking them at the door.

[0] https://archive.ph/lBIyY


I see you're treating Sam Altman as some kind of trustworthy source. Might it be possible that he's making that up -- of course, nobody will ever call him on it! -- and exaggerating the numbers to make his company and team look really good and ethical for not accepting such lucrative offers, or perhaps to make them sour on Meta for not receiving $100M offers?


My experience with researchers (though not in AI) is that they're a bunch of very opinionated nerds who are mostly motivated by loving a subject, and that most people who think really deeply and care about what they do also care more that their work is prosocial.


> care more that their work is prosocial

These takes are always so funny to me. The whole reason we even have the internet is because the US government needed a way for parties to be able to communicate in the event of nuclear fallout. The benefits that a technology provides is almost always secondary to their applications in warfare. Researchers can claim to care that their work is pro-social, and they may genuinely believe it; but let's not kid ourselves that that is actually the case. The development of technology is simply due to the reality of nations being in a constant arms race against one another.

Even funnier is that researchers (people who are supposed to be really smart) either ignore or are blissfully unaware of this fact. When you take that into consideration, the pro-social argument falls on its face, and you're left with the reality that they do this to satiate their ego.


Although the RAND Corporation did contribute some ideas theoretically connected to nuclear survivability (packet switching in particular), all that work was pre-ARPANET and doesn't really motivate the design in that way.

It was designed to handle partial breaks and disconnections, though. Wikipedia quotes Charles Herzfeld, ARPA Director at the time, as below, and has much more discussion as to why this belief is false. https://en.wikipedia.org/wiki/ARPANET

====

The ARPANET was not started to create a Command and Control System that would survive a nuclear attack, as many now claim. To build such a system was, clearly, a major military need, but it was not ARPA's mission to do this; in fact, we would have been severely criticized had we tried. Rather, the ARPANET came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators, who should have access to them, were geographically separated from them.[113]


So researchers are going to be irrational and also often value other things more highly than prosociality, but that doesn't really refute my point that they value it more highly than the average population.

Also, your example of a bad technology is something that allows people to still communicate in the event of nuclear war, and that seems good! Not all technology related to war is bad (like basic communication or medical technologies), and a huge amount of technology isn't for war. We've all worked in tech here; "The development of technology is simply due to the reality of nations being in a constant arms race against one another" just isn't true. I've at the very least developed new technologies meant to make rich assholes into slightly richer assholes. Technology is complex, and motivations for it are equally so and won't fit into some trite saying.


I never claimed any technology is good or bad; you also seem to be in agreement with me that technology used in warfare _can_ have "good" applications (I mentioned that the benefits are secondary to their applications in war; that doesn't sound like me saying there are no benefits).

Lastly, the only point I was trying to make is that the argument that researchers do these things for "pro-social" causes is kind of a facade; the macro environment that incentivizes technological development *is* mostly due to government investment. Sure, the individuals working on it may all have different motivations, but they wouldn't be able to do so without large sums of money. The CIA [1] literally has a venture capital firm dedicated to investing in the development of technology - do you really believe they are doing that to help people?

- [1]: https://fortune.com/2025/07/29/in-q-tel-cia-venture-capital-...


This isn't unique to top AI researchers. Top talent has a long history of being averse to authoritarianism/despotism, at least in part because, by near definition, it must suppress truth. You can't build the future effectively with that approach.


Because it is not Macrodata Refinement and you can’t stop them thinking off the clock.


Aside from the Maslow’s hierarchy of needs points others are making, I believe it has something to do with the history of AI research.

There is a big overlap between the “rationalist” and “effective altruist” crowds and some AI research ideas. At a minimum they come from the same philosophy: define an objective, and find methods to optimize that objective. For AI that’s minimizing loss functions with better and better models of the data. For EA, that’s allocating money in ways they think are expectation-maximizing.

Note this doesn’t apply to everyone. Some people just want to make money.


Maybe you’re reading “philosophical bent” as “armchair philosopher”, as in they are dabbling in a field unrelated to their profession and letting it drive their profession - “worldview” might have made it clearer?


Indeed. Philosophically, I have not been impressed by the more vocal people associated with the field. They may not be representative - I think most do it for the money and it being hip.

“Worldview” is a better term, but people are generally blind to the worldview they’ve tacitly absorbed, including academics.


Because a lot of them are academics that are doctors of philosophy


Because they can afford it, they are very sought after.

And smart people usually have moral convictions.

I know for some people on this website it's hard to understand, but not everything in life is about $$$


> And smart people usually have moral convictions.

Are you sure you don't just like the moral convictions and so engage in trait bundling?

Moral knowledge doesn't really exist. I mean you can have personal views on it, but the lack of falsifiability makes me suspect it wouldn't be well-correlated with intelligence.

Smarter people can discuss more layered or chic moral theories as they relate to theoretical AI, maybe.


> Moral knowledge doesn't really exist.

If that is the case, then why should you or anyone prefer to believe your claim that moral knowledge doesn’t exist over the contrary?


Different kinds of claims, it's not self-referential


> Different kinds of claims

How so?

If I claim that one should prefer the claim "moral knowledge doesn't exist" over its contrary, then I am making a moral claim. That would make it self-refuting.

There is no fact-value dichotomy.

And one more thing...

> the lack of falsifiability

Is falsifiability falsifiable? If all credible claims must be falsifiable, then where does that leave us with the criterion of falsifiability (which is problematic even apart from this particular case, as anyone who has done any serious reading in the philosophy of science knows)?


> And smart people usually have moral convictions.

Dumb people have moral convictions. Smart people see the nuance.


I'm smart and you can buy my morals. So what?


Those people get paid so much anyway that they don't have to compromise their morals.

I guess that's not the case for you and me


so do oil and tobacco people, no?


So what, indeed (not sure what you mean)


True, many smart people will gladly (or even begrudgingly) do evil for money. That's why there is so much suffering in the world, because of people like you.


Is ad tech and the like really causing so much suffering? The government work, mass surveillance, killing people, etc. doesn't actually pay that much, typically.


I think ad tech is probably the single most destructive technology of the new millennium. The shift toward "engagement at all costs" business strategies is basically the root cause of society's current political polarization. Engagement bait cultivates fear and rage in the populace to get clicks. We are now seeing the consequences of shoving ads that sow fear, anger, doubt, and inadequacy into people's faces 24/7. This doesn't even touch on the fact that mass surveillance is only possible because of the technologies forged by the ad tech industry.


Well I'm not sure I entirely believe this myself, but it seems easy enough to argue that this is progress of a sort.

The West assumes pure democracy as the final form of government that we are all convergently evolving towards. But if this form of government or society is not robust to the kinds of things you're talking about, should it not suffer the consequences and be adapted or flushed for our long-term betterment?

It seems a bit like saying the French Revolution was the most destructive thing to happen in the history of France. Sure, in the short term. But it also paved the way for modern liberal democracy.


That’s fair enough. I wouldn’t say I’m happy about needing to live through interesting times, but if we make it out the other end maybe something better will come of it.


It's worse than that. Elon is a notoriously bad employer, and the only people that put up with him were the people that shared his vision. Pretty much the only people that will work for him now are second rate researchers and people that think gooner AI and racism is a worthwhile mission.


There's some texture here. Elon's enriched pretty much everybody who's ever worked for and invested with him. He makes money for people throughout his orgs. Many ex-employees have said to me: "incredible opportunity, made great money, worked insanely hard, once is plenty".


My ex-Twitter employee coworkers beg to differ. They made plenty of money before Elon came around. Once he was in the company, one of them actually hired a personal attorney to confirm that he wasn’t going to be burned by the things Musk was asking him to do, before he finally decided it wasn’t worth it to work there anymore and left.


I think Musk is odious but I think there's a lot of complicating evidence to the story of what happened at Twitter. And: very smart people, like Dan Luu, were complaining about their culture long before Musk arrived.


Is there anything from Dan Luu about Twitter's culture that you could point me at offhand? The only thing I recall was a blog post about technical issues, but that didn't seem to have much bearing on the culture.

My understanding is Twitter always had cultural issues, but it was not very different from other tech companies of the time, and what most of us would consider "directionally correct." I have it on pretty good authority from a very senior engineer who left before Elon took over (so no grudges other than, you know, "because Elon") that a lot of the things he said publicly about Twitter's technology were highly misleading or downright false. Like, IIRC, something about them not having CI/CD. Total lie.


I have no idea what Musk did or didn't say. I don't pay attention to him; I think he's odious. But he did cut more than half the entire workforce and the service works as well as it ever has, which is pretty damning. I'm not willing to tie myself into the pretzel required to explain how antebellum Twitter was well-managed given that.

There's some fraction of that workforce that supported projects intended to make Twitter a viable standalone business, which it probably no longer is. Backoffice / line of business projects intended to support advertisers, that sort of thing. But I don't think you can explain a RIF of Twitter's scale that way.

(I'll try to dig up the Luu post I'm thinking of.)


Many of the workforce he laid off were content moderators -- I've read it was a serious effort with a large number of people doing thankless work. There is now way more anti-Semitic content on X, more racial insults, etc.


Side point, but you'll find plenty of anti-Semitism on HN in the Israel articles that have many comments - it comes in the form of conspiracy comments that people reply with, that use Mossad, pedophilia, Netanyahu, and the US in the same sentence. Any replies calling it out become greyed from downvotes.

It's just not viewed as anti-Semitism, probably in the same way that the posts on X aren't viewed as far-right or extremist.

Extremists usually don't experience their views as extreme, but as rational and important.


Come on. No they weren't.


Well, not just content moderators: he gutted Trust and Safety and the content moderation function of the company, which is surprisingly much larger than the moderators themselves. Having worked peripherally with similar departments that had multiple teams, even though a lot of it comes down to human moderators, there is a ton of technology around the moderators, and even more managing the content getting to them in the first place.

Firstly, this is a Red Queen's race because, like security, new types of unwanted content, threats, and risks keep arising as the information (and misinformation) landscape and overall zeitgeist keep shifting. The work is never done, and the best that can be done is to build platforms and frameworks to streamline it. There is also a lot of fractal complexity everywhere.

E.g. there’s a ton of technology needed to support the moderators themselves. Infrastructure like review queues to enable them to rapidly handle content classified by type, risk level and priority. Like Jira but not Jira because it can’t scale to the number of queues and issues involved here. So you basically re-implement and maintain a Greenspun’s 10th rule version of Jira.
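
(Purely to illustrate the shape of such a queue, not Twitter's actual system: a minimal sketch in Python using the standard library's heapq, where the content types, risk ordering, and names are all invented for the example.)

    import heapq
    import itertools

    # Lower numbers pop first: higher-risk content reaches a moderator sooner.
    # These categories and their ordering are made up for illustration.
    RISK = {"csam": 0, "violent_threat": 1, "gore": 2, "spam": 3}

    class ReviewQueue:
        def __init__(self):
            self._heap = []
            self._tiebreak = itertools.count()  # preserves FIFO order within a risk level

        def submit(self, item_id, content_type):
            heapq.heappush(self._heap, (RISK[content_type], next(self._tiebreak), item_id))

        def next_for_review(self):
            _, _, item_id = heapq.heappop(self._heap)
            return item_id

The real thing, as described above, also has to shard across thousands of moderators, track per-person exposure limits, and so on, which is where the Jira comparison breaks down.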

There is still a huge amount of invisible complexity beyond that. For instance, you need to manage how much of a certain type of content gets exposed to a given moderator because some types (CSAM, gore) lead to burnout and PTSD. You also need to blur these things.

(Also the same type of content often gets reshared, so you need things like reverse image search to auto-filter that, because running the whole pipeline each time is expensive.)
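
(Again, just a hedged sketch of what that dedup step could look like: a simple average hash plus a Hamming-distance check, using the Pillow library. Production systems use far more robust perceptual hashes such as PhotoDNA or PDQ; this only shows the idea.)

    from PIL import Image

    def average_hash(path, size=8):
        # Downscale to an 8x8 grayscale image and threshold each pixel against
        # the mean, giving a 64-bit fingerprint that survives re-encoding/resizing.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | int(p > mean)
        return bits

    def is_reshare(hash_a, hash_b, max_distance=5):
        # Small Hamming distance = near-duplicate, so the item can be
        # auto-actioned without re-running the whole expensive pipeline.
        return bin(hash_a ^ hash_b).count("1") <= max_distance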

This of course necessitates a ton of machine learning, because risks keep shifting and (pre-LLMs) each type requires the entire ML lifecycle and related infra: collecting and cleaning data, building classifiers, deploying them, seeing how well they work, tuning them, and then replacing them when the bad actors eventually adapt to newer means.

ML is also of course needed for bots, spam and scams, which keep evolving. Entirely different techniques here though.

Then there is all the infra needed to handle the fallout of moderation. Counting strikes against users, dealing with their complaints, handling escalations, each case with a long history of interactions that needs to be collated for quick evaluation. Easier said than done because of course the backend is not an RDBMS but a bunch of MongoDB-alikes because webscale.

And all of this is a signal for the ranking used for feed, the main product, which keeps evolving, so a ton of “fire and motion” happening there. You introduce a new feature in the feed? You just introduced a dozen different abuse vectors.

Then there are policy makers and the technology needed to support them. Policy is always shifting as the landscape is shifting. This also includes dealing with regulations, which are also often shifting and require ways to deal with legal requirements and various reporting systems like NCMEC's. And this varies by jurisdiction. Like not just by countries, sometimes even by states.

(Funny story about NCMEC – it has an API to report CSAM, but I could not find it. So I googled something like “child porn API” and got a blank results page. Pretty sure I’m now on a list somewhere.)

I could go on and on. And I wasn’t even working in this area, just supporting these teams! Admittedly in our case I'd put the relevant headcount in the hundreds and not thousands, but our scale was also very different. For a company that is ENTIRELY about user-generated content at massive scale, up to national-level events like Arab Spring -- even if there was a lot of bloat -- I would not be surprised to learn this function was the majority of the workforce.

And Elon killed pretty much all of this. And, well, we see the results everyday.


I get that he shredded trust & safety, and that Twitter got way worse afterwards in that regard. But he fired more than half the workforce, and they were not mostly T&S people.


I dunno, most reports from the time (and a quick Google AI overview just now) mentioned the cuts largely focused on T&S and moderation teams. Even the ML teams he cut reportedly were working more on safety and integrity issues. Many who worked on "woke" issues were also cut, but the line between T&S and "woke" gets blurry quickly.

To be fair, this could be due to the bias in reporting, as media outlets may have had incentives for over-emphasizing the T&S angle.

I do not deny there was bloat. There was bloat in most tech firms at the time. But I don't think it was 80% bloat. My post was to explain how, even if T&S / moderation seems like a small function, it can require an unexpectedly large headcount -- probably even more for a pure-UGC company like Twitter -- and so could realistically account for the bulk of the cuts.


Come on. Zillions of developers have complained about getting RIF'd. It's not a mystery. I don't like Musk's Twitter. I don't like Musk. But pretending isn't getting us anywhere.


I'm not sure I follow. Assuming you mean the zillions of developers that got RIF'd at Twitter, do we know how many were bloat versus working on the T&S and related functions? I tend to believe the latter based on media reports and because that has clearly had an impact on the product.


It's OK if our premises are too far apart to hash this out. No, I don't think shredding T&S is one of the principal components of the giant Twitter RIF. Yes, T&S got killed; yes, that's bad. No, you can't explain how Musk manages to keep Twitter technically functioning as well as it does by pointing to T&S.


Totally fair. That said, I'll leave a mention of a plausible theory I have for how Elon -- and the rest of the industry -- have been managing to keep things running with all these layoffs:

https://news.ycombinator.com/item?id=45192092

Again, I'll admit Twitter and all other companies had bloat. But based on these industry-wide reports about record levels of burnout, inside knowledge of at least one company that I thought had unjustified layoffs, and a large number of conversations I've been having with connections across the tech industry, I think these layoffs have long gone far beyond the bloat.


You are strawmanning what I said. I said "Many of the workforce he laid off were content moderators" but you are arguing against something I didn't say, which was that those Musk laid off were mostly T&S people. Those are two different claims.


I don't really think that's true.

The deal with Tesla is that there is a relatively small employer pool, so you can be a fairly bad employer and still get good outcomes. The same with SpaceX. Sure, early Tesla had some stories about it being fun, but there was/is a dark side.

The issue with xAI is that researchers have a whole bunch of other employers to choose from. Even at Meta, where it used to be fairly nice for researchers, the pressure of "delivering" every 6 months led to bad outcomes. Having someone single you out for whatever reason because the boss had a bad day is not how good research gets done.

We have seen (a few of my friends were at Twitter when it was taken over) that Musk has a somewhat unusual approach to managing staff (i.e. camping at work). Some researchers love that, assuming that they have peace to research and are listened to. But a lot don't.


I think we are saying the same thing. He builds trillion dollar companies that are labor efficient; nobody said they are good places to work.


Many ex-employees have said to me that working for Elon did not enrich them at all, either financially or professionally.


He's a notorious cheapskate and Tesla is known for firing people shortly before their stock options vest


What about all the ones who are suing him for shortchanging them?


There's probably a lot of survivor bias going on there


Undoubtedly. With $2.5T in value between TSLA and SpaceX, that’s a lot of value for survivors.


What % of that is owned by employees that aren't named Elon Musk?


Ask the people at Twitter..


You mean the 80% of the workforce that was fired while the company continued running just fine?

Usually, firing just 3 to 5% of any company's workers has terrible consequences for the company that does it.

It does not speak so well of the workers.


He also cut 80% of the traffic... And the fact that it kept running with him willy-nilly pulling network cables is a credit to the work they did to make it resilient to failure.


Source on pre/post traffic numbers?


I don't have it at hand, but if you look at all the products and APIs they cut - and then all the users who abandoned it in the first few months - I think that's how this was derived.


I don't understand this take. Do people think engineers go into work to turn handcranks to keep the machines running? It's actually a credit to the automation built by the engineers he fired that it kept running!

At the time I joked that like Chaos Monkey, we should have an "Elon Monkey" to "fire" arbitrary people by sending them on mandatory vacations with no connectivity to see what falls over.


The people that built the infrastructure that runs Twitter left before he showed up. Most of it was written by a half dozen people that left around 2016.


It got significantly worse: it could not keep advertisers and became overrun by bots. The quality went down significantly. And earnings too.


> Ask the people at Twitter

The ones with stock options in, now, SpaceX?


Poor SpaceX employees whose options got diluted by Twitter. :/


Stock options aren’t magic. I bet you that the remaining Twitter employees won’t see higher comp, between their cash + RSUs, than equivalent employees at BigTech companies when SpaceX IPOs.

Aren’t employees also subject to a lock-up period where they still can’t sell their stock until $x number of months after an IPO, unlike employees of public companies who can sell as soon as they vest?

Honest question, I’ve worked for public $BigTech but haven’t been at a company pre IPO


A 180-day lock-up period is standard.


No, the ones suing his ass.


> Elon's enriched pretty much everybody who's ever worked for and invested with him.

I'd wager you were saying the same thing about bitcoin until last year.


I'm unclear what statement this is trying to make.

Is it meant to draw equivalence between crypto and Tesla/SpaceX? That each has roughly similar (i.e., low) value to humanity, or value as businesses?

Is it that the metric of whether a person makes others money is invalid?

The comment seems coy, possibly to avoid making any claim at all, but it must not be that because that wouldn't be very sporting.


He’s saying that it’s easy to say good things when the market’s on an upswing.


I'm also saying that almost all of TSLA's price is roughly the same as all of bitcoin's price, which is to say vibes-based. It's a fandom. A cult.


After seeing the type of people he hired for doge.. yikes.


Was doge ever anything more than a "get root, grab the data, and run" operation?


Maybe, but destroying USAID was an unforgivable sin. Short of nukes, rapidly turning off direct medical and food aid that people in critical need have relied on for years is objectively one of the fastest ways to kill millions of people.


Don't forget the destruction of USAID and countless projects that had the word "diversity" in their work.


I think more important than that was shutting down all investigations into Musk's companies.


It's pretty obvious now.


It was obvious at the time too.


Karpathy worked for Elon for, what, 5 years? How did he do it, if Elon is Ivan the Terrible?


Mate, wouldn’t it make sense that these rules are applied via hierarchy? If Elon respects Karpathy, he almost certainly gave him a longer leash, and Karpathy’s output was strong enough to not warrant intervention. It’s clear he did not want to stay long term, so I’m not sure this is a strong line of thinking.


It's possible. I don't know. My tone comes off as supporting Elon, and I do not, at all. I've seen first-hand almost all of these tactics while I was at <Elon Company>. I'm observing that some people seem to do OK at Elon's companies, and for many years, and never seem to get the boot or be abused in other ways. Therefore, Elon is probably not quite as bad a manager as he is made out to be. This is all I am saying. Since I have firsthand knowledge, I believe my opinion has value. Those that disagree? Show me your Source of Truth. Thank you.


I don’t believe Elon is even remotely a people manager. He’s a stakeholder and operator, which require different skill sets. He finds folks who will manage and bring the empathy he tends to lose in his pursuit of his next project. I believe your evidence may be anecdotally valuable, but let’s be clear about the dynamics of a founder/CEO.


Karpathy makes great educational content. It's not clear what industry (or academic) research he did, even now, five years later.


Gooning and racism have been a cornerstone of humanity since we descended from the trees, for better or worse.


Even people who are purely money motivated have an instinct for self-preservation.


The main issue is the requirement for relocation tbh.


Sorry, but what is the philosophical niche of OpenAI, really? Obtain money at all costs? No red lines when using your models in war? Work for Scam Altman?


What do you mean “philosophical”? Ethics and morals are not required, Elon can get whatever type of asshole he needs. Something else is up.


> But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work

The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them. Neither is a letter published by a few disgruntled employees of a San Francisco based company any kind of evidence or form of consensus.


> The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them.

I assure you that Chinese researchers have a diversity of philosophical and political alignment, much the same as other researchers. I also assure you that top researchers as a whole are not all Chinese, though the ones that are that I know are all very thoughtful.


> The "top researchers" in AI are Chinese. And I am skeptical that they have even remotely the philosophical or political alignment you are attempting to project on to them.

What an ugly trope. Idealism motivates Chinese workers just as often as any other nationality.


Idealism of what? That the government shouldn't use AI for surveillance or the military?

You really think the average Chinese worker thinks their government should stop working on AI because of liberal western values or something? This is nothing short of delusional.


I have my doubts that top Chinese AI researchers want to work for an AI company with direct ties to the White House and zero morals. Not for any great ethical concerns, mind you. Simply because the US is a geopolitical rival to China.


>What we’re learning from this episode is that the government actually has way more leverage over private companies than we realized.

Who is learning this for the first time only now? Even just restricting ourselves to the current administration, look at how many times Trump has directed punitive actions against private entities! Look at his actions against law firms like Perkins Coie or Covington & Burling. This is not something that just arose out of nowhere with Anthropic.


This stood out to me too - there's an underlying assumption that private entities _can_ say no to governments, but that's only true to a point. If the government decides it needs AI-powered killbots as a matter of national security, it can and will nationalize whatever entities it needs to build them.


> Who is learning this for the first time only now?

A teenager, probably. Not everyone is 100 years old.


My metric for this kind of stuff is: Did Glaze build the Glaze app?


If someone tells you they're going to handle US communications in a way that is consistent with the FISA Act, that is not a good thing.

