Agreed. (Reasonable) humans also don't ask for 20-50% raises year over year, but replacing workers with AI concentrates enormous pricing risk in your business operations. AI may be cheaper in the short term, but the ultimate goal of AI companies is to capture as much value as possible, and they will have no qualms pricing AI tooling as close to the replaced human salaries as they can.
Exactly this, and not just any middleman: a middleman with an obscene burn rate that isn't close to profitability and is incentivized to ratchet up prices as soon as it can.
And then AI procurement has problems on the buyer side. Do I just blindly trust that the model is going to make the purchase as specified? Do I trust the model's search capabilities and objectivity of returning results? How do I know that OpenAI isn't running its own "marketplace", only showing me options to buy that they want me to see while filtering out less desirable options for them?
It's a fundamentally less transparent experience than Amazon.
Let it go. Take pride that a company like Anthropic thought your idea was good enough to run with. Aiming for some sort of statement from them is a waste of time.
I'd happily take pride if they'd acknowledged it. A single reply. That's all. Instead I got silence. There's no pride in that, just frustration.
But I sincerely appreciate your input.
Mountains Beyond Mountains is a pantheon read for me.
Farmer grew up incredibly poor, got into Duke and Harvard, had opportunities to make incredible money and traded it for a life of providing medical care to the third world on a shoestring budget while schooling organizations like the WHO on how to provide care along the way.
Agreed. Farmer's O for the P (provide a preferential option for the poor in health care) was clearly central to his life. I think about it often.
On top of that he was incredibly competent at navigating the combination of hostile bureaucracy, apathy, and disorganization. It's incredible what he and PIH accomplished.
> but general voting interference seems quite brazen.
It's also not a new tactic. During the '68 presidential campaign, Nixon convinced the South Vietnamese not to engage in Lyndon Johnson's peace talks with the North, undercutting the Democrats' campaign. The war went on for 7 more years and killed tens of thousands of Americans and hundreds of thousands of Vietnamese.
The candidate who stands to benefit from circumstances like this is usually quite corrupt as well. The mere appearance of a foreign espionage outfit helping one candidate should raise questions about their integrity.
My therapist just asked for my consent to use an AI note transcription tool through SimplePractice, one of the largest therapy administration platforms out there. So I wound up digging through their documentation and policies around that tool and the lack of transparency around it was galling.
The whole thing felt like a giant "gotcha" to the point where I have zero trust that data captured through platforms like that won't be used against my interests.
I personally think the fact that it's an indie reporter like Ed Zitron diving into this says a lot about the state of tech media broadly. Reminds me a bit of how sports journalism works nowadays: nobody wants to call out industry leaders for fear of losing access, because losing access is career suicide.
False. Current mainstream media outlets are far more anti-technology than pro. It's unclear why you think journalists fear losing access when the status quo is opposing tech.
Respectfully disagree. Frontier lab CEOs have had incredible media access the last 4 years, making huge claims to the press without a lot of pushback or difficult questions. There's obviously no way to give some quantifiable metric on it, and reasonable people can disagree.
But Zitron frequently points out the inconsistencies in these data center deals, noting that companies like OpenAI and Anthropic make these announcements without a formal contract in place, companies like Oracle get a stock bump off of the news, and then we all find out from the mainstream press months later that the deal was never done and in fact may not even be happening anymore.
That's not really behavior you'd expect to see from a vehemently anti-tech press. They're happily making news to boost stock prices short-term, essentially acting as mouthpieces for large shareholders.
The sibling comment is dead, which is unfortunate because it's accurate and brings at least some data matching what I've anecdotally observed. I can't find a single article in my news feed that is overwhelmingly positive about AI. Any article that is even slightly editorialized and mentions anything positive about AI typically follows it with the same litany of risks: hallucinations, jobs, deepfakes, environment, energy, that MIT report or that METR study, and so on.
It is not surprising that the media is largely biased against AI, considering they see this technology as a) disintermediating them, and b) built by stealing their content. And since AI is doing this across a large number of professions, artists and engineers among them, they find a willing audience for engagement.
If you watch any interviews with anyone who has power in tech, they're exclusively asked the most softball questions imaginable to make them look better.
The media DOES occasionally say negative things about tech. But they scratch maybe 1% of the bad stuff, and they make excuses and let people off easy.
It's very similar to how the media is overly sympathetic to Trump. Yes, Trump is critiqued - but everything he says is interpreted in the least crazy way possible, even though he is a lunatic. MSNBC and co will even go as far as fabricating reasoning for Trump's actions when he doesn't provide any - and it's good reasoning!
To Ed's credit, he's coming with real numbers. Much of his reporting is based on quarterly earnings reports, press releases, correlating reports from outlets like The Information, etc.
Contrast that with hyperscalers no longer reporting AI revenue separately, making bold claims about long term growth with no evidence to back it up, and a tech media apparatus that has largely avoided asking founders hard questions.
I know just as well as you how this is all going to turn out (which is to say, nobody really knows). But I'll take the person doing the math over the person trying to hide numbers all day long.
See this [1] for how he comes up with his numbers. I think he says a lot of things without understanding them, and not many serious participants in the area take him seriously.
Feel free to point out where the numbers are wrong in this article. If you're right about his ability to math, then you'll have no problem identifying concrete aspects of this piece that are wrong.
Let's see how true this is once the era of VC-subsidized AI ends. A 10x increase in frontier model costs is an entirely plausible outcome given that Claude Code is rumored to allocate up to $5K in compute for a $200/mo plan.
The losses fueling these companies are staggering and will not last.
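Back-of-envelope, taking the rumored figures above at face value (they are rumors, not verified numbers), the implied subsidy can be sketched as:

```python
# Implied subsidy on a $200/mo plan, using the rumored compute allocation
# from the parent comment. Both figures are assumptions, not verified data.
monthly_price = 200        # $/mo the customer pays
rumored_compute = 5_000    # $/mo in allocated compute (rumor)

subsidy_ratio = rumored_compute / monthly_price
print(f"Implied subsidy ratio: {subsidy_ratio:.0f}x")

# Even if the true compute cost were only a fifth of the rumor ($1K/mo),
# pricing at cost would still mean a 5x increase over today's $200/mo,
# so a 10x repricing is well within this range.
conservative_ratio = (rumored_compute / 5) / monthly_price
print(f"Conservative ratio: {conservative_ratio:.0f}x")
```

This is just arithmetic on the rumored numbers; the point is that the 10x figure sits comfortably inside the implied 5x-25x range.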