Would imagine it's the simplest answer: they're flying by the seat of their pants, there are a thousand things happening every day that demand attention, and there's not enough of it to go around. They toss their LLM at it, give it a cursory glance, and ship it. A quick glance at the Claude Code source code bears this out. The fundamental question is: if their model is so powerful, why do they keep fucking up such simple things? We're led to believe this is a serious company with a model so powerful they can't release it to the general public.
Hermes is one of these OpenClaw clones, so this was certainly intentional, not a model hallucinating something.
I think the problem is clear. Anthropic saw their usage go up much more than their capacity could handle. There are a few tried and true solutions to this, like "increase the price" or "restrict signups so you can guarantee service to what you have already sold".
Then there is the "large scale fraud" option, where you materially change and degrade the service you have already sold. Just because you have obfuscated and misled in how you describe the product you are selling doesn't mean you get to capture the cash flow of one-year subscriptions and then not honor that contract for the full duration.
So that's what it is. Reading its README I thought it was another harness like Pi [1], but with built-in memory so it remembers what it learns, and gets more capable the longer it runs.
Like Letta [2], Dirac [3][4] and the other "more experimental harnesses that look interesting but I haven't had time to try out".
Late in replying to this, but just wanted to say I found this pretty compelling. I generally think people are too quick to assign to malice what could be assigned to incompetence. In this case I'm not convinced of that anymore, especially given their public statements about these third-party harnesses. It does seem unavoidable that they'll have to move away from subscription-based pricing and towards token-based, but they're managing this in a really ham-fisted and user-hostile way regardless.
Non-Claude client access is not permitted in the terms and conditions, except via API key.
The correct implementation of this condition by Anthropic on the server side would be to block usage by non-Claude apps via Claude's authentication mechanism, and allow it via the per-token API key billing.
Instead of a simple 403 error, which would block usage, they silently redirect to a different billing bucket, which is not ethical behaviour especially since it is based on fuzzy heuristics.
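To make the contrast concrete, here's a minimal sketch of the two server-side policies being compared. All names (header values, client identifiers, bucket names) are hypothetical, invented for illustration; this is not Anthropic's actual implementation.

```python
# Hypothetical sketch: reject third-party clients on subscription auth
# outright (403) instead of silently rerouting them to another billing
# bucket. Client IDs and bucket names are invented for illustration.

FIRST_PARTY_CLIENTS = {"claude-code", "claude-desktop"}

def route_request(auth_kind: str, client_id: str) -> tuple[int, str]:
    """Return (HTTP status, billing bucket) for an incoming request."""
    if auth_kind == "api_key":
        # Per-token API billing: any client is allowed.
        return 200, "api_pay_per_token"
    if auth_kind == "subscription" and client_id in FIRST_PARTY_CLIENTS:
        # First-party app on a monthly plan: draw from the plan's quota.
        return 200, "subscription_quota"
    # Third-party client on a subscription: refuse explicitly rather
    # than silently charging a different bucket.
    return 403, "rejected"
```

The point is simply that an explicit 403 surfaces the policy to the user, while the silent-reroute variant hides a billing decision behind fuzzy heuristics.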
Yeah, at the least it should alert the user that it is happening. Maybe the thinking was that alerting gives people a signal on how to get around the restrictions, but having it silently charge from a different bucket isn't the answer either.
I think part of the issue is they were letting people use the plan via the API for random stuff, so people could do testing or small projects. Then the agents came along and exploded the cost, so they want to restrict those but still allow some other usage, which I don't think is tenable.
I'm sure there is some way that they could enforce that all calls are coming from the Claude app or Claude Code. It might be hard to enforce 100%, with stuff running on a user's machine, but they could still make it quite difficult, where someone has to be intentionally trying to beat the system (like stealing encryption keys out of the Claude Code binary or something).
They've said publicly that they don't want apps like OpenClaw (Hermes is a variation) being used with a monthly plan vs per-token billing. The problem is this was implemented pretty badly (trying to regex??). And they should put a firm boundary between the two. It shouldn't be trying to switch over to a different billing plan automatically using the same api key.
I think they wanted to try not to totally lock down the monthly plan for non-agent uses, but that makes it all too fuzzy. They should use some specific method like encrypted signatures or something, so that anything sending to the monthly plan that isn't Claude Code or the desktop Claude app just errors out and be done with it.
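The "encrypted signatures" idea could look something like HMAC request signing: the official client signs each request with a secret baked into its binary, and the server rejects anything unsigned. This is only a sketch of that general technique; the secret, header scheme, and skew window are all assumptions, and (as noted above) a determined user can still extract the secret from the client binary, so it only raises the bar.

```python
import hmac
import hashlib
import time

# Hypothetical shared secret embedded in the official client binary.
# Extractable by a determined attacker, so this deters rather than prevents.
SHARED_SECRET = b"baked-into-official-client"

def sign_request(body: bytes, timestamp: int) -> str:
    """Produce an HMAC-SHA256 signature over timestamp + body."""
    msg = str(timestamp).encode() + b"." + body
    return hmac.new(SHARED_SECRET, msg, hashlib.sha256).hexdigest()

def verify_request(body: bytes, timestamp: int, signature: str,
                   max_skew: int = 300) -> bool:
    """Server side: reject stale timestamps (replay limit), then
    compare signatures in constant time."""
    if abs(time.time() - timestamp) > max_skew:
        return False
    expected = sign_request(body, timestamp)
    return hmac.compare_digest(expected, signature)
```

Anything hitting the monthly-plan endpoint without a valid signature would then get a hard error instead of being heuristically reclassified.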
Surely there's a middle ground where improved APIs can be leveraged by both people and LLMs alike while keeping those APIs approachable? Why is it necessary that changing the python APIs would lead to "need[ing] LLMs for Blender"? I'm nowhere close to an AI maximalist but this criticism seems grounded in execution concerns. I'm definitely not saying that they won't mess this up and make the APIs overly complex, I just don't think that's necessarily going to be the case.
We should focus on effective means for change. Focusing external influence on low-level individuals with no decision-making power might feel good, but it has accomplished a sum total of nothing in the past. Why would we think it will make the situation better this time? They swap people in and out of projects all the time and it's really not disruptive at all. The only way these tech behemoths have made any meaningful positive changes is through sustained governmental pressure, either through oversight or regulation.
Techies believed they didn't need unions because their compensation is high, and "meritocracy" yadda yadda. But unions were never just about compensation. Crucially, they also collectively negotiate working conditions.
You quitting your job is not necessarily much of a threat to your employer. But a union going on strike, effectively everyone quitting simultaneously, is a major threat.
You'd also have some very large but currently unknowable number of people underfunding Medicare and Social Security but still expecting to draw from them when they're older, and demanding they be allowed to do so.
Why should someone need to start a business just to have as good a life as their parents did? Why is today's wealth inequality optimal for society? Rich people in the past got by just fine. They started businesses, succeeded, lived incredibly comfortable lives, all while earning a smaller multiple of what the people working for them made. The centralization of wealth is a sign of a sick society, especially when people who provide labor suffer and get less of the pie.
That may be the intention, but to me it looks like the EFF is just giving its employees a paid day off. Also, you're not really striking if your employer approves of it, and pays you for it.
Their actions are to raise awareness. People reaching out to them on Friday or going to the website will see a shutdown notice. Meanwhile, employees are available to strike, document the day, or do whatever they wish.
yeah it may not be helpful, but at least it's starting a conversation. most people aren't even aware yet, and there's no way to raise awareness without a vacation. for example none of our web designers knows how to put a message on the website without shutting it down. and none of our managers knows how to shut down a website without taking a vacation.
I got solar panels installed two years ago and I've washed them once. I'm still getting great production. Are you trying to convince yourself that maintaining solar panels is difficult? Because it isn't.
So let me get this straight, I save hundreds of dollars a month, I drive my cars "for free", I get paid at the end of my net-metering year, and somehow this is a bad deal because I've wanted (not needed) to wash my panels once? It sounds like you optimize your life around not maintaining the things around you which is fine, but I'd much rather save thousands of dollars.