Hacker News | new | past | comments | ask | show | jobs | submit — cedws's comments

We know that AI will ultimately just end up enriching a very small group of people with no change in prosperity for the working and middle classes. CEOs are openly saying as much. For the past several decades the rise in productivity has been completely detached from wages; it'll be no different this time.

We're also no strangers to enshittification; we have first-hand experience of technology causing negative societal effects when it's in the hands of for-profit entities.


Man I really want to like this thing but this jargon is so stupid.

The jargon is just naming the free-standing components after rope/string-related things, i.e. tangle, knot, spindle, etc.

Just call them what they are. Federations/networks, servers, repositories...

You got pretty much none of the names right.

Tangle: the appview server of the tangled network.

Knot: the git server that holds an arbitrary quantity of git repos.

Spindle: CI servers/runners/nodes.

Each one is the name of a component and the name for those components is pretty arbitrary.


I think they're mad because 'knot' is a furry fetish term.

Pretty much every word in English seems to have an innuendo meaning to someone. Does anyone truly care past the age of 15?

I find Tangled's language a bit annoying because, if this caught on, it would introduce even more single-word concepts rather needlessly. If the protocol is called Knot, then call a server a Knot instance or Knot server. If the runner protocol is called Spindle, each server that responds to it could be a Spindle runner. That would serve two functions: it would let people contextually hook the terms up to existing ones, while still retaining the option of evolving into single-word concepts if they prove successful enough for that to happen.

From my point of view as a non-native speaker, the frequent overloading of commonplace words adds to the confusion of learning English. I don't like that. It's far from a big hurdle, but just big enough to earn a soft little sigh from me.

Your comment was the only thing that made me even care to comment: isn't it rather unlikely that the commenter you're replying to takes issue with a kink rather than any other reason why "knot" and "spindle" might be poor choices? Who knows, they might even have a good reason, but you started out by assuming bad faith, and at that point I tend to just leave the conversation.


I was expecting something more pragmatic than 'lean into AI even harder' to be honest.

GitHub needs to slow down with the AI shit and spend manpower fixing what's broken. Actions is a complete fucking disaster.

Also I have no idea what Pierre is but their website is horrible.


Being pragmatic means admitting AI is unstoppable whether you like it or not. Stopping the Copilot BS doesn't conflict with that.

AI should not even be part of this discussion, it’s frankly irrelevant. None of the complaints Mitchell or anyone else has would be fixed by deleting Copilot or adding new agentic features. Deliver a stable and secure development platform first, then do AI stuff.

>manage all these different accounts for each place they contribute

For me that's a minor problem. The struggle of working across multiple code forges, or making my code available on several of them, is syncing CI/CD, issues, and releases between them. I don't have the energy to maintain multiple versions of a pipeline.


I wonder if they’ll end the free lunch we’ve been having since the MS takeover. There’s been a deluge of spam and crapware projects due to the LLM wave which is visible in that graph. Can’t see them sustaining being a public dustbin for low value projects forever.

I could see them expiring/archiving/deleting inactive projects after some time.

I feel like this would have negative impacts (there are lots of interesting historical archives on GitHub), but maybe if a project hasn't been touched, or cloned, in some time, it just gets deleted with some notice.


Thing is, projects that don't get touched for months and months are the least costly. Disk space is cheap; what's costly is compute time to process new commits, new/updated/closed issues, new/reviewed/merged PRs, and so on. Inactive projects just sit there taking up disk space but basically zero compute time. So it would make no sense at all for them to delete old, inactive projects. (Which doesn't mean they won't do it: they might have hidden costs I'm unaware of, or they might make stupid decisions. People do make stupid decisions sometimes).

Also creates a perverse incentive to automatically push random commits to make sure your repos stay “active” and don’t get deleted, creating more load

I hope not but it will probably happen.

Just last week I found an interesting repo that hadn't been touched in 9 years. I immediately cloned it as it was something reverse engineered so DMCA isn't out of the question, but now I have two reasons to clone.


Just be aware OpenRouter charges a 5.5% fee, I didn’t know until recently. I like the product, and I think the fee is fair, but if you want the absolute best pricing then go direct.

But with OpenRouter you can always just use the latest model. If you're committed to e.g. Claude Opus then you're better off going directly to Anthropic for sure, but if not, various other models may be fine too, depending on use case, and massively cheaper: e.g. the new DeepSeek model with its 1M context window, or Kimi K2.6 with a 270k context window for subagents.

>but if not, varying other models may be fine too, depending on use case and be massively cheaper

Do inference providers have standardized endpoints, or at least endpoints compatible with Claude Code? Otherwise you'd pay 5.5% on all your tokens just so it's slightly easier to swap providers (i.e. changing a few URLs)?


> Do inference providers have standardized endpoints, or at least endpoints compatible with claude code?

Yep, you can plug deepseek/kimi/minimax into claude code just fine. Or run everything through another harness like opencode instead.
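This works because Claude Code reads a few environment variables that let you repoint it at any Anthropic-compatible endpoint. A minimal sketch; the base URL and model name below are placeholders, not real values, so check your provider's docs for their actual Anthropic-compatible endpoint:

```shell
# Point Claude Code at an Anthropic-compatible endpoint instead of
# api.anthropic.com. Values below are illustrative placeholders.
export ANTHROPIC_BASE_URL="https://api.example-provider.com/anthropic"
export ANTHROPIC_AUTH_TOKEN="your-provider-api-key"
# Optionally override which model name Claude Code requests:
export ANTHROPIC_MODEL="provider-model-name"
claude
```

Unsetting the variables (or starting a fresh shell) switches you back to Anthropic directly, which is about as close to "changing a few URLs" as it gets.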


Or you could use GCP Vertex AI or AWS Bedrock and still have access to a bunch of foundation models without a markup.

Wow, that's a lot for routing traffic.

And handling API tokens, and billing, and reliability, and middleware. I am not affiliated with them but it’s not “just” routing.

Apple still charges 30%. 5.5% seems pretty reasonable. /shrug I dunno.


> handling API tokens

Don't you still need to handle tokens with them? Also that's trivial.

> billing

Yes but you'd be paying for billing anyway.

> reliability

They increase reliability?

> middleware

Which you wouldn't need if you paid directly.

I'm not saying they shouldn't get 5.5%, but that list is mostly unconvincing.

> Apple still charges 30%.

3 of the 30 is for billing, with the rest mostly being gatekeeping with a fake justification on top.


There's nothing trivial about getting a Google API key. Openrouter removes that stress from my life. And I can route requests to providers above a certain TPS threshold. And much more.

My point was that it centralizes this to one place instead of 10 for engineers, not that you wouldn’t have to deal with these things at all.

A single point of access with a single key for all of these things is a worthwhile convenience.


> They increase reliability?

For models that have multiple providers, they automatically route your requests to a different provider if one of them goes down.
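Beyond the automatic failover, OpenRouter also lets you request a fallback chain explicitly via a `models` list in the request body. A minimal sketch that only builds the payload (actually sending it needs an `Authorization: Bearer <key>` header against `https://openrouter.ai/api/v1/chat/completions`; the model IDs are illustrative):

```python
import json

def build_openrouter_payload(prompt, model_chain):
    """Build a chat-completions request body with an ordered fallback chain.

    OpenRouter accepts a `models` list: if the first model's provider is
    down or rate-limited, the request falls through to the next one.
    """
    return {
        "model": model_chain[0],   # primary choice
        "models": model_chain,     # ordered fallback chain
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_openrouter_payload(
    "Summarize this diff.",
    ["deepseek/deepseek-chat", "moonshotai/kimi-k2"],
)
print(json.dumps(payload, indent=2))
```

So "Provider A is having a bad day" can be handled per-request rather than by editing config.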


Payment processing likely eats up at least 2-3% of that

IIRC OpenRouter charges you for the payment processing fee also.

Still worth it IMO to be able to switch from Provider A to Provider B if Provider A is having a bad day.


Seems like a strong signal the money burning party is coming to a close. Nearly all AI companies have tightened their belts in the past month. Anthropic removed Claude Code from the Pro plan, Z.AI increased their prices, GitHub removed some Claude models from Copilot, now this.

Also, Opus 4.7 seems like a model more intended to save Anthropic money than push the bar.


> Seems like a strong signal the money burning party is coming to a close.

One provider who was undercutting the market with a non-standard billing model moving to more standard billing and pricing doesn't seem like that strong a signal, other than that Copilot was underpriced.

I don't disagree with your other points though.


It was the only clear model from a user's perspective. Sure, a request may not perform as expected, or end earlier than desired, but it was an agreed-upon cost that was clear to both sides: 1 enter press in a prompt window = 1 request.

If they wanted to limit what a request can do via their harness, I'm sure they artificially could.

I hate all of the other plans I've seen: here's a "credit", or here's a "bucket of usage", and we pull an unannounced amount from it based on arbitrary info that can't be audited or proven, and most of what is spent might be entirely useless anyway.

Claude Code has a problem where 1 request could take a significant portion of your 5 hour window, and it's unclear why.

It's much like SEO, where Google sometimes says things that might help, but it's just magic-wand waving, hoping something works.


I believe Anthropic added CC back to the pro plan.

The point is that they tipped their hand about where they want to go in the future. They are just A/B testing to see how much it pisses off their customers.

Now they just removed Opus.

I signed up over the weekend and still have access to Opus. I believe the A/B test they were doing only removed Opus from a small percentage of users.

Don't think I'll be renewing though. The usage limits are low enough that I don't think this is worth it. One complex prompt while Americans are awake will wipe out your allotted tokens, it seems.


>Opus 4.7 seems like a model more intended to save Anthropic money than push the bar.

How so? By all accounts I've read so far it uses more tokens overall for roughly the same results.


If you're delivering the same results and charging the customer more/letting the customer use the product less, that's saving the company money.

Their variable cost is (basically) the number of tokens. They increased that. I don't get how that saves them money

Yeah, honestly it feels like this came faster than I was expecting. I thought we'd see another few years of too-good-to-be-true prices to really lock in dependency, but it feels like most companies still have a lot of wiggle room to back out of this.

Anthropic has done no such thing. WTF is wrong with you people? HN used to be made up of industry people, but now it's random uninformed comments.

In the time I’ve had agents I’ve never abandoned more projects. Vibe coding especially just leads me to feel no attachment. I don’t feel proud to put my name on it.

Despite coding from a young age I always thought that I cared more about the outcome than the code. Turns out that’s not entirely the case.


Open to all except it’s not because as soon as you try to use it for security purposes it will shut down and silently route you to a worse model. I was trying to use GPT 5.3 for reverse engineering and got an account warning.

He who is a ripper off-er cannot be ripped off.
