Hacker News: zaphar's comments

Man what a blast from the past. I was on that team before it got shut down.

Maybe it's time for Fossil to get another look... It's effectively distributed code, wiki, and issue tracking, all in the same tool.

Every time Fossil comes up, people's big objection is that you can't squash commits. Personally, I'm fine with that - I tend to agree with Hipp that the repo history should not sacrifice truth for the sake of pettiness in the timeline. But a lot of people seem to disagree, which limits the audience for Fossil. I use Fossil for my own projects but I wouldn't expect it to become big like git is.

Mailing patches is the same as squashing commits. The Linux kernel would be much harder to maintain without messy history being carefully distilled down to well crafted patches.

But mailing patches is a pain in the ass. VCSes should support squashing and rebasing.
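For what it's worth, the squash workflow people are asking for is a couple of commands in git. A minimal sketch (the throwaway repo, file names, and identity config below are just for the demo):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
# git needs an identity; use a throwaway one for this sketch
git config user.email "dev@example.com"
git config user.name "Dev"

git commit --allow-empty -qm "initial"
echo "draft" > feature.txt && git add feature.txt
git commit -qm "wip: first attempt"
echo "final" > feature.txt && git add feature.txt
git commit -qm "wip: fix typo"

# squash the two wip commits into one well-crafted commit,
# the same distillation patch mailing forces on you
git reset --soft HEAD~2
git commit -qm "feature: add feature.txt"

git log --oneline  # now just "initial" plus the squashed feature commit
```

`git rebase -i` does the same thing interactively; the `reset --soft` form is just easier to show non-interactively.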


You essentially have to run in Google's cloud to use them, and that probably limits their ability to break out. Anthropic might be doing this deal as a way to shore up its supply chain and the cost of both inference and training by leveraging Google's hardware and chip-manufacturing expertise.

Several customers, like Citadel, run TPUs in their own datacenters (closer to exchanges).

Every TPU that's been made is in use and sold at a high margin; demand is not the issue.

Google does have a sort of temporary moat. They have a much better hardware supply line story than anyone else and the revenue to maintain that edge indefinitely.

This is the thing - Google is a real company with a well-established business, money of their own, hardware, server farms, etc. ChatGPT and Anthropic have none of that in the same way Google does. They have an incentive to lie and 'fake it till you make it' so they can get out of the 'risk zone' of collapsing back in on themselves. Google can throw money at Gemini all day.

That may be true for OpenAI, less so for Anthropic, which has much better margins. Both of these companies' CEOs have said the same in public.

No doubt Google currently has the better business. But the same argument could have been made about Instagram or WhatsApp before Facebook (now Meta) acquired them.


If AI is commoditising, who is Bahrain and who are the Saudis?

The company with the access to cheap and plentiful energy and the real estate to build data centers will be Saudi Arabia in your analogy.

This is why SpaceX could be a dark horse in this race. Putting compute in space is expensive but so is building a data center in the US.


> Putting compute in space is expensive but so is building a data center in the US.

You know what's also really hard in a vacuum? Dissipating heat.


> You know what's also really hard in a vacuum? Dissipating heat

Correct. The economics of space-based DCs comes down to permitting delays versus radiator mass.

At ISS-weight radiators (12 to 15 W/kg (EDIT: kg/kW)), you need almost decade-long delays on the ground (or 10+ percent interest rates) to make lifting worthwhile. Get down to current state-of-the-art in the 5 to 10 W/kg (EDIT: kg/kW) range, however, and you only need permitting delays of 2 to 3 years.

If there is a game-changing start-up waiting to be built, it's in someone commercialising a better vacuum-rated radiator.
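The trade-off can be sketched numerically: the delay sets the break-even point where the interest cost of idle ground capital equals the cost of lifting the radiator mass. Everything below is an illustrative back-of-the-envelope, not the parent's actual model; the launch price, ground capex per kW, and interest rate are made-up assumptions.

```python
import math

def breakeven_delay_years(radiator_kg_per_kw: float,
                          launch_usd_per_kg: float,
                          ground_capex_usd_per_kw: float,
                          annual_rate: float) -> float:
    """Years of ground delay at which the interest cost of idle capital
    equals the cost of lifting the radiator mass to orbit.
    Solves: capex * ((1+r)^y - 1) = radiator_mass * launch_price, for y."""
    launch_cost_per_kw = radiator_kg_per_kw * launch_usd_per_kg
    return (math.log1p(launch_cost_per_kw / ground_capex_usd_per_kw)
            / math.log1p(annual_rate))

# Hypothetical inputs: $500/kg launch, $12,000/kW ground capex, 8% rates.
for kg_per_kw in (15, 12, 10, 5):  # ISS-class down to state-of-the-art
    y = breakeven_delay_years(kg_per_kw, 500, 12_000, 0.08)
    print(f"{kg_per_kw:>2} kg/kW radiator -> break-even at ~{y:.1f} years of delay")
```

The qualitative shape matches the argument: halving radiator mass per kW shortens the permitting delay needed to justify going to orbit, and higher interest rates shorten it further.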


Would you want more wattage per kg for a better radiator?

Yes! Thank you–fixed.

Putting it in a central global location makes a lot of sense, just like hub airports.

Saudi will host the biggest data centers in the world


What does that mean?

> What does that mean?

I really couldn't have been more obscure, could I? :P

In 1932, "the first oil field in the Persian Gulf outside of Iran" was discovered in Bahrain [1]. (The same year Saudi Arabia announced unification [2].)

In the end, Saudi Arabia had larger reserves and wound up geopolitically dominating its first-moving rival. In commodities, the game tends to be about scale, in part through land grabbing, less about who got there first.

To close the analogy, if AI does wind up commoditised, the layers at which that commodity is held are probably between power and compute [3]. So if AI commoditises (commodifies?), Google selling compute (and indirectly power) to Anthropic and OpenAI is the smarter play than trying to advantage Gemini. (If AI doesn't commoditise, the opposite may be true–Google is supercharging a competitor.)

[1] https://en.wikipedia.org/wiki/Bahrain_Petroleum_Company

[2] https://en.wikipedia.org/wiki/Proclamation_of_the_Kingdom_of...

[3] The alternate hypothesis is it's at distribution.


Plus the whole thing of first mover advantage being a myth, especially in the tech industry

> Plus the whole thing of first mover advantage being a myth, especially in the tech industry

Source? That would be surprising!


https://hbr.org/2005/04/the-half-truth-of-first-mover-advant...

https://static1.squarespace.com/static/5654eb6ee4b0e19716ec5...

Showing how old I am with that reference

A more recent article https://www.productplan.com/learn/first-mover-advantage-fast...

I should say it's "mostly" a myth; there are some fleeting competitive advantages to being a first mover, but a lot of them don't apply well to tech companies and there isn't strong historical evidence supporting it.


Why? Being a first mover only counts for something if it can yield exclusivity that is durable... you should know this, being a VC and all. Real options - hello?

If you want to benefit massively from being a first mover, you had better do the work of figuring out how you are going to acquire exclusivity that lasts long enough to keep most firms out.


I believe they were drawing a parallel to oil commoditization, but that's as far as I got.

The app layer is Bahrain.

Running AI at a loss long enough to kill the competition would run afoul of antitrust laws. Even more so since they’re bundling their AI products with their search monopoly.

Although I doubt this will stop them if they think it’s advantageous…


Lower real operating costs aren't the same thing as below-cost pricing.

US law here is nuanced. Good quick primer https://www.ftc.gov/advice-guidance/competition-guidance/gui...


I thought that these types of antitrust laws are in no way enforced anymore in the tech industry, and that it's been that way for decades. I mean, the sheer existence of Google shows that, right? What about Maps, Mail, Books... basically everything apart from Search? Why would an AI Mode as one category of Search results be any different? They're not actively promoting Gemini in those search results. They're simply augmenting it with this new tool that exists now.

Yes, antitrust is very much theatre nowadays.

As long as it furthers American interests globally, monopoly is fine. Other countries need to take notice and start picking national winners in order to compete with the large American big-tech firms.


Eh, I think this is actually not a specifically American thing. More of a neo-liberal mindset. Competition may be good in the long term. But a monopoly now may mean more money in your pocket now. The tech giants definitely give the US some geo-political power in some cases but in general the US would be better off with more competition.

ed: @er2d, can't reply to your comment for some reason, so doing it here: I don't agree. In theory a monopoly decreases the necessity for R&D. Of course, this becomes more complex if the R&D is funded or steered by the state. But look at the current state of LLMs. There is fierce competition between 3 US companies, but geopolitically it's the same as if there were one monopoly. The US being the clear technological leader in an industry is not dependent on that industry being a domestic monopoly.

And for the Europe comment: also don't agree. Look at Boeing & Airbus. Both are companies where the US & EU have decided that they need to ensure the existence of a domestic airplane manufacturer. So in these cases they support these companies (often in violation of international trade laws). But it has nothing to do with monopolies. If a state decides to support a company to ensure its existence, a monopoly is the logical consequence and not the aim, because if that industry were profitable it wouldn't need to be supported in the first place.

But all these tech companies are not in industries that would move off-shore or stop existing because they're not profitable enough, so it's an entirely different setting.


Nope, the reason for a monopoly is incentives for R&D and innovation.

The US understands that and allows it to happen as the former yields a compounding effect of power.

European states certainly don't get this.


You’re wrong, actually. I suggest you read a book on industrial organisation and why monopoly is a more efficient market structure in relation to incentives for R&D.

Why do people comment on stuff they barely have an understanding of? Comical. People like you create noise.


TSMC ?

Airbus ?


Are you claiming they are tech firms in the manner of an Apple, Google, etc.?

lol


> run afoul of antitrust laws

Now, that’s a name I haven’t heard in a long time.


> antitrust laws. Even more so since they’re bundling their AI products with their search monopoly.

Couldn't this just be framed/spun as using search data for training? I don't see it being bundled enough to run afoul of antitrust.


> Running AI at a loss long enough to kill the competition would run afoul of antitrust laws.

Running at a loss long enough to kill the competition is basically the name of the game these days.

When Uber started, they were basically setting VC money on fire by selling rides at a loss to destroy the taxi market.


Who's going to enforce antitrust laws in this environment, pray tell?

>would run afoul of antitrust laws

Buwahahahahahahahhahah

They drop a little cash on some shitcoin the president controls and those problems go away.


I don't like aspartame because it's sickeningly sweet. I couldn't care less whether it's healthy or not.

No, it isn't. Twitter was absolutely brilliant marketing. It perfectly encapsulated what the site was at the time.

X is just a letter the current owner likes. It has absolutely no relevance to what the site does or is for.


I worked at Google. k8s does not really look at all like what they used internally when I was there, aside from sharing some similar-looking building blocks.

Yeah, but is the internal tool simpler? I'd be surprised.

Simpler to use? yes. Simpler under the hood? No.

If increasing spending had almost no impact over time, why would cutting spending have an impact?

If filling a leaky bucket had almost no impact over time, why would stopping filling the bucket have an impact?

But filling a leaky bucket does have an impact. You just have to fill it faster than it empties. Which is probably your point.

My point is different. Study after study shows that, above a specific floor, spending has almost no impact on educational outcomes. The correlation is weak enough that you can conclude both that there is likely no leak and that extra filling has no effect.

The stuff that does have an impact is much harder to move the needle on, though, so everyone just scapegoats funding instead. Stuff like building up the nuclear family in an area, increasing income mobility, and holding parents accountable for child outcomes does have a measurable effect but is politically intractable today.


Unfortunately there is much more to the story than a number on a line. Just because you increase spending doesn't mean that the spending isn't earmarked for items like digital projectors and virtual textbooks that have minimal impact on learning outcomes.

So theoretically, if your spending went to hiring more and better teachers, better HVAC, and more, smaller classes, then spending would have an impact, and has experimentally been verified to, especially if you also paired it with getting rid of teachers who don't meet the bar.

But as a practical matter, that is not what happens when a campaign to increase funding for a school happens. The problem is not insufficient money; the problem is not enough skill and political will in how the money is spent.


>If increasing spending had almost no impact over time why would cutting spending have an impact?

Big if true; we should probably cut 100% of spending in that case.

edit: not sure if people are missing the /s, or if people legitimately believe that cutting spending has no impact.


I probably use a different interpretation of Postel's law. I try not to "break" for anything I might receive, where break means "crash, silently corrupt data, and so on". But that just means that I usually return an error to the sender. Is this what Postel meant? I have no idea.

I don't think that interpretation makes that much sense. Isn't it a bit too... obvious that you shouldn't just crash and/or corrupt data on invalid input? If the law were essentially "Don't crash or corrupt data on invalid input", it would seem to me that an even better law would be: "Don't crash or corrupt data." Surely there aren't too many situations where we'd want to avoid crashing because of bad input, but we'd be totally fine crashing or corrupting data for some other (expected) reason.

So, I think not crashing because of invalid input is probably too obvious to be a "law" bearing someone's name. IMO, it must be asserting that we should try our best to do what the user/client means so that they aren't frustrated by having to be perfect.


I actually don't think it's that obvious at all (unless you are a senior engineer). It's like the classic joke:

A QA engineer walks into a bar and orders a beer. She orders 2 beers.

She orders 0 beers.

She orders -1 beers.

She orders a lizard.

She orders a NULLPTR.

She tries to leave without paying.

Satisfied, she declares the bar ready for business. The first customer comes in and orders a beer. They finish their drink, and then ask where the bathroom is.

The bar explodes.

It's usually not obvious when starting to write an API just how malformed the data could be. It's kind of a subconscious bias to sort of assume that the input is going to be well-formed, or at least malformed in predictable ways.

I think the cure for this is another "law"/maxim: "Parse, don't validate." The first step in handling external input is try to squeeze it into as strict of a structure with as many invariants as possible, and failing to do so, return an error.

It's not about perfection, but it is predictable.
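A minimal Python sketch of that maxim, using the bar joke's orders (the type and function names here are my own, not from any particular library):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BeerOrder:
    """A parsed order: if one of these exists, its invariants already hold."""
    quantity: int  # always in 1..99 by construction

def parse_order(raw: object) -> BeerOrder:
    """Squeeze untrusted input into a strict type, or fail loudly."""
    if not isinstance(raw, int) or isinstance(raw, bool):
        raise ValueError(f"not an integer quantity: {raw!r}")
    if not 1 <= raw <= 99:
        raise ValueError(f"quantity out of range: {raw}")
    return BeerOrder(quantity=raw)

# Downstream code only ever sees BeerOrder, so the -1, the lizard, and the
# NULLPTR are all rejected at the boundary instead of exploding the bar.
```

The point of parsing instead of validating is that the evidence of the check is carried in the type: a function taking `BeerOrder` can't even be handed the lizard.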


Hmm. Fair point. It's entirely possible that it's not obvious and that the "law" is almost a "reminder" of sorts to not assume you're getting well-formed inputs.

I'm still skeptical that this is the case with Postel's Law, but I do see that it's possible to read it that way. I guess I could always go do some research to prove it one way or the other, but... nah.

And yes, "Parse, don't validate." is one of my absolute favorite maxims (good word choice, by the way; I would've struggled on choosing a word for it here).


Right, even for senior engineers this can be hard to get right in practice. "Parse, don't validate" is certainly one approach to the problem. Choosing languages that force you to get it right is another.

Yea, I interpret it as the same thing: on invalid input, don't crash or give the caller a root shell or whatever, but definitely don't swallow it silently. If the input is malformed, it should error and stop, NOT try to read the user's mind and conjure up some kind of "expected" output.

I think perhaps a better wording of the law would be: "Be prepared to be sent almost anything. But be specific about what you will send yourself".

I mean, if you are them and trying to detect when people are using your system incorrectly, the detection system is going to be a little bit flaky. How do they prove you aren't violating your ToS by using OAuth for a system they didn't approve that usage for?

The fault here is not with Anthropic. It lies with cowboy coders creating a system that violates a provider's terms of service and creates an adversarial relationship.

