alexpotato's comments on Hacker News

What's the quote:

"A billion here, a billion there and pretty soon you're talking about some real money."


I wrote up a review of books I've read; there are a couple of sci-fi books (and a bunch of other recommendations) here: https://alexpotato.com/books/

Nicely done, thanks!

One of my favorite stories about logistics and quarterly earnings deadlines (from when I worked at a pharmaceutical company):

"In our business, a truckload of various drugs can easily reach $10-$15 million. Now, if that truck arrives at the depot at 11:59pm March 31st then it's first quarter earnings. If it arrives at 12:01am April 1st then it's second quarter earnings.

$15 million is a BIG shortfall, even for us, so you better believe those truck drivers will roll the stop signs, blow red lights etc to make sure that truck arrives before 11:59pm"


That doesn't make any sense, because the revenue is already booked for the sale, which has nothing to do with when the delivery truck actually arrives.

If it is cash based accounting, revenue and expenses are booked when the money changes hands.

If it is accrual-based accounting, it takes place when the legal event triggering the change of ownership of goods in the transaction takes place, which depends on the shipping terms. That could be anywhere from when the goods are available for the buyer's transport agent to pick up at the seller's facility (EXW), to when they are delivered, unloaded, and at the buyer's door (DDP), or any of a variety of places in between (FOB Origin, FOB Destination, and a bunch of other potential shipping terms with their own rules on when ownership—and responsibility—transfer from seller to buyer).


Yes, to add on -

- incoterms (https://www.dhl.com/content/dam/dhl/global/dhl-global-forwar...)

- cash flow statement vs. income statement (accruals)


Those diagrams of risk vs. cost borne are helpful.

It's got to be one of these:

FOB Shipping Point (or Origin): Responsibility transfers to the buyer as soon as the goods leave the seller's premises. You book it when it leaves your loading dock.

FOB Destination: The seller retains risk and costs until the goods reach the buyer’s location.

The sale doesn't happen until the asset transfer occurs. Before that any cash you get from the sale is balanced by the liability to actually produce the good or refund the money. Or more likely you don't get any cash but can't record the bill as accounts receivable. It's not receivable until the transfer point is crossed.


You can account for a transaction that's been placed but not fulfilled. I think when someone orders $15m of goods, you can immediately book $15m accounts receivable (asset) and $15m goods owed (liability) as soon as you have the expectation it will happen. If the transaction falls through, you delete them.

Under GAAP you cannot recognize revenue before the service is delivered or product is shipped. You can accrue revenue that is earned but not yet paid (if you are paid on Net 30, for example), but even if pre-paid you have to book that as deferred revenue, which is a liability (until you ship).
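To make the mechanics concrete, here's a minimal double-entry sketch of the deferred-revenue flow (hypothetical account names and amounts, not any real accounting system; debits are positive, credits negative). The prepayment sits in a deferred-revenue liability until the goods ship, at which point it moves to revenue.

```java
import java.util.HashMap;
import java.util.Map;

// Toy double-entry ledger: every posting debits one account and
// credits another, so the ledger always balances to zero overall.
class Ledger {
    private final Map<String, Long> balances = new HashMap<>();

    void post(String debitAccount, String creditAccount, long amount) {
        balances.merge(debitAccount, amount, Long::sum);
        balances.merge(creditAccount, -amount, Long::sum);
    }

    long balance(String account) {
        return balances.getOrDefault(account, 0L);
    }
}

public class DeferredRevenueDemo {
    public static Ledger run() {
        Ledger ledger = new Ledger();
        // Customer prepays $15M on March 30th: cash goes up, but the
        // offset is a liability (deferred revenue), not revenue.
        ledger.post("Cash", "DeferredRevenue", 15_000_000L);
        // Goods ship / ownership transfers on April 1st: the liability
        // is extinguished and revenue is recognized, in Q2 not Q1.
        ledger.post("DeferredRevenue", "Revenue", 15_000_000L);
        return ledger;
    }

    public static void main(String[] args) {
        Ledger l = run();
        System.out.println("Revenue balance: " + l.balance("Revenue"));
    }
}
```

Before the second posting, `Revenue` is still zero and the whole $15M sits in `DeferredRevenue`, which is the point of the parent comment.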

There's no deleting anything in accounting.

As someone with a partner who’s an accountant, I love seeing technologists be confidently wrong about accounting fundamentals vs. the type of technicalities that she has to deal with. Your comment highlights the absurdity of their confidence; kudos.

This is not correct. A business this big would definitely be using accrual accounting (not cash), which generally means you count the revenue when the actual ownership transfers to the buyer. Since the truck was operated by the seller, the transfer of ownership is almost certainly counted as happening when the buyer receives the goods.

Rewrite the anecdote with the truck racing to the supplier to make the pickup on time.

Accounts receivable, revenue, and cash are related, but separate, accounting items.

Ya but op's anecdote is cute and funny.

That’s not why it doesn’t make sense. It doesn’t make sense because in the forward guidance you’d be able to say you expect the $15 million coming in.

Cash vs Accrual

How many F500 companies use cash accounting? How many public companies altogether?

Over the long run, companies that spend resources on this micromanagement of earnings probably are not seeing the forest for the trees. I cringe thinking about how much time and top talent at a company is spent preparing for earnings rather than spending those hours improving the business itself.

I worked at a previous employer, a listed company, where a single $6MM order of hardware being pushed out a week made the quarterly P&L positive. I'm absolutely sure the same situation occurred every other quarter as well in some part of the business I didn't see.

Sounds more realistic than a low level truck driver running stop signs.

This kind of deal timeline management happens at all companies. It's why contracts get structured with complicated pricing, to make it easier for revenue recognition to occur in the quarter it's supposed to. Even if the timeline moves from 3 months to 6, it's still going to be a huge focus area for a lot of people at every company.

This is why Netflix broke up the final season of Stranger Things in such a weird way... they wanted new episodes at the end of quarters, to have good subscriber numbers for the quarter report

Likewise, if you know you've already got the current quarter in the bag, but the next quarter is looking soft, you tell that truck driver to slow down!

If you can't be arsed to ship it before the last day of the reporting period, it's your fault, and asking the truck driver to drive like a maniac to compensate is amateurish and dangerous.

That's not something I'd be proud of.


It's an extreme example that shortens the time frame and shrinks the cast of characters to illustrate the point.

Substitute "telling the truck driver to run stop signs" with "order the factory to increase production", it's the same thing.


Or, sometimes, you order the factory to _reduce_ output to 50% of what it can do for the last week of Q1 so you don't have excess unsold inventory on the books.

Then in Q2, you panic because you don't have enough inventory, so you order the factory to produce at 150% to catch up. Both 50% and 150% are inefficient factory states; if you weren't thinking about snapshot reporting you'd have just let it run at 100% and your Q1+Q2 results would be overall better.

I have personally seen this happen at a household-name Fortune 50 company. It's insane and causes real damage to the business in many ways.


Early in my career I worked at a place where the sales people would half-joke about signing deals on December 40th -- to claim it in the previous quarter/year.

December 40th is the last day of Q5.

Okay, but what's to say it won't be the same, or even worse, on a 6-month reporting schedule?

As the time period gets longer, the more likely it is that the numbers represent the true performance of the business rather than randomness. That has to be balanced against the fact that investors get less frequent updates, i.e. the information is now potentially 6 months out of date rather than 3 months at worst.

But then it's just a judgment call about the relative benefit of each. You could argue that with modern accounting systems, modern companies could deliver weekly or even daily earnings, which would give investors much more timely information, and the high frequency would probably mean it wouldn't be worth the effort for management to fudge the numbers to bring forward or delay revenue by one day or one week.

There would be a lot more variance in the numbers if they were daily, but that's a good thing: it would just reflect the underlying randomness, and investors could decide when the accumulated trend over a period of time is meaningful or not, instead of management wasting time massaging numbers into a fairy tale of steady growth.

In every sales-led company quarter end is a shitshow. It'll be even worse if there's only one chance to bring the numbers back in instead of 2 or 3. It's used to put pressure on sales teams, but the net result over the year is never good because it sours relationships and reduces overall deal value.

The best thing would be continuous daily or weekly reporting with no defined year end. Unfortunately the entire global system of tax and accounting is set up around annual reporting, so change is impossible.


How is 2 data points a year "representing the true performance of the business" but 4 data points a year is randomness?

Do you also sample CPU usage % less frequently when you want to be sure about usage? That makes no sense at all.


To be fair, this is a side effect of financial deadlines in general. A business reporting annually will still try to beat that deadline right up to the last second.

In this particular example a single truckload would be less significant annually than quarterly though.


General Electric has a history of using that exact trick... just with jet engines and power generators and medical devices that can represent much larger amounts of revenue.

Not just - Welch pioneered using GE Capital to “smooth” earnings - lots of judgment calls in those finance companies in the early 90s.

GE's latest trick is to roll long term maintenance contracts into the price of the product and then sell off the unit holding the bag on the maintenance contract. Very shady but very clever.

"We risk everyone's lives in order to have a barely just-in-time warehouse on shipments in the low 8 figures."

Cool.

What company is this?


It's very clearly an analogy.

I don’t get it, why not just say “oh we were 15 million short that quarter and 15 million ahead this one so it’s all good”

Like why get hung up on these arbitrary cutoffs


This is what confuses me about tech sales too. Why do I always get a discount for buying right at the end of the quarter? Seems like you could just get ahead on the next one.

Because of how performance targets are defined. If sales figures “fell” or didn’t grow at the expected rate it can be worse than having a few more points over next quarter.

Why do they define performance targets that way then? Are they making it hard on purpose for some reason?

You have to set a deadline at some point. Now, I think any rational manager would agree if the sale shows up on April 1 instead of March 31st, that's totally fine. But HR/Finance systems aren't always rational.

Because many people are lazy and need deadlines or else they won't work.

It, unfortunately, always comes down to people being dumb. I wish there were another answer, but that's always the essence of the issue.

I've seen someone trying to explain to an investor that the numeric change between quarters wasn't meaningful. Investor seemed to understand explanation, but was clearly deeply unhappy about the chart not being tidy. People respond to that by trying to make the chart tidy, even if it means losing the company money at times.


I work as a DevOps/SRE and have been doing it in FinTech (banks, hedge funds, startups) and Crypto (an L1 chain) for almost 20 years.

My thoughts on vibe coding vs production code:

- vibe coding can 100% get you to a PoC/MVP probably 10x faster than pre LLMs

- This is partly b/c it is good at things I'm not good at (e.g. front end design)

- But then I need to go in and double check performance, correctness, information flow, security etc

- The LLM makes this easier but the improvement drops to about 2-3x b/c there is a lot of back and forth + me reading the code to confirm etc (yes, another LLM could do some of this but then that needs to get setup correctly etc)

- The back and forth part can be faster if e.g. you have scripts/programs that deterministically check outputs

- Testing workloads that take hours to run still take hours to run with either a human or LLM testing them out (aka that is still the bottleneck)

So overall, this is why I think we're getting wildly different reports on how effective vibe coding is. If you've never built a data pipeline and a LLM can spin one up in a few minutes, you think it's magic. But if you've spent years debugging complicated trading or compliance data pipelines you realize that the LLM is saving you some time but not 10x time.


There’s a big gap between reality and the influencer posts about LLMs. I agree with you that LLMs do provide some significant acceleration, but the influencers have tried to exaggerate this into unbelievable numbers.

Even non-influencers are trying to exaggerate their LLM skills as a way to get hired or raise their status on LinkedIn. I rarely read the LinkedIn social feed but when I check mine it’s now filled with claims from people about going from idea to shipped product in N days (with a note at the bottom that they’re looking for a new job or available to consult with your company). Many of these posts come from people who were all in on crypto companies a few years ago.

The world really is changing but there’s a wave of influencers and trend followers trying to stake out their claims as leaders on this new frontier. They should be ignored if you want any realistic information.

I also think these exaggerated posts are causing a lot of people to miss out on the real progress that is happening. They see these obviously false exaggerations and think the opposite must be true, that LLMs don’t provide any benefit at all. This is creating a counter-wave of LLM deniers who think it’s just a fad that will be going away shortly. They’re diminishing in numbers but every LLM thread on HN attracts a few people who want to believe it’s all just temporary and we’re going back to the old ways in a couple years.


> I rarely read the LinkedIn social feed but when I check mine it’s now filled with claims from people about going from idea to shipped product in N days (with a note at the bottom that they’re looking for a new job or available to consult with your company).

This always seems to be the pattern. "I vibe coded my product and shipped it in 96 hours!" OK, what's the product? Why haven't I heard of it? Why can't it replace the current software I'm using? So, you're looking for work? Why is nobody buying it?

Where is the Quicken replacement that was vibecoded and shipping today? Where are the vibecoded AAA games that are going to kill Fortnite? Where is the vibecoded Photoshop alternative? Heck, where is the vibecoded replacement for exim3 that I can deploy on my self hosted E-mail server? Where are all of the actual shipping vibecoded products that millions of users are using?


I found one example of this going very wrong on reddit the other day -

https://www.reddit.com/r/selfhosted/comments/1rckopd/huntarr...

One redditor security reviews a vibe coded project


Wow, great example, and great example of what these fakers do when called out. Summary:

The maintainer, instead of listening to the security researcher and accepting feedback about his development process, instead:

1. Denied the problem

2. Censored discussion of the problem

3. Banned the people calling out the problem

...and then when the security issues were posted more publicly and got traction...

4. Made the subreddit private

5. Wiped and deleted his account

6. Wiped and deleted the GitHub repo

7. Took the project's web site off the web

Absolutely wild and unhinged behavior.


The self hosted reddit has been inundated with slop and is trying to ban it. It's not a good idea to run anyone else's vibe code!

holy fuck this is awesome.. I haven't laughed this hard in a while

I agree with your general point but ... "Where are the vibecoded AAA games". A game dev team is typically less than 15% programmers. Most of the team are artists, followed by game designers. Maybe someday those will be replaced too but at the moment, while you can get some interesting pictures from stable-diffusion techniques it's unlikely to make a cohesive game and even prompting to create all of it would still take many person years.

That said, I have had some good experiences getting a few features from zero to working via LLMs and it's helped me find lots of bugs far easier than my own looking.

I can imagine a vibe coded todo app. I can also kind of imagine a vibe coded GIMP/Photoshop, though it would still take several person-years, prompting through each and every feature.


> Where are all of the actual shipping vibecoded products that millions of users are using?

Claude Code and OpenClaw - they are vibecoded. And I believe more coming.


Claude Code is not vibecoded: it is made using Claude Code, but it is not vibecoded using Claude Code.


But it's like crypto then, good for buying other crypto, or illegal stuff.

Also people are using CC for the cheap access to the model, otherwise they'd be using opencode.


Yeah, I really wonder if someone would trust to do their taxes in a vibe-coded version of Turbotax...

Do you really need Turbotax? Just feed it the tax code, your financial data, and the relevant forms and it should be good to go. Now we have freed up the labor of accountants so they can go be productive in another segment of society. /s

Looking from outside the US tax system, I feel taxes are intentionally complex to keep some people employed and to hide from questions.

Well, you still need to provide some data: say you purchased some food, was that a personal purchase or was it business-related (e.g. a meal with a client)? Software can help with that decision making.

But even if human interaction wasn't needed, the question still stands, would you trust an LLM to calculate your tax liability just by feeding it data?


I regret only having one upvote for this.

I note that games are mostly art assets and things like level design, and players are already happy to instantly consign such products to the slop bin.

The whole thing is "market for lemons": app stores filling with dozens of indistinguishable clones of each product category will simply scare users off all of them.


"I come from a state that raises corn and cotton and cockleburs and Democrats, and frothy eloquence neither convinces nor satisfies me. I am from Missouri. You have got to show me."

The “store on the chain” thing turned out to be a fad in terms of technology, even though it made a lot of money (in the billions and more) to some people via the crypto thing. That was less than 10 years ago, so many of us do remember the similarities of the discourse being made then to what’s happening now.

With all that said, today's LLMs do seem to provide a bit more value compared to the blockchain thing. For example, OCR/.pdf parsing is, I'd say, a solved problem now thanks to LLMs, which is nice.


>Many of these posts come from people who were all in on crypto companies a few years ago.

This matches my observation too. There seems to be a certain "type" of people like this. And it's not just people looking for work.

My guess is either they have super low critical thinking, a very cynical view of the world where lies and exaggeration are the only way to make it, or something more pathological (narcissism etc).


The "type" is simply the get-rich-quick schemers.

I have a relative who was late to crypto, late to drop shipping, late to carbon credits, but is now absolutely all-in on AI as his ticket out. It honestly depresses the hell out of me trying to talk to him because everything is about money and getting rich.

People like this don't care about underlying technologies or learning past the most basic surface level of understanding.


Day 7 of using Claude Code here are my takes...

"Day 7" would be amazing - all that I see YouTube recommending is "I tried it for 24 hours".

I was listening to an "expert" on a podcast earlier today up until the point where the interviewer asked how long his amazing new vibe-coded tooling has been in production, and the self-proclaimed expert replied "actually we have an all-hands meeting later today so I can brief the team and we will then start using the output..."


I'm building a Java HFT engine and the amount of things AI gets wrong is eye-opening. If I didn't benchmark everything I'd end up with a much less optimized solution.

Examples: AI really wants to use Project Panama (FFM), and while that can be significantly faster than traditional OO approaches, it is almost never the best. And I'm not talking about using deprecated Unsafe calls; I'm talking about primitive arrays being better for Vector/SIMD operations on large sets of data, and NIO being better than FFM + mmap for file reading.

You can use AI to build something that is sometimes better than what someone without domain specific knowledge would develop but the gap between that and the industry expected solution is much more than 100 hours.
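As a toy illustration of the primitive-array point (a sketch, not the actual engine): a flat double[] gives HotSpot a contiguous, predictable loop that its auto-vectorizer can turn into SIMD instructions, with zero per-element allocation and no FFM handle calls on the hot path.

```java
public class DotProduct {
    // Contiguous primitive arrays: no boxing, no pointer chasing.
    // HotSpot can auto-vectorize this loop on most platforms.
    static double dot(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] a = {1.0, 2.0, 3.0};
        double[] b = {4.0, 5.0, 6.0};
        System.out.println(dot(a, b)); // 32.0
    }
}
```

Whether this actually beats an FFM/MemorySegment version for a given workload is exactly the kind of thing that has to be benchmarked, which is the parent comment's point.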


AI is extremely good at the things that it has many examples for. If what you are doing is novel then it is much less of a help, and it is far more likely to start hallucinating because 'I don't know' is not in the vocabulary of any AI.

> because 'I don't know' is not in the vocabulary of any AI.

That is clearly false. I’m only familiar with Opus, but it quite regularly tells me that, and/or decides it needs to do research before answering.

If I instruct it to answer regardless, it generally turns out that it indeed didn’t know.


I haven't had that at all, not even a single time. What I have had is endless round trips with me saying 'no, that can't work' and the bot then turning around and explaining to me why it is obvious that it can't work... that's quite annoying.

Try something like:

> Please carefully review (whatever it is) and list out the parts that have the most risk and uncertainty. Also, for each major claim or assumption can you list a few questions that come to mind? Rank those questions and ambiguities as: minor, moderate, or critical.

> Afterwards, review the (plan / design / document / implementation) again thoroughly under this new light and present your analysis as well as your confidence about each aspect.

There's a million variations on patterns like this. It can work surprisingly well.

You can also inject 1-2 key insights to guide the process. E.g. "I don't think X is completely correct because of A and B. We need to look into that and also see how it affects the rest of (whatever you are working on)."


Ok! I will try that, thank you very much.

Of course! I get pretty lazy, so my follow-up is often something like:

"Ok let's look at these issues 1 at a time. Can you walk me through each one and help me think through how to address it"

And then it will usually give a few options for what to do for each one as well as a recommendation. The recommendation is often fairly decent, in which case I can just say "sounds good". Or maybe provide a small bit of color like: "sounds good but make sure to consider X".

Often we will have a side discussion about that particular issue until I'm satisfied. This happen more when I'm doing design / architectural / planning sessions with the AI. It can be as short or as long as it needs. And then we move on to the next one.

My main goal with these strategies is to help the AI get the relevant knowledge and expertise from my brain with as little effort as possible on my part. :D

A few other tactics:

- You can address multiple at once: "Item 3, 4, and 7 sound good, but lets work through the others together."

- Defer a discussion or issue until later: "Let's come back to item 2 or possibly save for that for a later session".

- Save the review notes / analysis / design sketch to a markdown doc to use in a future session. Or just as a reference to remember why something was done a certain way when I'm coming back to it. Can be useful to give to the AI for future related work as well.

- Send the content to a sub-agent for a detailed review and then discuss with the main agent.


Eh… I am not sure that translates to "I don't know".

IDK would require the LLM to be aware of the frequency of cases seen in its own training.

I can see this working as a risk ranking, which is certainly worth trying in its own right.

Does it actually say “I don’t know?”


"I don't know" can be added to the vocabulary depending on the technique being used.

There are so many overlapping and also unique approaches to software development beyond vibe coding and ai driven software development.


I think the main issue is treating the LLM as an unrestrained black box; there's a reason nobody outside tech trusts LLMs so blindly.

The only way to make LLMs useful for now is to restrain their hallucinations as much as possible with evals, and these evals need to be very clear about the goals you're optimizing for.

See karpathy's work on the autoresearch agent and how it carries out experiments; it might be useful for what you're doing.


> there's a reason nobody outside tech trusts LLMs so blindly.

Man, I wish this were true. I know a bunch of non-tech people who just trust random shit that chatgpt made up.

I had an architect tell me "ask chatgpt" when I asked her the difference between two industrial standard measures :)

We had politicians share LLM crap, researchers doing papers with hallucinated citations..

It's not just tech people.


We were working on translations for Arabic and in the spec it said to use "Arabic numerals" for numbers. Our PM said that "according to ChatGPT that means we need to use Arabic script numbers, not Arabic numerals".

It took a lot of back-and-forths with her to convince her that the numbers she uses every day are "Arabic numerals". Even the author of the spec could barely convince her -- it took a meeting with the Arabic translators (several different ones) to finally do it. Think about that for a minute. People won't believe subject matter experts over an LLM.

We're cooked.


Kind of a tangent but that did make me curious about how numbers are written in Arabic: https://en.wikipedia.org/wiki/Eastern_Arabic_numerals

I guess "Western Arabic" would have been more precise.

The architect should have required Hindu numbers. Same result, but even more confusion.

Man this is maddening.

Honestly I think we're just becoming more aware of this way of thinking. It's certainly exacerbated it now that everyone has "an expert" in their pocket.

It's no different than conspiracy theorists. We saw a lot more with the rise in access to the internet. Not because they didn't put in work to find answers to their questions, but because they don't know how to properly evaluate things and because they think that if they're wrong then it's a (very) bad thing.

But the same thing happens with tons of topics, and it's way more socially acceptable. Look how everyone has strong opinions on topics like climate, rockets, nuclear, immigration, and all that. The problem isn't having opinions or thoughts, but the strength of them compared to the level of expertise. How many people think they're experts after a few YouTube videos or just reading the intro to the wiki page?

Your PM is no different. The only difference is the things they believed in, not the way they formed beliefs. But they still had strong feelings about something they didn't know much about. It became "their expert" vs "your expert" rather than "oh, thanks for letting me know". And that's the underlying problem. It's terrifying to see how common it is. But I think it also leads to a (partial) solution. At least a first step. But then again, domain experts typically have strong self doubt. It's a feature, not a bug, but I'm not sure how many people are willing to be comfortable with being uncomfortable


And the worst part is, these people don't even use the flagship thinking models, they use the default fast ones.

There’s a possibility the same people might believe anything they read on social media or via Google and it’s something worthy of attention.

In my experience, people outside of tech have nearly limitless faith in AI, to the point that when it clashes with traditional sources of truth, people start to question them rather than the LLM.

I am curious about what causes some to choose Java for HFT. From what I remember, the amount of virgin sacrifices and dances with wolves one must do to approach native speed in this particular area is just way too much development time overhead.

Probably the same thing that makes most developers choose a language for a project: it's the language they know best.

It wasn't a matter of choosing Java for HFT, it was a matter of selecting a project that was a good fit for Java and my personal knowledge. I was a Java instructor for Sun for over a decade, I authored a chunk of their Java curriculum. I wrote many of the concurrency questions in the certification exams. It's in my wheelhouse :)

My C and assembly is rusty at this point so I believe I can hit my performance goals with Java sooner than if I developed in more bare metal languages.


"HFT" means different things to different people.

I've worked at places where ~5us was considered the fast path and tails were acceptable.

In my current role it's less than a microsecond packet in, packet out (excluding time to cross the bus to the NIC).

But arguably it's not true HFT today unless you're using FPGA or ASIC somewhere in your stack.


The one person who understands HFT, yeah. "True" HFT is FPGA now, and those trades are basically dead because nobody has such stupid order execution anymore, either by getting better themselves or by using former HFTs' (Virtu) new order execution services.

So yeah, there's really no HFT anymore; it's just order execution, and some algo trades want more or less latency, which merits varying levels of technically squeezing latency out of systems.


Software HFT? I see people call Python code HFT sometimes so I understand what you mean. It's more in-line with low latency trading than today's true HFT.

I don't work for a firm so don't get to play with FPGAs. I'm also not co-located in an exchange and using microwave towers for networking. I might never even have access to kernel networking bypass hardware (still hopeful about this one). Hardware optimization in my case will likely top out at CPU isolation for the hot path thread and a hosting provider in close proximity to the exchanges.

The real goal is a combination of eliminating as much slippage as possible, making some lower timeframe strategies possible and also having best class back testing performance for parameter grid searching and strategy discovery. I expect to sit between industry leading firms and typical retail systematic traders.


> AI really wants to use Project Panama

It would help if you briefly specified the AI you are using here. There are wildly different results between using, say, an 8B open-weights LLM and Claude Opus 4.6.


I've been using several. LM Studio and any of the open-weight models that can fit in my GPU's RAM (24GB) are not great in this area. The Claude models are slightly better but not worth the extra cost most of the time, since I typically have to spend almost the same amount of time reworking and re-prompting, plus it's very easy to exhaust credits/tokens. I mostly bounce back and forth between the Codex and Gemini models right now, and this includes using pro models with high reasoning.

Maybe a silly question, but why Java? As a C# guy, my experience with AI is it hasn't been great with it, and I'd suspect similar for Java. I'd probably go with Rust, which my own efforts with AI has done really well with, even if I'm far from a Rust expert.

Then you list all of the things you want it not to do and construct a prompt to audit the codebase for the presence of those things. LLMs are much better at reviewing code than writing it so getting what you want requires focusing more on feedback than creation instructions.

Wouldn't Java always lose in terms of latency against a similarly optimized native code in, let's say, C(++)?

Not necessarily. Java can be insanely performant, far more than I ever gave it credit for in the first decade of its existence. There has been a ton of optimization and you can now saturate your links even if you do fairly heavy processing. I'm still not a fan of the language but performance issues seem to be 'mostly solved'.

"Saturating your links" is rarely the goal in HFT.

You want low deterministic latency with sharp tails.

If all you care about is throughput then deep pipelines + lots of threads will get you there at the cost of latency.


You can achieve optimized C/C++ speeds, you just can't program the same way you always have. Step 1, switch your data layout from Array of Structures to Structure of Arrays. Step 2, after initial startup switch to (near) zero object creation. It's a very different way to program Java.

You have to optimize your memory usage patterns to fit in CPU cache as much as possible, which is something typical Java developers don't consider. I have a background in assembly and C.
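As a rough sketch of the Structure-of-Arrays style described above (the `OrderBook` class and its fields are hypothetical, for illustration only): instead of an array of `Order` objects scattered across the heap, you keep parallel primitive arrays, pre-sized at startup so the hot path never allocates:

```java
public class SoaDemo {
    // Structure of Arrays: each field is a contiguous primitive array,
    // so a scan over one column stays in cache and never chases pointers.
    static final class OrderBook {
        final long[] ids;
        final double[] prices;
        final int[] qtys;
        int size;

        OrderBook(int capacity) {
            // Allocate everything once, up front; nothing on the hot path.
            ids = new long[capacity];
            prices = new double[capacity];
            qtys = new int[capacity];
        }

        void add(long id, double price, int qty) {
            ids[size] = id;
            prices[size] = price;
            qtys[size] = qty;
            size++;
        }

        // Hot-path scan touches only the price column: one contiguous array,
        // friendly to prefetching and auto-vectorization.
        double maxPrice() {
            double max = Double.NEGATIVE_INFINITY;
            for (int i = 0; i < size; i++) {
                if (prices[i] > max) max = prices[i];
            }
            return max;
        }
    }

    public static void main(String[] args) {
        OrderBook book = new OrderBook(1024); // sized once at startup
        book.add(1L, 101.5, 10);
        book.add(2L, 99.75, 5);
        System.out.println(book.maxPrice());
    }
}
```

The contrast is with the typical `class Order { long id; double price; int qty; }` array-of-objects layout, where every element is a separate heap allocation with a header and a pointer indirection.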

I'd say it's slightly harder since there is a little bit of abstraction, but most of the time the JIT will produce code as good as C compilers. It's also a niche that often considers any application running on a general-purpose CPU to be slow. If you want industry-leading speed you start building custom FPGAs.


As long as you tune the JVM right it can be faster. But that's a big "if": you have to tune it carefully, and you need to write performant code.

Java has significant overhead: most (if not every) objects are allocated on the heap, can be synchronized on, and carry extra memory and performance cost from being GC-managed. That part is very hard, if not impossible, to tune away.

You program differently for this niche in any language. The hot path (number crunching) thread doesn't share objects with gateway (IO) threads. Passing data between them is off heap, you avoid object creation after warm up. There is no synchronization, even volatile is something you avoid.

> Passing data between them is off heap

How exactly are you passing data? You can pass some primitives without allocating them on the heap. You can use some tiny subset of Java plus the standard library to write high-performance code, but why would you do this instead of using Rust or C++?


In some places I'm using https://github.com/aeron-io/agrona

Strangely, this is one of the areas where I want to use Project Panama, so I might re-implement some of the ring buffer constructs.

You allocate off heap memory and dump data into it. With modern Java classes like Arena, MemoryLayout, and VarHandle it's honestly a lot like C structs.
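For illustration, here's a minimal sketch of that "C struct"-like pattern using the final FFM API (JDK 22+). The `OffHeapQuote` layout and its field names are my own invention, not something from the thread:

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemoryLayout;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.VarHandle;

public class OffHeapQuote {
    // Struct-like layout: struct { long timestamp; double price; }
    static final MemoryLayout QUOTE = MemoryLayout.structLayout(
            ValueLayout.JAVA_LONG.withName("timestamp"),
            ValueLayout.JAVA_DOUBLE.withName("price"));

    // VarHandles bound to each field; coordinates are (segment, base offset).
    static final VarHandle TS =
            QUOTE.varHandle(MemoryLayout.PathElement.groupElement("timestamp"));
    static final VarHandle PX =
            QUOTE.varHandle(MemoryLayout.PathElement.groupElement("price"));

    public static void main(String[] args) {
        // Confined arena: off-heap memory, freed deterministically on close,
        // never scanned or moved by the garbage collector.
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment quote = arena.allocate(QUOTE);
            TS.set(quote, 0L, 1_700_000_000L);
            PX.set(quote, 0L, 101.25);
            System.out.println((long) TS.get(quote, 0L));
            System.out.println((double) PX.get(quote, 0L));
        }
    }
}
```

The read/write sites compile down to plain loads and stores at fixed offsets, which is what makes it feel like working with C structs rather than Java objects.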

I answered "why" in another post in this thread.


> You allocate off heap memory and dump data into it. With modern Java classes like Arena, MemoryLayout, and VarHandle it's honestly a lot like C structs.

My opinion is that no, it is not: declaring and using a C struct is 20x more transparent, cost-efficient, and predictable. And that's just raw C structs, on top of which both C++ and Rust layer lots of additional ergonomics, safety, and expressiveness improvements.


Depends. Many reasons, but one is that Java has a much richer set of 3rd party libraries to do things versus rolling your own. And often (not always) third party libraries that have been extensively optimized, real world proven, etc.

Then there are things like the JIT, which by default does runtime profiling and adaptation.


Java has a huge ecosystem in enterprise dev, but it's very unlikely to have an ecosystem edge in high-performance/real-time compute.

There are actually cases when Java (the HotSpot JVM) runs faster than the same logic written in C/C++ because the JVM is doing dynamic analysis and selective JIT compilation to machine code.

I personally know of an HFT firm that used Java approximately a decade ago. My guess would be they're still using it today given Java performance has only improved since then.

That doesn't mean Java is an optimal or near-optimal choice. The amount of extra effort needed to achieve their goals could be significant.

Optimal in what sense? In the java shops I've worked at it's usually viewed as a pretty optimal situation to have everything in one language. This makes code reuse, packaging, deployment, etc much simpler.

In terms of speed, memory usage, runtime characteristics... sure there are better options. But if java is good enough, or can be made good enough by writing the code correctly, why add another toolchain?


> But if java is good enough, or can be made good enough by writing the code correctly,

"Writing code correctly" here means stripping out 95% of the language's capabilities and the standard library, and writing in what amounts to a different language that looks like C without structs (because structs would be heap-allocated, with cross-thread synchronization and GC overhead).

It's good enough for some tiny algo, but not good enough for anything serious.


It's good enough for the folks who choose to do it that way. Many of them do things that are quite "serious"... Databases, kafka, the lmax disruptor, and reams of performance critical proprietary code have been and continue to be written in java. It's not low effort, you have to be careful, get intimate with the garbage collector, and spend a lot of time profiling. It's a totally reasonable choice to make if your team has that expertise, you're already a java shop, etc. I no longer make the choice to use java for new code. I prefer rust. But neither choice is correct or incorrect.

> Databases, kafka, the lmax disruptor, and reams of performance critical proprietary code have been and continue to be written in java.

Those have a low performance bar. They also mostly became popular because of investment from the Java hype, and Rust either didn't exist or had a weak ecosystem at the time.


I would say that if AI has to make decisions about picking between framework or constructs irrelevant to the domain at hand, it feels to me like you are not using the AI correctly.

I've seen SQL injection and leaked API tokens to all visitors of a website :)

> The LLM makes this easier but the improvement drops to about 2-3x b/c there is a lot of back and forth + me reading the code to confirm etc

This makes sense when you stop viewing the LLM as a "vending machine" for apps and start seeing it as a repository of software deltas.

LLMs aren't just trained on final code; they are trained on the entire history of pull requests, review comments, and issue discussions that move a project from one version to the next.

When I use an LLM now, my workflow has shifted entirely. I’ve stopped trying to be the "coder" and have instead stepped into the role of PR Reviewer and Power User. My job is to point out edge cases, define the spec, and catch regressions—effectively managing a "virtual team" that handles the boilerplate and feature implementation.

Expecting a one-shot 1.0 release is unrealistic because it bypasses the thousand micro-decisions that happen in a real dev cycle. By embracing the "review and refine" loop, I’m becoming a better maintainer, even if that 100-hour gap to a polished product still exists.


This is exactly my experience at Lovable. For some parts of the organization, LLMs are incredibly powerful and a productivity multiplier. For the team I am in, Infra, they're often a distraction and a negative multiplier.

I can't say how many times the LLM-proposed solution to a jittery behavior is adding retries. At this point we have to be even more careful with controlling the implementation of things in the hot path.

I have to say though, giving Amp/Claude Code the Grafana MCP + read-only kubectl has saved me days worth of debugging. So there's definitely trade-offs!


My colleague recently shipped a "bug fix" that addresses a race condition by adding a 200ms delay somewhere, almost completely coded by LLM. LLM even suggests that "if this is not good enough, increase it to 300ms".

That says something about how much some people care about this.


Doubly so, because that's how most people have solved similar problems, which is why the LLM suggests it.

More generally: LLM effectiveness is inversely proportional to domain specificity. They are very good at producing the average, but completely stumble at the tails. Highly particular brownfield optimization falls into the tails.

The magic is testing. Having locally available testing and high throughput testing with high amount of test cases now unlocks more speed.

The test cases themselves become the focus; the LLM usually can't get them right.


How does that test suite get built and validated? A comprehensive and high quality test suite is usually much larger than the codebase it tests. For example, the sqlite test suite is 590x [1] the size of the library itself

1. https://sqlite.org/testing.html


By sweat and tears, and unfortunately, AI can only help so much in those cases. You'll have to have a really concrete idea about what your product is and how it should work.

sqlite is an extreme outlier not a typical example, with regard to test suite size and coverage.

> The magic is testing.

No it is not.

There is no amount of testing that can fix a flawed design.


That is a given, similar to how no amount of implementation can fix a wrong product. Both statements are not very meaningful.

The word "Testing" is a very loaded term. Few non-professionals, or even many professionals, fully understand what is meant by it.

Consider the following: Unit, Integration, System, UAT, Smoke, Sanity, Regression, API Testing, Performance, Load, Stress, Soak, Scalability, Reliability, Recovery, Volume Testing, White Box Testing, Mutation Testing, SAST, Code Coverage, Control Flow, Penetration Testing, Vulnerability Scanning, DAST, Compliance (GDPR/HIPAA), Usability, Accessibility (a11y), Localization (L10n), Internationalization (i18n), A/B Testing, Chaos Engineering, Fault Injection, Disaster Recovery, Negative Testing, Fuzzing, Monkey Testing, Ad-hoc, Guerilla Testing, Error Guessing, Snapshot Testing, Pixel-Perfect Testing, Compatibility Testing, Canary Testing, Installation Testing, Alpha/Beta Testing...

...and I'm certain I've missed dozens of other test approaches.


You forgot hope-driven development and release processes, other optimism-based approaches ("I'm sure it's fine"), and faith-based approaches to testing (ship and pray, ...). Customer-driven involuntary beta testing also comes to mind, and "let's see what happens" zero-day testing before deployment. We also do user-driven error discovery, frequently.

There is no science to testing, no provable best way, despite many people's vehement opinions

Why did you assume I'm talking about a "provable best way"? I meant that it doesn't make sense to talk simply about "testing" without clarifying what one means by it. If you assume that the absence of a "provable best way" implies a lack of utility, let me remind you that there is no "provable best way" for training LLMs either. Does that matter in practice?

> - This is partly b/c it is good at things I'm not good at (e.g. front end design)

Everyone thinks LLMs are good at the things they are bad at. In many cases they are still just giving “plausible” code that you don’t have the experience to accurately judge.

I have a lot of frontend app dev experience. Even modern tools (Claude w/Opus 4.6 and a decent Claude.md) will slip in unmaintainable slop in frontend changes. I catch cases multiple times a day in code review.

Not contradicting your broader point. Indeed, I think if you’ve spent years working on any topic, you quickly realize Claude needs human guidance for production quality code in that domain.


Yes I’ve seen this at work where people are promoting the usage of LLMs for.. stuff other people do.

There’s also a big disconnect in terms of SDLC/workflow in some places. If we take at face value that writing code is now 10x faster, what about the other parts of the SDLC? Is your testing/PR process ready for 10x the velocity or is it going to fall apart?

What % of your SDLC was actually writing code? Maybe time to market is now ~18% faster because coding was previously 20% of the duration.


It’s the Gell-Mann amnesia effect applied to LLM instead of media

> Testing workloads that take hours to run still take hours to run with either a human or LLM testing them out (aka that is still the bottleneck)

Actually, I had some terrible experiences when asking the agent to do something simple in our codebase (like renaming these files and fixing the build scripts and dependencies): it spent much longer than a human would, because it kept running the full CI pipelines to check for problems after every attempted change.

A human would, for example, rely on the linter to detect basic issues, run a partial build on affected targets, etc. to save the time. But the agent probably doesn't have a sense of time elapsed.


Went through something similar recently with database calls.

Co-pilot said something about having too many rows returned and had some complex answer on how to reduce row count.

I just added a "LIMIT 100" which was more than adequate.


Can't this be solved with something like "Don't run any CI commands" in the AGENTS.md?

Except for the times you do want it to run the CI.

LLM issues can often be solved by being more and more specific, but at some point being specific enough is just as time consuming as jumping in and doing it yourself.


>Testing workloads that take hours to run still take hours to run with either a human or LLM testing them out (aka that is still the bottleneck)

Absolutely. Tight feedback loops are essential to coding agents and you can’t run pipelines locally.


This is where I think we need better tooling around tiered validation - there's probably quite a bit you can run locally if we had the right separation; splitting the cheap validation from the expensive has compounding benefits for LLMs.

What I do now is I make an MVP with the AI, get it working. And then tear it all down and start over again, but go a little slower. Maybe tear down again and then go even more slowly. Until I get to the point where I'm looking at everything the AI does and every line of code goes through me.

I concur on the DevSecOps aspect for a more specific reason: if you're failing a pipeline because ThirdPartyTool69 doesn't like your code style or whatever, you can have the LLM fix it. Or get you to 100% test coverage, etc. Or have it update your Cypress/Jest/SonarQube configs until the pipeline passes without losing brain cells doing it by hand. Or have it find a set of dependency versions that passes.

> The LLM makes this easier but the improvement drops to about 2-3x b/c there is a lot of back and forth + me reading the code to confirm etc (yes, another LLM could do some of this but then that needs to get setup correctly etc)

> The back and forth part can be faster if e.g. you have scripts/programs that deterministically check outputs

This is where configuration language like CUE can be useful in complementing LLM [1].

It's the deterministic NLP cousin of the stochastic LLM based on mathematically sound latticed-value logic [2].

[1] Guardrailing Intuition: Towards Reliable AI:

https://cue.dev/blog/guardrailing-intuition-towards-reliable...

[2] The Logic of CUE:

https://cuelang.org/docs/concept/the-logic-of-cue/


Also, now you're reading someone else's code and not everybody likes that. In fact, most self-proclaimed 10x coders I know hate it.

So instead of the 10x coder doing it, the 1x coder does it, but then that factor of 3x becomes 0.3x.


Absolutely. In my experience there are more “good coders” than people who are good at code review/PR/iterative feedback with another dev.

A lot of people are OCD pedants about stuff that can be solved with a linter (but can’t be bothered to implement one) or just “LGTM” everything. Neither provide value or feedback to help develop other devs.


> A lot of people are OCD pedants about stuff that can be solved with a linter (but can’t be bothered to implement one) or just “LGTM” everything. Neither provide value or feedback to help develop other devs.

This may be one of the best quotes on HN in a while.


Thank you, I felt old & cranky today

Isn’t that the reason why people advocate for spec-driven development instead of vibe coding?

Do you ever think that maybe you're biased? I ask because I get the feeling from a lot of professional programmers that they feel like they are better and smarter than everyone and everything else. No matter how good an LLM, or AI in general, gets at programming tasks, people who make a living programming will always have a problem with it. There is going to come a time when you're going to be obsolete. I hate to say it, but it's coming, and the hostility towards the tech isn't going to save your job.

At this point, every programmer who claims that vibecoding doesn't make you at least 10 times more productive is simply lying or, worse, doesn't know how to vibe code. So, you want to tell me that you don't review the code you write? Or that others don't review it? You bring up ONE example with a bottleneck that has nothing to do with programming. Again, if you claim it doesn't make you 10x more productive, you don't know how to use AI; it is that simple. I spin up 10 agents: while 5 are working on apps, 5 do reviews and testing. I am at the end of that workflow and review the code WHILE the 10 agents keep working.

For me it is far more than 10x, but I'm being considerate to the noobs by saying 10x instead of 20x or more.


Just goes to show that most programmers have no idea what most programmers are mostly programming. Great that it works for you, but don't assume that this applies to everyone else.

Can you link to one launched product with users for us?

I can't tell if this is real or a joke.

What exactly are you producing? LinkedIn posts?

I was just chatting with a co-worker who wanted to run an LLM locally to classify a bunch of text. He was worried about spending too many tokens though.

I asked him why he didn't just have the LLM build him a python ML library based classifier instead.

The LLMs are great but you can also build supporting tools so that:

- you use fewer tokens

- it's deterministic

- you as the human can also use the tools

- it's faster b/c the LLM isn't "shamboozling" every time you need to do the same task.


I use Haiku to classify my mail. It's way overkill, but it also doesn't require training, unlike a classifier. I receive many dozens of e-mails a day, and it's burned on average ~$3 worth of tokens per month. I'll probably switch to a cheaper model soon, but it's cheap enough that the payoff from spending time optimizing it is a long way off.

> AI makes configuration and extension trivial

I recently said to a co-worker:

"Linux is free if your time has no value" is no longer true in the era of LLMs. Just about any wonky Linux issue can be fixed quickly with even just OpenCode and free models.


In my experience that sentence never made much sense anyway. Of course over the ~25 years of Linux I had to setup/learn a lot of stuff, but usually it ended up being time well invested.

To play devil's advocate:

Some people argue that the difficulty of passing laws in the United States is "a feature not a bug" b/c it prevents the US from creating laws too quickly.

You could argue the House of Lords did the same: by vetoing bills, it acted as a "speed bump" to laws that might cause too much change too quickly.


It doesn't really help the United States create good law. You could argue that it worsens the quality of laws by forcing kludges to be built on top of kludges.

A sortition panel collecting random people from all walks of life to give feedback on law would probably improve the quality of law more than any amount of procedure and paperwork ever will.

We mistake paperwork for deliberation and quality control.


I’d go further. To bypass the deadlocked Congress, Obama used executive orders in new and expansive ways. That ratcheted things up. Now Trump is using executive orders even MORE expansively, to do things that are patently undemocratic and unconstitutional (federalizing who can vote, illegal tariffs). The kludges and hacks are causing a crumbling of democracy, not just mediocre law.

> To bypass the deadlocked congress, obama used executive orders in new and expansive ways. That ratcheted things up.

While I agree - this has been an issue long before Obama.

Any reasonable country should be able to decide on the legality of abortion through the normal political process - the public deliberates, they elect representatives, the representatives hammer out the fine print and pass legislation.

But in the American system, the legality of abortion is decided at random, based on the deaths of a handful of lawyers born in the 1930s. If that person dies between ages 68-75, 84-87 or 91-95 abortion is illegal, if they die aged 76-83, or 88-91 it's legal.

Why doesn't America deal with political questions using their political process?


> Why doesn't America deal with political questions using their political process?

Since 2022 we do. But it’s through the political process of the States. This has made a lot of people very angry because a bunch of States have got it all wrong, and the exact way they got it wrong depends on your point of view on the subject, but no matter which side of the debate you’re on, some on your side most assuredly want to preempt all the States that got it all wrong with Federal law.

That Congress hasn’t come to a political consensus is the Federal political consensus.


> Since 2022 we do. But it’s through the political process of the States.

Which is exactly as it should be. There's nothing in the Constitution which gives the federal government power to act on this issue, therefore it should be decided on a state by state basis. Government works best when it is done based on the values and needs of the local population, not one solution for an entire heterogeneous nation.


I might take this argument seriously, if not for the fact that the party of “state’s rights” are pushing for a national ban on abortion. https://www.americanprogress.org/article/what-you-need-to-kn...

Exactly! What the Constitution /says/ and how it is interpreted... The Tenth Amendment is written (IMO) incredibly short to underscore its importance AND breadth:

"The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people."

But I've very seldom heard the phrase "states' rights" uttered by anyone who isn't pro-gun and anti-abortion. I doubt they'd feel any freer if their state came down like a ton of politically angered bricks on unfettered gun ownership and anti-abortionists.


Pro gun is explicitly mentioned in the Constitution, about 8 amendments before the tenth, so that argument isn't the best tack

While the American left has largely ceded the term “states rights” to the American right (and was/is well on the way to ceding the term “Free Speech”) they have their own share of “states rights” issues. Medical and recreational marijuana is a “states rights” issue. “Sanctuary cities” are a “states rights” issues. The fact that the Trump administration can’t (yet) force California schools to drop teaching certain things is a “states rights” issue. California deciding they’re goin to just gerrymander the heck out of everything in response to the current administration is a “states rights” issue. In fact basically every state level opposition to the current administration is a form of a “states rights” issue.

It’s immensely frustrating to me that what should be a huge lesson in the importance of limited government power and diffusion of that power across multiple governmental levels isn’t likely to result in that lesson being learned. I have a real fear that in history Trump will have been an inflection point on the road to an ever more powerful federal government in general and executive branch in particular, rather than a historical anomaly at the high end of that same power dynamic.


Because that requires compromise and Americans are raging absolutists that need immediate results.

In 1791, abolitionists tried to end slavery in the British Empire but couldn't get it passed by the House of Commons. Henry Dundas changed the bill so it would be phased-in. Existing slaves wouldn't be emancipated but their children would be. That bill did pass. Slavery naturally ended over the following decades until the much smaller slave population was bought by the government and freed in 1833.

In the USA, nobody budged until a Civil War happened, and then the slaves were freed by force in the 1860s without monetary compensation. But this time, emancipation happened immediately after they got full power, there was no need to give money to racists, and no moral compromises were required.


Shelby Foote has a great quote about this in regards to the Civil War:

“The war happened because we failed to do the thing that we have a true genius for and that’s compromise”


> But that time, emancipation happened immediately after they got full power, there was no need to give money to racists, and no moral compromises were required.

I really hope you were being sarcastic here... Emancipating the slaves during/after the Civil War was not an orderly, immediate process. And even once all slaves were freed, they continued to live second-class lives due to the laws of the time.


Yes, it's sarcasm. I'm contrasting how Britain made their legal process gradual enough to match reality with the USA's demand that legal processes create reality.

For reference, fully elective abortion legally doesn't exist in most of the UK. It's just that a fetus being dangerous to the mental health of the mother has progressively been interpreted more and more broadly...

https://en.wikipedia.org/wiki/Abortion_in_the_United_Kingdom


In the American system as originally founded, most of these things were intended to be decided by the states.

In the American system as originally founded, black people were property.

It should be expected that the American system is not eternally bound to the will and scope of vision of the founding fathers, that it can and should evolve over time as the needs and nature of society evolves. Otherwise, it isn't a republic, it's a cult.


Yes, that was corrected by using the amendment process (and fighting a huge war) a long time ago. The system was designed to allow for correction.

It’s more like Americans did decide that it was illegal, and judges decided they could use legal tricks to make it legal (which in turn meant that as soon as they didn’t have the majority, the opposite could occur).

There's a long political tradition which doesn't acknowledge that there are political questions. In their world, there's only good policy and bad policy, and making the first is only a question of competence. Conflicts of interests they won't talk about. These people fight a constant battle to take political power away from people (not just regular people, elected representatives as well), and give it to their preferred "experts".

Could you explain this to a non USian???

Or a USian who has no idea which lawyers you are referring to obliquely, so as to look "cool" and "knowledgeable", while avoiding communication with the sullied masses?

They're referring to increasingly partisan Supreme Court Justices

The problem here isn't the temptation to bypass a system intended to require consensus before action can be taken. That temptation is present with any system that provides any checks on autocratic tyranny.

The problem is that something like executive orders are being used to bypass that system instead of being prevented from doing so.


The problem is that the US constitution was written before people realized that the natural consequence of that type of constitution is a two party system. You cannot have a viable third party in the long run because it will necessarily weaken one or the other existing party and that party will then absorb it.

So now you have a situation where the government can have a split brain: some parts of the legislative branch can be party A, other parts can be party B, and the president isn’t tied to either.

From what I understand when the US “brings democracy” to another country we set up a parliamentary system and that system is widely seen as better. You cannot form an ineffective government by definition, though you can have a non-functioning government that is trying to form a coalition. These types of systems tend to find center because forming a coalition always requires some level of compromise. Our system oscillates between three states: party A does what they want, party B does what they want, and split brain and president does what he wants because Congress has no will to keep him accountable.

What I would like to try is a combination of a parliamentary system, approval voting, and possibly having major legislation passed by randomly selecting a jury of citizens and showing them the pros and cons of a bill. If you cannot convince 1000 random citizens that we should go to war, maybe it’s not a good idea.


> The problem is that the US constitution was written before people realized that the natural consequence of that type of constitution is a two party system.

The two party system is a consequence of using first past the post voting, which the US constitution doesn't even require. Use score voting instead, which can be done by ordinary legislation without any constitutional amendment, and you don't have a two party system anymore.


Are we reading the same constitution?

Article II, Section 1

> The Person having the greatest Number of Votes shall be the President


A party is a thing where multiple elected officials band together in a persistent coalition. The section you're quoting from only applies to a single elected office in the whole country. Are only two parties going to run candidates for President when there are five or more parties in the legislature?

On top of that, that section applies to how the votes of the electoral college delegates are counted. It doesn't specify how the electoral college delegates are chosen, which it leaves up to the states. There are plenty of interesting ways of choosing them that don't result in a structural incentive for a two-party race.


> The section you're quoting from only applies to a single elected office in the whole country. Are only two parties are going to run candidates for President when there are five or more parties in the legislature?

I don't think it's a coincidence that every US state is structured as a smaller mirror of the federal government.


It's not a coincidence because they adopted their initial constitutions at around the same time or based them on the existing states that had. But we're talking about the electoral college and none of the states use something equivalent to that to choose their governor.

Using score voting instead of FPTP for state-level offices would be a straightforward legislative change in many states and still not require any change to the US Constitution even in the states where it would require a change to the state constitution, which is generally a much lower bar to overcome than a federal constitutional amendment.


I'll tell Hillary Clinton, she'll be thrilled.

And Al Gore, while you're at it.

US "parties" are giant coalitions compared to the "parties" in parliamentary democracies. You're solving a problem that doesn't exist.

Change the American voting system tomorrow and legislators will belong to different nominal parties that end up forming precisely the same coalitions.

Love him or hate him, Trump is a great example of this - in 2016, Trump effectively formed a new party focused on anti-immigration and protectionism, which rapidly grew to dominate the "conservative" coalition. But those other parties, ranging from libertarians to the Chamber of Commerce (highly pro immigration and highly pro free trade) parties are still there in the coalition.


> Change the American voting system tomorrow and legislators will belong to different nominal parties that end up forming precisely the same coalitions.

The US is extremely partisan right now and the partisanship is strongly aligned with the two major parties, not the individual coalitions that make them up. And with two parties you get polarization, because then it's all about getting 51% for a single party rather than forming temporary coalitions between various parties none of which can do anything unilaterally.

A different voting system allows you to have more than two viable parties, which changes the dynamic considerably.


Coalitions are pretty static in most parliamentary democracies except sometimes when forming governments post-election.

The 51% is for the coalition, not the party. That’s what you’re missing. CoC Republicans for example have temporarily sacrificed their immigration policies to retain legislative influence - and they are a check on the Trumpist wing passing whatever anti-immigrant legislation they want, because they too cannot act without at least tacit support from the CoC wing.

The “major party” is from a systems perspective no different than a European parliamentary governing coalition.


> Coalitions are pretty static in most parliamentary democracies except sometimes when forming governments post-election.

The "except when forming governments post-election" is a major difference. It also presumes that a coalition in the legislature is required to persist for an entire election cycle rather than being formed around any given individual piece of legislation. You don't have to use a system where an individual legislator or party can prevent any other from introducing a bill and taking a vote on it.

In less partisan periods in US history, bills would often pass with the partial support of both major parties.

Moreover, the US coalitions being tied to the major parties makes them too sticky. For example, the people who want lower taxes aren't necessarily the people who want subsidies for oil companies, or increased military spending, but they've been stuck in the same "coalition" together for decades.

Suppose you want to do a carbon tax. People who don't like taxes are going to be a major opponent, so an obvious compromise would be to pass it as part of a net reduction in total taxes, e.g. reduce the federal payroll tax by more than the amount of the carbon tax. But that doesn't happen because the coalition that wants lower taxes never overlaps with the coalition that wants to do something about climate change. Meanwhile the coalition that wants lower taxes wouldn't propose a carbon tax on their own, and the coalition that wants a carbon tax to increase overall government revenue gets shot down because that would be extremely unpopular, so instead it never happens.


All countries have these problems which vary by the local political environment and history. Multiple European countries are facing particularly absurd varieties of these dilemmas because of their refusal to form coalitions with the second or third largest party in their country.

Again, it seems like the flaw is in trying to form a long-term coalition instead of just passing the bills that have enough support to pass when you put them up for a vote among all the people who were actually elected. Why should anyone have to give a crap what someone else's position is on immigration when the bill in question is on copyright reform or tax incentives for solar panels?

The coalitions do a pretty good job of representing people’s pre-existing positions. People aren’t not voting for copyright reform because their party said so, but because they agree with their party. Party discipline in the US is not nearly as strong as in most parliamentary systems.

The point is that if you can't do the thing the democratic way (because the system is so biased against change as to make it impossible) then people will look for workarounds.

The workarounds are accepted since otherwise nothing would get done at all, and then people are surprised when the workaround gets used in ways they no longer like.


When people say "nothing gets done" they mean "we can't do things that a substantial plurality of the public doesn't want done" -- which is exactly what's supposed to happen.

If you break the mechanisms ensuring that stays the case, what do you honestly expect to happen the next time it's you in the minority?


Things a substantial plurality of the public wants are not being done. The votes in the legislature don't match what a plurality of voters want.

Public opinion is not really represented in the way your comment implies.


It's not supposed to cause things a significant plurality of the public wants to happen. It's supposed to cause things a significant plurality of the public doesn't want to not happen.

>federalizing who can vote

Almost every single democracy in the world requires proof that you are eligible to vote. 80% of Americans agree with the idea as well.

https://wisconsinwatch.org/2026/02/voter-id-americans-suppor...


So let's have a national ID, given to all citizens.

Unfortunately the party calling most strongly for proving eligibility absolutely hates that idea.


And that national ID has to be free, and available to people who cannot appear at federal offices during business hours without losing what sparse wages they get...

Yes, and, Bush-Cheney were the modern forefathers of pushing the unitary executive theory, building on the work of Reagan after a lull in the '90s. Reagan took ideas from The Heritage Foundation, which returned in the '24 elections pushing Project 2025: a natural endgame and roadmap for the movement of power to the president, one that is being followed as approximately as any political roadmap ever is.

Remember that each time you’re tempted to crack a Coors light!


So I should remember that… never? Got it. ;)

Unitary executive is popular and doesn’t have to mean an imperial presidency. Actually the most popular version, albeit not the one you hear about the most, is the libertarian idea that the executive should have little power at all and almost no bureaucracy to command.

It could be my interpretation, but the framing of the above comment makes it sound as if Obama gave Trump the idea to use executive orders in expansive ways. I think Trump would have used executive orders expansively even if no president before him ever had.

Trump is just trying to get away with as much as he can. The tariffs used by Trump and his "jokes" about skipping elections and other things he did are quite unprecedented.


The argument isn't that it helps the US create good law. It's that it keeps the US from creating too many bad laws.

"The more laws, the less justice." -- Cicero

Government needs to be more Agile.

Government needs to be less.

Government needs to be for all the people, and not just for the 1% with wealth and power. Not more or less.

This seems to go against human nature. Government is always for the 1% and in the rare case it isn’t it simply just creates a new 1%

True. Seems self-preservation is strong in our genes and can manifest in strong greed or in preferring to avoid (direct) conflict with the greedy.

Humans are not always social creatures on all social fronts.


The idea of a second chamber is not controversial. The argument is how you populate it.

Elected - you have the problem of two chambers claiming legitimacy and potential deadlock, and also the problem of potentially having the same short term view as the other elected chamber.

Appointed - who gets to appoint, on what criteria, who are they beholden to ( ideally unsackable once appointed - I want them to feel free to say what they really think ).

Inherited - Very unlikely to represent the population. No quality filter. Potentially a culture of service built up - and free to say what they think.

Random - More likely to represent the population. No quality filter.

You can obviously have a mix of all or any of the above.

In my view, the ideal second chamber would be full of people of experience, who are beholden to nobody (unsackable), that represented a broad range of views, with a culture of service.

I'm against a fully elected second house - as that's not really adding anything different to the first house. Appointed has worked quite well in the past, but it has become more and more abused recently as the elected politicians have too much control.

It's tricky - perhaps some sort of mix.


Abused is probably an understatement. The Tories made some extremely questionable and bizarre appointments in their recent terms. We have the son of a Russian oligarch sitting there! Inexplicable advisors whose appointments are a mystery even after FOIA requests. And extreme partisans like Jacob Rees Mogg and Priti Patel.

Imo they should be proposed and voted on by the house. That should at least offer some prevention of peerages as favours, as they quite clearly have been used.


> Imo they should be proposed and voted on by the house. That should at least offer some prevention of peerages as favours, as they quite clearly have been used.

You'd get party political trading - we will vote for your pick if you vote for our pick - but perhaps it will help at the margins - the obviously embarrassing would be harder to squeeze through.

The problem is the current process relied a bit too much on people being trustworthy - as you say, that's kinda fallen away recently - and obviously the election of Trump shows how dangerous it is for a process to rely on people being decent and not abusing the trust. Which is a shame, as trusting people gives them the leeway to do the right thing.

In terms of JRM or Patel - while they are not my cup of tea, I think there is value in senior politicians becoming members of the Lords almost by default ( like senior judges or religious leaders ) - as to some extent it does reflect what people have voted for in the past and they have valuable experience. However perhaps it's too early in their cases.

An age limit has been talked about - but normally in terms of upper age - I wonder if it wouldn't be better as an age threshold - you have to have retired and be no longer 'on the make'. Sure that means no young people in the second chamber - but ultimately being representative is the commons role, the second chamber is for experienced people to tell the commons not to be hasty and do more work.


It's very tricky to balance right that's for sure. Agreed that it opens the door to behind the scenes deals. But marginal improvements are still better than whatever the hell we have now.

In the case of Priti Patel she was fired from government for having secret/undisclosed meetings with Israel to recognise some contested land (IIRC). That should be an instant disqualifier for a lifelong peerage.


> That should be an instant disqualifier for a lifelong peerage.

Again the current process does have an element of that - MI5 et al have a look at the list and say 'reputational risk'. "That's a very brave choice, minister..."

However, as with Mandelson's appointments to the Lords and as US ambassador, it's clearly being ignored - but then who better than the PM of the day to have the final say - the problem is somebody has to - and if you take it away from the PM - then it potentially becomes undemocratic.

Perhaps one improvement would be the removal of the tradition of exiting PMs creating a nomination list - when they no longer care about what the public thinks - a bit like Joe Biden outrageously pardoning his son.


>Imo they should be proposed and voted on by the house.

Then why wouldn’t the house just stuff them with people that will agree with everything they do and remove any checks and balances? You only need one house at that point.


In part because the composition of the commons changes over time - so if the term timescales are different then they won't necessarily agree at any point in time - but I do agree it would potentially become too politicised if you had that kind of vote.

Ultimately in the UK system, the commons has the final say ( ignoring the monarch in the room here ), so most of the time what the Lords do isn't typically a big public issue - it's quiet revision, have you thought of this?, type stuff. Not that common to have a big conflict - though it does happen.


Jacob Rees Mogg isn’t in the Lords.

The Lords doesn’t actually have the power to veto bills thanks to the Parliament act. They also have a principle of ultimate legislative priority under which they defer to the commons in matters where the commons puts its foot down. They generally act as a revising body rather than outright attempting to defy the commons.

> Under the Parliament Acts 1911 and 1949 it is possible for a bill to be presented for Royal Assent without the agreement of the House of Lords, provided that certain conditions are met. This change was seen by some as a departure from Dicey's notion of sovereignty conferred upon a tripartite body.
https://commonslibrary.parliament.uk/research-briefings/cbp-...

On the other hand, the process of having Commons legislation rejected by the Lords, then amended and sent back can take almost a year. A government looking to push its legislative programme in a single parliament may choose to remove the most controversial elements in return for an easier passage through the Lords. In this way, just the threat of Lords scrutiny can be enough to moderate the output of the Commons.

If the Lords can’t veto bills, why does their rejection matter?

Note that this change is not getting rid of the Lords; it's just getting rid of hereditary peers - i.e. those whose seats passed down through generations. We'll still have Lords who have been selected by previous governments within their lifetimes; so they still provide that speed bump, but do it in a way that means they were at some point chosen by an elected body.

Not sure how you think this will improve things. None of these people are elected. They likely got these positions by doing political favors. They are likely even more out of touch with the electorate. They are even more likely to make decisions based upon ideology instead of practical quality of life considerations. Seems to me this just centralizes power even more in the hands of a few. And that's the last thing the UK needs right now.

> They are likely even more out of touch with the electorate.

Not compared to the hereditary peers.

In theory these people have proved themselves useful in some way and bring expertise to the upper chamber, rather than just being born in the right family. In practice there is some of that and some political cronyism.

> Seems to me this just centralizes power even more in the hands of a few.

That is exactly what hereditary peerage is. The few, by definition. The aristocracy.


Being out of touch with the electorate is the thing they have as a feature over the house of commons.

i.e. they're not trying to win the next election.

They're also not there because of the favours they've done existing politicians.

I don't think this is "great" but it does make me wonder if the people who want an end to hereditary peers are really going to like what they get.


They're there because someone a long time ago was wealthy and probably had ties to one or more monarchs.

This is not a basis for holding power in any country that calls itself democratic. This idea that they are somehow above everyday concerns and that's a good thing is some sort of weird retcon, and if we're going to use unmitigated cynicism to impugn the validity of action of other office holders who are elected, or who have got to the lords through prominence in public life, then allow me the same here: they're just there to pursue the interests of the landed gentry and hold back progress on issues like fox-hunting. And they have done exactly this in the past. The fact they're not trying to win an election means they are entirely free to pursue selfish aims.

There's no virtue in maintaining the privileges of these alleged 'nobles' to interfere in the running of the state.

What they’re going to get is 92 fewer (to use the modern parlance) nepo-babies having access to the levers of power. It’s something to celebrate.


Lots of countries call themselves democratic that absolutely aren't e.g. The DPRK for a ridiculous example. We actually aren't even democratic in the truest sense that we don't all vote on everything but instead elect representatives to vote for us (we hope). It's all a compromise with trade offs.

Here one will just get different "nepo babies" who are more directly involved in the struggle for power, because they will be connected to those in power - people who have been useful and will be wanted in future.

Some people say that the desire for power is the thing that should disqualify a person from having it. i.e. we perhaps need some anti-politicians. This would mean people who don't want to be in power having some forced upon them like in Jury duty.


> That is exactly what hereditary peerage is. The few, by definition. The aristocracy.

Not true at all - there's nothing special about having a rich land-owner in your ancestry - most people do.

In fact, now, after a few centuries of reversion to the mean, the hereditary peers are the only people in government who are representative, in the statistical sense.

(Not that this is related in any way to the actual reason why this is being done - the actual motive is that a hereditary peer is necessarily British, and Starmer hates the British and wants them disenfranchised so that he can continue with their destruction. But that's another story..)


You argue that a lot of things are likely. Why don't you take the time to check instead of slandering?

> You could argue the House of Lords did the same

It can still do the same thing without hereditary peers. A slow-moving, conservative (in the classical sense) upper chamber is a classic in bicameral systems, it is not specific to the House of Lords.


Just in case someone gets the wrong end of the stick, the UK isn’t getting rid of the House of Lords, just the hereditary members (of which there aren’t many).

But almost all the remainder are political appointees.

It's disappointing that they didn't replace the hereditary peers with some other non-politically-appointed folks. There is a very great need to have people in the House of Lords who are not beholden to any of the political parties.

I personally favour a lottery system where random people get given the opportunity to join the House of Lords for the rest of their working lives.


> for the rest of their working lives.

One of the nicer things about Lords debates is that many members have ended their working lives and are no longer worried about the day to day felicities of their industry.


The House of Lords isn't going anywhere. The majority of the chamber are life peers, functionally identical to Canadian senators.

And for many years now, even the remaining minority of hereditary peers in the chamber are elected to that job, albeit not by the general public. My guess is that all those who are actually useful will get "grandfathered in" by this legislation making them life peers so that they can keep doing the exact same job. Many life peers (who are all entitled to be there) rarely attend, so it would be kinda silly if Lord Snootington, the fifteenth Earl of Whatever is kicked out for being a hereditary peer despite also being the linchpin of an important committee and one of the top 100 attendees in the Lords, while they keep Bill Smith, a business tycoon who got his peerage for giving a politician a sack of cash and hasn't been in London, never mind the House of Lords, since 2014...

> My guess is that all those who are actually useful will get "grandfathered in" by this legislation making them life peers

The government made a political deal with the hereditary peers: drop their fight against this bill, and in exchange the government will grant a subset of them life peerages.

But that political deal is just an informal extralegal "understanding"; it isn't actually in the text of the bill. Having the bill text grant someone a life peerage would upset the status of peerages as a royal prerogative, and they don't want to do that.


> The government made a political deal with the hereditary peers-drop their fight against this bill, and in exchange the government will grant a subset of them life peerages

Wouldn't a "deal" theoretically benefit both sides? That one doesn't offer the hereditary peers anything they don't already have.


> Wouldn't a "deal" theoretically benefit both sides? That one doesn't offer the hereditary peers anything they don't already have.

They don't have any expectation against losing their seats entirely when hereditary peers are ejected from the House, and, even with a sufficient number of life peers voting with them, they couldn't actually prevent such a bill from passing, only delay it. Securing a commitment of life seats is getting something they didn't have.


Only 92 of the 842 peers are hereditary currently, so it’s not really necessary to convince them to agree; the deal only needs to be seen as fair enough by the other peers. Or really, it only needs to be seen as fair enough to the House of Commons.

> Only 92 of the 842 peers are hereditary currently, so it’s not really necessary to convince them to agree;

As I understand it, it was necessary (in order to pass the bill without the delay the Lords can impose) to secure a deal on the hereditary peers (not with them), because the Conservatives (the largest Lords faction) and many of the cross-benchers among the life peers, a sufficient number in total to delay the bill (the Lords can't actually block it permanently) oppose the bill, not just a group among the existing hereditary peers.


The hereditary peers were elected and that's what is being discarded? So before at least the voters got some choice and that's going away? Amazing...

> The hereditary peers were elected

By a larger pool of hereditary peers. Previously several hundred members of the aristocracy were all entitled to a seat in there by virtue of their birth and title alone. After reforms in 1999 this group had to nominate from within themselves a subset of 92 hereditary peers who would be allowed to participate in the chamber.

If by "the voters" you mean the general public, then no, they had no say at all.


yes. just because it is unfashionable to argue in favor of aristocracy does not mean that it doesn’t have its own intrinsic set of benefits and drawbacks… the drawbacks of ultra democracy (populism, etc.) are all cast aside as the innocent folly of people yearning to be free but not knowing whereof to yearn (“it’s not a system problem, it’s a people problem, but we must no matter what condemn ourselves to people problems because anything else is anathema to “liberty”, or whatever”). but dare utter one word in favor of conservatism in the original, true sense, and it is as though democracy is an unalloyed good with absolutely no downside. like, clearly we should have a direct democracy with no senate and no house, no? anything else is just allowing the Powers That Be to patriarchy everything!

You get something far worse in the US. Which is a government that no longer feels any need to either pass or be bound by laws.

Ah yes, the country whose supreme court struck down its global tariffs and then forced the federal government into refunding all the money back is truly no longer bound by its own laws.

Did the government pass any laws to steal those 130 billion dollars from Americans? I can't recall that it did.

Are there any consequences for the people who did it?

The government has long ceased to govern by law. It now governs from the bench, and from executive order, because laws are too troublesome to actually pass.


America operates on a strong executive common law system not whatever system you are imagining.

I took business law more than a decade ago and the professor basically said do what you want (money wise) if you can pay for it. This is the English legal system and is how it's always worked. Liability is purely monetary and the law only applies to those who can show standing to do anything about it.


So no-one affected by illegal tariffs has any legal standing?

Hyperbole beyond belief there

It's only a speed bump for progressive laws while the most reactionary garbage gets fast tracked with their approvals.

That view is a leftover from a bygone era, when others could look at the US with often grudging admiration. Today? The US itself doesn't think much of itself, and to the rest of us it is a cautionary tale.

"The US itself doesn't think much of itself"

If you ever find yourself wondering why US voters elected someone like Trump... if you ever wondered why institutions in the US are crumbling and experts don't have much credibility, this is why. I assure you, most Americans think very highly of the US compared to the rest of the world (especially if they have traveled). Only the out of touch don't, and the reason most US voters don't give them much credibility is the absolutely crazy amount of twisting of facts to align with that POV.

As people like that are slowly removed/aged out from those institutions, the institutions will magically start working well again and regain public trust. In case you wondered how a potted plant like Trump can somehow perform better than those experts, that's how. People who believe things like that have to twist their worldview around to such an extreme that it's impossible for them to be competent, no matter how smart or how educated they are. It's also how people who claim to be for peace and democracy somehow end up supporting a religious oligarchy that funds terrorism across an entire region. Ideology makes you dumb to the degree that you are smart.

PS Europe is the cautionary tale here. Again, your leaders are far smarter than Trump. Does that seem to matter? Nope, because ideology destroys the effectiveness they (you) should have.


Oh yeah, gotta love the "if you don't join our illegal, unnecessary war, you support religious dictatorship".

Spoken by a supporter of a government that prefers dictatorships over democracies, claims it does not even want regime change in Iran, and claims it doesn't care about targeting civilian infrastructure.

That just made the government in Iran more hardline, and gave Russia a lifeline while they were at it.


> Because people who believe things like that have to twist around their worldview to such an extreme that its impossible for them to be competent no matter how smart or how much education they have.

Extremely well said


I think a good revising chamber is critical to good democracy, though the Lords recently have been playing silly buggers around the Employment Rights Act and ignoring the Salisbury Convention (which is that they shouldn’t block manifesto commitments).

I do think the USA goes too far, which has led to frustration among the public and contributed to Trump and the resulting behaviour. I’ve said before that I think the US House of Representatives should have a mechanism to override Senate speed bumps, though not without effort. The idea is to encourage the legislature to compromise but maintain the “primacy” of the House if the Senate is being obstinate. Something like the Parliament Act, is what I’d have in mind.


The Senate in the US is the upper house and can override the House. There is no "primacy" of the House in the US system. The only place where anything like that exists is in impeachment (which is for any member of the executive or judicial branch, not just the president) where the House simply has more votes than the Senate (each member gets 1 vote). Those types of hearings are pretty rare (usually).

> There is no "primacy" of the House in the US system.

I know, I'm saying this is not a good approach, for the reasons I gave above.


Which manifesto commitments have been blocked in this parliament?

> Which manifesto commitments have been blocked in this parliament?

To be clear, I didn't say they "blocked," I said:

> though the Lords recently have been playing silly buggers around the Employment Rights Act

This was a manifesto commitment which, while it eventually went through, it was touch and go for a little bit. Reporting at the time:

https://www.youtube.com/watch?v=f412kJChC6g


Okay, though to be fair to me, you said just after

> and ignoring the Salisbury Convention (which is that they shouldn’t block manifesto commitments)

which is what attracted my question.

Thanks for the link. I haven’t watched it, but I will observe that a lot of the modern legislation that comes out of the commons should properly attract the attention of the Lords, as it doesn’t get nearly enough attention from the commons.


I totally agree, the upper chamber can and should make amendments to legislation. In this case, they made a generally good amendment to the Employment Rights Bill (allowing "at-will" dismissal up to the first 6 months rather than the initially proposed total ban).

However after that amendment was accepted, Conservative Peers (the largest group in the Lords) initially voted against the bill again: https://bectu.org.uk/news/prospect-slams-house-of-lords-for-...

It was eventually passed a week later when the Lords accepted the Commons amendments but that second block on 11th December shouldn't have happened.


Chartr Daily had this chart [0] back in 2023 and it shows how much the big tech firms grew from 2016 to 2022.

Some of the firms, Apple being the exception, doubled or even almost tripled in size.

I'm sure AI is partly to blame here, but I think a lot of it is over-hiring and firms just getting bogged down in bureaucracy and trying to clear things out.

0 - https://www.instagram.com/p/CnxN-Mayo3N/


I don't think AI is even partially to blame. Unless Atlassian is claiming AI can fully replace 1,600 workers, layoffs don't make sense.

You need people driving AI to get the benefits.

It's like a courier service that uses horses firing people once cars are invented because cars are faster than horses. You would switch everyone from horses to cars and deliver more packages.


Well you might not: if your accessible market needs 10 horses' worth of deliveries and 5 cars can fill that need, then you're left with 5 people who aren't needed, because demand for your product doesn't amount to 10 cars' worth.

If your postal service serves a population of a million people and it takes 1000 horses to do that easily, but only 500 cars, you don't have a need for those 500 extra people.

You can't deliver more packages if the packages aren't there to be delivered. Your product demand doesn't just magically scale up once supply meets it.


Good point, but what if you were previously chaining horse carriage rides and now a car can cover the same distance as 10 of them with a single driver?

You can now deliver 10x the packages and make 10x money.

What healthy business aims to stagnate in the face of a revolutionary technology?


I don't know, maximum package turnover might be bounded and most likely you were previously not constrained by lack of drivers already... Sure you might try and expand but why would that work better than before? Especially assuming all other providers now also have cars.

Good analogy but wrong number. Try 1.17x

> there's always an infinite supply of new work that could be done

I distinctly remember a discussion where someone says "Man, I wish JIRA would add this feature/fix this bug"

Someone else pipes in: "I bet there is already a ticket on the JIRA bugtracker/feature board for this, it's not done and it's from 9 years ago" and lo and behold there was.


Unfortunately, none of these companies are going to turn their AI loose on important, annoying, 9 year old bugs. They're just going to use it to cram more unwanted features into their software, just like they're doing today with human developers.

They generally hire smart people who are good at a combination of:

- understanding existing systems

- what the pain points are

- making suggestions on how to improve those systems given the pain points

- that includes a mix of tech changes, process updates and/or new systems etc

Now, when it comes to implementing this, in my experience it usually ends up being the already in place dev teams.

Source: worked at a large investment bank that hired McKinsey and I knew one of the consultants from McK prior to working at the bank.


My take*: McKinsey hiring largely selects for staying calm under pressure and presenting a confident demeanor to clients. Verbal fluency with decision-making frameworks goes a long way. Strong analytical skills seemed essential; hopefully the bar for "sufficiently analytical" has risen along with general data science skills in industry.

I don't view them as top-tier experts in their own right, whether it be statistics or technology, but they have a knack for corporate maneuvering. I often question their overall value beyond the usual "hire the big guns to legitimize a change" mentality. Maybe a useful tradeoff? I'd rather see herd-like adoption of current trends than widespread corporate ignorance and insularity.**

A huge selling point for M&Co is kind of a self-fulfilling prophecy based on the access they get. This gives them a positive feedback loop to find the juiciest and most profitable areas to focus on.

For those who know more, how do my takes compare?

* I interviewed with them over 15 years ago, know people who have worked there, and I pay attention to their reports from time to time.

** Of course, I'd rather see a third way: cross-pollination between organizations to build strong internal expertise and use model-based decision making for nuanced long-term decisions... but that's just crazy talk.


> Having strong analytical skills seemed essential

and

> they have a knack for corporate maneuvering

One way to view this is that the above combination of skills is both rare and very useful, which means it's expensive. So instead of hiring someone like that at "full rate" and keeping them around, you can "borrow" them from McK to solve a problem your regular crew can't (or won't) solve for various reasons.

Plus, as one manager of mine said many years ago:

"We use consultants b/c they are both easy to hire AND easy to fire"

