
There are very few phrases in all of history that have done more damage to the project of software development than:

"Premature optimization is the root of all evil."

First, let's not besmirch the good name of Tony Hoare. The quote is from Donald Knuth, and the missing context is essential.

From his 1974 paper, "Structured Programming with go to Statements":

"Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."

He was talking about using GOTO statements in C. He was talking about making software much harder to reason about in the name of micro-optimizations. He assumed (incorrectly) that we would respect the machines our software runs on.

Multiple generations of programmers have now been raised to believe that brutally inefficient, bloated, and slow software is just fine. There is no limit to the amount of boilerplate and indirection a computer can be forced to execute. There is no ceiling to the crystalline abstractions emerging from these geniuses. There is no amount of time too long for a JVM to spend starting.

I worked at Google many years ago. I have lived the absolute nightmares that evolve from the willful misunderstanding of this quote.

No thank you. Never again.

I have committed these sins more than any other, and I'm mad as hell about it.


Another one from my personal experience: apply DRY principles (don't repeat yourself) the third time you need something. Or in other words: you're allowed to copy-and-paste the same piece of code in two different places.

Far too often we generalise a piece of logic that we need in one or two places, making things more complicated for ourselves whenever they inevitably start to differ. And chances are very slim we will actually need it more than twice.

Premature generalisation is the most common mistake that separates a junior developer from an experienced one.


The rule of 3 is awful because it focuses on the wrong thing. If two instances of the same logic represent the same concept, they should be shared. If 10 instances of the same logic represent unrelated concepts, they should be duplicated.

The goal is to have code that corresponds to a coherent conceptual model for whatever you are doing, and the resulting codebase should clearly reflect the design of the system. Once I started thinking about code in these terms, I realized that questions like "DRY vs YAGNI" were not meaningful.


Of course, the rule of 3 is saying that you often _can't tell_ what the shared concept between different instances is until you have at least 3 examples.

It's not about copying identical code twice, it's about refactoring similar code into a shared function once you have enough examples to be able to see what the shared core is.


But don’t let the rule of 3 be an excuse for you to not critically assess the abstract concepts that your program is operating upon and within.

I too often see junior engineers (and senior data scientists…) write code procedurally, with giant functions and many, many if statements, presumably because in their brain they’re thinking about “1st I do this if this, 2nd I do that if that, etc”.
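To make that concrete, here is a toy Python illustration (all names hypothetical): the same logic written as a growing if-chain, versus naming the underlying concept (a per-kind transformation) directly.

```python
# Hypothetical example: an if-chain that grows a new branch per case...
def handle_if_chain(kind, value):
    if kind == "discount":
        return value * 0.9
    elif kind == "tax":
        return value * 1.2
    elif kind == "noop":
        return value
    raise ValueError(f"unknown kind: {kind}")

# ...versus making the per-kind transformation explicit, so adding a
# case is one table entry rather than another branch in a giant function:
HANDLERS = {
    "discount": lambda v: v * 0.9,
    "tax":      lambda v: v * 1.2,
    "noop":     lambda v: v,
}

def handle(kind, value):
    return HANDLERS[kind](value)
```

The second form also makes the conceptual model visible: the set of cases is data, not control flow.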


3 just seems arbitrary in practice, though. In my job we share code when it makes sense and don't when it doesn't, and that serves us just fine.

I agree. And I think this also distills down to Rob Pike’s rule 5, or something quite like it. If your design prioritizes modeling the domain’s data, shaping algorithms around that model, it’s usually trivial to determine how likely some “duplication” is operating on shared concepts, versus merely following a similar pattern. It may even help you refine the data model itself when confronted with the question.

The devil’s in the details, as usual. No rule should be followed to the letter, which is what the top comment was initially complaining about.

Yet again, understanding when to follow a rule of thumb or not is another thing that separates the junior from the senior.


Agreed. DRY is a compression algorithm. The rule of 3 is a bad compression algorithm. Good abstraction is not compression at all.

> If two instances of the same logic represent the same concept, they should be shared. If 10 instances of the same logic represent unrelated concepts, they should be duplicated.

Exactly.


I think we should not even generalize it down to a rule of three, because then you're outsourcing your critical thinking to a rule rather than doing the thinking yourself.

Instead, I tend to ask: if I change this code here, will I always also need to change it over there?

Copy-paste is good as long as I'm just repeating patterns. A for loop is a pattern. I use for loops in many places. That doesn't mean I need to somehow abstract out for loops because I'm repeating myself.

But if I have logic that says that button_b.x = button_a.x + button_a.w + padding, then I should make sure that I only write that information down once, so that it stays consistent throughout the program.
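A minimal Python sketch of that idea (widget names and the padding value are made up):

```python
# Single source of truth for the layout rule: button B sits immediately
# to the right of button A, separated by a shared padding constant.
PADDING = 8

def right_of(anchor):
    """x coordinate for a widget placed immediately right of `anchor`."""
    return anchor["x"] + anchor["w"] + PADDING

button_a = {"x": 10, "w": 100}
button_b = {"x": right_of(button_a), "w": 40}
# Changing PADDING or moving button_a now updates button_b in one place.
```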


The reason for the rule of thumb is that you don't know whether you will need to change this code here when you change it there until you've written several instances of the pattern. Oftentimes different generalizations become appropriate for N = 1, N = 2, 3 <= N < 10, 10 <= N < 100, and N >= 100.

Your example is a pretty good one. In most practical applications, you do not want to be setting button x coordinates manually. You want to use a layout manager, like CSS Flexbox or Jetpack Compose's Row or Java Swing's FlowLayout, which takes in a padding and a direction for a collection of elements and automatically figures out where they should be placed. But if you only have one button, this is overkill. If you only have two buttons, this is overkill. If you have 3 buttons, you should start to realize this is the pattern and reach for the right abstraction. If you get to 10 buttons, you'll realize that you need to arrange them in 2D as well and handle how they grow & shrink as you resize the window, and there's a good chance you need a more powerful abstraction.
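A toy version of such a flow layout, assuming left-to-right placement starting at x = 0 (the function name and signature are illustrative, not any particular framework's API):

```python
def layout_row(widths, padding):
    """Return x positions for widgets laid out left-to-right with padding."""
    xs, x = [], 0
    for w in widths:
        xs.append(x)
        x += w + padding
    return xs

# Three buttons of widths 100, 40, 60 with 8px padding land at
# x = 0, 108, 156 -- the per-pair arithmetic from the comment above,
# generalized once instead of repeated per button.
positions = layout_row([100, 40, 60], padding=8)
```

Real layout managers add direction, wrapping, and grow/shrink behavior on top of exactly this core.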


> Instead, I tend to ask: if I change this code here, will I always also need to change it over there?

IMO, this is the exact (and arguably only) question to ask.


Critical thinkers understand that rules aren't written for critical thinkers; that they are written for beginners who don't yet have the necessary experience to be able to think critically.

IMO, the right way to think about DRY is to consider why a given piece of code would ever change.

If you have two copies of some piece of code, and you can reasonably say that if you ever want to update one copy then you will almost certainly want to update the other copy as well, then it's probably a good idea to try to merge them and keep that logic in some centralized place.

On the other hand, if you have three copies of the same piece of code, but they kind of just "happen to" be identical and it's completely plausible that any one of the copies will be modified in the future for reasons which won't affect the other copies, maybe keeping them separate is a good idea.

And of course, it's sometimes worth it to keep two or more different copies which do share the same "reason to change". This is especially clear when you have the copies in different repositories, where making the code "DRY" would mean introducing dependencies between repositories which has its own costs.


This is so true. I have been burned by this more times than I can count. You see two functions that look similar, you extract a shared utility, and then six months later one of them needs a slightly different behavior and now you are fighting your own abstraction instead of just changing one line in a copy. The rule of three is a good default. Let the pattern prove itself before you try to generalize it.

I really like Casey Muratori's "[Semantic] Compression-oriented programming" - which is the philosophical backing of "WET" (Write Everything Twice) counterpart to DRY.

https://caseymuratori.com/blog_0015


It’s not how many times, it’s what you do about it. DRY doesn’t mean you have to make abstractions for everything. It means you don’t repeat yourself. That is, if two pieces of code are identical, chances are one of them shouldn’t exist. There are a lot of simple ways you might be able to address that, starting from the most obvious one, which is to just literally delete one of them. Abstraction should be about the last tool you reach for, but for most people it’s unfortunately the first.

I had a situation where we needed to implement a protocol. The spec was fairly decent, but the public implementations of the other end were slightly non-compliant, which necessitated special casing. Plus multiple versions, etc.

An expensive consultant suggested creating a pristine implementation, then writing a rule layer that would modify things as needed, and deploying the whole thing as a pile of lambda functions.

I copy-pasted the protocol consumer file once per producer and made all the necessary changes with proper documentation and mocks. Got it working quickly, and we could add new producers without affecting the existing ones.

If I'd tried to keep it DRY, I think it would have been a leaky mess.


The instances should be based on the context. For example we had a few different API providers for the same thing, and someone refactored the separate classes into a single one that treats all of the APIs the same.

Well, turns out that 3 of the APIs changed the way they return the data, so instead of separating the logic, someone kept adding a bunch of if statements to a single function in order to avoid repeating the code in multiple places. It was a nightmare to maintain, and I ended up completely refactoring it. Even though some of the code was repeated, it was much easier to maintain and to accommodate the API changes.


I think this is a reasonable rule of thumb, but there are also times that the code you are about to write a second time is extremely portable and can easily be made reusable (say less than 5 minutes of extra time to make the abstraction). In these cases I think it's worth it to go ahead and do it.

Having identical logic in multiple places (even only 2) is a big contributor to technical debt, since when you're searching for a bug and you find and fix it /once/, you often think of the job as done. Staying DRY avoids the "there is still a bug and I already fixed that" confusion.
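A hypothetical example of the cheap-extraction case: the same normalization logic appears in two request handlers, so pulling it out costs almost nothing and means a future fix lands in exactly one place (all names here are invented for illustration).

```python
def normalize_email(raw):
    """Canonicalize an email address before storing or comparing it."""
    return raw.strip().lower()

def register(email):
    # Both handlers share the one canonical normalization rule...
    return {"action": "register", "email": normalize_email(email)}

def login(email):
    # ...so a bug fix in normalize_email fixes both paths at once.
    return {"action": "login", "email": normalize_email(email)}
```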


The D stands for "dependency", the R stands for "regret" and I'm not sure what the Y stands for yet.

Yelling... it stands for yelling...

Mostly at the massive switch statements and 1,000 lines of flow-control logic that, in the worst cases, end up embedded someplace they really don't belong.


You say that, but I've created plenty of production bugs because two different implementations diverge. Easier to avoid such bugs if we just share the implementation.

I've also seen a lot of production bugs because two things that appeared to be a copy/paste were actually conceptually different, and making them common made the whole much more complex, trying to get shared code to handle things that diverged even though they started from the same place.

DRY follows WET (Write Everything Twice).

Agreed, I think even Carmack advocates this rule

My rule of thumb is “when I have to make changes to this later, how annoying is it going to be to make the same change in multiple places?”

Sometimes four or five doesn’t seem too bad, sometimes two is too many


“Once, twice, automate/abstract” is a good general rule but you have to understand that the thing you’re counting isn’t appearances in the source code, it’s repetitions of the same logic in the same context. It’s gotta mean the same, not just look the same.

More critical in my mind is investigating the "inevitably start to differ" option.

If two pieces of code use the same functionality by coincidence but could possibly evolve differently then don't refactor. Don't even refactor if this happens three, four, or five times. Because even if the code may be identical today the features are not actually identical.

But if you have two uses of code that are actually semantically identical and will assuredly evolve together, then go ahead and refactor to remove the duplication.


Depends on length and complexity, imho. If it's more than a line or two of procedure? Or involves anything counterintuitive? DRY at 2.

Extract a method or object if it's something that feels conceptually like a "thing", even if it has only one use. Most tools to DRY your code also provide a bit of encapsulation, which does a great job of tidying things up and forces you to think about "should I be letting this out-of-domain stuff leak in here?"



Ehh, people who are really excited about DRY write unreadable, convoluted code, where the bulk of the code is abstractions invented to avoid rewriting a small amount of code. Unless you're very familiar with the codebase, reasoning about what it actually does is a mystery, because related pieces of functionality are very far away from each other.

DRY is not to avoid writing code (of any amount). DRY is a maintainability feature. "Unless you're very familiar with the code" you probably won't remember that you have to make this change in two places instead of one. DRY makes life easier for future you, and anyone else unfortunate to encounter (y)our mess.

You are confusing DRY done as intended vs what DRY looks like in the real world to many people.

Making maintainable code is a good goal.

DRY is one step removed from that goal and people use it to make very unmaintainable code because they confuse any repeated code with unmaintainability. (or their theory that some day we might want to repeat this code so we might as well pre-DRY it)

The result is often a horrendous complex mess. Imagine a cookbook with a cookie recipe that resided on 47 different pages (40 of which were pointers on where to find other pointers on where to find other pointers on where to find a step) in attempts to never write the same step twice in the whole book or your planned sequels in a 20 volume set.


It's almost like there's a "reasonable person" type of standard that's impossible to nail down in a general rule...

If you can describe a rule in one sentence it'll probably lead to as much trouble as it fixes.

The problem is zealots. Zealotry doesn't work for indeterminate things that require judgement, like "code quality" or "maintainability", but a simple rule like "don't repeat yourself" is easy for a zealot. They take a rule and shut down any argument with "because the rule!"

If you're arguing about code quality and maintainability without one sentence rules then you actually have to make arguments. If the rule is your argument there's no discussion only dogma.

As a result? Easy to distill rules spread fast, breed zealots, and result in bad code.


Huh, I've always understood that quote very differently, with emphasis on "premature" ... not as in, "don't optimize" but more as in "don't optimize before you've understood the problem" ... or, as a CS professor of mine said "Make it work first, THEN make it work fast" ...

And if you know in advance that a function will be in the critical path, and it needs to perform some operation on N items, and N will be large, it’s not premature to consider the speed of that loop.

Another thought: many (most?) of these "rules" predate widespread distributed computing. I don't think Knuth had in mind a loop that reads from a database at 100ms per iteration.

I've seen people write some really head-shaking code that makes remote calls in a loop even though the calls don't actually depend on each other. I wonder to what extent they are thinking "don't bother with optimization / speed for now".
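A sketch of the difference, with `asyncio.sleep` standing in for a ~100 ms remote call (function names are hypothetical):

```python
import asyncio

async def fetch(item):
    # Stand-in for a remote call with roughly 100 ms of latency.
    await asyncio.sleep(0.1)
    return item * 2

async def sequential(items):
    # The head-shaking version: each call waits for the previous one,
    # so total latency grows as ~100 ms * len(items).
    return [await fetch(i) for i in items]

async def concurrent(items):
    # The calls don't depend on each other, so issue them together:
    # total latency stays near ~100 ms regardless of len(items).
    return await asyncio.gather(*(fetch(i) for i in items))
```

This isn't micro-optimization; it's just not serializing independent work.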


First, I agree with what you're saying.

But second, I'd remove "optimization" from consideration here. The code you're describing isn't slow; it's bad code that also happens to be slow. Don't write bad code, ever, if you can knowingly avoid it.

It's OK to write good, clear, slow code when correctness and understandability are more important than optimizing that particular bit. It's not OK to write boneheaded code.

(Exception: After you've written the working program, it turns out that you have all the information to make the query once in one part of the broader program, but don't have all the information to make it a second time until flow reaches another, decoupled part of the program. It may be the lesser evil to do that than rearrange the entire thing to pass all the necessary state around, although you're making a deal with the devil and pinky swearing never to add a 3rd call, then a 4th, then a 5th, then...)


If you really have a loop that is reading from a database at 100ms each time, that's not because of not having optimized it prematurely, that's just stupid.

Reminds me of this quote which I recently found and like:

> look, I'm sorry, but the rule is simple: if you made something 2x faster, you might have done something smart; if you made something 100x faster, you definitely just stopped doing something stupid

https://x.com/rygorous/status/1271296834439282690


Got it. What about spinning up an 800 MB image on a CPU-limited virtual machine that THEN hits a database, before responding to a user request on a 300ms roundtrip? I think we need a new word to describe the average experience; stupidity doesn't fit.

And yet... :)

I think there is just a current of thinking (I've seen it mostly in junior engineers) that you should just ignore any aspect of performance until "later".


And, I guess, context does matter. If you need to make 10 calls to gather up some info to generate something, but you only need to do this once a day or once an hour, and the whole process takes a few seconds, that's fine; I could see the argument that just doing the calls one at a time linearly is simpler to write/read/maintain.

Just remember Rob Pike's 1st rule: don't assume where bottlenecks will occur, but verify it.

I've worked on optimizing modern slow code. Once you optimize a few bottlenecks it turns out it's very hard to optimize because the rest of the time is spread out over the whole code without any small bottlenecks and it's all written in a slow language with no thought for performance.

From my understanding you still need to care about the algorithms and architecture. If N is sufficiently large, you should pick an O(N) algorithm over an O(N^2) one. But usually there is a tradeoff: simple code (or hiding something behind some abstraction) might be easier to understand and maintain, but it might run slower on large input data, and vice versa. I would rather write code that will be easy to optimize if there is some bottleneck than optimize it overaggressively.

Also, different code needs different kinds of optimization. Sometimes the code is IO-heavy (disk/DB or network); for that kind of code, IO operation planning and caching are more critical than optimizing raw CPU time. Sometimes the input is too small to have any significant performance effect, and, paradoxically, choosing smarter algorithms might even hurt performance (alongside the maintainability). For example, for 10-100 items a simple linear scan over an array might be faster than using an O(log n) binary search tree.

It's also well known that faster executable code (whether hand-written or machine-generated, high-level or machine code) usually has a larger size, mostly because it's more "unrolled", duplicated, and more complex when advanced algorithms are used. If you optimize for speed everywhere, the binary size tends to increase, causing more cache misses, which might hurt performance more than help it. This is why some profiling is often needed for large software, rather than simply passing -O3.
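To make the small-N point concrete, here are the two equivalent membership checks in Python (the actual crossover point is machine- and data-dependent, so this shows the implementations rather than a benchmark):

```python
import bisect

data = sorted([7, 1, 4, 9, 3])  # [1, 3, 4, 7, 9]

def linear_contains(xs, target):
    # O(n), but branch-predictable and cache-friendly; for small n this
    # often beats the asymptotically better alternative.
    for x in xs:
        if x == target:
            return True
    return False

def binary_contains(xs, target):
    # O(log n) on a sorted list; pays off once n grows.
    i = bisect.bisect_left(xs, target)
    return i < len(xs) and xs[i] == target
```

Both answer the same question; only profiling on realistic data tells you which one matters for your workload.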

Charles Eames said that "design depends largely on constraints".

If you know in advance, then one of the constraints on the design of that function is that it's on the critical path and that it has to meet the specification.

You're stating the obvious that you need to consider its implementation.

But that's not the same as "I have a function that is not on the critical path and the performance constraint is not the most important, but I'll still spend time on optimizing it instead of making it clear and easy to understand."


"We will optimize it later; we don't have time for that right now, and it seems to work fast enough for our needs."

"Later" never comes and all critical performance issues are either ignored, hot-patched externally with caches of various quality or just with more expensive hardware.


My favourite quote for that is:

Broken gets fixed, but crappy stays forever


While what you say is often true, it is a different problem and does not change the point the prior posters made.

Plenty of people seem to understand it as, "don't even think about performance until someone has made a strong enough business case that the performance is sufficiently bad as to impact profits".

IDK if it can be applied in all situations.

Sometimes, especially when it comes to distributed systems, going from a working solution to a fast working solution requires a full-blown redesign from scratch.


Well, you see, in corporate settings (at least in big tech), this is usually used as a justification to merge inefficient code ("we will optimize it later"). That later never comes: either the developers/management move on, or the work item never gets prioritized. That is, until the bad software causes outages or customer churn. Then it is fixed and shown as high impact in your next promo packet.

I agree with you. "Premature" is the keyword. Bloated software is the result of not having the intention to optimize it at all.

> Make it work first, THEN make it work fast

1. I have seen too many "make it work first" efforts that ended up an absolute shitshow that was notoriously difficult to do anything with. You can build the software right the first time.

2. The "fast" part is what I think too many people focus on, and in my experience the "THEN" part always misses resource utilization and other types of inefficiency that are not necessarily related to speed. I have seen absolute messes of software that work really fast.


Ditto here

> and the missing context is essential.

Oh yes, I'd recommend everyone who uses the phrase reads the rest of the paper to see the kinds of optimisations that Knuth considers justified. For example, optimising memory accesses in quicksort.


This shows how hard it is to create a generalized and simple rule regarding programming. Context is everything and a lot is relative and subjective.

Tips like "don't try to write smart code" are often repeated but useless (not to mention that "smart" here means over-engineered or overly complex, not smart).


I dunno, I've seen people try to violate "don't prematurely optimize" probably a thousand times (no exaggeration) and never ONCE seen this happen:

1. Somebody verifies with the users that speed is actually one of the most burning problems.

2. They profile the code and discover a bottleneck.

3. Somebody says "no, but we shouldn't fix that, that's premature optimization!"

I've heard all sorts of people like OP moan that "this is why pieces of shit like Slack are bloated and slow" (it isn't) when advocating skipping steps 1 and 2, though.

I don't think they misunderstand the rule, either; they just don't agree with it.

Did pike really have to specify explicitly that you have to identify that a problem is a problem before solving it?


>1. Somebody verifies with the users that speed is actually one of the most burning problems.

Sometimes this is too late.

C++98 introduced `std::set` and `std::map`. The public interface means that they are effectively constrained to being red-black trees, with poor cache locality and suboptimal lookup. It took until C++11 for `std::unordered_map` and `std::unordered_set`, which brought with them the adage that you should probably use them unless you know you want ordering. Now, since C++23, we finally have `std::flat_set` and `std::flat_map`, with contiguous memory layouts. 25 years to half-solve an optimisation problem, and naive developers will still be using the wrong thing.

As soon as the interface made contact with the public, the opportunity to follow Rob Pike's Rule 5 was lost. If you create something where you're expected to uphold a certain behaviour, you need to consider if the performance of data structures could be a functional constraint.

At this point, the rule becomes cyclical and nonsensical: it's not premature if it's the right time to do it. It's not optimisation if it's functional.


> the opportunity to follow Rob Pike's Rule 5 was lost.

std::set/std::map got into trouble because they chose the algorithm first and then made the data model match. Rule 5 suggests choosing the right data model first, indicating that it is most important.


You've inadvertently made an argument for deprecation, not for ignoring Rob's rule.

When building interfaces you are bound to make mistakes which end users will end up depending on (not just regarding optimization).

The correct lesson to learn from this is not "just don't make mistakes" but to try to minimize migration costs, to prevent these mistakes from getting tightly locked in, and to try to detect them earlier in the design process with more coordinated experimentation.

C++ seems pretty bad at both. It's not unusual, either - migration and upgrade paths are often the most neglected part of a product.


How would you have minimised migration costs for std::map?

There's probably tons of stuff people use that is way slower than it needs to be but speed isn't one of their self-reported burning problems because they don't even realize it could be faster. pip vs uv?

Exactly!

I wish Knuth would come out and publicly chastise the many decades of abuse this quote has enabled.


To be fair, I think human nature is probably a bigger culprit here than the quote. Yes, it was one of the first things told to me as a new programmer. No, I don't think it influenced very heavily how I approach my work. It's just another small (probably reasonable) voice in the back of my head.

Yep. If one is implementing quicksort for a library where it will be used and relied on, I'd sure hope they're optimizing it as much as they can.

Ignoring optimization opportunities until you see the profile only works when you actually profile!

Profiling never achieved its place in most developers’ core loop the way that compiling, linting, or unit testing did.

How many real CI/CD pipelines spit out flame graphs alongside test results?


I usually defer this until a PM does the research to highlight that speed is a burning issue.

I find 98% of the time that users are clamoring to get something implemented or fixed which isnt speed related so I work on that instead.

When I do drill down what I tend to find in the flame graphs is that your scope for making performance improvements a user will actually notice is bottlenecked primarily by I/O not by code efficiency.

Meanwhile my less experienced coworkers will spot a nested loop that will never take more than a couple of milliseconds and demand it be "optimised".


Even at Google, the tendency is (or was when I was there), to only profile things that we know are consuming a lot of resources (or for sure will), or are hurting overall latency.

Also the rule (quote?) says "speed hack", I don't think he is saying ignore runtime complexity totally, just don't go crazy with really complex stuff until you are sure you need it.


That depends on which part of Google. I worked in the hot path of search queries, and there speed was extremely important for everything: they want to do so much on every single query, and latency isn't allowed to go up.

> I usually defer this until a PM does the research to highlight that speed is a burning issue.

Does a carpenter wait until you get a splinter before pulling out the sandpaper?

We should be actively taking pride in our work, not churning out crap until someone notices.


The problem with ignoring performance is that you'll always end up with slow software that is awful to use but ticks all the feature boxes. As soon as someone comes along that is fast and nice people will switch to that.

People don't ask for software to be fast and usable because it obviously should be. Why would they ask? They might complain when it's unusably slow. But that doesn't mean they don't want it to be fast.


The problem with saying "one must validate that speed is a problem before doing something about it" is that some people hear "one must ignore speed".

Users not being up front about their desires and needs is an argument for better research, not presuming on their behalf. It is true that they are not necessarily adept communicators.


I was a bit worried you are paraphrasing Rob Pike, but no, he actually agrees with that Knuth quote.

I am almost certain that people building bloated software are not willfully misunderstanding this quote; it's likely they never heard of it. Let's not ignore the relevance of this half-century-old advice just because many programmers do not care about efficiency or do not understand how computers work.

Premature optimization is exactly that: the fact that it is premature makes it wrong, regardless of whether it's GOTO statements in the 70s or some modern equivalent, where in the name of craft or fun people make their apps a lot more complex than they should be. I wouldn't be surprised if some of the brutally inefficient code you mention was so because people optimized prematurely for web-scale and their app never needed those abstractions and extra components. The advice applies both to hackers doing micro-optimizations and to architecture astronauts dreaming too big, IMHO.


No I've definitely heard plenty of people use this as some kind of inarguable excuse to not care about performance. Especially if they're writing something in Python that should really be not super slow. "It's fine! Premature optimisation and all that. We'll optimise it later."

And then of course later is too late; you can't optimise most Python.


I don't think the quote itself is responsible for any of that.

It's true that premature optimization (that is, optimization before you've measured the software and determined whether the optimization is going to make any real-world difference) is bad.

The reality, though, is that most programmers aren't grappling with whether their optimizations are premature, they're grappling with whether to optimize at all. At most companies, once the code works, it ships. There's little, if any, time given for an extra "optimization" pass.

It's only after customers start complaining about performance (or higher-ups start complaining about compute costs) that programmers are given any time to go through and optimize things. By which point refactoring the code is much harder than it would've been originally.


Totally agree. I've seen that quote used to justify wilfully ignoring basic performance techniques. Then people are surprised when the app is creaking, exactly due to the lack of care taken earlier. I would tend to argue the other way most of the time: a little performance consideration goes a long way!

Maybe I’ve had an unrepresentative career, but I’ve never worked anywhere where there’s much time to fiddle with performance optimisations, let alone those that make the code/system significantly harder to understand. I expect that’s true of most people working in mainstream tech companies of the last twenty years or so. And so that quote is basically never applicable.


Slow code is more of a project management problem. Features are important and visible on the roadmap. Performance usually isn't until it hits "unacceptable", which may take a while to feed back. That's all it is.

(AI will probably make this worse as well, having a bloat tendency all of its own)


> generations of programmers have now been raised to believe that brutally inefficient, bloated, and slow software is just fine.

I believe people don't think about Knuth when they choose to write app in Electron. Some other forces might be at play here.


This discussion kind of irks me. I just read these posts as: "The quote saying A is bad. Actually it said A all along!"

It's just complaining about others making a different value judgement for what is a worthwhile optimization. Hiding behind the 'true meaning of the quote' is pointless.


I wish we lived in a world where quotes could be that powerful. But I'm afraid in reality this quote, like any other, is just used as a justification after the fact.

Actually, I do not believe devs are to blame, or that CS education is to blame; I believe it's an unfortunate law of society that complexity piles up faster than we can manage it. Of course the economic system rewards shipping today at the expense of tomorrow's maintenance, and also rewards splitting systems into seemingly independent subsystems that are simpler in isolation but result in more complex machinery overall (cloud, microservices...)

I'm even wondering if it's not a more fundamental law than that, because adding complexity is always simpler than removing it, right? Kind of a second law of thermodynamics for code.


> From his 1974 paper, "Structured Programming with go to Statements":

> He was talking about using GOTO statements in C.

I don’t think he was talking about C. That paper is from December 1974, and (early) C is from 1972, and “The UNIX Time-Sharing System” (https://dsf.berkeley.edu/cs262/unix.pdf) is from July 1974, so time wise, he could have known C, but AFAICT that paper doesn’t mention C, and the examples are PL/I or (what to me looks like) pseudocode, using ‘:=’ for assignment, ‘if…fi’ and ‘while…repeat’ for block, ‘go to’ and not C’s ‘goto’, etc.


Yes, just like in Dijkstra's earlier (1968) "Go To Statement Considered Harmful". The syntax is not C and "go to" is two words, and of course that's definitely too early.

A lot of developers get enamored by fetishes. Just one example, because it's one I always struggle to vanquish in any of my teams.

Devs are obsessed with introducing functional-style constructs everywhere, just for the sake of it. FP is great for some classes of software, but crufty as a baseline for anything that requires responsiveness (front-ends, basically), let alone anything at real interactive speeds (games, geo-software, ...)

The "premature optimization" quote is then always used as a way to ignore that entire code paths will be spamming the heap with hundreds of thousands of temporary junk, useless lexical scopes, and so forth. Writing it lean the first time is never considered, because of adherence to these fetishes (mutability is bad, oo is bad, loops lead to off-by-one errors, ...). It's absolutely exhausting to have these conversations, it's always starting from the ground up and these quotes like "premature optimization is the root of all evil" are only used as invocations to ward of criticism.


> Multiple generations of programmers have now been raised to believe that brutally inefficient, bloated, and slow software is just fine

100%


I agree. Faster hardware or horizontal scaling on distributed cloud environments can mask the problem; but it certainly doesn't solve the problem of bloated, inefficient software.

While it might not be necessary to spend hours fine-tuning every function; code optimization should be the mindset of every programmer no matter what they are coding.

How many fewer data centers would we need if all that software running in them was more efficient?

https://didgets.substack.com/p/finding-and-fixing-a-billion-...


I hear you, friend!

While you were seeing those problems with Java at Google, I was seeing it with Python.

So many levels of indirection. Holy cow! So many unneeded superclasses and mixins! You can’t reason about code if the indirection is deeper than the human mind can grasp.

There was also a belief that list comprehensions were magically better somehow and would expand to 10-line monstrosities of unreadable code when a nested for loop would have been more readable and just as fast but because list comprehensions were fetishized nobody would stop at their natural readability limits. The result was like reading the run-on sentence you just suffered through.
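To illustrate (with an example invented for this comment), compare a comprehension pushed past its readability limit with the equivalent nested loop, which reads top to bottom and is just as fast in practice:

```python
matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# One line, three clauses deep -- already near the readability limit:
flat_squares = [x * x for row in matrix for x in row if x % 2 == 1]

# The nested-loop version: each step on its own line.
flat_squares_loop = []
for row in matrix:
    for x in row:
        if x % 2 == 1:
            flat_squares_loop.append(x * x)

assert flat_squares == flat_squares_loop  # [1, 9, 25, 49, 81]
```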


I think the bigger problem is that "Premature optimization is the root of all evil" is a statement made by software engineers to feel more comfortable in their shortcomings.

That's not to bemoan the engineer with shortcomings. Even the most experienced and educated engineer might find themself outside their comfort zone, implementing code without the ability to anticipate the performance characteristics under the hood. A mental model of computation can only go so far.

Articulated more succinctly, one might say "Use the profiler, and use it often."


Picking the starting point is very important. "optimization" is the process of going from that starting point to a more performant point.

If you don't know enough to pick good starting points you probably won't know enough to optimize well. So don't optimize prematurely.

If you are experienced enough to pick good starting points, still don't optimize prematurely.

If you see a bad starting point picked by someone else, by all means, point it out if it will be problematic now or in the foreseeable future, because that's a bug.


> If you see a bad starting point picked by someone else, by all means, point it out if it will be problematic now or in the foreseeable future, because that's a bug.

Can't disagree at all, but many people push back in the name of XP.


Don't confuse premature pessimization for the warnings against premature optimization.

I can write bubble sort, it is simple and I have confidence it will work. I wrote quicksort for class once - I turned in something that mostly worked but there were bugs I couldn't fix in time (but I could if I spent more time - I think...)

However, writing bubble sort is wrong because any good language has a sort in its standard library (likely Timsort or something other than quicksort in the real world).
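A quick Python sketch of the point, with the hand-rolled bubble sort shown only for contrast; in real code, call the standard library:

```python
import random

# A minimal bubble sort: simple enough to trust on sight, but O(n^2).
def bubble_sort(items):
    items = list(items)  # don't mutate the caller's list
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

# CPython's built-in sorted() is Timsort: O(n log n), stable,
# and already debugged by someone else.
data = [random.randint(0, 1000) for _ in range(500)]
assert bubble_sort(data) == sorted(data)
```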


I always point out the operational word is "premature".

Anyone else feel like bloat is usually not an algorithmic problem, but rather a library or environment issue?

Totally agree. Taken out of context, the word "premature" can mean too many things.

> Multiple generations of programmers have now been raised to believe that brutally inefficient, bloated, and slow software is just fine. There is no limit to the amount of boilerplate and indirection a computer can be forced to execute. There is no ceiling to the crystalline abstractions emerging from these geniuses. There is no amount of time too long for a JVM to spend starting.

I think that's due to people doing premature optimization! If people took the quote to heart, they would be less inclined to increase the amount of boilerplate and indirection.


The boilerplate and indirection isn't done for performance

As someone currently writing 16-18 tables all with common definitions, and CRUD, I'd like some abstraction.

In all honesty, this is one of the less abused quotes, and I have seen more benefit from it than harm.

Like you, I've seen people produce a lot of slow code, but it's mostly been from people who would have a really hard time writing faster code that's less wrong.

I hate slow software, but I'd pick it anytime over bogus software. Also, generally, it's easier to fix performance problems than incorrect behavior, especially so when the error has created data that's stored somewhere we might not have access to. But even more so, when the harm has reached the real world.


I don't believe there is any tension at all between fast and simple software.

We can and should have both.

This is a fraud, made up by midwits to justify their leaning towers of abstraction.


User-facing, sure, nothing stopping us from doing "simple and fast" software. But when it comes to the code, design and architecture, "simple" is often at odds with "fast", and also "secure". Once you need something to be fast and secure, it often leads to a less simple design, because now you care about more things, it's kind of hard to avoid.

IME doing application servers and firmware my whole career, simple and fast are usually the same thing, and "simple secure" is usually better security posture than "complex secure".

Interesting, never done firmware, but plenty of backends and frontends. Besides the whole "do less and things get faster", I can't think of a single case where "simple" and "fast" are the same thing.

And I'd agree that "simple secure" is better than "complex secure" but you're kind of side-stepping what I said, what about "not secure at all", wouldn't that lead to simpler code? Usually does for me, especially if you have to pile it on top of something that is already not so secure, but even when taking it into account when designing from ground up.


Not really. `return 0` is the simplest program you could write, but it's not terribly useful. There's an underlying assumption that there's some purpose/requirement for the program to exist. Through that lens "secure" is just assumed as a requirement, and the simplest way to meet your requirements will usually still give you the fastest program too.

"Do less and things get faster" is a very wide class of fixes. e.g. you could do tons of per-packet decision making millions of times per second for routing and security policies, or you could realize the answer changes slowly in time, and move that to upfront work, separating your control vs data processing, and generally making it easier to understand. Or you could build your logic into your addressing/subnets and turn it into a simple mask and small table lookup. So your entire logic gets boiled down to a table (incidentally why I can't understand why people say ipv6 is complex. Try using ipv4! Having more bits for addresses is awesome!).


> "simple" is often at odds with "fast"

Sort of. But if you keep the software simple, then it is easier to optimize the bottlenecks. You don't really need to make everything complicated to make it faster, just a few well selected places need to be refactored.


> I have seen more benefit from it than harm.

Same. I, too, am sick of bloated code. But I use the quote as a reminder to myself: "look, the fact that you could spend the rest of the workday making this function run in linear instead of quadratic time doesn't mean you should – you have so many other tasks to tackle that it's better that you leave the suboptimal-but-obviously-correct implementation of this one little piece as-is for now, and return to it later if you need to".


Yes! See Rule 4.

/* If we can cache this partial result, and guarantee that the cache stays coherent across updates, then average response time will converge on O(log N) instead of O(N). But first make the response pass all the unit tests today */
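A minimal sketch of the trade-off that comment describes, with invented names: a cached aggregate answered in O(1), kept coherent on every update:

```python
class Totals:
    def __init__(self, values):
        self._values = list(values)
        self._total = sum(self._values)  # cached partial result

    def total(self):
        return self._total  # O(1) per query instead of O(N)

    def update(self, index, value):
        # Coherence: adjust the cache on every write, or it silently rots.
        self._total += value - self._values[index]
        self._values[index] = value

t = Totals([1, 2, 3])
assert t.total() == 6
t.update(1, 10)
assert t.total() == 14
```

The coherence bookkeeping in `update` is exactly the complexity the comment says to defer until the obvious version passes its tests.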


At least before the LLM world you would have to trade money (aka more compute) for time to market. What is the point of spending a single second optimizing a query when your database has 10 users?

Of course today this has changed. You can have multiple agents working on micro-optimizing everything and have your cake and eat it too.


So you're saying people have misunderstood "premature optimisation is the root of all evil" as "optimisation is the root of all evil"?

I don't think you can blame this phrase if people are going to drop an entire word out of an eight-word sentence. The very first word, no less.


I too learned this the hard way, via a supposedly concurrent priority queue that did quadratic-time work while holding a lock over the entire thing. I was told that "premature optimization is the root of all evil."

Sorry, folks, but that's just an excuse to make dumb choices. Premature _micro_optimization is the root of all evil.

EDIT: It was great training for when I started working on browser performance, though!


And if I may add a corollary: Measurement doesn't need to be held off until the end of the project! Start doing it as soon as you can!

This is the sort of pontifical statement that old guys like me tend to make, which is strictly wrong but also contains a lot of wisdom.

Yes, software is bloated, full of useless abstractions and bad design. You kids (well, anyone programming post-1980, so myself included) should be ashamed. Also, let's not forget that those abstractions helped us solve problems, and our friends in Silicon Valley (ok, that no longer makes sense, but imagine if SillyValley still just made HW) covered our mistakes. But yeah, we write crap a lot of the time.

But as other folks have said, it doesn't mean "don't optimize."

I've always used my own version of the phrase, which is: "Don't be stupid." As in, don't do dumb, expensive things unless you need to for a prototype. Don't start with a design that is far from optimal and slow. After profiling, fix the slow things. I'm pretty sure that's what most folks do on some level.


> I have lived the absolute nightmares that evolve from the willful misunderstanding of this quote.

How do you know which code was written with this quote in mind?


He doesn't know, but that quote makes for a cool talking point. Software is slow or bloated because of budget, deadlines, and skill levels - not because of a quote.

It has less to do with a quote and more to do with CS education (and the market) rewarding minimal functionality over performance, security, fault-tolerance, etc.

The average university CS student in USA (and India I presume) is taught to "hack it" at any cost, and we see the results.


> I have lived the absolute nightmares that evolve from the willful misunderstanding of this quote.

Then the quote wasn’t the problem. The wilful misunderstanding was the problem.


Not the OP, but I suspect it means focus on what questions are being asked first, and even then, look for opportunities to simplify wherever you find them.

So many of us spend so much time getting enamoured with technical solutions to problems that no one cares about.


Every single Linux kernel currently operating within the borders of any of these states should turn itself off and refuse to boot until an update is installed after these bills are rolled back.

We should also update all FOSS license terms to explicitly exclude Meta or any affiliates from using any software licensed under them.


I probably don't have all the info on the various laws across the US and EU that are being pushed, but I'm confused why Linux distros don't just update their licensing and add a notice on the installation screen that it is illegal to run their OS in places where these laws exist?

Heck, Linus Torvalds should just add an amendment to the next release of the Linux Kernel that makes it illegal to use in any jurisdiction that requires age verification laws.

This would obviously cause such a massive disruption (especially in California) that the age laws would have to be rolled back immediately.

This seems like a no-brainer to me but I am admittedly ignorant on this situation. I'm sure there's a good reason why this isn't happening if anyone cares to explain.


That would be a violation of copyright law or the GPL licence - you aren't permitted to take GPL code and redistribute it with extra restrictions added on to it.

If it's not (fully) your code, you aren't free to set the licence conditions; Linus can't do that without getting approval from 100% (not 99% or so) of authors who contributed code.

What one can do is add an informative disclaimer saying "To the best of our knowledge, installing or running this thing in California is prohibited - we permit you to do whatever you want with it, but how you'll comply with that law is your business".


You can if you own the copyright to the content. I don't know the state of Linux, but this is a reason the FSF (and many other projects) requires people assign their copyright to them when they submit code.

It also helps when you take an offender to court. If I contribute to a project but don't assign copyright, then they cannot take offenders to court if my code was copied illegally. The burden is on me to do so.

Of course, all code released prior to the change still remains on the original license.


The FSF stopped requiring copyright assignment in 2021.

The Linux kernel is licensed GPLv2. The GPLv2 license forbids adding additional terms that further restrict the use of the software.

A "Linux distro" is not the Linux kernel. It's possible for some distros to add such license terms to their distribution media, but others like Debian and Debian-based ones adhere to the GPL so no go.


Because they want market share, and throwing a hissy fit over being asked to add an "I am over 18" checkbox is not good PR. If Debian starts refusing to work in California because it doesn't want to add a checkbox, it will simply be replaced by someone who adds that checkbox and doesn't throw the fit.

As the article says, it's not about just checking a box:

"Every OS provider must then: provide an interface at account setup collecting a birth date or age, and expose a real-time API that broadcasts the user's age bracket (under 13, 13 to 15, 16 to 17, 18+) to any application running on the system."


There is no requirement that the OS has to verify the person's ID. It literally just requires a dropdown menu to select your age bracket.

Fine, a drop-down menu, not a checkbox. They're throwing a hissy fit over a drop-down menu with 4 items.

You're missing the rest of it. It takes whatever you put in that dropdown menu and broadcasts it to the rest of the operating system including -- for instance -- your browser. The browser then uses that information to decide what to show you. The same would apply to any other app designed to receive it.

You can call what's happening in this thread a hissy fit, but how does that compare to $70 million in lobbying to get this added to operating systems? Isn't that a bit more of a fit? When you look at who is behind the bills, do you look at their history and wonder whose best interest they might have at heart?


How many other things in the operating system are like that already? Every file in your home directory, for instance?

I disagree slightly. It may not be good business, but it could be good PR, situationally. I expect a lot of 2nd-tier distros will refuse to implement it, and see a boost in their installs as a result.

Debian, Ubuntu, etc., they'll all fall right in line because the clear and immediate losses will outweigh any PR issue.


When they fall in line and add the age bracket drop-down menu, we'll keep using them because throwing a hissy fit over a distribution allowing you to select your age bracket is very obviously stupid.

Stupid take.

The issue is obviously not with adults needing to click a drop-down.

Some of the main issues with this legislation are:

1) Makes it much easier for predators of all kinds to identify and target children on their computers

2) Impossible to implement (i.e., servers don't have a person)

3) The infrastructure this bill introduces will be used by the state and corporations to destroy our last vestiges of privacy and anonymity


The age bracket dropdown will not be used to destroy your last vestige of privacy and anonymity.

It's a slippery slope, my friend. Take care now.

Would be funny indeed... And I'm also curious why nobody does that.

https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html

    6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License.

It would be in violation of the GPL, and such a license would not be an OSI-approved license.

https://opensource.org/osd

    5. No Discrimination Against Persons or Groups

    The license must not discriminate against any person or group of persons.

> should turn itself off

If this was somehow introduced without anyone noticing and deployed, imagine the damage it would cause.

If we're fantasizing here, I like to imagine the two major OS makers trying to comply with these laws, failing miserably, and letting FOSS OSes and kernels gain more recognition in the desktop market.


Honestly, like the Left-pad incident [1], getting things to go suddenly dark is extremely effective at getting people to drop everything else to fix an issue.

Ideally, getting these servers to auto turn off the day this goes into effect ("In compliance with this new law, Linux is now temporarily unusable. Please <call to action>.") would be glorious for getting the bill staved off, or killed.

It would hurt some productivity, but that is a risk these lawmakers taking donations are probably willing to make.

1 - https://en.wikipedia.org/wiki/Npm_left-pad_incident


It would make people move quickly to use a forked version of the kernel and would be an all around blunder by the Linux foundation

My comment was half in jest (I wasn't super serious about it.) In another sibling comment below I wrote how it's still possible to leverage this without actually implementing it.

Side note, this comment is evidently quite controversial, it went from +3 to +1. If anyone is angry at me I would like to assuage them that I am not, in fact, any owner or maintainer of anything in the linux distribution system.

"some"? It would hurt a lot of productivity lol. If all linux boxes turned themselves off suddenly, I think the internet would fall over pretty fast. I dont know how much of the internet runs on windows or apple (or others), but I cant imagine it's very much

> It would hurt a lot of productivity lol.

I know. That's exactly the point.

In such situations where one party (Meta) has enough money to lobby and is playing dirty, it's a massively asymmetric situation. In such cases, if you really want to make sure you're heard (which I'm not sure distributors want or care about tbh), you've got to play the game too.

Malicious compliance, if you will.

PS: For a "practical" variant, simply a warning might be sufficient - given how many hospitals/critical infra uses linux. For eg "There is a chance this server will fail to work on x date due to this y law. Not as glamorous/all-guns-blazing, but probably much more sensible and practical.

PPS: For an even "safer" variant, one could go: "Post x, please note that using Linux/this server is a violation of law y. Please turn off the server yourself manually. The consequences of failing to comply with these instructions and violating the law will be borne entirely by the (now informed) sysadmin/manglement."


Most hospitals I know of, at least in the UK, still use Windows; it's why WannaCry was such a big deal here.

It still blows my mind that anyone trusts npm after this whole incident.

> Every single Linux kernel currently operating within the borders of any of these states should turn itself off and refuse to boot

What exactly do you think Linux is? I would say that Linux would be forked in like 2 seconds, a bunch of different companies would start offering "attested Linux," and all you'd have to do was change your repos and update.

I would say that, but what would really happen is that we'd find out that Canonical, Red Hat, and a bunch of other distributions had been talking to the government for a year behind closed doors and they're already ready to roll out attested Linux. Debian would argue about it for six months, and then do the same thing. Hell, systemd will require age attestation as a dependency. Devuan and any other stubborn distribution would face 9000 federal lawsuits, while having domain names blocked, and the Chinese hardware necessary to run them seized at the ports with the receivers locked up on terrorism charges.

I have no idea where the confidence of the IT tech comes from. You (we) are something between a mechanic and a highly-skilled janitor.


Someone would just submit a patch overriding this

Microsoft would love that.

Obviously not a serious proposal, but I do like the alt mentioned below:

Update the terms to indicate that you can do what you want, but this OS is probably not compliant with states run by evil dipshits.


Might also be due in part to the latest iOS and iPhone 17 Pros being some of the shittiest, laggiest, lowest quality smart phones ever made.


It’s shocking to see this legislated.

As if companies are just out here wantonly destroying otherwise valuable goods that could have been easily sold at a profit instead.

I guarantee this problem is far more complex and troublesome than the bureaucrats would ever understand, much less believe, yet they have no problem piling on yet another needless regulatory burden.


They quite clearly are. Burberry was caught a while ago https://www.bbc.com/news/business-44885983, but it's well known that every major upmarket brand was doing it to avoid the loss of prestige of sending the items to outlets.


You can try to reason with the people who post comments like the one you're responding to, but the truth is they are just there waiting for anything a regulator does, to disparage it, defend corporations and capital, and change nothing about the status quo. The worst part is that they do it thinking they are so edgy for knowing exactly why just another piece of regulation will clearly not work. Funnily enough, the EU track record proves that, apart from some exceptions, these types of regulations work really well. USB-C. Data roaming across all of Europe. Laws on single-use plastics. Etc. But yeah, it's just another regulation! EU BAD!


It’s a fair criticism, but note the Draghi report:

“The regulatory burden on European companies is high and continues to grow, but the EU lacks a common methodology to assess it. The Commission has been working for years to reduce the ‘stock’ and ‘flow’ of regulation under the Better Regulation agenda. However, this effort has had limited impact so far. The stock of regulation remains large and new regulation in the EU is growing faster than in other comparable economies. While direct comparisons are obscured by different political and legal systems, around 3,500 pieces of legislation were enacted and around 2,000 resolutions were passed in the US at the federal level over the past three Congress mandates (2019-2024). During the same period, around 13,000 acts were passed by the EU. Despite this increasing flow of regulation, the EU lacks a quantitative framework to analyse the costs and benefits of new laws.”


> pieces of legislation were enacted and around 2,000 resolutions

I'm wondering if this includes regulatory agencies which in the US operate under the executive

I would guess it's included but the wording (act, resolution) is very "legislative" coded


That's a fair criticism, but a far cry from the blanket anti-regulation reaction that we get from some people here.


I agree with Draghi 100%; he’s a genius and I would love to see him take von der Leyen’s place. That said, the original comment has a very different intent.


> you can try to reason with the people who post comments like the one you're responding to, but the truth is they are just there waiting for anything a regulator does to desparage it, defend corporate and capital, and change nothing about the status quo. The worst part is that they do it thinking they are so edgy for knowing exactly why just another piece of regulation will clearly not work. Funnily enough, the EU track record proves that, apart from some exceptions, these type of regulations work really well. USB-C. Data Roaming across all of Europe. Laws on single use plastics. Etc. But yeah, it's just another regulation! EU BAD!

How about extending others some good faith?

These are political disagreements with decades (sometimes centuries) of history, and unless you're fifteen years old, there's a better explanation for the fact that others disagree with you than "I am the single smartest person in the universe, and all my political opinions are so irrefutably correct that anyone who disagrees must be doing so in bad faith and out of ignorance".

The vast majority of people want what's best for their societies, and have different views as to how best achieve that goal, that arise from diverse life experiences.


> The vast majority of people want what's best for their societies, and have different views as to how best achieve that goal, that arise from diverse life experiences.

I'd personally disagree with that assessment. I think the vast majority of people want what's best for them and the cohorts they're in. Which is quite different from wanting what's best for society as a whole.


> These are political disagreements with decades (sometimes centuries) of history, and unless you're fifteen years old, there's a better explanation for the fact that others disagree with you

The better explanation is that they have acquired their political tastes mindlessly and are now defending them in an equal manner. The presumption of good faith is wasted on them.

> The vast majority of people want what's best for their societies, and have different views as to how best achieve that goal, that arise from diverse life experiences.

That's incorrect. Just take a look at the housing situation in the US: what's best for society is to build, but a majority of the people (the current owners) are blocking that because it suits them.


> The presumption of good faith is wasted on them.

That's just assholery masquerading as enlightenment.

> That's incorrect. Just take a look at the housing situation in the US: what's best for society is to build, but a majority of the people (the current owners) are blocking that because it suits them.

Maybe if you call the majority stupid some more, that's famously convincing in a democracy. You'll definitely build sustainable coalitions for the policies you want this way.

Capital loves this kind of left wing politics. Off-putting and impotent.


Who did I call stupid (except for you, personally)? The current owners are not stupid; they're self-interested, hypocritical and parasitical.


And they love your style of politics for being profoundly unthreatening to them.


What politics? We're talking on an irrelevant site here.


The comedic irony of your personal attack and smug dismissal isn't lost on me.

Let's try to stay focused on the subject matter and leave personal jabs aside.


No people were named. There was no personal attack.


LOL paranoia much? No idea who you are.


I don't doubt that some luxury organizations destroy unsold inventory rather than allow it to diminish the status of their brand. My claim is that if they could have sold that inventory at a profit, they would have.

It's theirs to do with as they please. They paid for it to be made.

If you don't like how they run their business, don't buy the overpriced garbage they sell.

People seem to be so concerned about externalities like CO2 emissions, but it's difficult to believe this problem represents a scale even remotely meaningful in that area. It feels like the plastic straw bullshit that took over the US for a few years. A useless, symbolic gesture that causes far more harm than good.

As a side note, it's a weird feeling to jump to the defense of an industry I generally despise, but the regulation just seems so ludicrous.


>It's theirs to do with as they please. They paid for it to be made.

This is not how that works. You have to pay for things within a legal framework set up by the government. If the legal framework changes, then you have to deal with that.


Indeed. The government represents a legal framework for us all to operate in together. Sure.

If I pay for something to be made, that something belongs to me. It becomes private property and (at least in the US) I'm free to destroy a thing I own.

If you want to talk about options for protecting the environment, that seems great. There are ways to destroy textiles without fouling rivers or the air.

The OP article raises the spectre of "CO2 emissions" and "pollution" but doesn't provide any meaningful data (units or scale) related to these concerns.

My claim is that there is no way this activity represents any reasonable scale of impact relative to those separate concerns and that we already have lots of regulation related to keeping our water and air clear.

We can discuss ideas about how to do even better on those fronts, but this does not seem like a great way to have a large impact, if the environment is the actual concern.

How about all the laborers who were able to feed their families making these products that were destroyed? What happens to them when the company decides next year to be more conservative and make less stuff?

I'm not advocating for waste, I'm just pointing out that legislation like this often (almost always) comes along with unintended consequences that wind up causing more harm than good.


> It becomes private property and (at least in the US) I'm free to destroy a thing I own.

Only within the confines of the law. If I buy a skyscraper I can’t blow it up without permits. I can’t burn trash in my yard in the middle of the city. I can’t tear down a landmark in a historical district, even if I own it.


Right, but these are unlawful activities and I haven’t seen any claims that these companies are engaging in unlawful behavior.

Instead, the legislators are making currently lawful behavior unlawful.

That’s what I find upsetting.


And everything I described was at one point legal.


> Instead, the legislators are making currently lawful behavior unlawful.

That's how progress happens. Welcome to the real world.


Yes, these laws have unintended consequences. I think at the continental scale the EU operates every law or decision has that.

But the current incentives in the fashion market also have unintended consequences: companies producing a lot of garments only to destroy them to protect perceived intellectual property value.

And here's the thing: this brand image value is relative. So by forcing all companies to comply no one has to take the negative brand image hit that would be required to unilaterally decide to do this.


> My claim is that if they could have sold that inventory at a profit, they would have.

That's utterly incorrect. They don't just want profits - that would be easy to obtain by sending the merchandise to an outlet - they want high profits in a way that maintains high profits in the future too. Any discount "cheapens" the brand by giving customers the expectation of low(er) prices in the future.

> It's theirs to do with as they please.

Only within the bounds of the law.


Agreed. A good business will take both short and long term profit into consideration when making a decision. They will strive to maximize profit (within reason) in whatever they do.

I don’t presume to know anything about the fashion industry and generally find it uninteresting.

My point is that I assume the people running those businesses know what they’re doing. Many of them have been around for many decades.

I’m admittedly surprised to find so many people here with so much confidence in their own ability to effectively constrain an entire industry they obviously also know nothing about.


> My point is that I assume the people running those businesses know what they’re doing.

I agree, but their profit-seeking is myopic and there's no reason why it should be allowed.


>As if companies are just out here wantonly destroying otherwise valuable goods that could have been easily sold at a profit instead.

They are...

Many brands prefer to burn their clothes rather than send them to thrift shops or outlets, because of the damage to their brand.

The EU is now putting your brand image a notch down compared to 'not wasting shit'.


Companies should be free to do whatever they want, as long as they pay for all their negative externalities.

It is not OK for anyone to litter, also not companies.

One can speculate that this is an easy way to force the companies to pay for their externalities - given that production in third countries are much harder to touch for the EU.


Clothing items are so cheap to make it's hard to believe. I used to work in a distribution warehouse for a national baby and children's clothing chain. Containers would arrive from China and we'd enter items into the warehouse stock system. Cost basis for most items was under 10 cents.


> Companies should be free to do whatever they want, as long as they pay for all their negative externalities.

No they shouldn't. Sometimes it's not a matter of paying for the externalities. If you're doing harm at scale the only sane option is to stop doing that, period.

When we figured out that leaded gas was bad we didn't make companies pay for their negative externalities. We banned that shit and that was it.


> As if companies are just out here wantonly destroying otherwise valuable goods that could have been easily sold at a profit instead.

I remember watching a documentary in which they tracked a package of coffee returned to amazon (unopened). It traveled through half of Europe to end up in an incinerator in Slovakia, which is funny because amazon doesn't even operate there.

Big companies are doing a lot of weird shit because at their scale if it's even 1ct cheaper to burn 10 coffee pods vs reprocessing them back in their store it's going to make a huge difference in the long run.


Of course they're not. They're destroying goods that they can't sell at a profit because, for example, the cost of processing some unworn but returned goods outweighs the potential profit from those goods.

In TFA it's estimated that between 4% and 9% of clothing put on the EU market is destroyed before being worn. An admittedly high uncertainty, but even 4% of all clothing sold in the EU is still a heck of a lot of clothes.


Luxury brands do in fact intentionally destroy old stock to make sure their value doesn't drop due to excess supply. I suppose the next step is making everything extremely limited like hypercars?


However, hypercars are not purposely limited. It takes an enormous amount of time and labor to build them, unlike a handbag, where the limit is artificial in order to sell more.


If you think Piero Ferrari isn't above playing the same games as Bernard Arnault, you're not paying attention.


> However hypercars are not purposely limited

Are you serious? Pricing theory includes both supply and demand, and limiting supply makes the remaining items more valuable by dint of rarity. Companies absolutely limit supply on items to maximize profits. How is this even a question?


Are they harder than ordinary cars?


Singer used to do this, they'd give favorable trade-in deals for old sewing machines so they could be destroyed and kept off the second hand market.


I personally know that L’oreal will buy back and destroy products of theirs from outlets, just to keep the prices up. These items are often bought in bulk on grey markets by discount outlets. Not only does L’oreal destroy the products, they pay for them to do so. None of this is shocking IMO.


They're wantonly destroying and/or dumping shitty goods that they got for cheap by externalizing costs.


> I guarantee this problem is far more complex and troublesome than the bureaucrats would ever understand

if a manufacturer finds it too complex to not overproduce and not add all kinds of negative externalities then their business model is flawed or they’re not up to the task.

either way, it isn’t “the bureaucrats” fault they’re overproducing, and they absolutely are overproducing.


It couldn't have been easily sold because brands establish a price floor below which they won't go, in order to maintain their perceived premium.

It's been known for ages that they operate like this. Some more ethical ones cut off the labels from the garment before they sell it in bulk. Most will destroy the items altogether.

This legislation targets this vanity and I applaud it.


Shocking? Why such drama? Is this AI text?

I don't see anything shocking here. Corporations doing corporatey things, which means maximizing profits, and that can quite literally mean destroying unconsumed stock when it would cost them 2 cents more per tonne to ship it and sell it someplace cheaper. Ever heard the term economies of scale, for example? Those distort many things in money flows.

Those corporations don't give a fuck about mankind, environment, future, long term stuff etc. Any approach to similar topics which gives them benefit of the doubt is dangerously naive and misguided from the start. It's up to society to enforce rules if its healthy and strong enough. Some are better off, some worse.


Major fashion houses have been caught destroying clothes to prop up the value of the brand.


It's about preserving brand image. Destroying a product is favourable compared to selling it at a discount and making the brand you spent so much marketing appear "cheap".


They absolutely do. Source: a warehouse job where you occasionally just opened boxes of unsold merchandise and smashed them. Something something tax write-off. I never understood it. US-based personal experience from almost two decades ago, so take it with a grain of salt.


Luxury brands destroy their items to prevent their clothing from losing value.


Companies can and should participate in law drafting. If they have some not yet mentioned insight they should raise it or just take it to their grave.


Yeah, it is shocking. And that's why it needed to be legislated. Companies prove time and time again that they will take the easiest route to minimise losses and maximise profits, even if that means destroying the environment or wasting perfectly good merchandise to do so.

They're not destroying clothing because it's inherently unsellable, or hazardous, or damaged beyond repair. They destroy it because it's easier to dump excess stuff than it is to set up responsible channels to get rid of it.

Many "high fashion" shithouses intentionally destroy excess stock so that their precious branded status symbols can't get into the hands of the filthy proles, which would dilute their brand recognition.

These "regulatory burdens", as you call them, are the only thing holding back companies from further messing up the planet and I welcome them with open arms.


Not sure if sarcasm or cluelessness.


I was looking for a dead simple migration tool some years ago and didn’t want a random runtime (and version) dependency to maintain my database.

Found shmig and it’s been really fantastic.

https://github.com/mbucc/shmig


Having a background in fine art (and also knew Aral many years ago!), this prose resonates heavily with me.

Most of the OP article also resonated with me as I bounce back and forth between learning (consuming, thinking, pulling, integrating new information) to building (creating, planning, doing) every few weeks or months. I find that when I'm feeling distressed or unhappy, I've lingered in one mode or the other a little too long. Unlike the OP, I haven't found these modes to be disrupted by AI at all, in fact it feels like AI is supporting both in ways that I find exhilarating.

I'm not sure OP is missing anything because of AI per se; it might just be that they are ready to move their focus to broader or different problem domains that are separate from typing code into an IDE.

For me, AI has allowed me to probe into areas that I would have shied away from in the past. I feel like I'm being pulled upward into domains that were previously inaccessible.

I use Claude on a daily basis, but still find myself frequently hand-writing code as Claude just doesn't deliver the same results when creating out of whole cloth.

Claude does tend to make my coarse implementations tighter and more robust.

I admittedly did make the transition from software only to robotics ~6 years ago, so the breadth of my ignorance is still quite thrilling.


If you think it's bad now, the early days of the web were absolutely filled with scumbag grifters who made small fortunes hiring contractors and then refusing to pay.

Many of them disappeared in the y2k dot com bust, but then seem to have reappeared in SF after 2008.

In the late 1990's, my second ever Flash app development client stiffed me on a $10k invoice.

He finally figured out 6 months later that he didn't have the source material to make changes and paid the full invoice in order to get it.

So I took precautions with the next client. It was a small agency that was serving a much larger business.

We were on 30 days net payment terms and I submitted the invoice when the project was done.

They didn't pay and within a couple weeks of gentle reminders, they stopped responding.

I smiled.

Exactly 30 days from the due date, I got a panicked call shrieking about their largest client website being down and did I have anything to do with it?!

I asked them what the hell they were talking about, they don't own a website. They never paid for any websites. I happen to own a website and I would be happy to give them access to it if they want to submit a payment.

They started to threaten legal nonsense, and how they had a "no time bombs" clause in the contract.

I laughed because my contract had no such clause. If they signed such a contract with the client, that's not my problem.

I told them I wouldn't release the source files until the check cleared my bank, which could be weeks. A cashier's check arrived that morning and their source files were delivered.

By the end of it, the folks at the agency thanked me because that client wasn't planning to pay them and they hired me for other work (which, they had to prepay for).

Of course I don't know about the OP, but I'd bet the company was trying to stiff that contractor on their last check.


> because that client wasn't planning to pay them

Wait, you mean they used your little ruse as a means to be paid themselves??


Yes they did, and it worked!


This used to be true, but really is not anymore.


Also, I wasn't aiming at the official Youtube app, but at PipePipe etc. The great alternative Youtube clients Android has.


It’s incredibly sad to watch Google abandon the values that inspired so much trust and belief that there is a better way to build a company.

Long time Pixel user here who has always believed the story that Apple has the closed, but refined, higher quality experience and Google has the slightly freer, but coarser UX.

I was convinced to make the switch this year and the Apple iPhone 17 Pro + whatever iOS version is, by far the worst phone I’ve ever owned.

Photos are worse, low light is worse, macros are worse, the UI is laggy, buggy and crashes.

The keyboard and autosuggest is shockingly bad.

Incredibly popular apps on iOS (YT, X, etc) are just as bad and often worse.

iMessage is a psyop. The absolute worst messaging app in history with zero desktop access for non-Mac users?!

If you’re on Android, and especially pixel, please know that Apple has completely given up and no longer executes at the level you remember from 10-15 years ago.


The whole software world is shit now. The foundations were stable decades ago: the Windows kernel, WinAPI, .NET, WPF, the Linux kernel. But end user software is so terrible. Windows 11 with ads and unhelpful AI. macOS, which is a bit less terrible, but still too bloated. Linux with its eternal changes between X, Wayland, ALSA, PipeWire, PulseAudio, sysvinit, systemd, and endless choices. Both iOS and Android are terrible. iOS was perfect 10 years ago; it's an absolute clownfest now. I would blame AI vibe coders, but it started before. I don't know who to blame. Why can't we just build a solid, minimal, non-bloated OS that will last for decades without major rewrites? We've got such good foundations but such a terrible end product.

