Hacker News | janeway's comments

Yes, I searched for the same. No evidence this has anything to do with the European Union. More like a vibe-coded landing page with a user signup form.

Edit: I am certain this is one or two people vibe coding who will then pitch to VCs once the waitlist has 1000 people.

Listing major company logos in their banner: “The organizations listed here use similar technology (Nextcloud) as part of their operations. Their inclusion is for illustrative purposes only.”



“We have no cure. I don’t want to know.”

If astronomers announced that a large asteroid might strike Earth in twenty years, and that we currently had no way to deflect it, nobody would respond by saying, “Come back when you already have the rocket.” We would immediately build better telescopes to track it precisely, refine its trajectory models, and begin developing propulsion systems capable of interception. You do not wait for the cure before improving the measurement. You improve the measurement so that a cure becomes possible, targeted, and effective.

Medicine is no different. Refusing to improve early, probabilistic diagnosis because today’s treatments are modest confuses sequence with outcome. Breakthroughs do not emerge from vague labels and mixed populations. They emerge from precise, quantitative stratification that allows real effects to be seen. The danger is not that we measure too early. It is that we continue making irreversible clinical and research decisions using imprecise, binary classifications while biological insight and therapeutic tools are advancing rapidly. Building the probabilistic layer now is not premature. It is how we make future intervention feasible.


This analogy has a rather fatal flaw, which is that we already know people who've gotten Alzheimer's, and we also know for a fact people will continue to fall victim to it, at a pretty predictable rate. i.e. the detection has already happened! Anyone who was waiting for a potential victim to appear before researching the cure already has all the reasons they need to research it. Detecting whom exactly the next victim is going to be isn't really going to change anything as far as researching a treatment or cure goes. (Unless the person is super important or popular or rich, I guess?)

This is absolutely nothing like the asteroid example, where knowing that anybody is going to fall victim to it would itself be news of astronomical proportions. Previously there was a high chance the event wouldn't happen, and now it seems likely it will, so that entirely changes the calculus of your priorities.

This just completely destroys the analogy. (There are other reasons it doesn't fit too, but one is enough.)


The reason the test (and actually knowing who is likely to develop the disease) is useful is that we don't know enough about the early pre-symptomatic stages of Alzheimer's. A lot of research has focused on purging the plaques which form in the late stages of the disease, and has failed because these seem to be symptomatic rather than causative. The false positives are also very interesting from a research point of view, because if someone is testing positive for the disease but it's not progressing, this may give us a clue about how to control it.

The other slightly sad fact is that it is also quite likely that any curative treatment will need to be started before you start to show symptoms, because the brain has already lost a lot of its resilience by then.


[testing positive for the disease but it's not progressing] -- Yes, exactly this. There are people with two copies of the bad APOE4 gene. 95% of them develop early-onset Alzheimer's in their 50s. The medical community is now very intensely studying the remaining 5% to find out what's causing them to NOT get sick.


Can you provide the source for the 95% figure?


Sloppy wording. -- Fortea (2024) -- Over 95% of people with two copies have amyloid beta in their cerebrospinal fluid. (not full-blown Alzheimer's symptoms, just early detection)

https://www.theguardian.com/society/article/2024/may/06/scie...


> Detecting whom exactly the next victim is going to be isn't really going to change anything as far as researching a treatment or cure goes.

Your reasoning relies heavily on this statement, which is only true if occurrence is entirely random, and in most cases it is not. A condition can easily mask its own cause, leaving you with confounders that you have no way of controlling. If you can build multiple strata with high risk ratios, you can find baseline similarities and differences in those groups. Early detection is highly important for identifying these confounders in the first place and then controlling for them; and, as GP mentions, it allows for more targeted research into treatment. Without this we could easily spend all the research effort on the effect (symptom) of a condition without even approaching treatment of the cause, i.e. prevention.

A very similar thing has happened with the infamous atherosclerotic plaques. AFAIK (correct me if you are aware of any evidence) there is currently no mechanistic model of how these atherosclerotic plaques form. Yet we spend so much effort on lowering the symptomatic side, increased cholesterol/LDL (which has well-known benefits), based entirely on correlational studies, even though there are known metabolic pathways for LDL increase and LDL is not even close to being the best predictor of cardiovascular conditions. LDL just happens to be easy to measure in a blood test and easy to control with oral medication.


And even if occurrence were random, there might be effects that can only be measured early on. By identifying patients before the onset of serious symptoms we can get a much more comprehensive medical history than by only looking once the symptoms are bad enough to make Alzheimer's obvious, or by monitoring large strata of the population in hopes of including enough future Alzheimer's patients in the sample.


Not to sound like an LLM, but what else can I say—you're absolutely right!


Accurate detection in individuals is still important for testing any potential cure. Otherwise you can only do normal population studies over a very long time and pray that you didn't miss any confounding variables. With this level of accuracy in diagnosis, you can do targeted testing.


While that is true, it doesn't change the sentiment behind “We have no cure. I don’t want to know.” if knowing the diagnosis doesn't help you personally. Sure you might have a sense of responsibility for mankind but you still know you can't do anything to save yourself.

With that said, lifestyle changes can slow down the onset of Alzheimer's, so knowing the diagnosis isn't totally useless.


A lot of people have enough of a sense of responsibility to donate blood, or donate their organs.

I've long had the suspicion that much of what is called Alzheimer's or dementia is some form of prion disease. This study doesn't show that, exactly, but it shows that abnormal proteins may be directly correlated.

So - and I'm not saying this is the case - but suppose that the abnormal proteins identified in this study could be transmitted by blood transfusions or organ transplants. Wouldn't that itself be enough for your diagnosis to help you personally not transmit those proteins to someone else?

If your attitude is that no one else in the world matters once you get a bad diagnosis, then nothing really mattered to you before. Other people are working day and night trying to cure you, so there's no cause for that level of nihilism. You may as well try to help from the vantage point you have.


> We have no cure. I don’t want to know.

This is an incredibly short-sighted, fragile-ego-protecting, selfish instinct.

Making plans while you are cognizant is valuable, and the sooner you know, the longer and better plans you can make. Making plans with friends and family should be done sooner than later with these kinds of things.

It absolutely helps personally to know, but people avoid emotional pain like the plague. So they delay and delay, and then the emotional pain is amplified anyway when things come to a head. It really is better to rip that band-aid off sooner... I think.


Maybe it is, and I'm not saying that's how I think. I would prefer to know the diagnosis. But that's not necessarily how everyone, or even most people, would act. So what if this is fragile ego and selfishness? Are people not allowed to be weak, or selfish?


In a case like this, it is possible and even substantially likely that a "Cure" represents a treatment you can apply after an early, routine blood test but before clinically significant symptoms arise, to prevent damage from becoming significant enough to represent an illness.

Late-stage Alzheimer's, if not every stage, is very likely going to involve microscopic-scale physical damage to brain tissue that is functionally irreversible.

Blowing up an asteroid after you can see it in the sky with your naked eye will not save you.

-------------

It is also possible that what we call "Alzheimers" is actually biochemically five different disorders with distinct etiologies that have the same endpoint, and that it turns out we can cure two of them. Differentiating conditions for a biomedical catch-all category would be essential; "How accurate you can get the tests" is inseparable from this process of definition.


It may not change much as far as researching a treatment or cure goes, but it may help with other things: being better prepared for the future, arranging a health care worker, getting family members to help out or look out for you, and so on, which you might not be able to do at a later stage of Alzheimer's.

While there aren't any cures yet, certain treatments and lifestyle strategies may slow its progression and preserve quality of life for as long as possible. (And the sooner you start with that, the better.)


More information is not always better for the patient. If you could detect the disease 5 years before symptoms began, there are certain psychological harms that come with that knowledge. These must be balanced against the things you mention about "slowing" the disease (unclear if any treatments do much for a given individual) and planning your future. You talk about quality of life, but quality of life declines the MOMENT you learn that you have a progressive, incurable, disease that will slowly rob you of your mind. It's not clear at all that knowing about your disease earlier is actually better for anyone.


I understand the concern about anxiety over an impending condition, but medical providers must not paternalistically decide to withhold a diagnosis from patients, at least not in all cases.

If I got an early diagnosis, it would motivate me to get my affairs in order to lessen the burden on my family and check off some bucket list items before it's too late. Don't rob me of that opportunity.

Before ordering the test, ask patients "If you were going to get Alzheimer's, would you want to know?"


>Detecting whom exactly the next victim is going to be isn't really going to change anything

What if you found for example all the diagnoses happened shortly after an HSV-1 infection? The more information as to what's going on the better, for research at any rate.


It's less like discovering the asteroid exists, and more like refining its trajectory enough to know where and when to intercept it.


Off topic: Would insurance companies carry patients after they test positive for Alzheimer's?


In the US or in countries with working healthcare?


If you know they’re going to have it twenty years early, then you can try out preventative treatments. You can look at causes. You can get them to put power of attorney in place and prepare for their future.

Why are you so furious about the idea of people knowing?


No one is furious, but it is well established in epidemiology that more knowledge is not necessarily better for the patient. There are psychological harms that occur from the moment you learn that you have a disease that will slowly rob you of your mind. In general, you want fewer harms in your life.


You don't see any harm in a disease slowly robbing your mind, while you are not warned, and so you waste the time you have left?


Not really. Alzheimer's is about 60% of dementia, and is frequently misdiagnosed without a full workup. As primary care shifts to more poorly trained providers and doc-in-the-box delivery models, you'll get fewer workups and more misdiagnoses.

A more objective blood test will make for more accurate diagnoses and better treatment.


> “We have no cure. I don’t want to know.”

> If astronomers announced that a large asteroid might strike Earth in twenty years, and that we currently had no way to deflect it, nobody would respond by saying, “Come back when you already have the rocket.”

I don’t think the analogy fits, for a couple reasons.

1. People not wanting to know whether they have Alzheimer’s stems from the fear of a fate worse than death — living with Alzheimer’s.

2. People not wanting to know whether they have Alzheimer’s is not the same as not wanting a way to detect it. As you said, being able to measure it may help lead to a cure/treatment. I doubt people are against improving detection — they may just not want the detection to be applied personally.


Cure is the wrong word. Alzheimer’s is best described as the failure of a system in which "debris" accumulates faster than it can be "cleared". There are many moving parts, and the cause of the system failure is unique to each person.

I wrote up my current systems understanding here: https://metamagic.substack.com/p/the-alzheimers-equation. It makes clear why treatments that target only one variable are mathematically doomed to fail for everyone, and why there will never be a single "cure". It explains, without needing to read 10,000 papers, why we keep getting research saying that treatment X helps in some but not all cases, or that symptom Y is associated in some but not all, etc.


This is some personal opinion that I would bet the vast majority of Alzheimer's researchers would not actually agree with. The current consensus is that Alzheimer's is a particular disease, or a cluster of similar diseases.

I'm not saying you're wrong, just that the level of confidence in your assertions is not warranted.


After spending years tracking through the genetics, conditions, lab work, and research papers, and seeing individuals years into the condition, this model is the best I have and explains everything I currently know: why the cluster of conditions results in the same outcome, why some treatments help some folks but not others.

But that is sort of the point of science: you take all the evidence you have, create a hypothesis, and iterate as you get more evidence. If I find evidence that suggests something else then I will be happy to tweak or abandon this. My level of confidence comes from the existing evidence and the lack of evidence otherwise.


You forming a personal opinion after years of interest in the subject is fine. You asserting that opinion as a fact is the problem.

It is a tale as old as time. See the story behind the term "ultracrepidarian": https://en.wiktionary.org/wiki/ultracrepidarian#English


Versus https://en.wikipedia.org/wiki/Argument_from_authority

See also: https://www.science.org/content/article/potential-fabricatio...

Amateurs asserting their opinions as facts isn't great, but epistemologically it's no worse (and systemically, likely less harmful) than when the experts do it.


Experts, when given the chance, have a tendency to speak with nuance and describe the degree of confidence they have in different statements.

Compare this with an amateur writing with certainty about a subject that subject matter experts continue to debate after decades of work.

I know which one of the two I would rather bother listening to.


You just moved the goalposts.

Saying that experts are less likely to do X doesn't say anything about the relative harm of their doing so. If some rando on the street is shouting their opinion about what causes Alzheimer's and asserting it's God's Own Truth, it's going to cause less overall harm than a carefully worded (but equally wrong) statement from an expert. (And the fact that we tend to hold experts in higher regard is the reason we should be more concerned about them stating their opinions as facts than about amateurs doing the same.)


I want to know so that I can make plans. Including end of life plans, in all senses.


Exactly - there are things that I would change now to make sure I make things easier for myself and - more importantly - easier for the people around me.


Those plans should be in place regardless of the results of a blood test.


I think there are many people (myself included) whose plans would change dramatically upon discovery of Alzheimer's, dementia, or some other degenerative disease. I might consider moving to somewhere with more liberal assisted suicide laws for example.


No, they shouldn’t. Makes no sense to plan for living with a mental disability if you’re not close to needing it.

I am absolutely not going to plan on a care facility right now. That sounds absolutely bogus.


Notarizing your wishes against certain medical procedures, in case a sudden accident ruins your ability to dissent, prevents doctors from being forced to keep your body alive as long as possible.


That doesn't apply to Alzheimer's disease directly though. If you don't want to live when your conscious life is limited to short flashes of awareness among a deeply terrifying melange of visions of the past and hallucinations, DNR laws don't in any way force or even allow doctors to euthanize you. You can persist in this state for many years without ever triggering a DNR check.


That is sadly true, but at least you can prevent them from feeding you through a tube when you've forgotten how to swallow.


My genetics are such I'm more likely to drop dead of a heart attack too young.

If I were likely to develop Alzheimer's, I'd make more, and more expensive, accommodations for power of attorney and trusts to shield assets while I was competent to do so.


I was more referring to an advance directive / living will sort of thing


That is one very, very tiny aspect of EOL planning.


Yes, like walking out into the woods before it gets too bad.


Like what? You should already have a will, life insurance, etc. even without the disease. All you're doing by knowing earlier is causing psychological harms to yourself and the people you tell, adding more years of anxiety, grief, and sadness for no gain. Think about the bigger picture.


Downsizing your house? Picking your long term care location? Changing your asset balance? Recording more photos, audio, and video?

Knowing an early, painful fate allows you to approach it with dignity.


That is more or less the plot of the movie "Don't Look Up".

https://www.imdb.com/title/tt11286314/


These are 50-year-olds, not elderly retirees. What if knowing caused your employer to deny you a promotion? I'm in the military. This sort of diagnosis in one's file could have real impact on future prospects. People already fear ADHD tests for the same reason. I know a guy who is leaving the military after 20+ years flying transports. He is applying to airlines. If you were an airline, would you hire an experienced pilot with a positive Alzheimer's diagnosis in their medical data?


An argument for a robust social safety net for when a diagnosis threatens your career in a safety-critical field.


While not a cure, there are many known modifiable risk factors for Alzheimer’s, so to some extent we know enough to deflect / mitigate dementia: https://www.thelancet.com/journals/lancet/article/PIIS0140-6...


>Refusing to improve early, probabilistic diagnosis because today’s treatments are modest confuses sequence with outcome.

While you're right from the perspective of humanity taking the steps of gathering data then tackling the disease, most developed countries have single payer healthcare systems that require some level of cost-benefit analysis to approve covering new diagnostic systems.

Alzheimer's disease progression doesn't seem to have any notable preventative indications other than 'eat well, exercise and stay mentally active', all of which are standard recommendations.

Recall that this isn't an issue of deciding between funding and non-funding. It's an issue of deciding between funding Alzheimer's diagnostics, new GLP-1 agonists, new screening options for highly preventable cancers, etc. Building out a dataset is nice, but unless that's surplus money redirected from other programs it's going to come at a real flesh-and-blood cost.


Alzheimer's isn't new. You should compare it with a situation like:

Imagine you're born and you eventually learn that there's an asteroid on a collision course with Earth, from way before you were born. It's going to take many years to get here, you may die before it hits, and so far no scientists have been able to come up with a way to deflect it. Do you care?

Adding newness to the situation makes it wildly different.


Okay but even that analogy is limited. Incurable progressive diseases almost invariably have lifestyle factors, supplements, or medications that can at least slow the rate of progression for many people. Those are also often more effective when started during early stage detection. There is literally nothing the average person can do about the asteroid.


Absolutely, the belief in scientific circles is that the way forward to develop cures (or at least treatments that slow down the progression) is to treat it early. When you get to the point where you start showing clear symptoms, your brain is already mush. If you have a potential treatment that attacks the root cause, you would have to catch the very early, pre-clinical stages of the disease, but without good diagnostics there is no way to do that (short of giving the treatment to a wide swath of the population, like a vaccine... but that gets expensive very quickly, and side effects become a bigger worry).


> We would immediately build better telescopes to track it precisely, refine its trajectory models, and begin developing propulsion systems capable of interception

That's not what would happen. We wouldn't mobilize. We'd fragment. Within days, the prediction would be declared partisan. One bloc would call it settled science; another would call it statistical hysteria. Billionaires would quietly commission private shelters while publicly funding studies questioning whether the asteroid even qualified as "large." News panels would debate whether the projected impact zone was being unfairly politicized. Conspiracy channels would insist the asteroid was fabricated to justify global governance. Others would insist the real asteroid was being hidden. Amateur analysts would flood the internet with homemade trajectory charts proving the professionals wrong. Death threats would arrive in astronomers' inboxes faster than research grants.


The film "Don't Look Up" is very similar to what you describe.


I get the logic, but I think the emotional side is a lot harder than the asteroid analogy makes it sound.


Maybe if you can keep the results a secret from health insurance companies you’d have a point. However, not everyone has coverage under a large organization’s umbrella, and these people might be denied coverage.


“We have no cure. I don’t want to know.” isn’t the same as “We have no cure. We as a society don’t want to know.”

People can be fine with being tested so that epidemiologists can work on growing our knowledge and, at the same time, not want to know their own diagnosis.


I agree. I feel that many medical doctors do not share this mindset.


>”We have no cure. I don’t want to know.”

Is this a response to another comment or did I miss the quote in the article? Otherwise it’s just a straw man.


When can we get it?

I do want to know.

If it is positive, that is still helping you accurately deal with whatever is happening to you.


Your gut bacteria is the prevention and the cure.


That's a hypothesis. If we knew that for certain, it would be really big news; and we could investigate to learn why, and then we could isolate the causative factors and perhaps find other ways of deploying them… If you know something the rest of us don't, prove it: show us the evidence, show us the specific hypotheses, describe a repeatable experiment, show us the results if such an experiment has been performed.

If you don't know something the rest of us don't, don't be so arrogant about your pet theories. Such arrogance costs lives.


You do realize they already are investigating, right? Even looking into MS and Parkinson's.

ALZHEIMER’S DISEASE (AD) and AD DEMENTIA

Umbrella review (systematic reviews of SRs): https://www.nature.com/articles/s44400-025-00048-6

Systematic review + meta-analysis (basal microbiota; AD): https://alz-journals.onlinelibrary.wiley.com/doi/abs/10.1002... https://pmc.ncbi.nlm.nih.gov/articles/PMC11672027/

Replicated case-control + functional metagenomics (AD dementia): https://alz-journals.onlinelibrary.wiley.com/doi/full/10.100...

Large-cohort metagenomics (stage-specific / early pathology signals): https://pubmed.ncbi.nlm.nih.gov/40164697

Mendelian randomization (AD): https://pubmed.ncbi.nlm.nih.gov/38788075/ https://www.sciencedirect.com/science/article/pii/S227458072... https://pubmed.ncbi.nlm.nih.gov/40665707/ https://journals.sagepub.com/doi/10.1177/25424823261422629 https://www.medrxiv.org/content/10.1101/2025.08.20.25333769v...

Narrative / mechanisms (useful synthesis, not primary causal proof): https://www.sciencedirect.com/science/article/abs/pii/S15681...

MILD COGNITIVE IMPAIRMENT (MCI) / DEMENTIA (cognition-focused evidence)

Systematic review (MCI or Alzheimer’s dementia; PRISMA, PROSPERO): https://www.mdpi.com/2035-8377/17/10/155 https://pubmed.ncbi.nlm.nih.gov/41149776/

Scoping review (MCI and AD gut microbiomes + interventions, through Feb 2023): https://pmc.ncbi.nlm.nih.gov/articles/PMC12825029/

Systematic review of microbiota-targeted interventions for cognition/dementia risk: https://www.sciencedirect.com/science/article/pii/S027153172...

RCT-focused systematic review/meta-analysis of probiotics for cognitive impairment risk/AD/MCI: https://pmc.ncbi.nlm.nih.gov/articles/PMC12645680/

FMT in dementia/MCI context (review of effects across neuro cohorts): https://www.sciencedirect.com/science/article/pii/S266635462...

MULTIPLE SCLEROSIS (MS)

Microbiome signatures via global data integration / ML: https://pmc.ncbi.nlm.nih.gov/articles/PMC12383397/

Systematic review/meta-analysis of probiotics in MS (preclinical + clinical): https://journals.plos.org/plosone/article?id=10.1371/journal...

Systematic review + meta-analysis (antimicrobial exposure and MS risk; microbiome-disruption relevant): https://www.sciencedirect.com/science/article/abs/pii/S22110...

Mendelian randomization (gut microbiota causally linked to MS): https://pubmed.ncbi.nlm.nih.gov/39065244/ https://www.mdpi.com/2076-2607/12/7/1476

Broad MS gut dysbiosis and therapeutic modulation review: https://pmc.ncbi.nlm.nih.gov/articles/PMC12668904/

MS gut-brain-barrier and intestinal barrier review: https://www.frontiersin.org/journals/immunology/articles/10....

Example MS cohort biomarker/signature work: https://www.nature.com/articles/s41598-024-64369-x https://www.nature.com/articles/s41598-025-19998-1

PARKINSON’S DISEASE (PD)

Multi-cohort metagenomic meta-analysis (Nat Commun, 2025): https://www.nature.com/articles/s41467-025-56829-3 https://pubmed.ncbi.nlm.nih.gov/40335465/

Large metagenomics cohort (Nat Commun, 2022): https://www.nature.com/articles/s41467-022-34667-x

Integrated multi-cohort gut metagenome (Movement Disorders, 2023): https://pubmed.ncbi.nlm.nih.gov/36691982/ https://movementdisorders.onlinelibrary.wiley.com/doi/10.100...

Metagenomic analysis (Movement Disorders, 2024): https://pubmed.ncbi.nlm.nih.gov/39192744/

PD causal-inference and MR discussion (review-type synthesis): https://pmc.ncbi.nlm.nih.gov/articles/PMC12512240/

MR study example (PD gut microbiota): https://journals.lww.com/md-journal/fulltext/2025/10310/caus...


> You do realize they already are investigating, right?

Yes. I also realise they have not reached the conclusion of this investigation. (Imagine this attitude towards a police investigation: "They're investigating Roger Rabbit, therefore he must have dunnit!")


Wow excellent thank you


A sign says: "Dogs must be carried on the escalator."

At first glance it seems clear. On a second read, it becomes obvious that what matters is not the dogs, but whether they are being carried.

Grandma calls out: "The chicken is ready to eat."

Many system outputs have the same problem. They look definitive, but they silently hide whether the required conditions were ever met.

When systems consume outputs from black-box algorithms, the usual options are to trust the conclusion or ignore it entirely.

In clinical genomics, the latter is traditional. For example, the British Society for Genetic Medicine advises clinicians not to act on results from external genomic services https://bsgm.org.uk/media/12844/direct-to-consumer-genomic-t...

This post describes a third approach, grounded in computer science. Before any interpretation, systems should record whether verifiable evidence is actually available.

The proposed standard adds a small but strict step. Each rule first reports whether it could be checked at all: yes, no, or not evaluable. Then the evidence is used in reverse, not to confirm the result, but to try to rule it out. If removing or negating that evidence would change the outcome, it counts as real evidence. If not, it does not.

Crucially, this forces a simple question: could the same result have appeared even if the evidence were absent or different? Only when the answer is no does the result actually count as evidence.
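
To make that concrete, here is a minimal Python sketch of the two steps as I read them; every name and value in it is an illustrative assumption, not part of any published standard:

    # Step 1: report whether a rule could be checked at all.
    def evaluability(observations, required_inputs):
        present = [k for k in required_inputs if k in observations]
        if len(present) == len(required_inputs):
            return "yes"
        return "no" if present else "not evaluable"

    # Step 2: use the evidence in reverse. It counts only if removing
    # it would change the outcome, i.e. it could have ruled the result out.
    def counts_as_evidence(classify, observations, key):
        baseline = classify(observations)
        without = {k: v for k, v in observations.items() if k != key}
        return classify(without) != baseline

    # Toy classifier: calls a variant "pathogenic" only when a
    # (hypothetical) functional assay is present and abnormal.
    def classify(obs):
        return "pathogenic" if obs.get("assay") == "abnormal" else "uncertain"

    obs = {"assay": "abnormal", "population_frequency": 0.0001}
    print(counts_as_evidence(classify, obs, "assay"))                 # True
    print(counts_as_evidence(classify, obs, "population_frequency"))  # False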

The idea comes from genomics, where hospitals, companies, and research groups need to share results without exposing proprietary methods, but it applies anywhere systems reason over incomplete or black-box data.


Cool!

I have worked 100% in 3 comparable systems over the past 10 years. Can you access it with SSH?

I find it super fluid to work on the HPC directly to develop methods for huge datasets, using vim to code and tmux for sessions. I focus on constantly printing detailed log files with lots of debug output, plus an automated monitoring script that prints those logs in realtime; a mixture of .out, .err, and log.txt.
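
For what it's worth, a minimal Python sketch of the kind of monitoring script I mean; the file patterns and polling interval are just examples:

    import glob, os, time

    # Follow every matching log file and interleave new lines as they
    # appear, prefixed with the file name (roughly `tail -f` over many files).
    def follow(paths, poll=1.0):
        offsets = {p: 0 for p in paths}
        while True:
            for path in paths:
                try:
                    with open(path, "r", errors="replace") as fh:
                        fh.seek(offsets[path])
                        chunk = fh.read()
                        offsets[path] = fh.tell()
                except OSError:
                    continue  # not created yet, or unreadable
                for line in chunk.splitlines():
                    print(f"[{os.path.basename(path)}] {line}")
            time.sleep(poll)

    if __name__ == "__main__":
        follow(glob.glob("*.out") + glob.glob("*.err") + glob.glob("log.txt"))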


You can access via SSH, either with a password or with keys.

Our reference cluster has long queuing times during busy hours and requires 2FA for access, so we had extra incentives to have a self-contained solution to run on our development machines.


This topic is fascinating to me. The Toy Story film workflow is a perfect illustration of intentional compensation: artists pushed greens in the digital master because 35 mm film would darken and desaturate them. The aim was never neon greens on screen, it was colour calibration for a later step. Only later, when digital masters were reused without the film stage, did those compensating choices start to look like creative ones.

I run into this same failure mode often. We introduce purposeful scaffolding in the workflow that isn’t meant to stand alone, but exists solely to ensure the final output behaves as intended. Months later, someone is pitching how we should “lean into the bold saturated greens,” not realising the topic only exists because we specifically wanted neutral greens in the final output. The scaffold becomes the building.

In our work this kind of nuance isn’t optional, it is the project. If we lose track of which decisions are compensations and which are targets, outcomes drift badly and quietly, and everything built after is optimised for the wrong goal.

I’d genuinely value advice on preventing this. Is there a good name or framework for this pattern? Something concise that distinguishes a process artefact from product intent, and helps teams course-correct early without sounding like a semantics debate?


I worked at DreamWorks Animation on the pipeline, lighting, and animation tools for almost ten years. All of this information is captured in our pipeline process tools, although I am sure there are edits and modifications that escape documentation. We were able to pull complete shows out of deep storage, render scenes using the toolchain that produced them, and produce the same output. If the renders weren't reproducible, madness would ensue.

Even with complete attention to detail, the final renders would be color graded using Flame, or Inferno, or some other tool and all of those edits would also be stored and reproducible in the pipeline.

Pixar must have a very similar system and maybe a Pixar engineer can comment. My somewhat educated assumption is that these DVD releases were created outside of the Pixar toolchain by grabbing some version of a render that was never intended as a direct to digital release. This may have happened as a result of ignorance, indifference, a lack of a proper budget or some other extenuating circumstance. It isn't likely John Lasseter or some other Pixar creative really wanted the final output to look like this.


Amazing. Your final point seems to make the most sense - not the original team itself having any problems.


There’s an analog analogue: mixing and mastering audio recordings for the devices of the era.

I first heard about this when reading an article or book about Jimi Hendrix making choices based on what the output sounded like on AM radio. Contrast that with the contemporary recordings of The Beatles, in which George Martin was oriented toward what sounded best in the studio and home hi-fi (which was pretty amazing if you could afford decent German and Japanese components).

Even today, after digital transfers and remasters and high-end speakers and headphones, Hendrix’s late-60s studio recordings don’t hold a candle to anything the Beatles did from Revolver on.


> There’s an analog analogue: mixing and mastering audio recordings for the devices of the era.

In the modern day, this has one extremely noticeable effect: audio releases used to assume that you were going to play your music on a big, expensive stereo system, and they tried to create the illusion of the different members of the band standing in different places.

But today you listen to music on headphones, and it's very weird to have, for example, the bassline playing in one ear while the rest of the music plays in your other ear.


That's with a naive stereo split. Many would still put the bass on one side, with binaural processing so it's still heard in the other ear, but quieter and with a tiny delay.
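
Roughly like this toy numpy sketch, I'd assume; real crossfeed also low-passes the bleed, and the gain and delay values here are invented:

    import numpy as np

    # Bleed each channel into the other, quieter and slightly delayed,
    # to mimic how two room speakers reach both ears.
    def crossfeed(left, right, sr=44100, atten_db=-8.0, delay_ms=0.3):
        d = int(sr * delay_ms / 1000)   # delay in samples
        g = 10 ** (atten_db / 20)       # linear gain from dB
        pad = np.zeros(d)
        bleed_l = np.concatenate([pad, right[:len(right) - d]]) * g
        bleed_r = np.concatenate([pad, left[:len(left) - d]]) * g
        return left + bleed_l, right + bleed_r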


Hard panning isn't naive. It's just a choice that presumes an audio playback environment.

If you're listening in a room with two speakers, having widely panned sounds and limited use of reverb sounds great. The room will mix the two speakers somewhat together and add a sense of space. The result sounds like a couple of instruments playing in a room, which it sort of is.

But if you're listening with a tiny speaker directly next to each ear canal, then all of that mixing and creating a sense of space must be baked into the two audio channels themselves. You have to be more judicious with panning to avoid creating an effect that couldn't possibly be heard in a real space and add some more reverb to create a spatial environment.


Maybe I'm misunderstanding him but I think he says the music track can have hard panning, and it's the headphone playback system that should do some compensatory processing so that it sounds as if it was played on two speakers in a room.

Don't ask me how it works but I know gaming headsets try to emulate a surround setup.


Yes, these sorts of compensation features have become common on higher end headphones.

One example:

> The crossfeed feature is great for classic tracks with hard-panned mixes. It takes instruments concentrated on one channel and balances them out, creating a much more natural listening experience — like hearing the track on a full stereo system.

https://us.sennheiser-hearing.com/products/hdb-630


No, they just didn't put much time into stereo because it was new and most listeners didn't have that format. So they'd hard pan things for the novelty effect. This paradigm was over by the early 70s and they gave stereo mixes a more intentional treatment.


A voice on the radio sounded better with vibrato, so that’s what they did before even recordings were made. Same when violins played.

These versions were for radio only and thought of as cheap when done in person.

Later this was recorded, and these being the only versions recorded, later generations thought that this was how the masters of the time did things, when really they would have been booed off stage (so to speak).

This is a bit of family history, passed down through multiple generations of violin players.


Interesting!


And now we have the Loudness War, where songs are so highly compressed that there is no dynamic range. Because of this, I have to reduce the volume so it isn't painful to listen to. And this turns what should have been a live recording with interesting sound into background noise. Example:

https://www.youtube.com/watch?v=3Gmex_4hreQ

If you want a recent-ish album to listen to that has good sound, try Daft Punk's Random Access Memories (which won the Best Engineered Album Grammy award in 2014). Or anything engineered by Alan Parsons (he's in this list many times)

https://en.wikipedia.org/wiki/Grammy_Award_for_Best_Engineer...


> now

Is this still a problem? Your example video is from nearly twenty years ago, RAM is over a decade old. I think the advent of streaming (and perhaps lessons learned) have made this less of a problem. I can't remember hearing any recent examples (but I also don't listen to a lot of music that might be victim to the practice); the Wikipedia article lacks any examples from the last decade https://en.wikipedia.org/wiki/Loudness_war

Thankfully there have been some remasters that have undone the damage. Three Cheers for Sweet Revenge and Absolution come to mind.


Certified Audio Engineer here. The Loudness Wars more or less ended over the last decade or so due to music streaming services using loudness normalization (they effectively measure what each recording's true average volume is and adjust them all up or down on an invisible volume knob to have the same average)

Because of this it generally makes more sense these days to just make your music have an appropriate dynamic range for the content/intended usage. Some stuff still gets slammed with compression/limiters, but it's mostly club music from what I can tell.
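
The mechanics are roughly this (a simplified Python sketch: real services measure loudness as LUFS per ITU-R BS.1770 with gating, not plain RMS, and the target value here is just a commonly cited example):

    import numpy as np

    # Gain in dB needed to bring a track's average level to a shared
    # target; `samples` is a float array in [-1, 1].
    def normalization_gain_db(samples, target_db=-14.0):
        rms = np.sqrt(np.mean(np.square(samples)))
        current_db = 20 * np.log10(max(rms, 1e-12))
        return target_db - current_db

A quiet track gets turned up and a slammed track gets turned down, so crushing the dynamics no longer buys any extra loudness.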


This goes along with what I saw growing up. You had the retail mastering (with RIAA curve for LP, etc.) and then the separate radio edit which had the compression that the stations wanted - so they sounded louder and wouldn't have too much bass/treble. And also wouldn't distort on the leased line to the transmitter site.

And of course it would have all the dirty words removed or changed. Like Steve Miller Band's "funky kicks going down in the city" in Jet Airliner

I still don't know if the compression in the Loudness War was because of esthetics, or because of the studios wanting to save money and only pay for the radio edit. Possibly both - reduced production costs and not having to pay big-name engineers. "My sister's cousin has this plug-in for his laptop and all you do is click a button"...


> I still don't know if the compression in the Loudness War was because of esthetics,

Upping the gain increases the relative "oomph" of the bass at the cost of some treble, right?

As a 90s kid with a bumping system in my Honda, I can confidently say we were all about that bass long before Meghan Trainor came around. Everyone had the CD they used to demo their system.

Because of that, I think the loudness wars were driven by consumer tastes more than people will admit (because then we'd have to admit we all had poor taste). Young people really loved music with way too much bass. I remember my mom (a talented musician) complaining that my taste in music was all bass.

Of course, hip hop and rap in the 90s were really bass heavy, but so was a lot of rock music. RHCP, Korn, Limp Bizkit, and Slipknot come to my mind as 90s rock bands that had tons of bass in their music.

Freak on a Leash in particular is a song that I feel like doesn't "translate" well to modern sound system setups. Listening to it on a setup with a massive subwoofer just hits different.


> Korn

It wasn't the bass, but rather the guitar.

The bass player tuned the strings down a full step to be quite loose, and turned the treble up which gave it this really clicky tone that sounded like a bunch of tictacs being thrown down an empty concrete stairwell.

He wanted it to be percussive to cut through the monster lows of the guitar.


Music, as tracked by Billboard, cross genre, is as loud as ever. Here’s a survey of Billboard music:

https://www.izotope.com/en/learn/mastering-trends?srsltid=Af...

I have an Audio Developer Conference talk about this topic if you care to follow the history of it. I have softened my stance a bit on the criticism of the 90’s (yeah, people were using lookahead limiting over-exuberantly because of its newness) but the meat of the talk may be of interest anyway.

https://www.youtube.com/watch?v=0Hj7PYid_tE


As an ex audio engineer, I would say that the war ended and loudness won.


That makes sense, thanks for the reply!


It's still a problem, although less consistently a problem than it used to be for the reason entropicdrifter explained.

There's a crowdsourced database of dynamic range metrics for music at:

https://dr.loudness-war.info/

You can see some 2025 releases are good but many are still loudness war victims. Even though streaming services normalize loudness, dynamic range compression will make music sound better on phone speakers, so there's still reason to do it.

IMO, music production peaked in the 80s, when essentially every mainstream release sounded good.


I was obsessed with Tales of Mystery & Imagination, I Robot, and Pyramid in the 70s. I also loved Rush, Yes, ELP, Genesis, and ELO, but while Alan Parsons' albums weren't better in an absolute musical sense, his production values were so obviously in a class of their own that I still put Parsons in the same bucket as people like Trevor Horn and Quincy Jones, people who created masterpieces of record album engineering and production.


> decent German and Japanese components

Whoa there! Audio components were about the only thing the British still excelled at by that time.


I wasn't aware of home hi-fi but British gear for musicians was widespread when I was growing up (Marshall, Vox, etc).

I was specifically thinking of the components my father got through the Army PX in the 60s and the hi-fi gear I would see at some friends' houses in the decades that followed ... sometimes tech that never really took hold, such as reel-to-reel audio. Most of it was Japanese, and sometimes German.

I still have a pair of his 1967 Sansui speakers in the basement (one with a blown woofer, unfortunately) and a working Yamaha natural sound receiver sitting next to my desk from about a decade later.


Wharfedale (1920s) and Cambridge Audio (1960s) were there, and are still making great home hifi.


British music of the 60s and 70s was pretty great to listen to on that hifi.


I've noticed this with lots of jazz from the 50s and 60s. Sounds amazing in mono but "lacking" in stereo.


That’s more due to mono being the dominant format at the time, so the majority of time and money went into the mono mix. The stereo one was often an afterthought until stereo became more widespread and demand for good stereo mixes increased.


Because it's mono?


The same with movie sound mixing, where directors like Nolan are infamous for muffled dialogue in home setups because they want the sound mixed for large, IMAX-scale theater setups.


I've always been a fan of repos that I come across with ARCHITECTURE.md files in them, but that's a pretty loose framework and some just describe the what and not the why.

Otherwise, I wish I worked at a place like Oxide that does RFDs. https://rfd.shared.oxide.computer Just a single place with artifacts of a formal process for writing shit down.

In your example, writing down "The greens are oversaturated by X% because we will lose a lot of it in the transfer process to film" goes a long way in at least making people aware of the decision and why it was made, at least then the "hey actually the boosted greens look kinda nice" can prompt a "yeah but we only did that because of the medium we were shipping on, it's wrong"


You're assuming people RTFM, which does not happen at all in my case. Documentation exists for you to link to when someone who has already lost days on something finally reaches out.


Culture changes under the impact of technology, but culture also changes when people deliberately teach practices.


(Cough) Abstraction and separation of concerns.

In Toy Story's case, the digital master should have had "correct" colors, and the tweaking done in the transfer to film step. It's the responsibility of the transfer process to make sure that the colors are right.

Now, counter arguments could be that the animators needed to work with awareness of how film changes things; or that animators (in the hand-painted era) always had to adjust colors slightly.

---

I think the real issue is that Disney should know enough to tweak the colors of the digital releases to match what the artists intended.


Production methodologies for animated films have progressed massively since 1995, and Pixar may not have found the ideal process for the color grading of the digital-to-film step. Heck, they may not have color graded at all! This has been suggested. I agree that someone should know better than to just take a render and push it out as a digital release without paying attention to the result.


> In Toy Story's case, the digital master should have had "correct" colors

Could it be the case that generating each digital master required thousands of render hours?


But the compensation for film should be a cheap 2-D color filter pass, not an expensive 3-D rendering pass.
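
Something on the order of this toy numpy pass applied per frame, where the gain value is made up:

    import numpy as np

    # 2-D color pass over a rendered frame (H x W x 3 floats in 0..1):
    # boost greens to offset what the film print will darken and desaturate.
    def film_compensation(frame, green_gain=1.15):
        out = frame.copy()
        out[..., 1] = np.clip(out[..., 1] * green_gain, 0.0, 1.0)
        return out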


That's an invalid argument: Digitally tweaking color when printing film has nothing to do with how long it takes to render 3d.

They had a custom built film printer and could make adjustments there.


I know you're looking for something more universal, but in modern video workflows you'd apply a chain of color transformations on top the final composited image to compensate the display you're working with.

So I guess try separating your compensations from the original work and create a workflow that automatically applies them


That’s a great observation. I’m hitting the same thing… yesterday’s hacks are today’s gospel.

My solution is decision documents. I write down the business problem, background on how we got here, my recommended solution, alternative solutions with discussion of their relative strengths and weaknesses, and finally an executive summary that states the whole affirmative recommendation in half a page.

Then I send that doc to the business owners to review and critique. I meet with them and chase down ground truth. Yes it works like this NOW but what SHOULD it be?

We iterate until everyone is excited about the revision, then we implement.


There are two observations I've seen in practice with decision documents: the first is that people want to consume the bare minimum before getting started, so such docs have to be very carefully written to surface the most important decision(s) early, or otherwise call them out for quick access. This often gets lost as word count grows and becomes a metric.

The second is that excitement typically falls with each iteration, even while everyone agrees that each is better than the previous. Excitement follows more strongly from newness than rightness.


Eventually you'll run into a decision that was made for one set of reasons but succeeded for completely different reasons. A decision document can't help there; it can only tell you why the decision was made.

That is the nature of evolutionary processes and it's the reason people (and animals; you can find plenty of work on e.g. "superstition in chickens") are reluctant to change working systems.


Theory: Everything is built on barely functioning ruins with each successive generation or layer mostly unaware of the proper ways to use anything produced previously. Ten steps forward and nine steps back. All progress has always been like this.


I’ve come to similar conclusions, and further realized that if you feel there’s a moment to catch your breath and finally have everything tidy and organized, that’s possibly an early sign of stagnation or decline in an area. Growth/progress is almost always urgent and overwhelming in the moment.


Do you have some concrete or specific examples of intentional compensation or purposeful scaffolding in mind (outside the topic of the article)?


Not scaffolding in the same way, but, two examples of "fetishizing accidental properties of physical artworks that the original artists might have considered undesirable degradations" are

- the fashion for unpainted marble statues and architecture

- the aesthetic of running film slightly too fast in the projector (or slightly too slow in the camera) for an old-timey effect


Isn’t the frame rate of film something like that?

The industry decided on 24 FPS as something of an average of the multiple existing company standards and it was fast enough to provide smooth motion, avoid flicker, and not use too much film ($$$).

Over time it became “the film look”. One hundred-ish years later we still record TV shows and movies in it that we want to look “good” as opposed to “fake” like a soap opera.

And it’s all happenstance. The movie industry could’ve moved to something higher at any point; nothing stopped it but inertia. With TV being 60i, it would have made plenty of sense to go to 30p for film to allow it to be shown on TV better once that became a thing.

But by then it was enshrined.


Another example: pixel art in games.

Now, don't get me wrong, I'm a fan of pixel art and retro games.

But this reminds me of when people complained that the latest Monkey Island didn't use pixel art, and Ron Gilbert had to explain the original "The Curse of Monkey Island" wasn't "a pixel art game" either, it was a "state of the art game (for that time)", and it was never his intention to make retro games.

Many classic games had pixel art by accident; it was the most feasible technology at the time.


I don't think anyone would have complained if the art had been more detailed but in the same style as the original or even using real digitized actors.

Monkey Island II's art was slightly more comic-like than say The Last Crusade but still with realistic proportions and movements so that was the expectation before CoMI.

The art style changing to silly-comic is what got people riled up.


Hard disagree.

(Also a correction: by original I meant "Secret of" but mistyped "Curse of").

I meant Return to Monkey Island (2022), which was no more abrupt a change than say, "The Curse of Monkey Island" (1997).

Monkey Island was always "silly comic", it's its sine qua non.

People whined because they wanted a retro game, they wanted "the same style" (pixels) as the original "Secret", but Ron Gilbert was pretty explicit about this: "Secret" looked what it looked like due to limitations of the time, he wasn't "going for that style", it was just the style that they managed with pixel art. Monkey Island was a state-of-the-art game for its time.

So my example is fully within the terms of the concept we're describing: people growing attached to technical limitations, or in the original words:

> [...] examples of "fetishizing accidental properties of physical artworks that the original artists might have considered undesirable degradations"


Motion blur. 24fps. Grain. Practically everything we call cinematic.


I wouldn't call it "fetishizing" though; not all of them anyway.

Motion blur happens with real vision, so anything without blur would look odd. There's cinematic exaggeration, of course.

24 FPS is indeed entirely artificial, but I wouldn't call it a fetish: if you've grown up with 24 FPS movies, a higher frame rate will paradoxically look artificial! It's not a snobby thing; maybe it's an "uncanny valley" thing? To me higher frame rates (as in how The Hobbit was released) make the actors look fake, almost like automatons or puppets. I know it makes no objective sense, but at the same time it's not a fetishization. I also cannot get used to it; it doesn't go away as I get immersed in the movie (it doesn't help that The Hobbit is trash, of course, but that's a tangent).

Grain, I'd argue, is the true fetish. There's no grain in real life (unless you have a visual impairment). You forget fast about the lack of grain if you're immersed in the movie. I like grain, but it's 100% an esthetic preference, i.e. a fetish.


>Motion blur happens with real vision, so anything without blur would look odd.

You watch the video with your eyes so it's not possible to get "odd"-looking lack of blur. There's no need to add extra motion blur on top of the naturally occurring blur.


On the contrary, an object moving across your field of vision will produce a level of motion blur in your eyes. The same object recorded at 24fps and then projected or displayed in front of your eyes will produce a different level of motion blur, because the object is no longer moving continuously across your vision but instead moving in discrete steps. The exact character of this motion blur can be influenced by controlling what fraction of that 1/24th of a second the image is exposed for (vs. having the screen black)

The most natural level of motion blur for a moving picture to exhibit is not that traditionally exhibited by 24fps film, but it is equally not none (unless your motion picture is recorded at such high frame rate that it substantially exceeds the reaction time of your eyes, which is rather infeasible)


In principle, I agree.

In practice, I think the kind of blur that happens when you're looking at a physical object vs an object projected on a crisp, lit screen, with postprocessing/color grading/light meant for the screen, is different. I'm also not sure whatever is captured by a camera looks the same in motion as what you see with your eyes; in effect even the best camera is always introducing a distortion, so it has to be corrected somehow. The camera is "faking" movement; it's just more convincing than a simple cartoon as a sequence of static drawings. (Note I'm speaking from intuition, I'm not making a formal claim!)

That's why (IMO) you don't need "motion blur" effects for live theater, but you do for cinema and TV shows: real physical objects and people vs whatever exists on a flat surface that emits light.


You're forgetting about the shutter angle. A large shutter angle will have a lot of motion blur and feel fluid even at a low frame rate, while a small shutter angle will make movement feel stilted but every frame will be fully legible, very useful for chaotic scenes. Saving Private Ryan, for example, used a small shutter angle. And until digital, you were restricted to a shutter angle of 180, which meant that very fast-moving elements would still jump from frame to frame in between exposures.


I suspect 24fps is popular because it forces the videography to be more intentional with motion. Too blurry, and it becomes incomprehensible. That, and everything staying sharp at 60fps makes it look like TikTok slop.


24fps looks a little different on a real film projector than on nearly all home screens, too. There's a little time between each frame when a full-frame black is projected (the light is blocked, that is) as the film advances (else you'd get a horrid and probably nausea-inducing smear as the film moved). This (oddly enough!) has the effect of apparently smoothing motion—though "motion smoothing" settings on e.g. modern TVs don't match that effect, unfortunately, but looks like something else entirely (which one may or may not find intolerably awful).

Some of your fancier, brighter (because you lose some apparent brightness by cutting the light for fractions of a second) home digital projectors can convincingly mimic the effect, but otherwise, you'll never quite get things like 24fps panning judder down to imperceptible levels, like a real film projector can.
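
(For the curious, the arithmetic of the blade trick: projectors typically use 2- or 3-bladed shutters so the light is interrupted two or three times per frame, pushing flicker above what the eye notices. A trivial sketch:)

    def flicker_rate(fps: float, blades: int) -> float:
        """Light interruptions per second for a rotary-shutter film projector.

        Multi-bladed shutters raise the flicker frequency above the eye's
        flicker-fusion threshold, even though only `fps` distinct images
        are shown each second.
        """
        return fps * blades

    print(flicker_rate(24, 2))  # 48 Hz with a 2-bladed shutter
    print(flicker_rate(24, 3))  # 72 Hz with a 3-bladed shutter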


Reminds me of how pixel-perfect emulation of pixel art on a modern screen is often ugly, compared to the game played on a CRT.


> (which one may or may not find intolerably awful).

"Motion smoothing" on TVs is the first thing I disable, I really hate it.


Me at every AirBnB: turn on TV "OH MY GOD WTF MY EYES ARE BLEEDING where is the settings button?" go turn off noise reduction, upscaling, motion smoothing.

I think I've seen like one out of a couple dozen where the motion smoothing was already off.


I think the "real" problem is not matching shutter speed to frame rate. With 24fps you have to make a strong choice: the shutter speed has to be 1/24s or 1/48s, or any panning movement is going to look like absolute garbage. But with 60+fps, even if your shutter speed is incredibly fast, motion will still look decent, because there are enough frames being shown that the motion isn't jerky. It looks unnatural, just in a way that's harder to put your finger on (whereas 24fps at 1/1000s looks unnatural for obvious reasons: the entire picture jerks when you're panning).

The solution is 60fps at 1/60s. Panning looks pretty natural again, as does most other motion, and you get clarity for fast-moving objects. You can play around with different frame rates, but IMO anything faster than 1/120s (a 180-degree shutter in film speak) will start severely degrading the watch experience.

I've been doing a good bit of filming of cars at autocross and road course circuits the past two years, and I've received a number of compliments on the smoothness and clarity of the footage - "how does that video out of your dslr [note: it's a Lumix G9 mirrorless] look so good" is a common one. The answer is 60fps, 1/60s shutter, and lots of in-body and in-lens stabilization so my by-hand tracking shots aren't wildly swinging around. At 24/25/30fps everything either degrades into a blurry mess, or is too choppy to be enjoyable, but at 60fps and 1/500s or 1/1000s, it looks like a (crappy) video game.
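
A back-of-the-envelope way to see why this works: for a pan, compare the blur smear inside one frame with the unblurred jump between frames. A sketch with made-up numbers (a pan sweeping 480 px/s across the screen):

    def pan_blur_and_jump(pan_px_per_s: float, fps: float, shutter_s: float):
        """For a horizontal pan: blur smear within one frame (px) and the
        unblurred gap the image jumps between consecutive frames (px)."""
        blur_px = pan_px_per_s * shutter_s  # smear while the shutter is open
        step_px = pan_px_per_s / fps        # total displacement per frame
        return blur_px, step_px - blur_px

    print(pan_blur_and_jump(480, 24, 1/48))    # (10.0, 10.0): classic 180-degree look
    print(pan_blur_and_jump(480, 24, 1/1000))  # (0.48, 19.52): sharp frames, jerky gaps
    print(pan_blur_and_jump(480, 60, 1/60))    # (8.0, 0.0): blur fills the whole step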


Is getting something like this wrong why e.g. The Hobbit looked so damn weird? I didn't have a strong opinion on higher-FPS films, and was even kinda excited about them, until I watched that in theaters. It had (to me, just a tiny bit of) the oft-complained-about "soap opera" effect that comes from associating higher frame rates with cheap shot-on-video content, but the main problem was that any time a character was moving it felt wrong, like a manually-cranked silent film playing back at inconsistent speeds. Often it looked like characters were moving at speed-walking rates when their affect and gait were calm and casual. Totally bizarre, and it ruined any enjoyment I might have gotten out of it (other quality issues aside). That's not something I've noticed in other higher-FPS content (the "soap opera" effect, yes; things looking subtly sped-up or slowed-down, no).

[EDIT] I mean, IIRC that was 48fps, not 60, so you'd think they'd get the shutter timing right, but man, something was wrong with it.


Great examples. My mind jumps straight to audio:

- the pops and hiss of analog vinyl records, deliberately added by digital hip-hop artists

- electric guitar distortion pedals designed to mimic the sound of overheated tube amps or speaker cones torn from being blown out


- Audio compression was/is necessary to get good SNR on mag tape.


true - but are you implying audio engineers are now leaning into heavy compression for artistic reasons?


Not necessarily heavy (except sometimes as an effect), but some compression almost all the time for artistic reasons, yes.

Most people would barely notice it as it's waaaay more subtle than your distorted guitar example. But it's there.

Part of the likeable sound of albums made on tape is the particular combination of old-time compressors used to make sure enough level gets to the tape, plus the way the tape itself compresses the signal again by its nature.
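
That second, tape-native compression is often approximated as a soft transfer curve; a common first-order toy model is tanh, which leaves quiet material nearly linear while squeezing peaks. A sketch (a toy model, not a faithful tape emulation):

    import numpy as np

    def tape_soften(x: np.ndarray) -> np.ndarray:
        """Crude tape-style soft compression via a tanh transfer curve.

        Near zero, tanh is almost linear, so quiet samples pass through
        nearly untouched; peaks are squeezed toward +/-1, shrinking the
        dynamic range.
        """
        return np.tanh(x)

    samples = np.array([0.1, 0.5, 0.9])  # quiet, medium, hot levels
    print(tape_soften(samples))          # ~[0.0997 0.4621 0.7163]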


I work in VFX, and we had a lecture from one of the art designers who worked with some Formula 1 teams on the color design for cars. It was really interesting how much work goes into making the car look "iconic" while also highlighting sponsors, etc.

But to your point: back in the PAL/NTSC analog days, the physical color of the cars was chosen so that, when viewed over analog broadcast, the color would come out correct (very similar to film scanning).

He worked for a different team, but he brought in a small piece of Ferrari bodywork and it was more of a day-glo red-orange than the delicious red we all think of with Ferrari.


In some projects I work on I've added a WHY.md at the root that explains what's scaffolding and what's load bearing, essentially. I can't say it's been effective at preventing the problem you outlined, but at least it's cathartic.
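
For what it's worth, mine stays short, a few lines per entry, something like this (the contents are invented purely for illustration):

    # WHY.md
    ## retry loop in sync.py
    Load-bearing: the upstream API drops ~1% of requests;
    removing this silently breaks the nightly imports.
    ## docker-compose.override.yml
    Scaffolding: local-dev convenience only; safe to delete.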


Isn't the entire point of "reinventing the wheel" to address this exact problem?

This is one of the tradeoffs of maintaining backwards compatibility and stewardship -- you are required to keep track of each "cause" of that backwards compatibility. And since the number of "causes" can quickly become enumerable, that's usually what prompts people to reinvent the wheel.

And when I say reinvent the wheel, I am NOT describing what is effectively a software port. I am talking about going back to ground zero, and building the framework from the ground up, considering ONLY the needs of the task at hand. It's the most effective way to prune these needless requirements.


enumerable -> innumerable

(opposite meaning)


> (opposite meaning)

Funnily enough, e- means "out" (more fundamentally "from") and in- means "in(to)", so that's not an unexpected way to form opposite words.

But in this case, innumerable begins with a different in- meaning "not". (Compare inhabit or immiserate, though.)


Yeah, English has so many quirks. As a software dev, the "enum" type came to mind, making this one easier to spot. (shrug)


> Yeah, English has so many quirks.

Arguably true in general, but in this specific case everything I said was already true in Latin.


Relevance? I'd say it's inarguable -- and the words being discussed are English.


Thanks, you are right. Wish I could edit it.


Chesterton’s Fence is a related notion.


It seems pretty common in software: engineers not following the spec. Another thing that happens is the pivot. You realize the scaffolding is what everyone wants and sell that instead. The scaffold becomes the building, and also the product.


"Cargo cult"? As in, "Looks like the genius artists at Pixar made everything extra green, so let's continue doing this, since it's surely genius."


Yes, I spend the majority of my professional life on similar systems, writing code in vim and running massive jobs via Slurm. It's required for processing TBs of data in secured environments with seamless command-line access. I hate web-based connections or VS Code-type systems. Although I'm open to any improvements, this works best for me. It's like a world inside one's head with a text-based interface.

Graphical data exploration and stats with R, Python, etc. is a beautiful challenge at that scale.


Aside from how slow and user-hostile it is compared to a text editor, my biggest complaint about VS Code is the load it puts on the login node. You get 40 people each running multiple VS Code servers, and it brings the poor computer to its knees.


Every job on an HPC cluster should have a memory and CPU limit. Nearly every job should have a time limit as well. I/O throttling is a much trickier problem.

I wound up with a script for users on the jump host that would submit an sbatch job that ran sshd as the user on a random high-numbered port and stored the port in the job's output. The output was available over NFS, so the script parsed out the port number and displayed the connection info to the user.

The user could then run a VS Code server over SSH within the bounds of the CPU/memory/time limits.
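
If anyone wants the flavor of it, here's a rough Python sketch of that submit-and-parse flow. Every name, resource limit, and path below is an illustrative guess, not the actual script:

    import re
    import subprocess
    import time
    from pathlib import Path

    OUT = Path.home() / "sshd_job.out"  # visible to the jump host over NFS

    BATCH = f"""#!/bin/bash
    #SBATCH --job-name=user-sshd --output={OUT}
    #SBATCH --cpus-per-task=4 --mem=8G --time=08:00:00
    PORT=$((RANDOM % 20000 + 40000))    # random high-numbered port
    echo "HOST=$(hostname) PORT=$PORT"
    exec /usr/sbin/sshd -D -p "$PORT" -h "$HOME/.ssh/sshd_host_key" -f "$HOME/.ssh/sshd_config"
    """

    OUT.unlink(missing_ok=True)
    subprocess.run(["sbatch"], input=BATCH, text=True, check=True)  # sbatch reads the script from stdin

    # Poll the NFS-shared output file until the job has started sshd.
    while not OUT.exists() or "PORT=" not in OUT.read_text():
        time.sleep(2)

    host, port = re.search(r"HOST=(\S+) PORT=(\d+)", OUT.read_text()).groups()
    print(f"Connect with: ssh -p {port} {host}")

(Note the #SBATCH lines must start at the left margin once the display indent is stripped; running sshd as a non-root user also assumes a user-owned host key and config, as the paths suggest.)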


That’s a really cool idea!


Indeed, I know our sysadmins don't like it either.


> It’s like a world inside one’s head with a text-based interface.

I had a co-worker describe it as a giant Linux playground.

Another as ETL nirvana.


Wow, already stumbled into some good humour. Well done


I’ve just finished reading Walter Isaacson’s biography of Steve Jobs. His vision was extraordinary, recognising that even the design of the stores was integral to the product itself. Every layer of engineering was deeply intertwined with aesthetic design. I’ve always shared that belief, but I’m now fully committed to pursuing it without compromise in my own products. It’s proving even more challenging than I’d imagined to make highly technical things feel simple and intuitive for users.

I was recently thinking the exact same thing as the author here; as a teen I got my iPod and instantly respected its graceful design, and was shocked at how shoddy my previous cheap MP3 player was in comparison.

I am also convinced that he was fully responsible for keeping Apple on this path and that it is almost impossible to stop others from diluting the craftsmanship towards mediocrity as the group size grows. Big CEOs get labelled as greedy exploiters in a single brushstroke by people who don’t seem to care to read up.


Non-AI experts give their opinion about AI, noting that the data is messy. But the goal of the method was precisely to train and work on messy data, so the quote is basically pointless.

Ironically, giving the original scientific article to an AI (ChatGPT) for a summary and critique would have provided more detailed info.


> Ironically, giving the original scientific article to an AI (ChatGPT) for a summary and critique would have provided more detailed info.

Interesting times ...

