germandiago's comments | Hacker News

I have designed a backend with exactly the same underlying philosophy you ended up with: a load balancer? That is a problem. Better to do client-side hashing and get rid of the discovery service via a couple of DNS tricks that are already handled robustly elsewhere.

I took it to its maximum: every service is a piece that can break -> fewer pieces, fewer potential breakages.

When I can (which is 95% of the time), I embed certain other services inside the server executables themselves and make them activatable at startup (though I do not want my infra to drift, so I use the same set of subservices in each).

But the idea is -- the fewer services, the fewer problems. I just think, even with the trade-offs, it is operationally much more manageable and robust in the end.
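The client-side hashing idea above can be sketched roughly like this (a minimal Rust sketch of my own; the host names and key scheme are made up, real code would resolve the host list from DNS and likely use consistent hashing so that adding a host does not reshuffle every key):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Pick a backend for `key` from a fixed host list (e.g. resolved
/// once from a round-robin DNS record). Deterministic: the same key
/// always maps to the same host, so no central load balancer or
/// discovery service is needed.
fn pick_backend<'a>(hosts: &'a [&'a str], key: &str) -> &'a str {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    hosts[(h.finish() % hosts.len() as u64) as usize]
}

fn main() {
    // Hypothetical internal host names.
    let hosts = ["be1.internal", "be2.internal", "be3.internal"];
    // Same key, same backend, on every call.
    assert_eq!(pick_backend(&hosts, "user:42"), pick_backend(&hosts, "user:42"));
    println!("{}", pick_backend(&hosts, "user:42"));
}
```

Plain modulo hashing like this reshuffles most keys whenever the host list changes, which is why real deployments usually reach for consistent hashing or rendezvous hashing instead.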


Let me see if I understand it. The TL;DR is that instead of asking for VMs and fitting things into them, you reserve the CPU and RAM and do whatever you want with that? Number of microVMs, etc.?

I am Spanish, and I presume I know the details better than you.

Guernica was in many ways something both the British and the Republicans played up for propaganda.

There have been far worse things in the Spanish Civil War.

For example, Cabra was bombed on a market day with the intention of killing, without being any kind of strategic objective, much farther than Guernica from other objectives, and actually with more dead civilians.

It is just less well known because of who did it.


>with the intention of killing and without being any kind of strategic objective

Wikipedia says "The airstrike was carried out in the mistaken belief that Italian mechanized troops were stationed in the village. Once over the target, the pilots mistook the market's awnings for military tents." (Carlos Saiz Cidoncha, 2006)

https://en.wikipedia.org/wiki/Bombing_of_Cabra


Thanks, I did not know that. Maybe my information was incomplete.

A bombing that was no worse than the bombing of Cabra or the Paracuellos killings, but for some reason it stayed at the top like a myth.

Maybe we do not know what Claude has been doing and he keeps it secret...? :D

Indeed this is a nice discovery and I think it is useful in its own right.

> I don't think the fact that the bug being in the language runtime is going to be much consolation. Especially if the software you were running was advertised as formally verified as free of bugs.

Reminds me of what some people in the Rust community do: they argue over how safe this or that is. I always counter that the code is composed of layers, of which unsafe will be one. So yes, you are right to say that. Unsafe means unsafe, safe means safe, and we should respect the meaning of those words instead of twisting them for marketing (though I must say I heard this from people in the community, never from the authors themselves).
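The layering point can be illustrated with a toy Rust example (my own illustration, not from anyone's codebase): the `unsafe` block is one layer whose correctness must be argued by hand, while callers of the safe wrapper stay in safe Rust:

```rust
/// Safe wrapper over an unsafe core. The safety argument lives
/// entirely inside this function; callers never see `unsafe`.
fn first(v: &[i32]) -> Option<i32> {
    if v.is_empty() {
        None
    } else {
        // SAFETY: we just checked that index 0 is in bounds.
        Some(unsafe { *v.get_unchecked(0) })
    }
}

fn main() {
    assert_eq!(first(&[7, 8]), Some(7));
    assert_eq!(first(&[]), None);
    println!("ok");
}
```

"Safe" here means the guarantee holds only if the hand-written argument inside the `unsafe` block is actually correct, which is exactly the layering trade-off being discussed.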


Reads to me like the author of the article pushing their own interests.


> I wonder how much of this is simply needing to adapt one's workflows to models as they evolve and how much of this is actual degradation of the model,

I also wonder how much people are willing to adapt to unreliability out of laziness instead of, at some point, taking the lead and solving the problem properly when they have the knowledge and reliable resources.

It seems to me, the way you phrase it, that anything a human comes up with when coding must go through an LLM. There are times it helps and tasks it performs well, but I have also often found tasks where, had I done them myself in the first place, I would have skipped a lot of confusion, back and forth, and wasted time, and would have ended up with better, simpler code.


> It seems to me, the way you phrase it, that anything a human comes up with when coding must go through an LLM.

This seems like a creative interpretation. I never said anything of the sort.


My bet: LLMs will never be creative and will never be reliable.

It is a matter of paradigm.

Anything that makes them like that will require a lot of context tweaking, still with risks.

So for me, AI is a tool that accelerates "subworkflows" but adds review time and maintenance burden, and endangers a good-enough knowledge of a system to the point that it can become unmanageable.

Also, code is a liability. That is what they do the most: generate lots and lots of code.

So IMHO, and unless something changes a lot, good LLMs will have relatively bounded areas where they perform reasonably, and outside of those, expect anything.


it won't be creative because it's a transformer, it's like a big query engine.

it's a tool like everything else we've gotten before, but admittedly a much more major one

but "creativity" must come from either its training data (already widely known) or from the prompts (i.e. mostly human sources)


We don't even know what 'creativity' is, and most humans I know are unable to be creative even when compelled to be.

AI is 'creative enough' - whether we call it 'synthetic creativity' or whatever, it definitely can explore enough combinations and permutations that it's suitably novel. Maybe it won't produce 'deeply original works' - but it'll be good enough 99.99% of the time.

The reliability issue is real.

It may not be solvable at the level of LLM.

Right now everything is LLM-driven; maybe in a few years it will be more agentically driven, where the LLM is used as 'compute' and we can pave over the 'unreliability'.

For example, the AI is really good when it has a lot of context and can identify a narrow issue.

It gets bad during action and context-rot.

We can overcome a lot of this with a lot more token usage.

Imagine a situation where we use 1000x more tokens, and we have 2 layers of abstraction running the LLMs.

We're running 64K computers today, things change with 1G of RAM.

But yes - limitations will remain.


Maybe I do not have a good definition for it.

But what I see again and again in LLMs is a lot of combinations of possible solutions that are already somewhere on the internet (because that data was put in). Nothing disruptive, nothing thought out the way an experienced human in a specific topic would. Besides all the mistakes/hallucinations.


Yes, LLMs have a very aggressive regression towards the mean - that's probably an existential quality of them.

They are after all, pattern matching.

A lot of humans have difficulty with the very reality that they are in fact biological machines, and most of what we do is the same thing.

The funny thing is that although I think we are 'metaphysically special' in our expression, we are also 'mostly just a bag of neurons'.

It's not 'natural' for AI to be creative but if you want it to be, it's relatively easy for it to explore things if you prod it to.


> A lot of humans have difficulty with very reality that they are in fact biological machines, and most of what we do is the same thing.

I think we are far ahead of this "mix and match". A human can be much, much more unpredictable than these LLMs in the thinking process, if only because they look at a much bigger context. Contexts that are even outside the theoretical area of expertise where you are searching for a solution.

Good solutions from humans are potentially much more disruptive.


AI has all of human knowledge and 100x more 'stuff' than that baked right in, in pre-training, before a single token of 'context'.

It has way more 'general inherent knowledge' than any human, just as a starting point.


Yet they never give you replies like: oh, look at how dolphins move through the water taking advantage of sea currents, when you are talking about boats and speed.

What they will do is find all the solutions someone already produced and mix and match them in a mediocre way, approaching the problem much more like a search engine with mix and match than by thinking out of the box or specifically for your situation (something difficult to do anyway, because there will always be some detail missing in the context, and if you really had to give all that context each time, dumping it from your brain, you would not use it as fast anymore), which humans do infinitely better. At least nowadays.

Now you will tell me that the info is there. So you can bias LLMs to think in more (or less) disruptive ways.

Then your job is to tweak the LLM until it behaves exactly how you want. But that is nearly impossible for every situation, because what you want is for it to behave the way you want depending on the context, not in a predefined way all the time.

At that point I wonder whether it is better to burn all your time tweaking and asking alternative LLMs questions that are not guaranteed to be reliable anyway, or to just keep learning about the domain yourself and absorb real knowledge instead of playing at tweaking (and not lose that knowledge and replace it with machines). It is just stupid to burn several hours on an "expert" you cannot check for truthfulness instead of using that time to really learn about the problem itself.

This is a trade-off and I think LLMs are good for stimulating human thinking fast. But not better at thinking or reasoning or any of that. And if you just rely on them, the only thing you will end up being professional at is prompting, which a 16-year-old untrained person can do almost as well as any of us.

LLMs can look better if you have no idea of the topic you are talking about. However, when you go and check, maybe the LLM hallucinated 10 or 15% of what it said.

So you cannot rely on them anyway. I still use them, but with a lot of care.

Great for scaffolding. Bad at anything that deviates from the average task.


First - I'm doubting your assumptions about "What they will do is to find all the solutions someone did and mix and match".

That's not quite how AI works.

Second - You'll have to provide some comparable reference for how 'humans' come up with creative solutions.

Remember - as a 'starting point', AI has 'all of human knowledge' ingested, accessible instantly. Everything except a few contemporary events.

That's an interesting advantage.


> First - I'm doubting your assumptions about "What they will do is to find all the solutions someone did and mix and match".

I never, ever got from an LLM a solution that I could not have thought of myself or that was not available almost verbatim on the internet (take this last one with a grain of salt; we know how they can combine and fake it, but essentially the solutions look like templates of existing things, often hallucinating things that do not exist or cannot be done, inventing parameter names for APIs that do not exist, etc.).

When I give some extra thought to a problem (almost 20 years in the software business), I think the solutions I come up with are often simpler and less convoluted, and when I analyze LLM output it contains a lot of extra code that is not even needed, as if they were guessing even when you ask for something narrow. Well, guessing is what they are actually doing, via interpolation.

This makes them useful for "bulky", run-fast, first-approach problems, but the later cost is on you: maintenance, understanding, modifying, etc.


I think the terminology is just dogshit in this area. LLMs are great semantic searchers and can reason decently well - I'm using them to self teach a lot of fields. But I inevitably reach a point where I come up with some new thoughts and it's not capable of keeping up and I start going to what real people are saying right now, today, and trust the LLM less and instead go to primary sources and real people. But I would have never had the time, money, or access to expertise without the LLM.

Constantly worrying, "is this a superset? Is this a superset?" is exhausting. Just use the damn tool; stop arguing about whether this LLM can handle all possible out-of-distribution things you would care about or whatever. If it sucks, don't make excuses for it, it sucks. We don't give Einstein a pass for saying dumb shit either, and the LLM ain't no Einstein

If there's one thing to learn from philosophy, it's that asking the question often smuggles in the answer. Ask "is it possible to make an unconstrained deity?" And you get arguments about God.


Do they reason? There was a video by an AI researcher showing that they do not reason but actually arrive at the result first and then try to invent "reasoning" to match it.


I mean humans do that too, and I don't think it's very unjustified. The "we deduce from a deep base premise P down a chain of inferences" picture is extremely incomplete and has been challenged all over the place - by normal people, by analytic and continental philosophers, by science itself, etc.

Not trying to say that LLM's are equivalent to humans but that the concept of reasoning is undefined.

And the fact that their performance does increase when using test-time compute is empirical evidence that they're doing something that increases their performance on tasks that we consider would require reasoning. As to what that is, we don't know.


But humans verify things. AI just fools you, and I would say that is the biggest problem I have with AIs.

They give me stuff that I do not know whether to trust, and I do not know what surprises I will find down the road later.

So now my task is to review everything and remove cruft. It starts to compete with investing my time in deep thinking and doing it thoughtfully from the get-go, coming up with something simpler, with less code and/or that I understand better.


I mean yeah ultimately it's a tool and I've even leaned off of AI recently for coding because it was exhausting dealing with all its hallucinations

