Hacker News | netcan's comments

The music was new, black polished chrome And came over the summer like liquid night The DJ's took pills to stay awake and play for seven days

There's something special to the loss of promise. That's the thing most often mourned, the potential.

There was a period when information wanted to be free and the web wanted to be something that makes people better... one way or another. Maybe it would make democracy better, or bring down bad regimes. Maybe make people smarter. Maybe it would democratize education or commerce or something in some way... "heals the world."

That period is a different period for everyone... maybe overlapping with one's optimistic youth. The promise was also a different promise, for different people. Early social media. Wikipedia. FOSS.

Wasted potential is a mournful thing.


Path dependencies between invention and utilization... are complicated and hard to fathom.

Our mental models of developments like the industrial revolution, literacy, printing or suchlike tend to be a lot more straightforward than how things play out in practice.

When a bottleneck is eliminated... you tend to shortly find the next bottleneck.

Meanwhile, there is an underlying assumption everyone seems to make that "more software, more value" is the basic reality. But... I'm skeptical.

To-do lists, wishlists, buglists, and road maps may be full of stuff, but...

Visa or Salesforce have already exploited all their immediate "more software, more money" opportunities.

The ones in a position to easily leverage AI are upstarts. They're starting with nothing. No code. No features. No software. With AI, presumably, they can produce more software and create value.

Also... I think overextended market rationalism leads people to see everything as an industrial revolution...which irl is much more of an exception.

The networked personal computing revolution put a PC on every desk. It digitized everything. Do we have way better administration for less cost? Not really. Most administrations have grown.

Did law fundamentally change due to digital efficiency? No. Not really.

If you work on a terrible enterprise codebase... it's very possible that software quality/quantity isn't actually that important to your organization.


>If you work on a terrible enterprise codebase... it's very possible that software quality/quantity isn't actually that important to your organization.

It's possible capitalism will drive all enterprise to terrible codebases.


There is evidence of Neanderthals making gum/glue from birch bark. It's useful for hafting stone onto wood for tool making.

I wonder if this bone grease was an edible product or something else. Oils have many uses.


Yeah...

This thread and article have made me realize that a lot of different incentives exist to talk up the apocalypse.

It even neutralizes the Eliezers and their apocalypse mongering.


I remember thinking Altman seemed to be over-reacting and fanning the flames about "AI bias" circa GPT-3.

There was a little panic about the fear of bigoted computers at that point.

But... it got a lot of earned advertising and they also sort of did a "pre-burn." They saturated the space with "bigoted AI concern" for a while, and now I don't ever see it come up.

There's a "get ahead of the inevitable" thing going on. Also, obviously, prospectus hype.

Besides all that, these are geeks and they're excited. This is what an excited geek looks like.


> All tech business plans eventually lead to serving ads

IDK if this is true.

The boulevard of dreams is full of failed/misguided ad-based business plans. Contempt for the business model is sometimes the reason. An implicit assumption that all you need for success is traffic and a willingness to dirty yourself.

There are only a handful of success stories. Most involved a pretty deliberate and tenacious attempt. Success typically involves some very specific and strategic positioning. Data. Intent. Scale.

No one but Google had Google's scale for search ads. 5-10% of the market just isn't enough. You do need tracking, but the model works OK even without much targeting. Intent is built in, and that makes up for targeting. But the scale required for viability is very high.

Facebook ads didn't work until (a) they had pushed the envelope on targeting (to make up for lacking intent) and (b) scale was massive. Bing, reddit, etc.... They never had good ad businesses.


So the article isn't very good but the vibe coding debate is pretty interesting.

This is how I'm thinking about it: in a scenario with increased opportunity and risk... You've gotta know where you stand.

First question is: how much is more software actually worth to you?

This is one with a lot of self-deception. Software development is expensive. Companies have to-do lists, wishlists, and road maps. They have an A/B testing system and a productivity mindset.

But... if LinkedIn, Salesforce, or whatnot really did have ways of producing software to make money... they would have done it already. Remaining opportunities follow a diminishing marginal value curve/cliff.

Imo, software development isn't necessarily a bottleneck. So... opportunity is limited and risk is the bigger deal.

The opportunity is at the upstart trying to bootstrap feature parity with Salesforce.

If you have no customers yet... you can unfetter the vibe and see if it works.

Imo companies need to revisit Google's early days. Let a thousand flowers bloom. 20% time. If you unleash capable people and give them tokens... that's a good way of searching for opportunities.

The thousand flowers died at Google because they had reached a point where opportunities are not everywhere. The best ideas had been discovered and also... the markets big enough to move Google's dial are few. There aren't many $100bn markets.

There's no way to do vibe coding safely, at scale, currently.


> how much is more software actually worth to you.

A really misunderstood vibe coding task, especially in more corporate settings, is code removal and refactoring.

I think this is the fundamental misunderstanding about agentic development: people only see it as a tool to add code.


This smells like BS to me, and I have a bird’s eye view into several enterprises and startups.

LLMs are not being used for code removal or refactorings, it’s either to “hopefully unblock” this large project that has been behind deadline for 12 months, or to just speed up development (somewhat).


Sorry, the "I" should have been an "A" (which I have corrected).

You are right that they are not. And that is the issue, the misunderstanding.


>The thousand flowers died at Google because they had reached a point where opportunities are not everywhere.

It died because Google reached the enshittification penny pinching rent-seeking stage.


In a sense, everyone is a startup now... At least, every serious user of agents.

So... if you spend $3m to replace a $1m team... you are betting on that $3m cost coming down. It's a proof of concept. The first step is to find out if agents can do the job at all. At this point you are hoping future versions will get more efficient.
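As a rough illustration of that bet, here's a sketch. All figures beyond the $3m/$1m framing above are made-up assumptions (including the 40%/yr cost decline), not data:

```python
# Hypothetical numbers: $3m/yr agent cost vs. a flat $1m/yr team cost,
# with agent costs assumed (arbitrarily) to decline 40% per year.

def years_to_breakeven(agent_cost, team_cost, annual_decline, horizon=20):
    """Return the first year the declining agent cost drops to or below
    the team cost, or None if it never does within the horizon."""
    for year in range(horizon + 1):
        if agent_cost * (1 - annual_decline) ** year <= team_cost:
            return year
    return None

# Under these assumptions the bet only pays off after a few years,
# which is the "hoping future versions get more efficient" part.
print(years_to_breakeven(3.0, 1.0, 0.40))
```

The point of the sketch is just that the whole proposition hinges on the assumed decline rate: set it to zero and there is no break-even at all.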

Trying to make something efficient before you know that it is even possible is hard.

Drop-in, profitable on day-1 isn't what the frontier looks like.


If we want to be like everyone else, then yes, it's true. However, that business may or may not survive when token costs go up (or, as is fashionable to say now, "rug pull"). If you can be token efficient now, the path to profitability is much clearer.

There are already many things that can be done now to bring down token use. Better planning, tests, language servers, MCP compression. Don't use claw, teams, swarms, Ralph loops, or scheduled tasks unless there is a clear use case.
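One crude way to picture "bring down token use": trim the agent's context to a budget before each call, keeping the system message and the most recent turns. A minimal sketch, using a plain word count as a stand-in for real token counting; the function and message names are hypothetical:

```python
# Toy context trimming: keep the first (system) message plus as many
# recent messages as fit a budget. Word count is a crude token proxy.

def trim_context(messages, budget, estimate=lambda m: len(m.split())):
    """Return messages trimmed to roughly `budget` tokens, dropping
    the oldest non-system messages first."""
    if not messages:
        return []
    head, rest = messages[0], messages[1:]
    kept, used = [], estimate(head)
    for msg in reversed(rest):          # walk newest to oldest
        cost = estimate(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return [head] + list(reversed(kept))

history = [
    "system: you are a coding agent",
    "user: refactor module A",
    "assistant: done, see diff",
    "user: now remove dead code in module B",
]
trimmed = trim_context(history, budget=20)
print(trimmed)
```

Real setups would use an actual tokenizer and smarter selection (summaries, relevance), but the budget-before-each-call shape is the same.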


If token cost goes up, then the efficiency gains come from using fewer tokens... which is likely possible.

The point is that efficiency comes after, not before.


Seems like what you're suggesting for token efficiency is to simply use fewer tokens?

Less or be more productive with same amount?

Almost everyone needs a worker-owned co-op to capture more of the value they create.

Useful comment. Thanks.

Myopia is inevitable, to some extent. It's very hard to project this stuff.

Socrates wrote about what was being lost as philosophy was becoming written rather than oral...and he was right.

We can't even understand what was lost. Many methods of learning and thinking became entirely lost. You could say they were redundant, and they were. But... writing largely replaced oral traditions. It didn't just augment them.

He was that old school coder who had the skills to do philosophy and be an intellectual without writing. Writing was an augmentation for him. But for the new cohort... it was a new paradigm and old paradigm skills became absent.

It is very hard to imagine skilled coders becoming skilled without necessity pressing that skill acquisition. The diligent student will acquire some basic "manual coding" skill... but mostly the skill development will be wherever the hard work is.


I think if manual coding becomes "outdated" then there will just be no demand for junior engineers to manually code. People will probably still learn to code manually, just as there are folks who will still build their own furniture. There may just not be a business demand for it.

What that means 20-30 years from now, when the seniors of today retire and there are no juniors right now, is yet to be seen. People say that AI will probably have advanced far enough that it won't be a problem. But let's say somehow AI stagnates; then I would guess that AI-generated code that is too difficult to debug will be treated as legacy, and there'll be demand for manual coding again.

Companies that aren't able to afford the rewrite or maintenance will probably go out of business.

It's an interesting time we live in for sure.


>> What that means 20-30 years from now when the seniors of today retire.

I fear that many won't retire and will instead completely leave the industry, which is already happening. It's anecdotal, but when I first started as a junior dev, I was working with many intermediate devs who had a few years on me.

I kept ties with a group of about two dozen devs. We all went through a lot of the same stuff. Last year I attended two local conferences. Out of the 24 or so, all seasoned senior devs now, only 3 of us remained in the industry. Granted, I'm in accessibility and another moved more into a UI/UX design role, but we were all that's left.

The majority of the discussion at lunch was about why they left, and it was pretty universal. They were seeing AI creeping into everything they did and just walked away. The list was long of what they disliked about it, and they really didn't see the huge upsides that the industry was pushing. They had money; they had other opportunities they chose to pursue, far and away from the tech industry.

It was pretty eye-opening, to say the least. We always imagined sitting around a table in our 60s recounting our experiences in tech, and now we're not even into our 40s, and the industry is losing amazing talent every year that IMHO cannot be replaced by an LLM prompt.

I don't have a good feeling about where this is headed.


Out of curiosity, where did those who couldn’t or didn’t want to retire go?

A few went into the trades. Welding, carpentry and one started their own painting company. Another started a landscaping company. One moved out into BFE Montana, got a 125 acre hobby ranch and trains horses for equestrian riders.

The one common theme is not just getting out of tech, but trying to be completely removed from it.

I know one of the guys moved out to western ND to start a farm growing wheat and sunflowers, only to find out most of the work is either automated or relies heavily on technology, like the majority of John Deere tractors and other tools they need. He said he's still happier than he was working in Silicon Valley. lol


Thanks for sharing. Maybe I'm just hanging out with a lot of young devs (we're in our 30s and senior leaning) but we're all cautiously optimistic about AI. That being said we also don't have FU money so we're kinda forced to deal with it.

Maybe CS is one of those industries that just ends up cannibalizing itself with its success.


> Socrates wrote about what was being lost…

Dr. Steven Skultety & Dr. Gad Saad discussed this in a recent video / podcast.

This link is time stamped to the topic https://youtu.be/7mcQf9E3YRo?t=1058


Socrates never wrote anything. At least, not as far as we know.

It's the opening page of the book Technopoly.

And here I thought I was being unique. I guess Socrates must be popular.

I'd say that by purging stuff from the brain we are losing thinking itself. Thinking is manipulating ideas and concepts in your head, assembling and linking. The fewer things there are, the more primitive the result. You cannot juggle without objects to juggle; connecting the dots results in trivial patterns when you have just a couple of dots.

It's true that with all automation we do get more comfort. We build systems so that we humans have as little struggle as possible, not realising that struggle is the only reason for existence. By eliminating it, we are erasing ourselves from this world.

This kind of argument flies in the face of the fact that plenty of inherited rich people seem to lead very happy lives. Of course, they do find things to struggle with, but it's much more pleasant to struggle to score 72 at the golf course or to outbid a rival for a piece of contemporary art than to struggle for basic needs.

I don’t share your idea of a happy life.

I can live a happy life without struggling for basic needs and without playing golf all day long. If you strip off every obligation from life, then you exist, not live.

Facing challenges and overcoming obstacles, friends and family is what makes me happy. When you’re rich, most people only care about your money, not the person you are. And I think that’s exactly what a happy life is about.


I guess to each their own. But in the little free time I have as a non-rich version, I like to face low-stakes challenges I myself choose, e.g. in my case those currently mostly are learning Chinese and learning to play a musical instrument. Those still provide obstacles, difficulties, the feeling of progress and moments of success/failure, but I can do them at my own pace and with no serious consequences if I fail.

I can imagine I could be perfectly happy with a life full of challenges of that kind, instead of being forced to work at given scheduled times which often imply I spend less time with my son than I would like, including days I don't feel like it, and including boring tasks (I love my job, but like almost every job, it also has its paperwork, pointless meetings, etc.), knowing I depend on that work to live.

In short, I think we all do need the challenge, the struggle, the successes and the failures, otherwise life would just be boring and pointless. But I don't think we (or at least I) need the obligation component and the high stakes.

What you mention about the rich attracting people focused on money rings true, but it would be moot if AI led us all to lead lives more similar to the rich, which was the point here. (Of course, there's also the issue of whether there is widespread or unequal access to AI, but that's another story...).


It's fairly easy to be submarine rich, and fly completely below the radar. Just brush off questions about your work with vagueness. If you're not flashy, nobody will suspect you're rich

I agree, but I doubt anyone on HN is struggling for basic needs. So the struggle is almost always fun, and I think that goes for most white-collar jobs. It's a fun struggle, getting to the office, doing some chores, and that's something AI is slowly killing off.

But there are $150-200k/year people using GPT for psychological help...

Automation is also for reducing drudgery - the work that prevents us from meaningful struggle by taking up resources that can be better applied elsewhere. Not all struggle (or pain) is created equal.

I wouldn't count on reduced drudgery. The assembly line automated many movements needed for manufacturing. But which work involved more drudgery: craftsman-style car production, or standing on an assembly line at Ford?

With any new technology, subsequent drudgery depends on the technology, its concomitant economics, and the imagination of the people using it.


The craftsman didn't move to the assembly line.

"struggle is the only reason for existence"

That is a bold and frankly unsupportable claim.


Humans don’t tend towards idle quiescence.

We seem to be insatiably inquisitive.

Curiosity doth struggle many cats.


Being inquisitive doesn't equate to loving, or needing, struggle in my brain. Also, struggle differs for many people. Running a half marathon was a struggle for me, but I can't compare it to a family who is struggling to pay bills.

If we take Maslow's hierarchy of needs, me running a half marathon is self-actualization. Something I'm privileged to be able to do. A family struggling to put food on the table is still on the lower tier of the pyramid.


Yes, I tend to agree.

A lot of parsimony between your statement and Socrates' comments on the transition to writing.

Interestingly, he placed a lot of importance on memory... where you emphasize manipulation of concepts.


I’ve grown to appreciate this aspect of standard examination as I’ve gotten older. Everyone wants to say “oh, you can just look it up now”, but how can you come up with higher level thinking, when you don’t have the fundamentals in your mind?

To use math as an example, you can always look up formulas. But after more than one "layer" of looking up, that quickly becomes impossible. Like, when I had to learn to calculate derivatives and primitives (antiderivatives), I could look those things up. But when I got to linear algebra, I couldn't progress until I deeply internalized derivatives and primitives, because looking up formula A only for it to contain unknown formula B just becomes a mess.

Agreed. We've been able to "look it up" for a while. To use math as an example, we've had calculators for a very long time. But when I was in school they didn't let us use calculators until precalc. Now I use calculators even for simple math because I already understand the fundamentals and just need expedience.

Just because one can "look it up" doesn't mean it's necessarily the best thing to do at the moment. But it also doesn't mean that folks who look it up are necessarily losing any higher level thinking, though I concede that many people certainly delude themselves into thinking they understand the fundamentals and thus can use AI as a tool for expedience when they're really using it as a tool for thought.


I "purge" - or better yet choose not to retain - the data.

BUT, BUT! I keep the index.

My favourite quote from Donald Rumsfeld (a very bad human being, but this is still good)

> Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.

What I optimise for is to have as many "known unknowns" as possible. I know a concept, process or a tool exists, but don't understand it or know how to do it. But because I know it exists, I won't start inventing it again from scratch when I need it.

Like if one needs to do some esoteric task, they might start figuring it out from scratch. But because the index in my brain contains a link ("known unknown") to a tool/process that makes that specific thing a LOT easier, I can start looking into it more.
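The "keep the index, not the data" idea above could be caricatured as a lookup table of pointers, topic to where-to-look, rather than the knowledge itself. A toy sketch; every entry here is a hypothetical example, not a recommendation:

```python
# Toy "known unknowns" index: store pointers to knowledge, not the
# knowledge itself. All entries are made-up illustrative examples.

index = {
    "bulk text search": "tool: ripgrep exists; read its docs when needed",
    "inspect TLS traffic": "concept: protocol dissectors exist for this",
    "home plumbing": "person: call the guy I know",
}

def lookup(task):
    """Return the known-unknown pointer for a task, or flag it as an
    unknown unknown that needs research from scratch."""
    return index.get(task, "unknown unknown: research from scratch")

print(lookup("bulk text search"))
print(lookup("quantum chemistry"))
```

The point is that a hit costs almost nothing to store (just the pointer), while a miss at least tells you that you're in "unknown unknown" territory.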

Or I might need to do something common like plumbing or some electrical work at home. Do I know how to do that? No. But I Know A Guy I can call, again externalising the knowledge. Either they come over and help me do it or talk me through the process of adjusting the thermostat in my shower faucet (you need to use WAY more force than I was comfortable with without an expert on the phone btw... there are no hidden screws, you just rip the bits off :D)


It just becomes more abstracted, but the thinking is still there. And who is to say we aren't going to keep reading books, delving into hobbies, or watching movies. All those concepts will then be mixed into our brains, and who knows what new things we will think of to extract out and desire to build with AI.

I think we'll continue to read books and stuff. But many books/movies will probably have devolved into AI slop (not that this hasn't been a trend for the last few decades to a lot of film buffs).

But hobbies like woodworking or playing an instrument seem immune to slop... But people can be creative with what they can sloppify.


> I'd say that by purging stuff from the brain we are losing thinking itself

The idea that there will be less to think about seems a bit short-sighted. Humans are very good at moving to higher levels of abstraction, often with more complexity to deal with, not less.


We will never fundamentally get rid of thinking; it's coupled to navigating the 3D reality we live in.

And we don't need words to think; cognitive problem solving and language processing are separate processes [1]

We will shift the problems we need to think about. Same as always; humanity isn't still working out how to build stone pyramids. Did we stop thinking? No, we just thought about a different to-do list.

[1] https://www.scientificamerican.com/article/you-dont-need-wor...


We also never run out of fuel. There will always be some energy left here and there to tap into.

Fuck thinking!

If I am free as “rational I,” then the rational in me, or reason, is free; and this freedom of reason, or freedom of the thought, was the ideal of the Christian world from of old. They wanted to make thinking – and, as aforesaid, faith is also thinking, as thinking is faith – free; the thinkers, the believers as well as the rational, were to be free; for the rest freedom was impossible. But the freedom of thinkers is the “freedom of the children of God,” and at the same time the most merciless – hierarchy or dominion of the thought; for I succumb to the thought. If thoughts are free, I am their slave; I have no power over them, and am dominated by them. But I want to have the thought, want to be full of thoughts, but at the same time I want to be thoughtless, and, instead of freedom of thought, I preserve for myself thoughtlessness. If the point is to have myself understood and to make communications, then assuredly I can make use only of human means, which are at my command because I am at the same time man. And really I have thoughts only as man; as I, I am at the same time thoughtless. He who cannot get rid of a thought is so far only man, is a thrall of language, this human institution, this treasury of human thoughts. Language or “the word” tyrannizes hardest over us, because it brings up against us a whole army of fixed ideas. Just observe yourself in the act of reflection, right now, and you will find how you make progress only by becoming thoughtless and speechless every moment. You are not thoughtless and speechless merely in (say) sleep, but even in the deepest reflection; yes, precisely then most so. And only by this thoughtlessness, this unrecognized “freedom of thought” or freedom from the thought, are you your own. Only from it do you arrive at putting language to use as your property.
If thinking is not my thinking, it is merely a spun-out thought; it is slave work, or the work of a “servant obeying at the word.” For not a thought, but I, am the beginning for my thinking, and therefore I am its goal too, even as its whole course is only a course of my self-enjoyment; for absolute or free thinking, on the other hand, thinking itself is the beginning, and it plagues itself with propounding this beginning as the extremest “abstraction” (such as being). This very abstraction, or this thought, is then spun out further

- The ego and its own, Max Stirner


Yeah, but where the comparison with philosophy falls short is: if we lost some ways of thinking there, it was gradual, and most didn't notice.

Software code, on the other hand, is extremely formal, and either it works perfectly as intended, works crappily and keeps breaking in various edge cases, or just doesn't work (the last two are variants of the same dysfunction; technically it's a binary state). There is no scenario where broken code somehow ends up working and delivering, or maybe 1 in a trillion, sometimes.

Also, the change is so fast that the failure is immediately obvious to everybody; it's not a gradual change of thinking over a few decades/generations.

LLMs are getting impressive, but anybody claiming there is no massive long-term harm to getting to what we now call proper seniority is... I don't know: delusional, a junior who never walked that long and hard-won path, doing PR for LLMs at all costs, or some other similar type. Or they simply have some narrow use case working great for them long term which definitely can't be transferred to the whole industry, like a 1-man indie game dev.


I would argue it's virtually impossible going forward for a junior engineer to walk that harder path.

Because the easier path seemingly delivers what's expected of them. Sigh, they may even be required to take the faster path.

I've seen many juniors unable to walk that necessary path before LLMs were a thing.


Socrates was history's first Luddite. He opened Pandora's box. I wish he and Plato would be radically rejected as the garbage trash they are (basically just a defense of hierarchy and dialectics).

Quoting my boy Max Stirner who also fking hated these guys

“This war is opened by Socrates, and not until the dying day of the old world does it end in peace.“ - The Ego and its Own, Max Stirner


Probably not your point but ultimately the Luddites were acting as a “Verein von Egoisten”, no?
