
I've been preparing somewhat for this, as someone who knows they aren't a top-N% engineer. My current role involves a certain amount of sales and product in addition to SWE work (and luckily I find it fun to talk to customers!).

I think it's prudent for a lot of SWEs to think about what a future looks like where most of the job is managing and unblocking agents.


My main qualm with Ed is that his analysis of the financials is decent, but he absolutely refuses to admit that the technology is useful (especially in the hands of competent users), or that all the labs are extremely compute-starved due to overwhelming demand.

I used to enjoy his writing a lot pre-AI, around the time he was spending a lot of words on Musk, crypto, etc. More because it was an entertaining form of hate-reading about those topics than really informative, per se. Then he started doing this schtick with AI and I felt like I got hit hard with Gell-Mann Amnesia, because he so blatantly makes claims that anybody with a free ChatGPT account can dismiss handily, and it calls everything else he says into serious question.

> my main qualm with Ed is his analysis on the financials is decent, but he absolutely refuses to admit that the technology is useful

Yeah, I find that sort of critic causes more harm than good. The economic case for closed-source AI isn't there - in a macroeconomic sense, accounting for all costs, it's more expensive than the value it provides. There's data to back that up, so focus on the economics.

On the other hand, hallucinating about what AI can or cannot do is useless, only research can provide the answer.


There's no viable YouTube alternative because of network effects, not the video hosting.

Those same network effects don't exist (yet) for models.


There is a bit of an apples vs. oranges comparison here.

It is unlikely that models will have network effects because (1) there is less of a two-sided marketplace and (2) people are already forming brand preferences. We also see significant convergence among the agent harnesses.

I'm currently building out an internal agentic orchestration platform for business and development, and a requirement is to support multiple models and tools so people have some amount of choice.


Discoverability of other content and ad money. And then critical mass of viewers leading to sponsorships and other exploitative models of monetising outside Google.

Ads might be a questionable model for a lot of use cases. And the network model only works for promotion; it doesn't lock users in the way YouTube does, where the content is only available in one place.


I'm not sure I understand your reply, but it sounds like you're agreeing with me that YouTube's biggest advantage is the network effect?

Yes. And that won't work with AI.

"The psychic toll of AI" -- It's sad, but each of these scenarios (barring the AI notetaker, which I haven't found to be an issue personally, but YMMV) is indicative more of the culture of the company than of the tool itself. From my experience, the companies closest to the frontier seem to have the best AI-use culture.

I work at a very 'AI-pilled' company, but:

- Everyone reads and reviews every PR and leaves human comments

- Documentation is written well and tended to by humans

- There's no 'AI mandate'

- Whether a feature is possible is first explored by an agent, but then manually traced by a human through the codebase

You can treat AI like a very powerful tool to augment you and run your agent swarms at the same time.


Are there any companies that aren't AI-pilled at this point?

Odoo, Belgium, cloud ERP. Not very AI-pilled, even if AI is considered and used in some places.

Odoo suffers from other issues though. Not sure if this is still the case, but the mix of inline Python 2, Flask, and XML was basically tech-debt-as-a-service.

Also the very ugly death they gave OpenERP/Odoo on-premise.


It's Python 3, no Flask (but Werkzeug), and XML templates. It works for a hundred thousand clients, and you can install Odoo on-premise as you like. I'm 90% dedicated to that. So... explain the "tech debt" thing, as I don't get it. You don't need Rust or microservices for every use case. Don't be fooled by a marketing-style "old technology" bias; set up an account. PostgreSQL with synchronous workers works perfectly for most people.

I am absolutely not a fan of "new style technology" as you might have understood.

I used to run Odoo on-premise for a small company about 3-4 years ago. The upgrade path (with the OpenUpgrade fork) was awful, many features (that WYSIWYG editor, Odoo Studio?) were locked to the cloud version, and there was little to no documentation. IIRC we even had to drop it because the delay between on-prem updates & cloud updates was too high.

And there were mentions of Flask in the logs, so no, it wasn't just Werkzeug (which has been closely tied to Flask since its inception, anyway).

I do not have fond memories of editing invoice templates blindly.


Of course the cloud offering has much more, but you have to consider that no other major ERP software comes with an engine that's 100% open source in this kind of market. So yes, you may feel Odoo Community is a bit incomplete and probably don't want to pay for the cloud version. But the alternatives are SAP, Microsoft, Oracle, some very fragmented open ecosystems, or some '90s GUI custom ERPs, right? I can tell you we use Werkzeug and not Flask, have an nginx reverse proxy, use PostgreSQL, and I don't see a lot of tech debt in that. Not much AI; all the reviews are manual and kinda strict.

Most are not. E.g., if your company has any of these, you're probably not AI-pilled:

- mandatory ai usage

- ai usage tied to kpis or performance reviews

- trainings on how to use claude code

- restrictions on what tools you can use

- layoffs

- engineers still typing every line of code by hand


Wait, I don't get it. Some of those are a bit contradictory, and for others I don't see how they _don't_ mean your company is "AI-pilled"?

Sorry, I should've defined it better. My point of view is that an 'AI-pilled' company is one that has a realistic understanding of the benefits and limitations of AI productivity, where leadership and employees are fully bought in, and there's a generally high-trust environment.

If AI has to be enforced (mandatory usage, KPIs, training, restrictions on tools) -> clearly the execs think the employees are not bought in.

Typing every line by hand -> self-explanatory.

Layoffs -> this one is a bit of a stretch, but from what I've seen, the best companies at leveraging AI are not laying people off; instead they're continuing to hire to capture the market or capitalize on demand. Could be confounding variables, though.


In these cases ARR = annualized run rate, commonly used when your revenue is either going vertical (Cursor - good) or your revenue is choppy and full of short-term projects (Mercor - bad).

Alpha is your abnormal rate of return.

More colloquially, it's your 'edge' over your competitors.
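To make those two terms concrete, here's a minimal sketch (all figures are hypothetical; alpha is computed Jensen-style against a CAPM baseline):

```python
# Hypothetical figures for illustration only.

def annualized_run_rate(latest_monthly_revenue: float) -> float:
    """ARR: extrapolate the most recent month's revenue to a full year."""
    return latest_monthly_revenue * 12

def jensens_alpha(actual_return: float, risk_free: float,
                  beta: float, market_return: float) -> float:
    """Abnormal return: actual return minus the CAPM-expected return."""
    expected = risk_free + beta * (market_return - risk_free)
    return actual_return - expected

arr = annualized_run_rate(2_500_000)   # $2.5M in the latest month -> $30M ARR
edge = jensens_alpha(0.18, 0.04, 1.2, 0.10)
print(f"ARR: ${arr:,.0f}, alpha: {edge:.3f}")
```

Note the catch: ARR assumes the latest month simply repeats twelve times, which flatters hypergrowth and papers over choppy project-based revenue.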


This is cool!

It seems like Claude is actually building almost like a layered Figma wireframe that you can make fine-grained adjustments to afterwards (e.g. adjust font size).

Interesting that Canva provided a quote of support. I'm not familiar with the differentiation, but it seems like this will directly siphon customers from Canva, right?


There's an "export to Canva" button in Claude Design, so perhaps they're hoping this will be another entry point for new users, or that they'll be able to "lock in" as the default design software for Claude users.

(I lead AI Products at Canva :)

Our mission is to empower the world to design, and we believe in making Canva available in every place where ideas begin. Being the most interoperable platform creates mutually better products, more value for the community, and more value and growth for our company.

We've been working closely with Anthropic for many years, and we see this as complementary. Our MCP, integrations, and plugins have already introduced millions of new users to the full power of Canva, and we're excited to continue doubling down here.


I am personally of the opinion that ML will end up being 'normal technology', albeit incredibly transformative.

I think you can combine 'Incanters' and 'Process Engineers' into one - 'Users'. Jobs that encompass a role that requires accountability will be directing, providing context, and verifying the output of agents, almost like how millions of workers know basic computer skills and Microsoft Office.

In my opinion, how at-risk a job is in the LLM era comes down to:

1: How easy is it to construct RL loops to hillclimb on performance?

2: How easy is it to construct a LLM harness to perform the tasks?

3: How much of the job is a structured set of tasks vs. taking accountability? What's the consequence of a mistake? How much of it comes down to human relationships?

Hence why I've been quite bullish on software engineering (but not coding). You can easily set up 1) and 2) on contrived or sandboxed coding tasks, but then 3) expands and dominates the rest of the role.

On Model Trainers -- I'm not so convinced that RLHF puts the professional experts out of work, for a few reasons. Firstly, nearly all human-data companies produce data that is somewhat contrived, by definition of having people grade outputs on a contracting platform; plus there seems to be no limit on how much data we can harvest in the world. Secondly, as I mentioned before, the bottleneck is both accountability and the model's ability to find fresh context without error.


> I think you can combine 'Incanters' and 'Process Engineers' into one - 'Users'

I wanted to talk about this more but couldn't quite figure out how to phrase it, so I cut a fair bit: with "incanters" I'm trying to point at a sort of ... intuitive, more informal practitioner knowledge / metis, and contrast it with a more statistically rigorous approach in "statistical/process engineers". I expect a lot of people will fuse the two, but I'm trying to stake out some tentpoles here. Users integrate a continuum of approaches, including individual intuition, folklore, formal and informal texts, scientific papers, and rigorously designed harnesses & in-house experiments. Like farming--there's deep, intuitive knowledge of local climate and landraces, but also big industrial practice, and also research plots, and those different approaches inform (and override) each other in complex ways.


> Hence why I've been quite bullish on software engineering (but not coding). You can easy set up 1) and 2) on contrived or sandboxed coding tasks but then 3) expands and dominates the rest of the role.

Why can't LLMs and agents progress further to do this software engineering job better than an actual software engineer? I've never seen anyone give a satisfactory answer to this. Especially the part about making mistakes. A lot of the defense of LLM shortcomings (i.e., generating crappy code) comes down to "well humans write bad code too." OK? Well, humans make mistakes too. Theoretically, an LLM software engineer will make far fewer than a human. So why should I prefer keeping you in the loop?

It's why I just can't understand the mindset of software engineers who are giddy about the direction things are going. There really is nothing special about your expertise that an LLM can't achieve, theoretically.

We're always so enamored by new and exciting technology that we fail to realize the people in charge are more than happy to completely bury us with it.


Who is better positioned to pilot the LLM than a domain expert?

"Software engineer" as a job title has included a lot of people who write near-zero-code, at least at the higher levels of the career ladder, for years prior to LLMs. People assuming the only, or even primary, function of the job is outputting code reveal a profound lack of understanding of the industry in my opinion. Beyond the first year or two it has been commonly accepted that the code is the easy part of the job.


> has included a lot of people who write near-zero-code, at least at the higher levels of the career ladder

This is something that I would have thought HN readers were pretty familiar with. LLMs can make my coding faster or more prolific, but with 30 YOE I spend a fairly significant chunk of my work time doing anything but code.


I'm occasionally reminded that HN's commenting base is much larger than my niche in the industry (VC-backed startups + large public tech companies is my background). I had a similar reaction to people thinking Peter Bailis going from CTO at Workday to "member of technical staff" at Anthropic meant he was trading a leadership position for closing Jira tickets.


> Why can't LLMs and agents progress further to do this software engineering job better than an actual software engineer?

Because a machine can never take accountability. If a software engineer throughout the entire year has been directing AI with prompts that created weaker systems then that person is on the chopping block, not the AI. Compared to another software engineer who directed prompts to expand the system and generate extra revenue streams.


> Because a machine can never take accountability.

A business leader can though.

> Compared to another software engineer who directed prompts to expand the system and generate extra revenue streams.

I think you're missing the point. Why can't an LLM advance sufficiently to be a REAL senior software engineer that a business person/product manager is prompting instead of YOU, a software engineer? Why are YOU specifically needed if an LLM can do a better job of it than you? I can't believe people are so naive to not see what the endgame is: getting rid of those primadonna software engineers that the C-suite and managers have nothing but contempt for.


> A business leader can though.

If a 'business leader' is prompting out software through their agents, ensuring it works, maintaining it, and taking accountability... they're also a software engineer

These titles are mostly semantics


By this definition, pre-LLM "business leaders" circa 2008 with not even an understanding of Excel were already "software engineers" this whole time - just prompting out software through their meatspace agents, instead of their silicon ones.

Dismissal of arguments as "just semantics" is high school level argumentation.


Clearly not the same when they were abstracted from the realities of building software and... directly taking accountability for it!

By semantics, I mean that the definition and pool of tasks, responsibilities, and outcomes a job is comprised of is shifting so fast that the borders between 'software engineer' and 'business person' are melding together. Software engineers are business people in their own way.


I don't understand why humans abstract a business leader away from the realities of building software, while LLMs do not.

If the rhetoric is to be believed, the set of responsibilities falling to the role of "software engineer" is shrinking to zero, and all engineers are being forcibly "promoted" to the managerial class of shepherding around agents.


I would say there's more nuance than that (disclaimer: I don't have a crystal ball).

Software engineers who are comfortable doing business work - managing, working with different stakeholders, having product and design taste, being sociable, driving business outcomes - are going to be more desired than ever.

Likewise, business leads who can be technical, can decompose vague ideas into product, can leverage code to prototype, and can work with the previous kind of person will also be extremely high value.

I would be concerned if I were an engineer with no business acumen, or a business lead with no technical acumen (not counting CEOs, obviously; but then again, the barrier to starting your own business as a SWE has never been lower).


It's funny, that's why COBOL was originally developed in 1960: so that business people could write software themselves without needing software engineers. And it sort of worked, to an extent. History repeats itself.


Between then and now, what ever happened to "no code development" or whatever they called it, where all of the world's APIs could be connected with lines in a diagram?


Low code / no code, and it's been around in one form or fashion since the 1990s, at least.


Why would it be a manager? Hire a cheap intern to be the scapegoat, if the job market is bad enough. No reason for liability to fall on the suits.


That's how things work already in every workplace where there's any real danger. The company construes its policies and paper trail in bad faith so that employees are always operating contrary to policy/training, and then when something happens, blame can be shifted onto them.


You can say this about every single role.

Why can't VCs feed your pitch deck into an AI and get a business they own 100%?

If the only thing you're paying for is compute time...

Some people are claiming it's about taste. Why can't an AI learn taste?


It's funny how we see some people who claim to have "taste" walking around in public wearing horrible Balenciaga shoes. Are they really just tasteless, or are they doing it ironically to troll the rest of us? I guess we'll never know. Maybe someday AI robots will achieve the same level.


It's not about whether they make mistakes (they do! although the exact definition of a mistake is nuanced), but whether they can take accountability if the software fails and millions are lost or people die. A large part of the premium paid on software engineers is to take accountability for their work. If a "business person" directs their agent to build some software and takes accountability -- congrats! They are also now a software engineer :)

The lines between a software engineer / business person / product / design and everything else will blur, because AI increases the individual person's leverage. I posit that there will be more 'software engineers' in this new world, but also more product people, more business people, more companies in general.


> It's why I just can't understand the mindset of software engineers who are giddy about this brave new world. There really is nothing special about your expertise that an LLM can't achieve, theoretically.

They're stupid, or they're already set up for success. The general idea seems to be that generalists are screwed and domain experts will be fine.


Many experienced software engineers will move into infrastructure or architect roles, if they haven't already. Experienced engineers are in the best position to use LLMs because they can validate the output as actually being correct, not just looking like it works. Newer folks are going to be in a bad spot.


The optimistic spin is, I think, software developer as a career dies, just like sysadmin. But just like dev-ops, a new to-be-named role (or set of roles) will arise


> "...software developer as a career dies..."

Web front-end and backend developer as a career dies, probably desktop/mobile application development too. However, some of the more specialized software developer roles are likely to survive; none of the people on the Linux kernel team have anything to worry about and the same goes for the GCC folks.


> domain experts will be fine

But I don't see how this holds up to even the slightest amount of scrutiny. We're literally training LLMs to BE domain experts.


I think these arguments tend to reach impasse because one gravitates to one of two views:

1) My experiences with LLMs are so impressive that I consider their output to generally be better than what the typical developer would produce. People who can't see this have not gotten enough experience with the models I find so impressive, or are in denial about the devaluation of their skills.

2) My experiences with LLMs have been mundane. People who see them as transformative lack the expertise required to distinguish between mediocre and excellent code, leading them to deny there is a difference.


I was at 2) until the end of last year, then LLMs/agents/harnesses had a capability jump that didn't quite bring me to 1), but it was a big enough jump in that direction that I don't see why I shouldn't believe we get there soonish.

So now I tend to think a lot of people are in heavy denial in thinking that LLMs are going to stop getting better before they personally end up under the steamroller, but I'm not sure what this faith is based on.

I also think people tend to treat the "will LLMs replace <job>" question in too much of a binary manner. LLMs don't have to replace every last person that does a specific job to be wildly disruptive, if they replace 90% of the people that do a particular job by making the last 10% much more productive that's still a cataclysmic amount of job displacement in economic terms.

Even if they replace just 10-30% that's still a huge amount of displacement, for reference the unemployment rate during the Great Depression was 25%.


Not sure that's what I was getting at. People in camp 2 don't think an LLM can take over the job of a real software engineer.

It's people in camp 1 that I wonder about. They're convinced that LLMs can accomplish anything and understand a codebase better than anyone (and that may be the case!). However, they're simultaneously convinced that they'll still be needed to do the prompting because ???reasons???.


One explanation is that some think we might be getting to the limits of what an LLM can reasonably do. There's a lot of functions of any job that are not easily translated to an LLM and are much more about interacting with people or critical thinking in a way LLMs can't do. I'm not sure if that's everyone's rationale but that's my personal view of the situation. Like the jobs will change but we likely won't be losing them to AI outright.


I was thinking today that I need to pivot to making and selling shovels, but then the other issue is: is anyone going to need shovels in the future?


An enormous amount of domain expertise is not legible to LLMs. Their dependence on obtaining knowledge through someone else's writing is a real limitation. A lot of human domain expertise is not acquired that way.

They still have a long way to go before they can master a domain from first principles, which constrains the mastery possible.


People need to be careful about buying into the shorthand lingo with LLMs. They do not learn like we do. At the lowest level, they predict which tokens follow a body of tokens. This lets them emulate knowledge in a very useful way. This is similar to a time series model of user activity: the time series model does not keep tabs on users to see when they are active, it has not read studies about user behavior, it just reflects a mathematical relationship between points of data.

For an LLM and this "vague" domain expertise, even if none of the LLM's training material includes certain nuggets of wisdom, if the material includes enough cases of problems and the solutions offered by domain experts, we should expect the model to find a decent relationship between them. That the LLM has never ingested an explicit documentation of the reasoning is irrelevant, because it does not perform reasoning.
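As a toy illustration of "predicting which tokens follow a body of tokens" (a deliberately crude sketch, nothing like a real transformer): even a bigram counter reflects the statistical relationships in its training text without any explicit reasoning.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny "training corpus".
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Most frequent continuation seen in training -- no understanding involved."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": seen twice, vs "mat" and "fish" once each
```

Scaled up by many orders of magnitude, with far richer conditioning, that same flavor of statistical relationship is what lets a model surface expert-looking solutions it was never explicitly taught.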


The domain expertise I'm referring to isn't vague; it literally doesn't exist as training data. There are no cases of problems and solutions to study that are relevant to the state-of-the-art. In some cases this is by intent and design (e.g. trade secrets, national security, etc.), long before LLMs arrived on the scene.

We even have some infamous "dark" domains in computer science where it is nearly impossible for a human to get to the frontier because the research that underpins much of the state-of-the-art hasn't existed as public literature for decades. If you want to learn it, you either have to know a domain expert willing to help you or reinvent it from first principles.


>They still have a long way to go before they can master a domain from first principles, which constrains the mastery possible.

Mastery isn't necessary. Why are Waymos lacking drivers? Not because self-driving cars have mastered driving, but because self-driving works sufficiently well that the economics don't play out for the cab driver.


In some sense, technology is "not normal" regardless.

If we think of the digitization tech revolution... the changes it made to the economy are hard to describe well, even now.

In the early days, it was going to turn banks from billion dollar businesses to million dollar ones. Universities would be able to eliminate most of their admin. Accounting and finances would be trivialized. Etc.

Earlier tech revolutions were unpredictable too... but at least retrospectively they made sense.

It's not that clear what the core activities of our economy even are. It's clear at micro level, but as you zoom out it gets blurry.

Why is accountability needed? It's clearly needed in its context... but it's hard to understand how it aggregates.


Accountability is really a way to address liability. So long as people can sue and companies can pay out, or individuals can go to jail, there is always going to be a question of liability; and historically the courts have not looked kindly at those who throw their hands up in the air and say “I was just following orders from a human/entity”


Yes. I agree. There is a context and it makes sense within this context. But, the economy or industry is not there to produce accountability, liability, court-legibility. These are support structures, not end goals.

I said this in response to the example above, that humans are needed where accountability is a concern. This is pretty distant from the macro.

If we think of the 19th-century economy... it was mostly about food, household products, and suchlike. Now the economy is a lot harder to reason about, and it's easy to miss the forest for the trees when talking about how technology will affect it.

Accountability is required to work with your payment processor, which works with Visa and Mastercard, which also have requirements, etc. Depending on where (if anywhere) paradigm shifts occur... we may or may not even need these functions.

That's why it's so hard to reason our way to predictions about upcoming AI-mediated changes.


> and historically the courts have not looked kindly

This is dependent on having a court system uncaptured by corruption. We're already seeing that large corporations in the "too big to fail" categories fall outside of government control. And in countries with bribing/lobbying legalized or ignored they have the funds to capture the courts.


While this is true, this is somewhat mitigated by the fact that few sectors are truly monopolized and large corporations also sue each other.


A huge component of compulsory professional licensure (either by statute, or de facto as a result of adjacent statute, like mandatory insurance and its requirements) is that if you follow the rules set by (some entity deputized by) the government, the government will in return never leave you holding the bag. The government gains partial control, and the people under its control get partial protection.

"Oh, I'm sorry your hospital burned down, Mr. Plaintiff, but the electrician was following his professional rules, so his liability is capped at <small number>; you'll just have to eat this one."

I would wager that a solid half if not more of the economy exists under some sort of arrangement like that.


Right, but usually that also involves verifying that the electrician actually followed the professional rules, and if not, they have liability


So the court checks if they were "just following orders"?

Sounds to me like following orders is in fact this magical thing that causes courts to direct liability away from the defendant.


I think the point is supposed to be that "following the practices and procedures that limit their liability" = "doing their due diligence to reduce risk in accordance with their credentialing body".

We generally don't hold people liable for acts of God or random chance failures. For example, malpractice suits generally need to prove that a doctor was negligent in their responsibility.

Everything in real life has quantifiable risk, and part of why we have governing bodies for many things is because we can improve our processes to reduce the risk.

It's not just following orders :) it's recognizing that the solution to risks isn't to punish the actor but to improve the system.


I'm not thinking of act-of-God-type unforeseeable events; I'm thinking of "everyone knows this is stupid and wrong and will cause a problem eventually, but it's easier to just follow the rule than challenge it" situations, because "if I follow the rules my ass is covered".


Sort of like how one could be held liable for copyright infringement?


> How much of the job is a structured set of tasks vs. taking accountability?

More accurately, how many jobs are probabilistically mechanical. That is, how many jobs are really the execution of a series of Bayesian decisions with a strong prior? LLMs are really great at displacing such jobs.


Throw on a wizard hat and robe at some of the voice only vibe coders you see on YouTube and it’s essentially “incanters”. Hilarious.


They do; in the paper they mention they evaluate the LLM without tools.


ai skeptic fanfic evolves in fascinating ways every day


Take it a step further: AI generated AI skeptic fanfic :D

