Come on, do you actually think "corporate engineers" care about what you are doing individually? Do you think they look specifically for you and make fun of your individual usage patterns? Do you think it gives them interesting information about your private life?
No one cares. There are millions of users; no one is going to look at your data, let alone be able to tell which person a given user actually is.
We legit just want to know aggregated and objective information about how people use our products so we can make it better for you.
"Why do all corporate try to spy on our usage patterns?" Because the ones who don't have a crap product and all died long ago.
I get it, but as an "AI expert and senior leader" myself in my 1,000-person organization (in relative terms), the disconnect I have is:
A lot of what non-believers say matches "enthusiasm on the ground is lacking as results rarely live up to the extremely rosy promises". They would then say they need 2 weeks to work on a specific project, the good old way, maybe with some light AI use along the way.
But then I'm like "hmm actually let me try this real quick" and I prompt Claude for 3 minutes, and 30 minutes later it has one-shotted the whole "two weeks project". It then gets reviewed and merged by the "non-believers". This happens repeatedly.
So overall, I think the lack of enthusiasm is largely a skill issue. Not having the skill is fine, but not being willing to learn the skill is the real issue.
I see things changing, as "non-believers" eventually start to realize that they need to evolve or be toast. But it's slower than I imagined.
I am a strong believer and was selected as a power user based on AI usage metrics, but I also see perverse incentives: a colleague was desperately searching for me on the Claude token-usage leaderboard (I was part of a different group he didn't have access to). It was clear he was actively trying to climb that leaderboard.
Meanwhile, our average PR size ballooned to ~2,000 LOC, generated with Claude and reviewed with Copilot. Colleagues also review with Claude, because it gives valid nitpicks that bump up your GitHub stats while missing glaring functional, architectural, and overengineering issues.
No way this doesn't blow up down the road with the massive bloat we're creating while getting high on the "good progress" we're making.
Yes, your 3 minutes prompt got merged.
So was my friend's (ex-programmer, now manager) non-AI-generated PR that a technical TL got stuck on for 2 weeks.
Different perspective? Survivor bias? High authority?
Blame your engineering culture, not AI, if metrics such as GitHub stats, number of nitpick reviews, and token usage are what is used to judge one's performance.
In a sane engineering culture, actual customer-visible impact is what is measured, and AI is just a tool to improve that metric, albeit massively.
> But then I'm like "hmm actually let me try this real quick" and I prompt Claude for 3 minutes, and 30 minutes later it has one-shotted the whole "two weeks project". It then gets reviewed and merged by the "non-believers". This happens repeatedly.
this is a nice anecdote but i think the real issue is the forcing and KPI-ization of llms top-down for nearly everything
there are still code-quality issues, prompting issues for long-running tasks; some things are just faster and more deterministic with normal code generators or plain find-and-replace, etc.
people are annoyed at the force-feeding of llms/ai into everything, even when it's not needed
some things can be one-shotted and some things can't, and that is fine and perfectly normal, but execs don't like that because it's not the new hotness
> some things can be one-shotted and some things can't
True but my point is that people vastly underestimate what is one-shottable.
In my experience, 80% of the time an average "non-believer" SW engineer with 7 years of experience says something is not one-shottable, I, with my 15 years of experience, think it is in fact one-shottable. And 20% of the time, I verify that by one-shotting it in my free time.
I believe that this has happened in some cases, but I am very skeptical that it is widespread and generalizable at this point. My own experience is that software engineers who think they can easily solve a problem in a domain they know nothing about overrate their ability to do so ~99% of the time.
Well "non-believers" don't see any gain from being faster, right? That'll just set expectations of "do a lot more for same". Fear of being "toast" will get you the loyalty you'd expect from fear.
the best way I found to deal with non-believers is to have claude run code reviews on their own work. I'll point it at an older commit and get like a 3-page markdown file :) works really, really well.
on one-shotting a 3-minute prompt in 30 minutes, though: software is a living organism, and early gains can (and often do) result in later pains. I don't trust this type of argument as it relates to AI, because the follow-up, once the organism spreads its wings in production, seldom makes its way to HN (if this 30-minute one-shot results in a huge security breach, I doubt you would be back here with a follow-up; you will quietly handle it…)
You can get it to generate a 3-page markdown file for any random code, or for code it just generated itself. If requested, it will produce a plausible-looking review with recommendations and possible issues.
How impressed someone gets by that will depend on the recipient.
output, not recipient. try it on your own code. you won't agree with everything in the example 3-page markdown (much like you push back on the PR), but in a significant number of cases, code changes were made based on the provided output
Recipient, as in the person who the output is intended for.
And I have seen what an AI does when it provides a code review: it is very much something that plausibly looks like a code review. A lot of suggestions and nitpicks that on the surface look like plausible comments, but without any understanding. How much value a programmer gets from that depends on the programmer. For me, it recalls the value that teddy bears have at a support desk, or why some users are actually helped by being forced to go through layers of FAQ/AI-suggested solutions before they are allowed to talk to a real person. Sometimes all a person needs to improve something is time to think about the code from a new perspective, and an AI code review can help them find that time by throwing a bunch of shallow comments at them.
Unsure if this really tracks, though. How are you evaluating for the bias that they're merging it because you're "the leader of a 1,000-person org," and not because you're actually an engineer deep in the trenches who knows the second- or third-order effects of slop?
This is a genuine question btw, I see plenty of instances of this in my own org.
1. I am also on the receiving end of this. My boss often codes and vibecodes, and no one feels like they have to merge their stuff. We only merge it if it meets the high quality standard we have. And there is no drama for blocking a PR in our culture.
2. I am fairly deep in the trenches myself and I know when my PRs are high quality and when they are not. And that does not correlate with use of AI in my experience.
I've been on this ride about three or four times over decades. Every new major wave of technology takes a surprisingly long time to be adopted, despite advantages that seem obvious to the evangelists.
I had the exact same experience with, for example, rolling out fully virtualized infrastructure (VMware ESXi) when that was a new concept.
The resistance was just incredible!
"That's not secure!" was the most common push-back, despite all evidence being that VM-level isolation combined with VLANs was much better isolation than huge consolidated servers running dozens of apps.
"It's slower!" was another common complaint, pointing at the 20% overheads that were the norm at the time (before CPU hardware offload features such as nested page tables). Sure, sure, in benchmarks, but in practice putting a small VM on a big host meant that it inherited the fast network and fibre adapters and hence could burst far above the performance you'd get from a low end "pizza box" with a pair of mechanical drives in a RAID10.
I see the same kind of naive, uninformed push-back against AI. And that's from people that are at least aware of it. I regularly talk to developers that have never even heard of tools like Codex, Gemini CLI, or whatever! This just hasn't percolated through the wider industry to the level that it has in Silicon Valley.
Speaking of security, the scenarios are oddly similar. Sure, prompt injection is a thing, but modern LLMs are vastly "more secure" in a certain sense than traditional solutions.
Consider Data Loss Prevention (DLP) policy engines. Most use nothing more than simple regular expression patterns looking for things like credit card numbers, social security numbers, etc... Similarly, there are policy engines that look for swearwords, internal project code names being sent to third-parties, etc...
All of those are trivially bypassed even by accident! Simply screenshot a spreadsheet and attach the PNG. Swear at the customer in a language other than English. Put spaces in between the characters in each s w e a r word. Whatever.
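To make the bypass concrete, here's a minimal sketch (hypothetical, plain Rust, no real DLP engine) of the kind of pattern matching these policy engines do: flag any run of 16 consecutive digits, the shape of a credit card number. The same digits with spaces in between sail right past it.

```rust
// Naive DLP-style scanner: flag any run of 16 consecutive digits
// (a credit-card-shaped pattern). Plain Rust, no regex crate, but the
// logic mirrors a pattern like \d{16}.
fn naive_dlp_flags(text: &str) -> bool {
    let mut run = 0;
    for c in text.chars() {
        if c.is_ascii_digit() {
            run += 1;
            if run >= 16 {
                return true; // looks like a card number: flag it
            }
        } else {
            run = 0; // any separator resets the run
        }
    }
    false
}

fn main() {
    // Caught: a bare 16-digit number.
    assert!(naive_dlp_flags("card: 4111111111111111"));
    // Trivially bypassed: the same digits with spaces between them.
    assert!(!naive_dlp_flags("card: 4 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1"));
    println!("spaced digits slip past the pattern matcher");
}
```

An LLM-based filter, by contrast, has no trouble recognizing a spaced-out card number for what it is.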
None of those tricks work against a modern AI. Even if you very carefully phrase a hurtful statement while avoiding the banned word list, the AI will know that's hurtful and flag it. Even if you use an obscure language. Even if you embed it into a meme picture. It doesn't matter, it'll flag it!
This is a true step change in capability.
It'll take a while for people to be dragged into the future, kicking and screaming the whole way there.
You're not forced to use only an LLM for data loss prevention! You can combine it with regex. You can also feed the output of the regex matches to the LLM as extra "context".
Similarly, I was just flipping through the SQL Server 2025 docs on vector indexes. One of their demos was a "hybrid" search that combined exact text match with semantic vector embedding proximity match.
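The hybrid idea can be sketched in a few lines: an exact keyword hit contributes a fixed boost, and the rest of the score comes from embedding proximity. The function names, weights, and toy vectors below are all made up for illustration; real systems do this at the index level.

```rust
// Cosine similarity between two embedding vectors.
fn cosine(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f64 = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let nb: f64 = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    dot / (na * nb)
}

// Hybrid score: exact text match gives a fixed boost, semantic
// similarity supplies the rest. Weights are arbitrary for the sketch.
fn hybrid_score(doc_text: &str, doc_vec: &[f64], term: &str, query_vec: &[f64]) -> f64 {
    let exact = if doc_text.to_lowercase().contains(&term.to_lowercase()) {
        0.3
    } else {
        0.0
    };
    exact + 0.7 * cosine(doc_vec, query_vec)
}

fn main() {
    let q = [0.0, 1.0];
    let a = hybrid_score("vector indexes in SQL Server", &[0.1, 0.9], "vector", &q);
    let b = hybrid_score("regex basics", &[0.9, 0.1], "vector", &q);
    assert!(a > b); // exact hit plus closer embedding wins
}
```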
Skydio | Robotics / Drones / Cloud / Web engineers | San Mateo CA, Zurich CH, or Hybrid | https://www.skydio.com/careers
Skydio is the leading U.S. drone company and world leader in autonomous flight. Our drones are used for inspecting the energy grid, de-escalating life and death scenarios in public safety, inspecting bridges, giving soldiers better situational awareness on the battlefield, search and rescue missions, and more. We serve the core industries that our civilization runs on and have life-saving impact.
If you’re interested in being a core member of a 200+ person world-class engineering and research team that is defining the future of a major emerging industry, we’d love to hear from you.
We’re looking for a diverse combination of engineers, researchers, and managers with strong SW skills and experience across complex products. We’re particularly interested in people with robotics, web, deep learning, game dev, streaming or cloud experience.
I am a Senior Director of Engineering there and a YC S18 alumni, you can reach me at { vincent dot lecrubier at skydio dot com } but please apply online first!
The other thing is that it is WAY too easy to distract yourself from your solvable problems by focusing on the big ones - you have to fight that with ferocity.
Why get out of debt? The country is a brazillian trillion in debt we’re doomed.
Why invest for retirement or save? The market is fraud anyway.
Why exercise and lose weight? The planet is doomed anyway.
Not OP, but one example where it is a bit harder to do something in Rust than in C, C++, Zig, etc. is mutability on disjoint slices of an array. Rust offers a few utilities, like split_at_mut, chunks_mut, etc., but for certain data structures and algorithms it can be a bit annoying.
It's also worth noting that unsafe Rust != C, and you are still battling these rules. With enough experience you gain an understanding of these patterns and it goes away, and you also have really solid tools like Miri for finding undefined behavior, but it can be a bit of a hassle.
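A minimal sketch of the disjoint-slice situation: the borrow checker rejects two simultaneous &mut borrows into one array, even when the regions don't overlap, and split_at_mut is the standard safe escape hatch.

```rust
// Mutate the two halves of a slice through separate &mut sub-slices.
// Two direct `&mut data[..]` borrows of the same array are rejected,
// even though the halves are disjoint; split_at_mut proves disjointness
// to the compiler and hands back two non-overlapping mutable slices.
fn mutate_halves(data: &mut [i32]) {
    let mid = data.len() / 2;
    let (left, right) = data.split_at_mut(mid);
    for x in left.iter_mut() {
        *x *= 10; // mutate first half
    }
    for x in right.iter_mut() {
        *x += 1; // mutate second half at the same time
    }
}

fn main() {
    let mut data = [1, 2, 3, 4, 5, 6];
    // let a = &mut data[..3];
    // let b = &mut data[3..]; // error[E0499]: cannot borrow `data` as mutable more than once
    mutate_halves(&mut data);
    assert_eq!(data, [10, 20, 30, 5, 6, 7]);
}
```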
> CHMv2 is derived from single-date imagery, where the acquisition process selects the best available image within a target period (2017-2020). This limits the direct use of the released CHMv2 data for attributing canopy height to a specified year of interest. To support change applications, we provide the image acquisition date associated with each prediction in the dataset metadata.
So generally a few years out of date, but the dataset is transparent about when each image was taken.
> We additionally release a global GeoTIFF of input image acquisition date, where pixel values encode year minus 2000 (e.g., 18.25 indicates April 2018)
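Decoding that encoding is straightforward. A sketch, assuming my reading of their example (integer part = year minus 2000, fractional part = position within the year, so 18.25 maps to April 2018); the month formula is my interpretation, not from the dataset docs:

```rust
// Decode an acquisition-date pixel: value = (year - 2000) + fraction of
// the year elapsed. E.g. 18.25 -> 2018, month 4 (April). The month
// mapping is inferred from the single example in the paper.
fn decode_acquisition_date(pixel: f64) -> (i32, u32) {
    let year = 2000 + pixel.floor() as i32;
    let month = (pixel.fract() * 12.0).round() as u32 + 1; // 0.25 -> April
    (year, month)
}

fn main() {
    assert_eq!(decode_acquisition_date(18.25), (2018, 4));
    assert_eq!(decode_acquisition_date(17.0), (2017, 1));
}
```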
That being said, I am sceptical of how accurate mono-depth models can be on a single-tree basis. I would probably trust them for large-scale biomass estimates, but probably not for single-tree height assessments.
Dealing with broken Linux installs might be your definition of fun, but it's very possible to be a nerd, not find that particular thing fun, and prefer MacBooks.
Yeah, that's kind of what I mean. This way will always be restrictive and not flexible enough. We could get some style guidelines injected instead without other restrictions. Let people use all the API access possible instead.
> Can someone elaborate on how growth is aligned with the general interest?
Empirically, the past 200 years have seen high growth globally, and human well-being has improved massively as a result. Life expectancy has skyrocketed, infant mortality and hunger have gone down to near zero, literacy has gone up, work is much more comfortable, interesting, and rewarding, etc. But at a more fundamental level, our material quality of life is that of literal kings. The poorest decile in the US or Europe has much better living conditions than a king of 500 years ago. We are so lucky to benefit from this, yet we completely forget that fact. You complain about congestion and advertising, but with degrowth you would complain about hunger and dying from cold during winter.
>But at a more fundamental level, our material quality of life is that of literal kings.
This cannot be overstated. To wit, a Honda Accord (or equivalent mid-range car of today) is objectively superior to a Rolls Royce from the 90s in terms of amenities, engine power/efficiency, quietness, build quality, safety, etc. The same is true for quality-of-life improvements across a vast swath of consumer goods, and therefore consumer lifestyles.
Without growth, it's unlikely we'd see those improvements manifest. Carefully consider the lifestyle of someone living several decades ago. Would you honestly want to live such a lifestyle yourself? That's where degrowth likely leads. As the article says, "I feel it’s impossible to convince Europeans to act in their self interest. You can’t even convince them to adopt air conditioning in the summer."
> Carefully consider the lifestyle of someone living several decades ago. Would you honestly want to live such a lifestyle yourself?
Sure, I lived it, and it was very pleasant at the time and in many ways better than now in retrospect. e.g. always-on access to infinite content engines like YouTube, TikTok, X, Facebook, etc. is probably a net negative, both for individuals and society. I wouldn't want to go back a century or more and give up air conditioning, dishwashers, washing machines, air travel, electric lights. But a few decades, sure, in a heartbeat.
I agree, 30 years ago a working man doing 40 hours a week in a factory could still support a family on one income and expect to own a house. We hit a peak 30-40 years ago.
I hear this often, but I think this discounts the fact that this was mostly true for the US/Western Europe at a time where they enjoyed unilateral super-powerism as a result of winning WWII. I'm not sure that kind of prosperity is normal (though I hope it could be).
I'm worried the harsh reality for most humans is that life is often not that easy. And if it is, it won't be for long.
But there is still enough wealth for all of those houses to exist. That tells me the world is wealthy enough, but the wealth is in the hands of different people.