It’s so strange. I think there are a few different groups:
- Shills or people with a financial incentive
- Software devs that either never really liked the craft to begin with or who have become jaded over time and are kind of sick of it.
- New people that are actually experiencing real, maybe over-excitement about being able to build stuff for the first time.
Forgetting the first group as that one is obvious.
I’ve encountered a heap of group 2. They’re the ones sick of learning new things, for whatever reason. Software work has become a grind for them and vibe coding is actually a relief.
Group 3 I think are mostly the non-coders who are genuinely feeling that rush of being able to will their ideas into existence on a computer. I think AI-assisted coding could actually be a great on-ramp here and we should be careful not to shit on them for it.
You’re missing the group of high performers who love coding, who just want to bring more stuff in the world than their limited human brains have the energy or time to build.
I love coding. I taught myself from a book (no internet yet) when I was 10, and haven’t stopped for 30 years. Turned down becoming a manager several times. I loved it so much that I went through an existential crisis in February as I had to let go of that part of my identity. I seriously thought about quitting.
But for years, it has been so frustrating that the time it took me to imagine roughly how to build something (10-30 minutes depending on complexity) was always dwarfed by the amount of time it took to grind it out (days or sometimes weeks). That’s no longer true, and that’s incredibly freeing.
So the game now is to learn to use this stuff in a way that I enjoy, while going faster and maintaining quality where it matters. There are some gray beards out there who I trust who say it’s possible, so I’m gonna try.
Good point and I’m exactly at the same point as you with this. Working on letting go of the idea (and to be honest just the habit) that it’s somehow ‘cheating’ at the moment.
Not a troll. I’ve been doing a lot of self reflection on this topic lately. Some people seem to enjoy software for the act & craft, where the outcome / artifact is secondary or irrelevant. I don’t. Some people enjoy the artifacts it produces, for their utility or economic value. Not really me either. Often people frame it as this dichotomy, but I’ve realized my enjoyment and self-fulfillment comes from creating an artifact that is genuinely good and that I can be proud of creating. Too much AI robs me of this. I’ve created cool stuff with AI that leaves me feeling nothing because I didn’t really create it.
This is all valid. Your original comment came across as a troll because it implied that nobody could ever feel good about stuff they built with AI. Asserting that you know more about the emotional state of strangers on the internet than they know themselves is arrogant.
Well, it’s a genuine question. Like, if I have a machine in my house where I give it a recipe and it spits out the food, should I feel good about having “cooked” that food? Or if someone prompts an AI for some art, should they feel proud of “creating” that art? I think not. And it’s the same with code. How much of the work you actually did should influence how you talk and feel about a creation. So many people lazily prompt an AI and then come here to post about something they “made”, and I think that’s wrong.
I’m thinking there are probably degrees to it. Like there is some stuff I absolutely want to hand craft, but then other stuff I don’t mind so much.
One of the interesting discussions at work (I’m in gamedev) has been about tooling and where AI fits in there.
Previously you’d spend sometimes significant time writing a tool, then polishing it up and giving it to the team (think things like editor extensions that make your workflow easier).
But AI can make this kind of bespoke tool dev so cheap now that it’s possible for every single dev to have their own tool that matches the way they work exactly. At that point, do you really need to spend the 80% of the effort that goes into polishing and getting it ready for mass consumption?
Stuff like that is interesting. I still can’t imagine never looking at the AI-generated code, but I’ve seen people take the approach of “I’m not interested in the code, only in what the thing does. If it’s wrong, I ask the agent to fix it”.
Yes I'm exactly like you as well. I've been coding for 30+ years, I still love coding and system building etc, but sometimes the level of frustration to find the information and then get something working is simply too high.
Over a weekend, I used ChatGPT to set up Prometheus and Grafana and added node exporters to everything I could think of. I even told ChatGPT to create NOC-style dashboards for me, given the metrics I gave it. This is something that would have painstakingly taken several weeks, if not more, to figure out, and it's something I've been wanting to do, but the cognitive load and anticipatory frustration were too high for me to start. I love how it enables me to just do things.
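For anyone curious what the core of a setup like that looks like, the Prometheus side is just a scrape config listing your exporters. This is a minimal sketch, not the commenter's actual config; the hostnames are placeholders, and the ports are the defaults for node_exporter (9100) and windows_exporter (9182):

```yaml
# prometheus.yml: minimal sketch scraping node_exporter on two Linux
# hosts and windows_exporter on one Windows host (hostnames invented)
scrape_configs:
  - job_name: "node"
    static_configs:
      - targets:
          - "server1.lan:9100"   # node_exporter default port
          - "server2.lan:9100"
  - job_name: "windows"
    static_configs:
      - targets:
          - "winbox.lan:9182"    # windows_exporter default port
```

From there, Grafana just needs Prometheus added as a data source, and the dashboards query those metrics.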
My next step is to integrate some programs that I wrote that I still use every day to collect data and then show it on the dashboards as well.
On a side note, I don't know why Grafana hasn't integrated AI more deeply. Having to sift through all the ridiculous metrics that different node exporters advertise, with no hint of a naming convention, makes using Grafana so much harder. I cut and pasted all the metrics and dumped them into ChatGPT and told it to make the panels I wanted (e.g. "give me a dashboard that shows the status of all my servers", and it was able to pick and choose the correct metrics across my Windows server, my MacBooks and Mac Studio, my Linux machines, etc.), but Grafana should have this built directly into the product.
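To illustrate the naming mess: even the same "CPU busy %" question needs different metric names depending on the exporter, so a cross-platform panel ends up being a pair of queries like these (a sketch; exact label sets vary by exporter version):

```promql
# Linux/macOS hosts via node_exporter
100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])))

# Windows hosts via windows_exporter: same idea, different metric name
100 * (1 - avg by (instance) (rate(windows_cpu_time_total{mode="idle"}[5m])))
```

Multiply that by disk, memory, and network, each with its own per-platform spelling, and it's easy to see why dumping the metric list into an LLM beats browsing it by hand.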
> I’ve encountered a heap of group 2. They’re the ones sick of learning new things, for whatever reason.
I think it's easy to dismiss that group, but the truth is there was a lot of flux in our industry in the last decade before AI, and I would say almost none of it was beneficial in any way whatsoever.
If I had more time I could write an essay arguing that the 2010s in software development were the rise of complexity for complexity's sake: it didn't make solving real-world problems any easier, it often massively increased the cost of software development and, worse, the drudgery, with little actually achieved.
The thought leaders were big companies who faced problems almost no-one else did, but everyone copied them.
Which led to an unpleasant coding environment where you felt like a hamster spinning in a wheel, constantly having to learn the new hotness just to do what you could already do, or else you were a dinosaur.
Right now I can throw a wireframe at an AI and poof, it's done: react, angular, or whatever, who gives a flying sock about the next stupid javascript framework, it's there. Have you switched from webpack to vite to bun? Poof, the AI couldn't care less; I can use whatever stupid acronym command line tool you've decided is flavour of the month. Need to write some Lovecraftian-inspired yaml document for whatever dumbass deploy hotness is trending this week? AI has done it, and I didn't have to spend 3 months trying to debug whatever stupid format some tit at netflix or amazon or google or meta came up with because they literally had nothing better to do with their life, or bang my head against the wall when it falls over every 3 weeks while management insist k8s is the only way to deploy things.
That in itself feels like second-system syndrome but instead of playing out over a single software project it’s the large-scale version playing out over the entire industry.
> I’ve encountered a heap of group 2. They’re the ones sick of learning new things, for whatever reason.
I say this kindly, but are you sure that _you_ aren't the one in group 2, and _they_ aren't the ones learning new things?
A lot of the discourse around AI coding reminds me of when I went to work for a 90s tech company around 2010 and all the linux guys _absolutely refused_ to learn devops or cloud stuff. It sucks when a lifetime of learned skills becomes devalued overnight.
That’s pretty fair, I’m currently in the “trying to get over the feeling that it’s cheating” phase and also just haven’t formed the habit yet of reaching for AI as a tool in my toolbox; particularly in things like pre-review AI-assisted code review, which I’ve found really useful but sometimes don’t think of doing when I could.
I don’t think that is true. I know several very high-performing engineers (some who could have retired a long time ago and are just in it for the love of the game) who use AI prolifically, without lowering any bars, and just deliver a lot more work.
EDIT: Sorry I realised you were asking more about categorisation and not downloading.
——
The closest thing I can think of is Tube Archivist, which seems made for archiving large YouTube collections, including things like comments on videos.
I’ve had mixed luck with it and it’s a bit too heavy for my fairly limited needs. Youtube-dl hasn’t worked for me for the last month or so on it — oddly enough I have a MeTube instance on the same physical machine (different VM), which is a lighter web UI for yt-dlp and which is still working fine. That’s YouTube’s fault I assume, and not the fault of Tube Archivist.
I don't think so. The tag doesn't just say "this was written by an LLM". It says which LLM - which model - authored it. As LLMs get more mature, I expect this information will have all sorts of uses.
It'll also become more important to know what code was actually written by humans.
Same here, I very distinctly remember the first time I got to use desktop-class LCD monitors (it was at a new job at the time) and four things stood out:
- The screen size. Going from a 17” or maybe 19” CRT at home to a 19” LCD but without the CRT bezel — the screen looked HUGE.
- The clarity and flatness. The lack of smudging on text, the consistent geometry, being able to see the screen edge right up to the bezel without any wasted space (which you often had on a CRT if you wanted an image without excessive pincushion / bulge).
- The relative lack of ghosting when compared to laptop LCD screens I’d used in the past.
- The colour gamut. Looking back I think those monitors I first saw were relatively wide gamut monitors being used with Windows XP and no colour profiles. The colour saturation was impressive (not accurate, but striking).
From that point on, I don’t remember CRTs ever looking better overall than any desktop LCD, but I dare say I just didn’t have access to any high-end CRTs at the time.
I also don’t remember CRTs having true black levels close to OLED, which is another thing I hear people say sometimes. I mean, you could get deep blacks, but you’d be sacrificing brightness and white/gray detail at the white end. Again though, it might have just been the CRTs I knew of at the time.
I went from a 19" CRT capable of 1600x1200@75Hz to a 17" LCD capable of 1280x1024@60Hz, basically because that CRT would've taken up a huge chunk of desk real estate in my dorm.
My first impressions were that the screen was bright as hell, sharp (but I was torn on whether that was good or bad, given the blockiness that it introduced), thin and light (awesome!), and sucked to run at anything but the native resolution. After a few hours, I realized that my eyes weren't getting tired looking at it, and that it was nice not to have the subtle hum around anymore.
The CRT was a decent screen (for 1999), and the LCD was a decent screen (for 2003). Of course, I just got used to the differences, since the LCD was much more practical in my life. I still have it in storage right now.
You forgot one thing: flickering. At 60 Hz, a CRT is murder on the eyes. A few years ago I used a CRT for the first time in like ten years and my eyes hurt almost immediately.
I was never incredibly disturbed by 60Hz though I did notice it.
You reminded me of something I had forgotten though — remember when 100Hz / 120Hz TVs first became a thing? That I noticed!
I think most of my PC CRTs ran at 72Hz / 75Hz IIRC. At least with the monitor I had, I remember pushing it to 90Hz, but that would add blurriness / lose clarity, so I never used it at that rate.
I agree with this, but I don’t trust that it will stay this way.
It always seems like there’s no real way to ‘get ahead’. They’ll always find a way to adjust costs so that the system only barely pays itself off, by introducing fees or cutting rebates.
For example, there was a proposal in Australia to raise our fixed grid access fee from something like $1 a day to $5 a day.
Or consider even just the feed-in tariff for solar — that’s gone down as solar power has gotten cheaper, which is expected, but it’s another thing that increases that mythical payback period for the system.
Now to be clear I think the tech is wonderful and would 100% have a big battery and solar system if I could, but not for financial reasons.
For all intents and purposes you’re just pre-paying for the next X years of your electricity. I would at least want my battery warranty to be four times X, which it currently is not. Now in 5 years there might be battery tech that gets to that multiplier that I want and THEN I could start thinking of it as investing in ‘free electricity’.
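The arithmetic behind that is simple enough to sketch. The numbers below are made up for illustration, not quotes for any real system:

```python
def payback_years(system_cost: float, annual_savings: float) -> float:
    """Years until the system has paid for itself (the X above)."""
    return system_cost / annual_savings

# Hypothetical figures: a $12,000 solar + battery system that
# offsets $1,500 of electricity per year.
x = payback_years(12_000, 1_500)   # 8-year payback
warranty_wanted = 4 * x            # the "four times X" bar: 32 years
```

With typical battery warranties sitting around 10 years, the gap between `warranty_wanted` and reality is the whole point: you're betting the hardware outlives its payback period by a wide margin.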
But I’m sure the government and electricity suppliers will close any loopholes they can to prevent that.
There’s a whole world out there that doesn’t seem to be addressed by the original comment. On one end of that scale you have things like bespoke software for small businesses, some niche inventory management solution that just sits quietly in the corner for years. On the other end, there’s the whole world of embedded software, game dev, design software, bespoke art pipeline tools…
It can seem that the majority of software in the world is about generating clicks and optimising engagement, but that’s just the very loud minority.