Is telling non-sexual partners about one's own proclivities a means of seeking validation for behavior that one internally defines as socially fringe or uncommon?
The only craftsmen are the ones at the edge of the lingo tree?
To use your own analogy, as a machinist myself: I can master the lathe and the bow drill without ever learning simulation-driven CAM, and I would be no less a machinist than the guy pressing buttons on a brand-new Haas.
If you work via notepad.exe and assembly, with a compiler and linker ready in the next window, fine! The work is what matters.
It stops at the tools you use: "it's a tool you use every single day". If it's not a tool you use every day, you don't need to learn it.
If you don't use language servers and don't engage with development environments that rely on them, you need not learn them.
If you're making chips on a Monarch 60 you don't need to learn shit about CNC. If you're pushing buttons on a Haas you do.
If you're coming from a Monarch and want to try pushing buttons on the Haas the kids are using, you need to learn how CNC works. That's your job. If you want to switch from Notepad to Zed, you need to learn how language servers work.
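For the curious, "how language servers work" is less mysterious than it sounds: the editor and server just exchange JSON-RPC 2.0 messages framed with a Content-Length header over stdio. A minimal Python sketch of that framing (the rootUri below is a placeholder, not a real project):

```python
import json

# Each LSP message is a JSON-RPC 2.0 payload preceded by a
# Content-Length header, separated from the body by a blank line.
def frame(message: dict) -> bytes:
    body = json.dumps(message).encode("utf-8")
    return f"Content-Length: {len(body)}\r\n\r\n".encode("ascii") + body

# The first request an editor sends is always "initialize".
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {"processId": None, "rootUri": "file:///tmp/project", "capabilities": {}},
}

print(frame(initialize)[:60])
```

Once you see that it's just framed JSON over a pipe, the rest (diagnostics, completions, hovers) is a vocabulary of methods on top of the same transport.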
What model are you using? Codex with gpt-5.4 set to xhigh (and now gpt-5.5) has zero issues helping me with rigging and fixing glb/fbx models; works like a charm. One time I instructed it to iterate with screenshots because it was a gnarly task, but usually it figures everything out even when headless.
Yeah, 36 hours is honestly pretty disappointing. The old Withings ScanWatch easily ran >2 weeks with HR and notifications enabled; I'd have expected similar performance from the Casio.
In my car circles the 968 was seen as a total POS: it was really just trying to compete with the RX-7 and the Fairlady, did a worse job of being a good sports car than either, and pushed the brand further into cheapened every-person territory for the sake of financial incentive, all while inflating the cost of their premium offering, the 911.
A 1:1 example, but I'm not sure those were the points being made here.
The 968 is such a weird choice for this when the Boxster exists, did basically everything better, was a major commercial success, and has spawned a line of cars that many argue are better than the 911 except for the name and traditionalist-fandom over exact engine position that prevents Porsche from giving them all the biggest engines and fanciest tech.
But the Boxster didn't try to replace the 911 on day one. Or even go after the other 300ZX/Supra/whatever 2+2s on day one. It was instead nearly a whole-cloth "what if pure 2-seater convertible driver's car, but the best possible version" upscale-Miata initially, which wasn't an existing segment at all, and being roadster-first was a key separator from the also-2-seater Corvette.
(The iPhone and iPad were arguably Apple's Boxster, the "entry-level that ends up dominating sales and growing into full-blown new product lines", except that the comparison eventually falls down because the form-factor difference with the Mac is much more of a fundamental separation. So maybe Apple's Boxster is instead the laptop in the first place, which wiped out most of their desktop workstation business by the early 2010s at the latest.)
Yeah, this is looking at the 968 through rose-tinted glasses. But a lot of the comparison does check out, and the Neo is a fine on-ramp for first-time macOS users, just like the 968.
Porsche killed the 944S Turbo because it was accidentally faster than the Carrera and 930, and that was taboo. Its successor, the 968, was the awkward compromise.
It's crazy that the experiences are still so wildly varied that people can use this strategy as a 'valid' gotcha.
AI works for the vast majority of nowhere-near-the-edge CS work -- you know, all the stuff the majority of people have to do every day.
I don't touch any kind of SQL manually anymore. I don't touch iptables or UFW. I don't touch polkit, dbus, or any other human-hostile IPC anymore. I don't write cron jobs or systemd unit files. I query for documentation rather than slogging through a stupid web wiki or equivalent. A decent LLM does it all from fairly easy 5-10 word prompts.
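To illustrate the systemd case: a short prompt like "nightly 2am backup timer" reliably yields a unit along these lines (the unit name, description, and schedule here are hypothetical examples, not taken from the thread):

```ini
# /etc/systemd/system/backup.timer  (hypothetical example)
[Unit]
Description=Nightly backup

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

It needs a matching backup.service to run, but the point stands: this is exactly the kind of fiddly, rarely-written config you no longer have to memorize.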
Ever do real work with a mic and speech-to-text? It's 50x'd by LLM support. Gone are the days of saying "H T T P COLON FORWARD SLASH FORWARD SLASH W W W".
This isn't some untested frontier land anymore. People who embrace it find it really empowering except at the edges, and even those state-of-the-art edge people are using it to do the crap work.
This whole "Yeah, well let me see the proof!" ostrich-head-in-the-sand thing works about as long as it takes for everyone to make you eat their dust.
People ask for examples because they want to know what other people are doing. Everything you mention here is VERY reasonable. It's exactly the kind of stuff no one is going to be surprised that you are getting good results with the current AI. But none of that is particularly groundbreaking.
I'm not trying to marginalize your or anyone else's usage of AI. The reason people say "such as" is to gauge where the value lies. US GDP is around $30T. Right now there's something like ~$12T reasonably involved in the current AI economy: massive company valuations plus data center and infrastructure build-out. A lot of it is underpinning and heavily influencing traditional sectors of the economy, which run a real risk of going down the wrong path.
So the question isn't what AI can do; it can do a lot, and even very cheap models can handle most of what you have listed. The real question is what the cutting-edge, state-of-the-art models can do so much better that the added value justifies such a massive economic presence.
That's all well and good, but what happens when the price to run these AIs goes up 10x or even 100x?
It's the same model as Uber, and I can't afford Uber most of the time anymore. It's become cost prohibitive just to take a short ride, but it used to cost like $7.
It's all fun and games until someone has to pay the bill, and these companies are losing many billions of dollars with no end in sight for the losses.
I doubt the tech and costs for the tech will improve fast enough to stop the flood of money going out, and I doubt people are going to want to pay what it really costs. That $200/month plan might not look so good when it's $2000/month, or more.
Why not try it yourself? Inference providers like BaseTen and AWS Bedrock have perfectly capable open source models as well as some licensed closed source models they host.
You can use "API-style" pricing on these providers, which is more transparent about costs. It will very likely end up at more than $200 a month, but the question is: are you going to see more than that in value?
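A back-of-envelope way to answer that is to multiply your token volume by per-token rates. The prices and volumes below are hypothetical placeholders, not any provider's actual rates; plug in the numbers from your provider's pricing sheet:

```python
# Back-of-envelope: API-style (per-token) cost vs. a flat $200/mo plan.
# All rates and volumes below are assumed placeholder values.
input_price_per_m = 3.00    # $ per 1M input tokens (assumed)
output_price_per_m = 15.00  # $ per 1M output tokens (assumed)

daily_input_tokens = 2_000_000   # assumed heavy-ish coding-agent usage
daily_output_tokens = 400_000
workdays = 22

monthly_cost = workdays * (
    daily_input_tokens / 1e6 * input_price_per_m
    + daily_output_tokens / 1e6 * output_price_per_m
)
print(f"${monthly_cost:.2f} per month")  # prints $264.00 per month
```

Under these assumed numbers you'd pay modestly more than the flat plan, which is why the question is really about the value side of the ledger, not the cost side.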
Bedrock and other third-party open-weight hosted-model costs are not subsidized. What could possibly be the investment strategy for being one of twelve fly-by-night openrouter operators hosting the latest Qwen?
It's an important concern for those footing the bill, but I expect companies actually facing that impact to do a cost-benefit calculation and use a mix of models.

For the sorts of things GP described (iptables, recalling how to scan open ports on the network, the sorts of things you usually could answer for yourself with 10-600 seconds in a manpage, help text, Google search, or Stack Overflow thread), local/open-weight models are already good enough and fast enough on a lot of commodity hardware. Right now companies might just offload such queries to the frontier $200/mo plan because why not: tokens are plentiful and it's already being paid for. If in the future it goes to $2000/mo with more limited tokens, you might save those for the actually important or latency-sensitive work and use lower-cost local models for the simpler stuff. That lower-cost option might involve a $2000 GPU to be really usable, but it pays for itself quickly by comparison.

To use your Uber analogy: people might have used it to get downtown and to the airport, but now that it's way more expensive, they'll take a bus, walk, or drive downtown instead. The airport trip, even though it's more expensive than it used to be, is still attractive against competing alternatives like taxis or long-term parking.
None of that is concrete, though; it's all alleged speed-ups with no discernible (though much claimed) impact.
> This whole "Yeah, well let me see the proof!" ostrich-head-in-the-sand thing works about as long as it takes for everyone to make you eat their dust.
People will stop asking for the proof when the dust-eating commences.