lubujackson's comments | Hacker News

Recognize that this, despite assurances, will be used for gamified pricing, just as McDonald's is doing now. I am sure prices will be "consistent in every store", but few will pay those prices because they will be much higher than normal. However, if you've shopped at Walmart in the past week, that's 10% off; and if it's a Tuesday morning, you get that extra 5% early bird price; and if you spend at least $100, you will earn 10 WallyBucks or whatever waste-of-time gamified coupon scheme they come up with.
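
To spell that out, here's a toy sketch of exactly the made-up rules above. Everything in it is hypothetical, not anything Walmart has actually announced:

    # Toy model of the hypothetical gamified pricing described above.
    def checkout_price(base: float, shopped_past_week: bool,
                       tuesday_morning: bool) -> tuple[float, int]:
        price = base
        if shopped_past_week:
            price *= 0.90  # 10% off if you shopped in the past week
        if tuesday_morning:
            price *= 0.95  # extra 5% early bird price
        wallybucks = 10 if price >= 100 else 0  # spend $100, earn points
        return price, wallybucks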

The gamified pricing won't be done through the price labels, but through Loyalty Programs, Apps and Coupons.

I use both pretty heavily. Cursor has an "Ask" mode that is useful when I don't want it to touch files or when I'm asking a non sequitur. Claude may have an easy way to do this, but I haven't sought it out.

Cursor also has an interesting Debug mode that actively adds specific debug logging logic to your code, runs through several hypotheses in a loop to narrow down the cause, then cleans up the logging. It can be super useful.
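
Roughly, as I understand it (this is just my mental model of the feature, not Cursor's actual implementation, and every helper name here is made up):

    # Sketch of a hypothesis-loop debugger; all names are hypothetical.
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Hypothesis:
        name: str
        add_logging: Callable[[str], str]    # instruments the code
        confirmed_by: Callable[[str], bool]  # checks the captured logs

    def debug_loop(code: str, run: Callable[[str], str],
                   hypotheses: list[Hypothesis]) -> Optional[Hypothesis]:
        for h in hypotheses:
            logs = run(h.add_logging(code))  # run the instrumented code
            if h.confirmed_by(logs):
                return h                     # likely cause found
        return None                          # nothing confirmed; widen search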

Finally, when making precise changes I can select a function, hit cmd-L and add certain lines of code to the context. Hard to do that in Claude. Cursor tends to be much faster for quicker, more precise work in general, and rarely goes "searching through the codebase" for things.

Most importantly, I'm cheap. If I leave Cursor on Auto I can use it full time, 8 hours a day, and never go past the $20 monthly charge. Yes, it is probably just using free models, but they are quite decent now: quick and great for inline work.


The majority of Ask/Debug mode can be reproduced using skills. For copying code references, if you're using VS Code, you can look at plugins like [1], or even make your own.
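
For example, a skill is just a folder with a SKILL.md in it; something like this (a hypothetical sketch at .claude/skills/ask-mode/SKILL.md, with the wording made up, though the YAML-frontmatter format is real) gets you most of an "Ask" mode:

    ---
    name: ask-mode
    description: Answer questions about the codebase without editing anything.
    ---
    Answer the user's question by reading code only. Do not create,
    modify or delete files, and do not run commands that change state.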

Cursor's auto mode is flaky because you don't know which model they're routing you to, and it could be a smaller, worse model.

It's hard to see why paying a middleman for access to models would be cheaper than going directly to the model providers. I was a heavy Cursor user, and I've completely switched to Codex CLI or Claude Code. I don't have to deal with an older, potentially buggier version of VS Code, and I also have the option of not using VS Code at all.

One nice thing about Cursor is its code and documentation embedding. I don't know how much code embedding really helps, but documentation embedding is useful.
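
The retrieval side of documentation embedding is simple enough to sketch. This is a generic illustration, not Cursor's actual pipeline; embed() stands in for whatever embedding model you call:

    # Generic embedding retrieval over documentation chunks.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        ...  # stand-in for an embedding model or API call

    def build_index(chunks: list[str]) -> np.ndarray:
        vecs = np.stack([embed(c) for c in chunks])
        return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

    def top_k(query: str, chunks: list[str], index: np.ndarray, k: int = 5):
        q = embed(query)
        q = q / np.linalg.norm(q)
        scores = index @ q  # cosine similarity against every chunk
        return [chunks[i] for i in np.argsort(scores)[::-1][:k]]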

[1] https://marketplace.visualstudio.com/items?itemName=ezforo.c...


Stories are particularly troubling because we have the concept of "suspending disbelief": readers tend to take a leap of faith with long-winded narratives because we assume the author is going somewhere with the story and has written purposefully.

When AI can write convincingly enough, it is basically a honeypot for human readers. It looks well-written enough. The concept is interesting and we think it is going somewhere. The point is that AI cannot write anything good by itself, because writing is a form of communication. AI can't communicate, only generate output based on a prompt. At best, it produces an exploded version of a prompt, which is the only seed of interest that carries the whole thing.

Somebody had that nugget of an idea which is relevant for today's readers. They told the AI to write it up, with some tone or setting details, then probably edited it a bunch. If we enjoy any part of it, we are enjoying the bits of humanity peeking through the process, not the default text the AI wrote.


Right, but in the present case we have exactly what you're describing—a story, almost fully written by AI but with some human cherry-picking in the mix. And readers are finding it a phenomenal story, then wanting to vomit retrospectively on learning about the authorship. It just seems patently obvious to me that this is not where the sentiment is going to stay—it will hit the margin, like the people who decide not to own a cell phone, or those who would rather listen to analog audio; there will be a market for it, but it will exist at the margin. Eventually, especially for young people, more and more of what they consume will be AI generated and they won't care, because it's indistinguishable from human work.

Or, I digress, it will be distinguishable from human work but because it's so much better than anything that a human could have ever created. These AI tools that we have now are as dumb as they will ever be. If we ever reach AGI or superintelligence or whatever—or even if not, even if these tools just advance for 10 more years on their current trajectory—it's easy for me to imagine some scenario where the machines can generate something so perfect to your liking that you just prefer it to anything a human ever would have created, storytelling and all.

You can take the general case where AI can just generate a better movie than a team of humans could plausibly generate. After all, AI doesn't have any of the physical constraints of a movie studio—the budget, the logistics of traveling from location to location, the catering, the fact that the crew has to sleep, has to coordinate schedules, all that. AI, with some human involvement or not, could just keep iterating on some script on a laptop overnight until it's created an optimized version more satisfying to humans than any human-made movie ever created. Or, in a narrow case, it could create the perfect movie for you, given what it knows about you and your interests. All human movies would look inferior.

For my kids, who I'm sure are going to grow up in a world where this type of art is embedded everywhere—and where the human version is almost certainly going to be worse—I don't think the desperate cries to see the last scrap of human ingenuity will mean anything. All of these people throwing rocks at Waymos, and others boycotting companies for generating ads rather than shooting one with a video studio: it's all so helpless, desperate and obviously futile in the face of what's coming.

I mourn the future that seems plausible here but I also welcome it as inevitable. The technology is coming, and people are going to have to adapt one way or another.


You're talking about content. Only content can be "perfect" as you say.

When I'm listening to music, looking at art, or seeing a play or a short film, I want to feel a connection to the humans behind it. AI is by definition missing that connection. That's what makes me retrospectively vomit at AI writings like these. That connection requires that the humans behind it are imperfect; the solo can have one or two sloppy notes, but at least it's genuine interaction. We have seen this same yearning for connection with all the "Don't use an LLM to comment, use your true style of writing with its flaws" rules.

I'm 100% certain mainstream studios will be producing "perfect" content with AIs, just like current mainstream pop stars have 10 ghostwriters working on each song to create "perfect" songs. The good stuff will exist on the fringes, as always, and I'm OK with that, as I have been for years.

And the future may not be as settled as you think it is. Leaders try to sell you their vision of the future by saying it is settled and certain, but that is because they want you to believe it: if you and the masses believe it, the future is more likely to settle the way the leaders want. But you can also actively refuse that future and find a different future that's worth believing in yourself.


The riff comes first, the people come second. One of the nice things about punk and metal is how fundamentally anti-celebrity both genres are. In histories of the genres, you will usually find that such-and-such band made such-and-such invention that made certain new structures accessible. Of course the social background of the scenes where the music emerged is important too, but the history is traced first in terms of the riff. Books glorifying a particular rock star's life history are rare, even though there are some "superstars" in metal and punk. The culture is very "only analog is real, digital's fake shit", but in some other ways these genres seem much closer to having no difficulty accepting a valid musical work regardless of its origin.

I don't quite understand what you're getting at with this comment. In metal and punk, authenticity is pretty much a cornerstone of the genre, and metal values human skill (all the solo parts, the fast playing). I've played and listened to punk and metal my whole life, but will also enjoy early Lady Gaga, Eminem, Kendrick etc. celebrities, because I recognize their authenticity and skills. Sabrina Carpenter and Drake go over my head because of blatant ghostwriting, and even though they have good tunes, I vomit retrospectively.

So what is AI bringing to the fans of these genres that the fans might value? Because it's not authenticity nor is it skills. What is the point you're trying to make?


I am saying that on the surface it might seem they should be the staunchest opponents, and as I said, the culture is "only cassette tape is real, otherwise fuck off and die", but simultaneously it's also one of the least image/player-focused genres in some ways: what is being played is of much higher priority than who specifically is playing it.

Hmm, I can think of various examples where the guitarist changed and people dismissed the new guitarist. Take a look at Megadeth, for example: every new solo guitarist gets compared to Marty Friedman even though he hasn't been in the band for 26 years. So a lot of it is player-focused.

But your point also stands here, every new guitarist must play the solos as close to the original ones as possible, otherwise it's not the same experience. So on the music level "what" is of much higher priority still. But I wouldn't say it is as black and white as you make it out to be.


Some of course have a very unique style that seems very hard to replicate. Personally I haven't yet found a single band that manages to faithfully execute classic-era Slayer. But there are countless bands today who execute Norwegian black metal and Swedish death metal very well.

Edit: And a lot of modern black metal for example doesn't even bother with stating who they are. Member lists are pseudonymous or anonymous. I think this "anti god" culture makes metal different from other genres in some ways.


Ok I'm not as up to date with modern black metal, that pseudonymity seems cool.

There's also the up-and-coming math rock band Angine de Poitrine, who are also anonymous: https://www.youtube.com/watch?v=0Ssi-9wS1so . In these cases you can argue that the person doesn't matter, but in my opinion it still does. There's a person inside that costume who has made the decision to be anonymous as part of the whole experience. That's part of their expression.

Of course there's then bands like Ghost who have mainstreamed this too - the players wearing the costumes are usually just contract musicians and don't have anything to do with Tobias or the music other than playing for money. Good for them but f that, you are just a robot at that point.


There's anonymity/pseudonymity where an entity does no performances and releases cassettes with members credited as "M., K. and J.", or not credited at all, and there's "anonymity/pseudonymity" where a band uses it as its own image (e.g. Kanonenfieber). Obviously I meant the former, which is a legitimately music-first, person-irrelevant presentation. But modern black metal is a wide spectrum; it has some of the most image-conscious crap out there too. If anything, I think it's probably the most superficial and image-focused of the main metal genres. It's just that anonymity hasn't historically been part of death metal culture that much, though I feel its actual presentation is quite workmanlike in many ways.

Are you an AI? This looks like it was at least run through an LLM, judging by the heaps of em dashes.

Nope — you can browse my HN comment history; I've been using em dashes since long before LLMs were a thing.

Ah sorry, fair enough. I guess you were, are and will have been getting this question quite frequently :D

I used dashes quite frequently in the past, but stopped using them to avoid being associated with LLM-generated text :(

It's a shame really, because it is a useful grammatical tool.


That's all speculation, and it may prove to be true.

But:

> readers are finding it a phenomenal story

is not true across the board.

I thought to myself, explicitly, and fairly early, "This is a fun and thoughtful idea, but the writing is kinda crap" before I realized (maybe a third of the way through) "ah, right, this is genAI. That tracks."

Despite my deep-seated hatred of LLMs, I chose to finish the piece and see if I was being unfair to the actual work ("the output", in the soulless descriptor used by programmers who've never once written a real story or crafted a song).

As a longtime avid reader of fiction, lit nerd, and semi-pro musician, I understand writing and artistry better than the average HN poster, and couldn't help but see the flaws in this.

People who don't have deep knowledge of literature don't catch the tells or flaws as well, but are still understandably angry when they find out they burned their time reading clanker output, and are understandably depressed that they were suckered into it because they haven't spent a lifetime developing a deep understanding of the discipline.

It's possible that genAI approaches will surpass humans in every field we invented.

So far, though, in every field I understand deeply, I see the uncanny mediocrity of the average in every LLM output I have subjected myself to.


I remember reading about researchers trying to understand what triggered baby birds to cry for food when the mother bird came back to the nest. They found they could make a red stick for the head and a yellow stick for the beak, and the babies would yell just as loudly for food.

What stuck with me is that by elongating the yellow stick, they could make the baby birds yell even harder than when their mother was there. In other words, our instincts and impulses are imprecise and can be manipulated.

This is not a new thing, though. In some ways, this is something art has long manipulated - no love is more tragic than Romeo and Juliet's, for example.


Yes, this is from Niko Tinbergen's classic monograph "The Herring Gull's World". It's the origin of the term "supernormal stimulus".

https://en.wikipedia.org/wiki/Supernormal_stimulus


Thank you, I've been looking for this term for twenty-five years.


Impressive, they must be using some optimizing algorithm to get that many pseudoscientific claims per word.



This is Apple's "Nintendo moment" when they realize they can package old hardware and win on polish and ecosystem.


> This is Apple's "Nintendo moment" when they realize they can package old hardware and win on polish and ecosystem.

The A18 Pro isn't even two years old yet; it debuted in the iPhone 16 Pro and 16 Pro Max in September 2024. What's funny is that none of the PC laptop manufacturers can match the speed and quality of the Neo.

The benchmarks for the A18 Pro are impressive; its single-thread performance beats all mobile processors [1]. Remember, this processor was created for a phone:

        Apple A18 Pro              4,091
        Apple M1 8 Core 3200 MHz   3,675
        Apple A15 Bionic           3,579
        AMD Ryzen Z1 Extreme       3,546
        AMD Ryzen 5 PRO 230        3,538
        Apple A14 Bionic           3,382
        Intel Core i5-1235U        3,090
        Apple A13 Bionic           2,354
        Intel N150                 1,902
        Intel N100                 1,893
        AMD Ryzen Embedded R1505G  1,820
[1]: "A18 Pro Benchmark" - https://www.cpubenchmark.net/cpu.php?cpu=Apple+A18+Pro&id=62...


Outside of some specialized benchmarks, only Geekbench 6 is more or less usable for comparisons between generations or manufacturers.


out of curiosity, what makes Geekbench 6 better?


Differences in score correlate to differences in performance across platforms and generations.


They already had that exact strategy between 2012 and 2020.


Apple has historically moved the minimum requirements for macOS and apps forward a bit aggressively. They need to slow that down now if they want us to take the MacBook Neo seriously.


Good. So many software developers have gotten so lazy with RAM usage in the past few decades. I hope the Neo is a kick in the pants to get everyone in the Apple ecosystem to take memory usage seriously.

More efficient software benefits everyone.


> So many software developers have gotten so lazy with RAM usage in the past few decades.

Fewer developers want to write ASM or C today. It's slower to market, slower to roll out features, etc. While that may seem like a good thing, and probably could be, the market doesn't like it.

Developers choose heavyweight frameworks, or don't make use of modern features in said frameworks to improve performance. And in some cases, performance can be "good enough". If I put myself in a developer's shoes: if my app performs well enough, it's not my problem what else is running on your system. Besides, the OS governs it all regardless.

That said, macOS has a terrible memory leak _somewhere_ that impacts even OOTB apps and this hasn't been corrected for the last two major releases.


You don't need to program in ASM or C to write a memory-efficient program. Swift, Go, Rust, C++ and C# are all reasonably memory-efficient at the scales we're talking about.

Usually you just have to actually look at memory usage and trim the obvious fat. But so many developers these days treat memory as an infinite resource, and don't have a clue how to use profiling tools to even investigate memory usage. That, and maybe stop shipping a copy of Chrome with your application.
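
In Python, for example, the standard library alone will show you the obvious fat (run_workload here is a stand-in for your app's hot path):

    # Print the top allocation sites using nothing but the stdlib.
    import tracemalloc

    def run_workload():
        # stand-in for whatever your application actually does
        return [list(range(1_000)) for _ in range(1_000)]

    tracemalloc.start()
    data = run_workload()
    snapshot = tracemalloc.take_snapshot()
    for stat in snapshot.statistics("lineno")[:10]:
        print(stat)  # file:line, total size, allocation count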

I'm hopeful that LLMs will improve the state of application development. Claude can write sloppy code, but it also knows how to write Rust and Swift, and it knows a lot of tricks for optimisation if you prompt it.

There are third-party libraries that know how to interact with Spotify. I wonder how many Claude Code tokens it would take to make a simple, native Spotify client. Or a Discord client. Or a client for Teams or Slack.
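
Probably not many; the core of the Spotify Web API is tiny. A minimal sketch, assuming you've already done the OAuth dance and have an access token with the playback scope (token acquisition omitted):

    # Minimal Spotify Web API core. Assumes an OAuth access token with
    # the user-modify-playback-state scope.
    import requests

    API = "https://api.spotify.com/v1"

    def _headers(token: str) -> dict:
        return {"Authorization": f"Bearer {token}"}

    def search_track(token: str, query: str) -> str:
        r = requests.get(f"{API}/search", headers=_headers(token),
                         params={"q": query, "type": "track", "limit": 1})
        r.raise_for_status()
        return r.json()["tracks"]["items"][0]["uri"]

    def play(token: str, uri: str) -> None:
        r = requests.put(f"{API}/me/player/play", headers=_headers(token),
                         json={"uris": [uri]})
        r.raise_for_status()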


It's really quite bad. Telegram Lite is using 1.16 GB with just a single chat vs Signal using 193 MB. Somehow VS Code (including its renderers) manages to come in pretty low compared to even Apple's native apps.


> Somehow VS Code (including its renderers) manages to come in pretty low compared to even Apple's native apps.

Because the issue isn't Electron; it's failing to free resources, which you can do in any language/platform.


> vs Signal using 193 MB

That's still an order of magnitude worse than it should be. You don't need 200 MB of RAM for a chat app.


I'd disable major OS updates and stay on Tahoe, and only upgrade if other Neo owners report it's OK to do so. I've been burned enough times by iOS updates that made the phone sluggish.

Not necessarily a reason to avoid the Neo, for the right use case. If I had secondary school kids they’d get one of these, but something to bear in mind.


Except I can buy two or three Switches for the Neo's price tag.

Nintendo Switch - 279 euro

Nintendo Switch 2 - 489 euro

Neo with a proper SSD size - 800 euro.


Except you have to run Tahoe


I feel like people are sleeping on Cursor; no idea why more devs don't talk about it. It has a great "Ask" mode, the debugging mode has recently gotten more powerful, and its plan mode has started to look more like Claude Code's plans when I test them head to head.


Cursor implemented something a while back where it started acting like ChatGPT does in its auto mode.

Essentially, it chose on its own which model/reasoning effort it was going to use, regardless of my preferences. It basically moved to dumber models while writing code in between things, producing some really bad results for me.

Anecdotal, but the reason I will never talk about Cursor is that I will never use it again. I have barred the use of Cursor at my company. It just does some random stuff at times, which is more egregious than what I see from Codex or Claude.

ps. I know many other people who feel the same way about Cursor and others who love it. I'm just speaking for myself, though.

ps2. I hope they've fixed this behavior, but they lost my trust. And they're likely never winning it back.


Don’t use the “auto” model and you will be fine.

You just described their "auto" behavior, which I'm guessing uses Grok.

Using it with specific models is great, though you can tell that Anthropic is subsidizing Claude Code when you watch your API costs more directly. Some day the subsidy will end. Enjoy it now!

And cursor debugging is 10x better, oh my god.

I have switched to 70% Claude Code, 10% Copilot code reviews (non-Anthropic model), and 20% Cursor, and I switch the models a bit (sometimes have them compete — get four to implement the same thing at the same time, then review their choices, maybe choose one, or just get a better idea of what to ask for and try again).


> get four to implement the same thing at the same time, then review their choices

Why would you do that to yourself? Reviewing 4 different solutions instead of 1 is 4 times the amount of work.


You wouldn't do that for everything. I'd reserve it for work with higher uncertainty, where you're not sure which path is best. Different model families can make very different choices.


Yes, this exactly.

Also, if there is a UI design involved, the results could look wildly different.

I rarely use this feature, but when appropriate, it is fantastic to see the different approaches.


Same here. Auto mode is NOT ok. Sadly, smaller models cannot be trusted with access to Bash.


I used to love Cursor, but as I started to rely on the agent more and more, it just got way too tedious having to Accept every change.

I ended up spending time just clicking "Accept file" 20x now and then, accepting changes from the past 5 chats...

PR reviews and tying review to git make more sense at this point for me than the diff tracking Cursor has on the side.

Cancelling my Cursor subscription before the next card charge, solely due to the review stuff.


You can disable this if you want, it's under "Inline Diffs" in the Cursor settings.


In the coworking space I am in, people are hitting limits on the $60 plan all the time. They are thinking about which models to use to be efficient, what context to include, etc.

I'm on the Claude Code $100 plan and never worry about any of that stuff, and I think I am using it much more than they use Cursor.

Also, I prefer CC since I am terminal native.


Tell them to use the Composer 1.5 model. It's really good, better than Sonnet, and has much higher usage limits. I use it for almost all of my daily work, don't have to worry about hitting the limit of my $60 plan, and only occasionally switch to Opus 4.6 for planning a particularly complex task.


Cursor tends to bounce out of plan mode automatically and just start making changes (while still actually in plan mode). I also have to constantly remind it “YOU ARE IN PLAN MODE, do not write a plan yet, do not edit code”. It tends to write a full-on plan with one initial prompt instead of my preferred method of hashing out a full plan, details, etc… It definitely takes some heavy corralling and manual guardrails but I’ve had some success with it. Just keep very tight reins on your branches and be prepared to blow them away and start over on each one.


I love to build a plan, then cycle to another frontier model to iterate on it.


Creator here - this was much more technically difficult than it seems, because rendering pixel-perfect backgrounds with scaling width is non-obvious. Hit 'Esc' and you can set some options, including changing the background pattern.

If you want to see a big file, I have a direct link to load the Blocktronics WTF4 ANSI which is over 4000 lines: https://sure.is/ansi/?wtf4=1


Wow, nice. Is there a fast scroll on mobile?


On the loading page you can click the word "Esc" to open the options menu and set the speed there. If you select "full line render" you can then set it from .25x to 10x speed and it doesn't do the character-by-character rendering.


It's impossible to touch Esc on mobile; maybe it's my fat fingers, but it just starts showing the image.


A context management system that keeps your docs synced to your code and gives LLMs a way to navigate docs easily: https://github.com/yagmin/lasso


Wow. This is the first take on AI that I find really shocking, and it is totally believable.

But worth noting: the difference between a system you interact with and a visualization of such a system, injected into your eyeball, is the lack of secondary effects (or first effects?). I suppose those can ALSO be faked and maintained, but at some point the illusion is more work than the system that would normally create it.

It seems likely that the fake system would be much more work than the code needed to generate it for most modern uses, but when we are spinning up bespoke code that will be run once, for a single user? The fake output might be more efficient.

I don't think this means this is the future. But it certainly means it could become a preferred approach for certain use cases, when there is no need for reuse.

I love finding ideas like this, which feel like true "AI-native" thinking.

