Hacker News | ShyCodeGardener's comments

You're forgetting:

4. Once Spotify wrests power from the labels, they start the enshittification process themselves.


I'm old school - I tend not to blame people (or companies, for that matter) now for things they might do in the future.

They are pointing out that some locations are not a good place to grow specific things and that there is a lot of water wastage in doing so. Attempting to grow crops in the desert vs. in a temperate climate probably uses more water for the same amount of crops (unless they are desert plants, I guess). This is what's being pointed out. If I decide to grow tomatoes on the moon and then ship them back to Earth to be consumed, it's fair game for people to point out how much of a waste of resources that is vs. just growing them on Earth.

Don't be disingenuous. They already were dividing things out by type of usage, like talking about water park usage vs. the usage of an entire city for all purposes. They are already admitting that "water usage of a city" isn't only about quenching thirst and maintaining hygiene; it's not a stretch to assume that they also realize there can be water wastage in agriculture as well. They can't split out every instance of wastage that could be eliminated, and it's ridiculous to expect them to.

I don't think it's a joke about left-pad, but the idea that the complexity increases tremendously when you take a cloud of "small" things all communicating with each other. You've just pushed the complexity elsewhere. Claude can easily crunch the small microservice, but you're pushing the complexity to communications issues, race conditions, etc.

Whenever I see one of these comments, it's always from someone who tried it at the start and then gave up because of a bad experience. And many times there are more people commenting back that this was essentially the 1.0 version and that the current 2.0 version is much better. So, as someone who uses none of these products (neither the old voice assistants nor the AI ones), it's really hard to evaluate whether any of these anecdotes mean anything.

You could have tried Alexa+ at the start when it was shitty compared to plain Alexa, and maybe it's better now. But equally, none of the people who comment that it is "amazing" in its current iteration qualify their statements with experiences comparing and contrasting the old version vs. the new version, making them seem either unqualified to make statements about how much "better" it is than the old version or, at worst, shills (paid or not). The best take is that they are comparing (e.g.) day-one Alexa+ vs. the current Alexa+ without a comparison to the original Alexa.

... which is to say that it really feels like there are no clear conclusions that could be drawn from all of this.


No matter how good the LLM features are, I just want to turn my lights on and off and check the time. A perfect LLM could maybe perform on par with a simple deterministic command system for these tasks, but not better. All an LLM does is introduce the possibility that a command that worked fine yesterday will randomly not work.

Also, one of my first interactions with this Alexa+ thing was “how long is it until 8:45am”, one of only a few commands I use it for to work out how much sleep I’m getting, and it proceeded to ask me what the current time was… I immediately turned it off after that


> All an LLM does is introduce the possibility that a command that worked fine yesterday will randomly not work

Aren't hallucinations part of GenAI? I would assume that "AI" voice recognition doesn't have that baked in, but I'm not working in either of those spaces so maybe I'm missing the details. So many things are being looped into the "AI" umbrella that would have just been called machine learning or pattern recognition a decade ago (e.g. "facial recognition" vs "AI" at a time when "AI" also means chatbots like ChatGPT).


The point is Amazon is adding an “Alexa+” mode that uses LLMs. The plain voice recognition + keyword matching or however the old version works is more reliable (I assume, I didn’t use the new mode much because it immediately failed at what I wanted)
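The "keyword matching" style of command handling mentioned above can be sketched as a toy example. Everything here is made up for illustration (this is not Amazon's actual implementation); the point it demonstrates is that the same input always maps to the same action, which is the reliability property being discussed:

```python
# Toy sketch of deterministic keyword matching, the style of command
# handling the old assistants presumably used. All rules and action
# names below are hypothetical.

def match_command(utterance: str) -> str:
    """Map an utterance to an action via fixed keyword rules.

    Deterministic: identical input always yields identical output,
    unlike an LLM, which may interpret the same phrase differently
    between runs or model updates.
    """
    text = utterance.lower()
    if "light" in text and ("off" in text or "out" in text):
        return "lights_off"
    if "light" in text and "on" in text:
        return "lights_on"
    if "time" in text:
        return "tell_time"
    return "unknown"

print(match_command("turn the lights off"))  # -> lights_off
print(match_command("what time is it"))      # -> tell_time
```

The trade-off is also visible: the matcher fails closed (returns "unknown") on anything outside its rules, whereas an LLM handles vaguer phrasing at the cost of occasional unpredictable failures on phrasings that used to work.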

> that tried it at the start and then gave up because of a bad experience

I've had enough bad experiences with products that never got better, or just got worse (Exhibit A: Windows 11). Like most primates, I am capable of learning, and I've learned that once a consumer product/service goes bad there's little hope of a turn-around. I accept that you're telling me that it's gotten better, but of the people I know IRL who also use an Echo, none of them have told me that Alexa+ is worth trying, let alone committing to.

Yes, it's on me for not giving Alexa+ a second chance, but I'm not willing to give Alexa+ a second chance because, as a technology product/service customer, I just don't feel respected by the industry I work for (...lol); if Amazon, Microsoft, Google, et al won't respect me, why should I venture outside my comfort-zone for... what benefit, exactly?


> I accept that you're telling me that it's gotten better,

I'm not telling you this. I'm basically saying that with Alexa/Alexa+, and with Google's Gemini vs. Google Now(?), I've seen many posts like this, where someone complains about the AI version, but then other posts come in and claim how much better it is. Even for things like Claude Code you get people complaining about how many mistakes it makes, and then people coming in and saying it's because they are "doing it wrong". Either "Claude has improved by 10x in the last 6 months. It's so amazing! If you used it a year or so ago it doesn't even compare!" or "You aren't using the most expensive tier of Claude, which increases the context and thinking abilities that are hobbled in the cheaper versions!"

I never really see a comparison on the same level, and it sounds like people talking past each other, or some people having legitimate complaints and then others coming in to shill for a product.

I'm not in any way implying "You should totally try this out now that they fixed everything" or anything of the sort. I even stated that I don't use any of these tools, and I was commenting as something more akin to an "outsider."


The current photos app on Win 11 has accumulated a whopping one gigabyte of - what actually?

I don't run Windows 11 so I haven't taken a look, but I speculate it's because it contains a bunch of ML blobs for Windows Photos' image-classification and photo subject/contents keyword search.

On Windows 10, the Photos app package is about 140MB on my computer. A good chunk of that is because the package includes a lot of dependencies - including platform deps that I'd expect to be part of the UWP runtime in the OS - kinda like how, since the introduction of Swift/UIKit/etc. in iOS, IPA packages all bundle their platform dependencies, even though they're demonstrably redundant, because UIKit isn't an OS-provided framework anymore... I'm not up-to-date on the iOS dev scene so I'm unsure why Apple went with that approach.


I'm not an Alexa user myself, but I have watched my wife interact with it for around 5 years now.

The new Alexa powered by an LLM is objectively better than the previous Alexa in a few ways. This much was apparent from day one and has only gotten smoother.

1. It can reliably execute direct or vague-ish commands "play X movie in app Y" or "play X show" and can infer X movie is only available in app Z so use that.

2. Speech recognition seems better (fewer instances of 5x round trips)

3. Conversational with multi-turn -- my wife can have a back and forth clarifying a topic.

4. Seems to understand intent a bit better. (user asked A so they are probably thinking about B)

Those may seem small but they were a tremendous source of annoyance for her -- and thus for me -- "Alexa is not listening, do something!"


> It can reliably execute direct or vague-ish commands "play X movie in app Y" or "play X show" and can infer X movie is only available in app Z so use that.

...how does that work, exactly? (or rather: what's the context here?); there's no possible way for an Alexa+-powered Amazon Echo to control my AppleTV or interface with VLC on my desktop.


Presumably, FireTV?

It's not the early 2000s, where just messing around and wasting time on this stuff is cool in itself. Little of that wasted time turned into apps that stuck with me long term. Maybe a banking app and a trail running app.

I ruined multiple dinners with timers that didn't work (with a time/labor cost).

I had to get out of bed in the freezing cold to turn the lights out. It's easy to hit the lights when I go to bed, but annoying to have the tool fail and to get back out.

Music stuff didn't work well because I used YouTube Music, not Spotify.

Those were my 3 use cases for Google's voice assistant, and it failed them all often enough that I just stopped using it altogether. Who cares if it works today if in another month they change something and break it again? They've shown it's not a tool for tool things; it's a 'gee wow' thing. I don't need to be impressed. I need food that isn't burnt.


There is always a risk that the code was already compromised at the time you pinned, but pinning lowers your attack surface. As long as you pinned while the code was not compromised, someone swapping out the package at an already-pinned version will fail the install, because the hash check fails.

It's "if I pin the dep, I know that someone won't compromise the package repo and the next time I install 2.6.3 I can be sure that the same package is getting downloaded and installed."

This specific risk isn't just about not having versions pinned. It's about not having a hash of the package to check against, to make sure you're getting the same package every time.
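The mechanism described above - same version number, but the install fails if the bytes differ - can be sketched in a few lines. This mirrors what hash-checking installs (e.g. pip's `--require-hashes` mode) do; the package name and digest below are hypothetical, chosen only to illustrate the check:

```python
# Sketch of the hash check behind pinned installs: at pin time you record
# the artifact's digest, and every later install recomputes and compares.
import hashlib

# Digest recorded when the dependency was pinned (hypothetical artifact).
PINNED_SHA256 = hashlib.sha256(b"package-2.6.3 contents").hexdigest()

def verify_artifact(data: bytes, pinned: str) -> bool:
    """Return True only if the downloaded bytes match the pinned digest."""
    return hashlib.sha256(data).hexdigest() == pinned

# Same bytes as at pin time: the install proceeds.
assert verify_artifact(b"package-2.6.3 contents", PINNED_SHA256)

# A tampered re-upload of "2.6.3" fails, even though the version matches.
assert not verify_artifact(b"tampered contents", PINNED_SHA256)
```

Note that a bare version pin without the hash would accept the tampered re-upload, since the repo can serve different bytes under the same version string; the digest is what rules that out.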


It's standard naming, if you write a plugin for something, to use `{main-package}-{sub-package}` as the convention. `django-rest-framework` isn't an official Django project, but it's part of the Django ecosystem. This looks like "running PyTorch with our added stuff to make your life easier", so the naming convention isn't out of the norm.

How has the blast radius changed, though? The vibecoders who weren't developers before? If someone switched from pip installing things themselves to having Claude do it, I don't see how that increased the blast radius.
