Hacker News — overgard's comments

Can't answer for an RTX 5090, but on an RTX 5080 (desktop, 16GB of VRAM) I get about 6 tokens/sec after some tweaking (quantizing f16 -> q4_0). Kind of on the borderline of usable. Realistically you'd probably need either a 5090 with more VRAM or something like a Mac with a unified memory architecture.
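For intuition on why the f16 -> q4_0 quantization matters so much: single-stream decode is roughly memory-bandwidth-bound, so tokens/sec is about effective bandwidth divided by model size in bytes, and quartering the weights roughly quadruples throughput (until the model spills out of VRAM, at which point offload dominates). A back-of-envelope sketch; every number below is an illustrative assumption, not a measurement from these cards:

```python
# Back-of-envelope decode speed: generating each token streams roughly the
# whole model's weights through memory once, so local LLM decoding is
# memory-bandwidth-bound. All numbers here are illustrative assumptions.

def tokens_per_sec(params_b: float, bits_per_weight: float,
                   bandwidth_gbs: float, efficiency: float = 0.6) -> float:
    """Estimate decode tokens/sec for a dense model held entirely in VRAM.

    params_b        parameter count in billions
    bits_per_weight ~16 for f16, ~4.5 for q4_0 (including scale overhead)
    bandwidth_gbs   memory bandwidth in GB/s
    efficiency      fraction of peak bandwidth achieved (assumed, not measured)
    """
    model_bytes = params_b * 1e9 * bits_per_weight / 8
    return bandwidth_gbs * 1e9 * efficiency / model_bytes

# Hypothetical 70B dense model at q4_0 on a ~960 GB/s card:
print(round(tokens_per_sec(70, 4.5, 960), 1))
```

The model shows why a card with more (or faster) memory helps twice over: the model fits without offload, and the bandwidth term in the numerator grows.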

My M5 Pro is getting ~11 tokens per second via MLX for an 8-bit quant.

A Mac is not going to be all that much faster than a 5080 with any models, other than the ones you can’t currently run at all because you don’t have enough GPU+CPU memory combined.

You’re much better off adding a second GPU if you’ve already got a PC you’re using.


At this rate they're not going to need to do layoffs.. nobody sane is going to want to work there.

Pretty mixed feelings on this. From the page at least, the images are very good. I'd find it hard to know that they're AI. Which I think is a problem. If we had a functioning congress, I wonder if we might end up with legislation that these things need to be watermarked or otherwise made identifiable as AI generated..

I also don't like that these things are trained on specific artist's styles without really crediting those artists (or even getting their consent). I think there's a big difference between an individual artist learning from a style or paying it homage, vs a machine just consuming it so it can create endless art in that style.


> If we had a functioning congress, I wonder if we might end up with legislation that these things need to be watermarked or otherwise made identifiable as AI generated..

Not a lawyer, but that reads as compelled speech to me. Materially misrepresenting an image would be libel, today, right?


Well, considering that AI generated content can't be copyrighted (afaik at least), I think we're in very different legal territory when it comes to AI creating things. While it's true that deepfakes could be considered libel.. good luck prosecuting that if you can't even figure out where the image came from.

The problem is it's all too easy to generate - you can't really do much about an individual piece of slop because there's so much of it. I think we need a way to filter this stuff, societally.


You might be onto something. I find every image unsettling. They're very good, no doubt, but maybe it disturbs me because all of it is a complete copy of what someone else created. I know, I know, there is no pure invention. That's not what I mean. Humans borrow from other humans all the time. There's a humanity in that! A machine fully repurposing a human contribution as some kind of new creation... I dunno, I'm old, it's weird and I don't like it.

Maybe I'm just bloviating also.


No, the reason you feel uncomfortable is that it is theft - a wealth transfer.

Not sure why we need to pretend about what is and isn’t going on here.


Trying to watermark or otherwise label them as AI generated is a lost fight; we should assume every image and video we see online may be AI generated.

This helps the segment of society that is interested in applying critical thinking to what they see. I am not sure that is anything like a majority or even a significant plurality. It seems like just about every image or video gets accused of being AI these days, but predictably the accusations depend on the ideology of the accuser.

Maybe I'm stupid and naive but I just don't really see how any of this is _fundamentally_ different from Photoshop. Trusting the images you're looking at on the internet has been impossible for a long time. That's why we have institutions and social relations we place trust in instead.

It's really a matter of scale. Photoshopping something takes time, and unless you're good there are ways to tell. Typing something into an AI is so much faster which means you can do it at scale, and you don't need to be skilled to do it.

Also, kind of out of scope for this discussion but deepfake videos are really the most scary.


It makes it more accessible. The number of people who can prompt ChatGPT is significantly higher than the number of people who can edit a photo in Photoshop and make it perfect.

>If we had a functioning congress, I wonder if we might end up with legislation that these things need to be watermarked or otherwise made identifiable as AI generated..

Can you name any countries that you think are functioning, and what their laws are on watermarked AI images?


I dunno, I have some band posters that are pretty cool pieces of art that obviously had a lot of thought put into them (pre-AI era stuff). I don't think I'd hang up an AI generated band poster, even if it was cool; I'd feel weird and tacky about it.

I was hosting a karaoke event in my town and really went out of my way to ensure my promotional poster looked nothing like AI. I really, really, really did not want my townsfolk thinking I would use AI to design a poster.

My design rules were: no gradients; no purple; prefer muted colors; plenty of sharp corners and overlapping shapes; use the Boba Milky font face.



I mean: https://imgur.com/a/BYikxEI

The difference is very stark:

- The AI has a hard time making geometric shapes regular. You can see the stars have different-sized arms at different intervals in the AI version. It would take a human artist longer to make it look this bad.

- The 5-point stars are still a little rounded in the AI version.

- There is way too much text in the AI version (a human designer might make that mistake, but it is very typical of AI).

- The orange 10-point star on the right with the text “you are the star” still has a gradient (AI really can’t help itself).

- The borders around the title text “Karaoke night!” bleed into the borders of the orange (gradient) 10-point star on the right, but only halfway. This is very sloppy; a human designer would fix that.

- The font face is not Milky Boba but some sort of AI hybrid of Milky Boba, Boba Milky, and Comic Sans.

- And finally, the QR code has obvious AI artifacts in it.

The point I’m making is that it is very hard to prompt your way out of making a poster look like AI, especially when the design intentionally avoids looking like AI.


I hear what you’re saying and at the same time I don’t agree with some of your criticisms. The gradient, yep, it slipped one in. The imperfect stars? I have seen artists do this forever, presumably intentional flair. The few real “glitches” would be trivial to fix in Photoshop.

But they are very different certainly. ChatGPT generated a poster with a very sleek, “produced” style that apes corporate posters whereas you went with a much more personal touch. You are correct that yours does not look like typical AI.

My point is certainly not that the AI poster is better, only that it’s capable of producing surprising results. With minimal guidance it can also generate different styles: https://imgur.com/a/zXfOZaf

I think the trend to intentionally make stuff look “non-AI” is doomed to fail as AI gets better and better. A year or two ago the poster would have been full of nonsense letters.

> And finally, the QR code has obvious AI artifacts in them.

I wonder if this is intentional, to prevent AI from regurgitating someone’s real QR codes.

ETA: Actually, I wonder how much of the “flair” on human-drawn stars is there to avoid looking like they were dragged and dropped from a program like Word. Ironic if we’ve circled back around to stars that look imperfect to avoid looking like a different computer-generated star.


My point is not that the AI version looks bad (although it does); it is that I hate AI, and so do many people around me. And I hate AI so much, and I know so many people around me hate AI as much, that I am consciously altering my designs to be as far away from AI as I can. This is the creative-design equivalent of moving from Seattle to Florida after a divorce.

About the stars: I know designers paint imperfect stars. I even did that in my design; in particular, I stretched it and rotated it slightly. A more ambitious designer might go further and drag a couple of vertices around to exaggerate them relative to the others. But usually there is some balance in their decisions. AI, however, just puts the vertices wherever, and it is ugly and unbalanced. A regular geometric shape with a couple of oddities is a normal design choice, but a geometric shape which is all oddities is a lot of work for an ugly design. Humans tend not to do that.


> I am consciously altering my designs to be as far away from AI as I can

I don’t think this is a productive choice, but it’s certainly yours to make.

> but a geometric shape which is all oddities is a lot of work for an ugly design. Humans tend not to do that

I find this such an odd thing to say. It’s way easier to draw a wonky star than a symmetrical one. Unless “drawing” here means using a mouse to drag and drop a star that a program draws for you.

Vintage illustrations are full of nonsymmetrical shapes. The classic Batman “POW” and similar were hand drawn and rarely close to symmetrical.


I draw mine in Inkscape (because I like open source more than my sanity), and Inkscape has special tools for drawing regular geometric shapes. You don’t need to use those tools; you can use the freehand pencil, or the Bezier curve tool, or even hand-code the <path d="M43,32l5.34-2.43l3.54-0.53" />, etc. But using those other tools is suboptimal compared to the regular-geometry tool.

Apart from me, my partner also does graphic design, and unlike me she values her sanity more than open source, so she uses Illustrator for her designs. In Adobe’s walled-garden world of proprietary software it is still the same story: you generally use the specific tools to get regular shapes (or patterns) and then alter them after they are drawn. You don’t draw them from scratch. If you are familiar with modular analog synthesizers, this is like starting with a square wave and then subtracting from it to modulate the signal into a more natural-sounding form.
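For what it's worth, hand-coding a regular star as an SVG `<path>` is mechanical enough to script. A minimal Python sketch (my own illustration, not an Inkscape or Illustrator feature) that emits a regular star path, with an optional jitter parameter for the deliberate "couple of oddities" kind of imperfection:

```python
# Sketch: emit a regular N-point star as an SVG path 'd' string, with an
# optional per-vertex jitter to mimic hand-altered stars. The helper and
# its parameters are illustrative, not from any design tool's API.
import math
import random

def star_path(cx, cy, r_outer, r_inner, points=5, jitter=0.0, seed=0):
    """Return an SVG path 'd' string for a star centred at (cx, cy).

    jitter is a fraction of the radius: 0.0 gives a perfectly regular star,
    a small value (e.g. 0.08) displaces each vertex slightly but reproducibly.
    """
    rng = random.Random(seed)
    coords = []
    for i in range(points * 2):
        r = r_outer if i % 2 == 0 else r_inner
        r *= 1 + rng.uniform(-jitter, jitter)
        a = math.pi * i / points - math.pi / 2  # start at the top vertex
        coords.append((cx + r * math.cos(a), cy + r * math.sin(a)))
    return "M" + " L".join(f"{x:.2f},{y:.2f}" for x, y in coords) + " Z"

print(f'<path d="{star_path(50, 50, 40, 16, jitter=0.08)}" />')
```

Cranking jitter all the way up gives exactly the "all oddities" shape described above, which is a decent argument for why balanced imperfection is a choice, not an accident.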


> I think the trend to intentionally make stuff look “non-AI” is doomed to fail as AI gets better and better.

What’s the mechanism that makes an AI ‘better’ at looking non-AI? Training on non-AI-trend images? It’s not following prompts more closely. Even if that image had no gradients or pointier shapes, it still doesn’t look like it was made by an individual.

To your counterpoints, notice that you are apologizing for the AI by finding humans that may have done something, sometime, that the AI just did. Of course! It’s trained on their art. To be non-AI, art needs to counter all averages and trends that the models are trained on.


> What’s the mechanism that makes an AI ‘better’ at looking non-AI?

I don’t know. Better training data? More training data? The difference over the past year or two is stark so something is improving it.

> Even if that image had no gradients or pointier shapes, it still doesn’t look like it was made by an individual.

The fact that humans are actively trying to make art that does not look like AI makes it clear that AI is not so obvious as many would like to pretend. If it were obvious, no one would need to try to avoid their art looking like AI.

> To your counterpoints, notice that you are apologizing for the AI by finding humans that may have done something, sometime, that the AI just did. Of course! It’s trained on their art.

Obviously.

> To be non-AI, art needs to counter all averages and trends that the models are trained on.

So in order to not look like AI, art just has to be so unique that it’s unlike any training data. That’s a high bar. Tough time to be an artist.


I don't know why you're downvoted, I think that's a reasonable use of AI and it looks pretty good.

Edit: I think I misread what you were saying, but I do think it's a nice poster! I get that design is going to have to avoid doing things that AI does, which is kind of unfortunate, because AI is likely trained on a lot of things that are generally good ideas.


Honestly, it's no wonder there's a lot of pushback. We have these irresponsible CEOs talking nonstop about taking people's jobs at a time when people are struggling to make ends meet, all while taking in insane cash infusions. Why wouldn't people loathe AI at this point, when the marketing is "we're going to fuck you over and there's nothing you can do about it"?

Billboard company puts up billboards saying "don't hire humans".

HN comments: "I just don't understand why people hate AI".


Maybe I lack imagination, but I just can't figure out what I'd use this for. I'm finding AI helpful for writing code (especially verbose Unreal Engine C++) as a companion to my designs, but I really don't want it using my computer. I dunno, I guess the other use case would be summarizing Slack or Discord, but otherwise this seems to me like a solution in search of a problem.

I feel the same way. The AI browsers and the agentic team-of-agents stuff... I just really don't understand why I would want it. I use AI every day, but there's always a clear separation, as in I'm using it to get an output I want, not getting it to use things for me. It screws up the output maybe 30% of the time, so why would I risk letting it actually do things and touch stuff I care about?

Going on an old legacy website, downloading reports, summarizing them, and then doing things based on those

Or basically any app without MCP capabilities

I ask the AI daily to summarize information across surfaces, and it's painful when I have to go screenshot things myself in a bunch of places because those apps were not made to extract information out of them, and are complete black boxes with a UI on top


Feels like a lot of summarizing, which is just something I rarely need. YMMV depending on your job of course.

I don't really see how that makes it ok. I stole your wallet, but I gave the cash to a homeless person!

Amusingly, one of the ads on the page for me is a very obviously AI-generated image of a man with sciatica. I say very obviously because his hands are on backwards.


I remember back in the early Windows XP era when things got so bad that Microsoft basically had to make a hard pivot towards security and reliability.

I think they may need to do that once again. Almost every product of theirs feels like a dumpster fire. GitHub is down constantly, Windows 11 is a nightmare and instead of patching things they're adding stupid features nobody asked for. I think they need to stop and really look closely at what they're prioritizing.


I'm like 99% convinced that most of the AI-conversation upvotes at this point are astroturfing. I just don't see the correlation between the sentiment I get from talking to people in the real world (mostly negative on AI) and what I see here.


I'm convinced that the majority of people hyping up AI don't actually interact with many people in real life, let alone people that aren't software engineers.


To those of us on the cutting edge, the opinion of the average person when it comes to these things is totally irrelevant. I see the benefit and possibilities with my own eyes, I don't need the confirmation or denial of the average person.

All that said, I've already set up a few of my non tech close friends with Cowork and they are huge fans of it now. It's somewhat shocking how much menial repetitive work the average white collar job entails.


The average person dislikes AI not because it isn't useful. Anyway, we're more or less all on the cutting edge together


I think some companies are just behind the curve, so this sentiment seems bizarre to some.

At my big tech, AI is every conversation with everyone, every day. Becoming AI native is a huge deal for us. Literally everyone is making AI usage a core part of their job and it's been a big productivity accelerator.

Perhaps it's different where you work, so you don't see the sentiment.


As someone working somewhere very much like this, the "everyone" mentioned is actually a few people who are under the mistaken impression that the rest, keeping their head down, are equally interested and on board.

Your post was written almost verbatim by my coworker last week, who has no idea that I and half the team are not doing any of this stuff.


> AI is every conversation with everyone

Wow that sounds horrible.


Yes it is.


"AI native". I don't know if you intend it but you sound like a linked in lunatic, nobody talks like that


There are definitely some people working overtime to overhype AI on here. Like 50% of the comments on this are from simianwords, who only posts when people express negative AI sentiment.


Not necessarily. Personally, I'm both in love with AI (likely to upvote a convo) and scared about the short/medium term societal changes its job displacement will bring.


Honestly, I think there's a big divide, and those of us who are using AI intensively might just be increasingly "going dark" & distancing ourselves from those "real world" people. It's becoming detrimental being around people who are so actively negative about AI. It feels like being around people who still insist the sun orbits the earth. Those people are actually happier believing what they believe, so why spend any more time trying to convince them they're wrong?

I spent 2024 on Mastodon and I absorbed their groupthink that AI was useless... I wish I could get that year of my life back. I wish I had that extra year's head start on AI compared to where I am now. So many of my coding frustrations that year might have been solved by using AI. I am reluctantly back on X - I hate what has been done to Twitter, but that's where so much of the useful information on using AI is being shared.

Well, back to it. Claude has been building another local MCP server tool for me in the background.


> It feels like being around people who still insist the sun orbits the earth.

100% feeling this divide as well.

People that deny the benefit of AI in 2026... I can't even engage with them anymore. I just move on with my life. These people are simply not living in reality, it will catch up to them eventually (unfortunately.)


at which point they will learn to write markdown files too.

