Hacker News | Growtika's comments

For my agency this won't replace Figma or designers. It's just a really useful tool to express yourself and communicate intent.

Before these tools, when a client wanted a specific section built, we'd spend hours hunting references across the web. The output always ended up feeling like a mash-up of 2-3 sites, never fully unique. Then we'd burn more time explaining the intent to the client's designers and devs, usually with multiple rounds because words don't convey layout well.

Now we throw a quick mockup together in Claude or Lovable and send it. The designer gets the idea in 30 seconds instead of a 45-minute call, then pushes it further with their own taste and the client's branding.

It's not replacing designers. Most clients don't know what they want until they see it. These tools collapse that feedback loop from weeks to minutes, so the designer actually spends their time on the parts that need human taste, not on decoding a vague brief.


> It's not replacing designers.

Except it is. Plenty of places will say this is all good enough and not hire, or even lay off, the UI/UX person. I've seen this firsthand.


I have seen this as well, except the UI ends up all looking similar, because the harness prompt and training data don't change much

The average becomes the same shade of gray. Familiarity breeds contempt. New types of design will emerge that are expensive to copy, because differentiation drives competition


Which is perfectly adequate for 95% of applications. Hell, it’d probably improve most applications if they adopted some proven shade of gray.

Why do people feel that each and every tool they use needs its own unique look and feel? And why are people willing to pay more for that? In some cases, sure. For my smart sprinkler app... I don't give a damn if it looks like 1000 other apps.


I'd actually prefer if all of them looked and worked the same. It's especially useful if you have elderly family members you need to teach how to use the app for XYZ. All government websites (especially functional ones that citizens use to get something done), for instance, should be exactly the same.

Y'all would have much more productive conversations about AI if you were even for a second able to differentiate the aspects of $x that you care about as a craft from the aspects the majority of people care about. HN has truly become the embodiment of the "this is fine" meme.

Yeah, agreed. And realistically they are correct. I’d argue that “good enough” is how most things are done.

Especially if you’re working on an established product with an existing design system. New features / layouts are really easy now.

These tools don’t solve big design problems, but they do resolve all the little design decisions often left up to devs at implementation time.


In those cases you’ve seen firsthand, who is actually using Claude design (or similar tools) to create the good enough design?

The important point is that 2 years ago these AI tools were at like the 20th percentile for UX designers; today they are as good as a junior or average UI/UX designer; 2 years from now they will be at the 90th percentile, etc.

But again: who is actually using/commanding the tool to create the designs?

Someone who is wearing more hats than they used to.

And then, in 2 years after that, what will happen?

99% of UX Designers will be out of the job apparently

Especially when the recession is around the corner. Thanks, Uncle Trump

Indeed. Kitbashing is a thing, and it was always a thing. Designers I worked with would spend hours doomscrolling pinterest, google images, etc. looking for their, uh... 'spark' when they were given a briefing.

This is just a really cool way of building.

I'm impressed. I tried Google Stitch but it was slow and useless. Sad, because Gemini has a pretty good creative flair, ironically enough.


Stitch has been very good for me to prototype some designs, and the exporting design feature is great.

But jeez, is it buggy, slow and unintuitive at times.

Complete shift in google's old engineering culture of high quality - they seem to be shipping quickly in favor of stability


That culture died forever ago, google has been launching half-baked shit that they kill in 18 months after no updates for a decade now.

"in favor" is hard to parse; "instead"?

> this won't replace Figma or designers

If every designer takes less time to do their job, you need fewer designers. There's no getting around that simple math.


As if designers spend most of their time actually "designing". Same flaw of thinking with programmers and AI replacing their jobs. As if the main problem (and a major time waster) is actually programming and not dealing with a million of other things.

But AI is replacing programmer jobs. AI is taking on a part of those 'million other things' as well, and if you can shrink your organization due to AI use, many of those other things just go away.

I don’t know a single engineer that has been terminated and replaced with AI. I know lots of engineers that were terminated due to shifting jobs offshore for cost savings, over hiring during COVID, and a poor economy.

The job of the programmer/designer will be to answer questions about what the program can and can't do and to tweak it outside of Claude's abilities. To be able to answer those questions (like: can we do this? will it fail under pressure? etc.) requires a deep understanding of the programs, which you only have if you actually build them (with or without AI).

So, fewer jobs for sure, but not like 50% fewer.


But it is not replacing programmer jobs, it's a fantasy world

Which is why compilers decimated the software industry.

...assuming all design needs are currently being met. Countless companies would benefit greatly from hiring a designer but haven't been able to, because the cost has been too high relative to the gain. If design becomes cheaper and faster, those companies enter the market — so lower cost per project doesn't necessarily mean fewer designers overall.

Which makes the output of designers cheaper, which leads to higher demand in terms of both quantity and quality, which requires more designers.

Except there is getting around that simple math. Did you consider Jevons paradox? If design becomes more efficient, it will be used in more cases, and in return there will be more demand!

1. This workflow benefits a per-design pricing model, not an hourly one.

2. Long term you can expect the minimum bar for aesthetically pleasing design to be raised and there to be overall less demand for human produced generic design.

3. This will mean all designers get pushed into the same corner of complex, unique, uninferable design and left to fight it out there.


> Most clients don't know what they want until they see it.

This is also true for entire applications. I vibecode apps in 2 days then say: is this what you want? Then they are enthusiastic and I will say that it will take 2 to 4 weeks to properly create it. They are always confused and then I start teaching them the conceptual basics of how apps are built and what corners are cut. Clients usually listen because they know we’re near the finish line


That's how I see most of the system design capabilities of Opus. I've been an engineer for 15 years. It's so nice to describe what you want, it gets you in the ballpark, and you can refine and tweak to get it just right. Sometimes it's fun to have it design crazy stuff so you can play out a crazy idea without wasting too much time. I can see how it could easily translate to the visual design space.

Now, if I could only get a model to draw arch diagrams....


There are multiple tools that generate architecture diagrams from text/code (Mermaid, LikeC4...). You start using one and tell the model to generate the text for the architecture diagram, and it'll do it. Stuff like https://erode.dev tries to use LLMs to keep it in sync.

Spec-mode in the Kiro IDE generates requirements and design docs for a feature; the design always contains one or more Mermaid diagrams of the arch in the feature.
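For anyone who hasn't seen it: the "text" these tools work from is just plain Mermaid source, which is why asking a model to emit it works so well. A made-up sketch of a small service architecture (all names here are invented for illustration):

```
graph TD
    Client[Web client] --> GW[API gateway]
    GW --> Auth[Auth service]
    GW --> Orders[Order service]
    Orders --> DB[(Postgres)]
```

Paste that into any Mermaid renderer (GitHub READMEs render it natively) and you get the diagram.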

I'd be curious to hear from you in 3 months whether or not it ended up replacing designers at your agency, and why. Perhaps you could commit to answer?

You replace designers with whom? Who is the magic person (who is not a designer) doing the work? CEO? CTO? I heard these stories when Midjourney started and everyone on gamedev channels was screaming that artists/graphic designers would be irrelevant... I was always puzzled by who is going to do their work if they are gone...

I think the long term idea is that AI becomes continuous in the sense that it behaves like a regular employee, not something you have to prompt. So at a mid sized company say 100 "entities", the CEO has directors still, they have managers, but the managers are managing AI agents not humans.

But I don't think that's how it plays out. I think you still need to imbue talent, skills and direction into these tools and I don't see management, who did not have the skills initially, being able to do that task across multiple business aspects and agents simultaneously.

I think for now, and perhaps until/if AGI, the sweet spot is having skilled individuals with the experience to use the tools to get known-good results. You still can't really delegate to the tools; you have to work with them. The benefit a human gives management is that management can delegate to a human, even when they completely lack the skillset they are delegating.


I'm not saying it will replace all designers immediately. But same as with developers: it may kill many junior positions quite quickly. That's why I'm asking the guy to share his experience 3 months from now.

Whoever used to assign tasks to the designer might now be able to assign them to a bot instead.

Or the design may be part of a larger task that has been delegated to a bot.


It's a pipe dream, sorry

original title:

Half of planned US data center builds have been delayed or canceled, growth limited by shortages of power infrastructure and parts from China — the AI build-out flips the breakers


The dark mode looks better. In light mode they need to improve the contrast: the lines in the visualisations are almost invisible, as is the data table on the left.

https://vers.sh/hdr_legacy/images/manager_vs_manager_of_mana...


I have a love/hate relationship with WordPress. I prefer it over any other CMS for reasons I can't fully explain. Probably nostalgia. I love the editor, the sidebar, the ability to find niche plugins in an endless marketplace, and how easy it is to get a site running.

What I don't love is how insecure it is. You're entirely dependent on plugins (if you're not a developer), and if WordPress updates and a plugin goes inactive, you're sitting on a vulnerability. It adds real stress for something that's just supposed to be a fun personal site.

I stopped building on it two years ago, even though I still like it more than Webflow and most alternatives I've tried. It's a bit sad.


This feels like one of those stories that spreads because it’s too good not to.

People see that Claude is a top contributor in an OpenAI-related repo and jump to conclusions about what it means.

Same with the Ballmer/Mac story. It sort of happened, but not really how people tell it. The Mac was there, but he wasn’t actually using it the way the story implies.

https://www.engadget.com/2008-04-30-steve-ballmer-uses-a-mac...


A couple years back John Reilly posted on HN "How I ruined my SEO" and I helped him fix it for free. He wrote about the whole thing here: https://johnnyreilly.com/how-we-fixed-my-seo

Happy to do the same for you if you want.

The quickest win in your case: map all the backlinks the .net site got (happy to pull this for you), then email every publication that linked to it. "Hey, you covered NanoClaw but linked to a fake site, here's the real one." You'd be surprised how many will actually swap the link. That alone could flip things.

Beyond that there's some technical SEO stuff on nanoclaw.dev that would help - structured data, schema, signals for search engines and LLMs. Happy to walk you through it.

update: ok this is getting more traction than I expected so let me give some practical stuff.

1. Google Search Console - did you add and verify nanoclaw.dev there? If not, do it now and submit your sitemap. Basic but critical.

2. I checked the fake site and it actually doesn't have that many backlinks, so the situation is more winnable than it looks.

3. Your GitHub repo has tons of high quality backlinks which is great. Outreach to those places, tell the story. I'm sure a few will add a link to your actual site. That alone makes you way more resilient to fakers going forward. This is only happening because everything is so new. Here's a list with all the backlinks pointing to your repo:

https://docs.google.com/spreadsheets/d/1bBrYsppQuVrktL1lPfNm...

4. Open social profiles for the project - Twitter/X, LinkedIn page if you want. This helps search engines build a knowledge graph around NanoClaw. Then add Organization and sameAs schema markup to nanoclaw.dev connecting all the dots (your site, the GitHub repo, the social profiles). This is how you tell Google "these all belong to the same entity."

5. One more thing - you had a chance to link to nanoclaw.dev from this HN thread but you linked to your tweet instead. Totally get it, but a strong link from a front-page HN post with all this traffic and engagement would do real work for your site's authority. If it's not breaking any rule (specific use case here, so maybe check with the mods haha) drop a comment here with a link to nanoclaw.dev. I don't think anyone here would mind if it gets you a few steps closer to beating that fake site.
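To make point 4 concrete - a rough sketch of the Organization + sameAs JSON-LD. Only nanoclaw.dev itself comes from this thread; the repo and profile URLs below are placeholders you'd swap for the real ones:

```python
import json

# Sketch of schema.org Organization markup with sameAs links tying the
# site, repo, and social profiles to one entity. Placeholder URLs marked.
schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "NanoClaw",
    "url": "https://nanoclaw.dev",
    "sameAs": [
        "https://github.com/example/nanoclaw",  # placeholder repo URL
        "https://x.com/example_nanoclaw",       # placeholder profile URL
    ],
}

# The resulting JSON goes inside a <script type="application/ld+json">
# tag in the site's <head>.
print(json.dumps(schema, indent=2))
```

That's the whole trick: every sameAs URL should also link back to the site, so the entity graph closes.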


This is very generous of you!

If I was the author, however, I'd still feel like I've been put in a predicament where I need to spend personal agency to fix something that Google has broken.

While that may just be a fact of life, my internal injustice-o-meter would be raging. Like, Google is going to take hours of my life because they, with all their billions of capital, can't figure out the canonically-true website when it's RIGHT THERE in the GitHub repository?

Ugh. I guess that's just the world we live in. But it makes me rage against the machine on the author's behalf.


I had the exact same thought while reading the above comment, as helpful and generous as it is. Google's entire business model is to help people find things on the internet. They're an insanely well resourced company with all kinds of smart programmers. They have a moral and financial incentive to direct people to canonical sources of information. And STILL it's on this open-source dev to do all the steps outlined just to get the situation corrected?


Google's business model is to help Google's customers pay money to Google. Google Search's customers are mostly scammers who run adverts. Helping the user find a thing is at odds with helping the user find a scam that pays Google money.


This is somewhat true; despite what HNers seem to think, online ads are not very effective (in terms of convincing people to buy things), and Google 'screws over' its advertising customers as often as it delivers deficient search results to users.


The billions of capital are exactly why they don't care about you. Also, Google didn't break anything. The only person who can claw out a place in this giant machine for yourself is you - all while billions of others attempt to do the same.


I can't be the only one blasting "Killing in the Name" in my noise-canceling headphones the moment I read your comment...


Author already is spending personal agency

So the feeling is fine, and if he’s going to bother at all, which he is, he should be doing it efficiently. Everything so far was panic and inefficiency


How many Google search results would point to OP's site?

If Google didn't exist, how many Google search results would point to OP's site?


> This is very generous of you!

No it's not, it's a sales pitch that intentionally ignores some of the things pointed out in the article. The author has invested time into proper SEO optimization, legit websites already link to it et cetera, it's all explained in the article.

From the perspective of a spammer: They need like 2 million MAU to earn below minimum wage. You're never getting those figures by doing something legit and actually useful to a tiny subset of people. You either need a vague site beyond any point of usefulness to anyone or you need a network of knockoff sites. The reason you can't compete with these shitty SEO spam version of your site is because they already have a network of "authoritative" (in Google's eyes) sites and all they have to do is to link from them to a new one to expand their shitty network.

From the perspective of SEO agencies: They can't guarantee results. They can tell you vague, easily-googleable best practices and give you an output of some SEO SaaS that's far too expensive for an individual to purchase. Ahrefs(.com) is the prime example of this, the cheapest paid version costs $129/month. Do you care about SEO that much? No, so you go to these agencies and give them money for them to give you the output of such a tool. But that SaaS also only contains vague and nebulous "things to fix" to follow "best practices" because they also cannot know what drives traffic to your competitor from the outside perspective.

My best suggestion would be to start a website from day one. It doesn't matter how good the website is at first; Google favours sites that have existed for longer. If you're creating a website after the knock-off version already exists, you might as well give up immediately; it's gonna be near impossible to recover from that.


> No it's not, it's a sales pitch that intentionally ignores some of the things pointed out in the article.

Sales pitch or not, someone offering their time to help me with a problem feels generous to me. To each their own, I suppose.

But again, you reinforce my point in your last sentence. Now anytime I want to make any little toy project (because how can anyone know when their toy project will blow up overnight?) I have to make a full blown website just to ensure I don't get SEO-spammed into oblivion?

My point still stands. Google is the problem and while we likely can't effectively do anything about it, it's frustrating as hell.


I never said Google isn't the problem. What I said is that going to an agency isn't gonna fix that problem any more than running a SaaS tool yourself will, because they're not Google and they have no insight into why Google prioritised one website over the other. Because, as you've pointed out, Google is the problem.

> I have to make a full blown website just to ensure I don't get SEO-spammed into oblivion?

No, I said a crappy one on purpose. How good it is doesn't matter; the sooner Google knows about the domain, the better. It might as well be a copy of your README file using one of the million SSGs GitHub supports that will turn that README into a website. The only thing that matters is that the website exists and that Google knows about it before the other one.

That's why many people purchase the domain on day 1 before they even start building the thing and also why many have like a dozen domains in their account that is like a boulevard of broken dreams there to remind them once a year they haven't done anything with them.

Still cheaper than a SEO agency or in most cases even one month of ahrefs access.


Helping each other out is human isn't it?


Lame to have to do all this pointless busy work just to "win" the SEO battle.


If NanoClaw generates some revenue, you should trademark the name and also buy nanoclaw.com. Move the site to the .com domain and then do the steps above. All things being equal, a ".com" TLD should get you higher page rank than your existing ".dev". Google is ranking the fake ".net" page higher than your ".dev". If your page weren't on the .dev TLD, it might be second already.


All this work to solve one website's problem... You can be sure MANY other open source projects are facing the same issue. It's just not a viable solution. There is something wrong with Google. Google has to fix it.


For someone who only kinda gets SEO, this was a bit enlightening, thank you. I have no use for this information myself, but it helps me understand.


> Google Search Console - did you add and verify nanoclaw.dev there?

Did you read the post before promoting yourself?

> Submitted to Google Search Console probably 15 times.

> map all the backlinks the .net site got (happy to pull this for you), then email every publication that linked to it.

The links are already correct:

> NanoClaw got covered in The Register, VentureBeat, The New Stack, all linking to the real site.


Fantastic advice


"Open social profiles for the project" haha if only it were that easy...


great feedback!


Great website design. Aesthetic next level


Genuinely curious, what felt off? Ideas are mine, AI just helped clean up the English (I added a disclaimer)


The writing style just has several AI-isms; at this point, I don't want to point them out because people are trying to conceal their usage. It's maybe not as blatant as some examples, but it's off-putting by the first couple of paragraphs. These days, I lose all interest in reading when I notice it.

I would much, much, much rather read an article with imperfect English and mistakes than an LLM-edited article. At least I can get an idea of your thinking style and true meaning. Just as an example - if you were to use a false friend [1], an LLM may not deal with this well and conceal it, whereas if I notice the mistake, I can follow the thought process back to look up what was originally intended.

[1] https://en.wikipedia.org/wiki/False_friend


For me it's a general feel of the style, but something about this stands out:

>We're not against AI tools. We use them constantly. What we're against is the idea that using them well is a strategy. It's a baseline.

The short, staccato sentences seem to be overused by AI. Real people tend to ramble a bit more often.


It reads like an Apple product page.


Most of the subheadings starting with "The" and "What Actually" is a bit of a giveaway in my view.

Not exclusive to AI, but I'd be willing to bet any money that the subheadings were generated.


> Using them isn't an advantage, but not using them is a disadvantage. They handle the production part so we can focus on the part that actually matters: acquiring the novel input that makes content worth creating.

I would argue that using AI for copywriting is a disadvantage at this point. AI writing is so recognisable that it makes me less inclined to believe that the content would have any novel input or ideas behind it at all, since the same style of writing is most often being used to dress up complete garbage.

Foreign-sounding English is not off-putting, at least to me. It even adds a little intrigue compared to bland corporatese.


You admit it yourself here:

> I run a marketing agency. We use Claude, ChatGPT, Ahrefs, Semrush. Same tools as everyone else. Same access to the same APIs.

Since you use it for your job of course you use it for this blog, and that will make people look harder for AI signs.


> AI just helped clean up the English

Why?

I get using a spell checker. I can see the utility in running a quick grammar check. Showing it to a friend and asking for feedback is usually a good idea.

But why would you trust a hallucinogenic plagiarism machine to "clean" your ideas?


It did not feel off at all. I read every single word and that is all that counts.

I think what you are getting wrong is thinking that the reader cares about your effort. The reader doesn't care about your effort. It doesn't matter if it took you 12 seconds or 5 days to write a piece of content.

The key thing is people reading the entirety of it. If it is AI slop, I just automatically skim to the end and nothing registers in my head. The combination of em dashes and the sentence structure just makes my mind tune it out.

So, your thesis is correct. If you put in the custom visualization and put in the effort, folks will read it. But not because they think you put in the effort. They don't care. But because right now AI produces generic fluff that's too perfectly polished. That's why I skip most LinkedIn posts as well. Like, I personally don't care if it's AI or not. But mentally, I just automatically discount and skip it. So, your effort basically interrupts that automatic pattern recognition.


Fair point. This is more mindset than case study. The proof is still being built across client work. Though I'd say the same was true for SEO in the early days. People speculating on what made Google rank certain sites higher, what made pages index faster, etc. The frameworks came before the proven playbooks


Had a good laugh reading the list. Davos is in the heart of Europe >> Davos is in the middle of nowhere

