Hacker News | khalic's comments

Some people seem to give very little thought to semantics and semiotics lately, to the point where people use words vaguely without even looking them up.

And of course child porn

[flagged]


That's what it was doing. Like literally. Chatgpt it or Google it. Supporting grok is paying money to a csam generator.

Edit: I cannot reply to the post below me. I have gone entirely over to local models, so I am paying zero dollars to any of the US defense contractors that are also tech companies. It's awesome.


Grok was used to create CSAM

[flagged]


Musk partied with Epstein.

[flagged]


What’s the correlation between people defending Musk, Twitter and kiddie diddlers?

I don't know either; I don't see the connection with X and Musk, as if he were the one building the platform and not thousands of workers and leaders. What does the CEO of a platform have to do with what people post on it? Is the CEO of HN responsible for what you just posted?

Kinda funny how selective people are about it. When you land on a website, do you check who is in charge of it, and for each CEO change, do you redo the decision? When you host your Postgres in the cloud, I hope you also check who is in charge of Railway or Supabase. Who knows? :/


There's only one thing I find sadder than untouchable billionaires who never see any consequences for their actions: the people who think they need to stick up for them.

> What does the CEO of a platform has to do with what people post on it?

That CEO is actively promoting political viewpoints (via his account, his platform and his AI model) that are detrimental to my country and the way I want to live my life.

> When you land on a website, you check who is in charge of it and for each CEO change you redo a decision?

No. But if the CEO is very publicly a first-class a-hole, chances are I'll hear about it and I'll actively avoid doing business with them. That goes for the car dealership in my village, as well as the websites I interact with.


I'm not from the US, so I don't really care. X is an international platform, and almost all the content I see isn't US-related (which kinda makes me think people should just set their account to outside the US to avoid this?). But from your point of view, it seems more like a disagreement of beliefs. Wouldn't this reasoning apply to your beliefs as well? If the CEO of a certain platform agreed with your beliefs but 50% of the population didn't, you are practically saying that people who disagree should boycott said platform. But isn't that how you end discourse between people and create an echo chamber?


Lol. I think they unleashed it on this post, look at the number of only vaguely related, lukewarm opinions trying to push the racism and CSAM stuff to the bottom

This project is a gigantic waste of resources: it's fine-tuned on the CEO's politics, was used for CSAM generation, and just sucks overall.

It’s a model made for 36% of Americans. The rest of the world couldn’t care less.

Considering how few Americans there are and how little of that 36% even uses technology, that's what, 20 million people at a maximum?

That seems like a decently sized market. Maybe not for an AI lab though.

Sure, it's a good market for a normal company. For a social media company it's pretty isolated and really limits the products that can come out. But their current selling points, propaganda, CSAM, and psychosis engagement, are quite strong amongst that population.

The resource waste he's talking about is horrendous, read more here: https://time.com/7308925/elon-musk-memphis-ai-data-center/

I like that there are models with divergent politics; the status quo being creepy corporate left silicon valley is not healthy or pleasant to interact with.

Even with Grok it's only broadening things to creepy corporate right of Silicon Valley.


I'll take the fake corporate "left" over white supremacy any day.

Silicon Valley...left? Huh?

We’ve been automating every single industry that we touched for decades without a single word, bringing up tepid responses like “it’s capitalism” or “business is business” when called out on it.

But now that the time has come for us to automate and change, we’re all up in arms and using ridiculous arguments like this post to fight it.

The hypocrisy is mind blowing


I hope we'll still be able to sell our beautiful artisan chrome extensions in second hand flea markets in the future

Thanks for the time capsule. Boy that’s a lot of ads (live video on a computer! Wow!).

The poor marijuana spider tried really hard


Open weight and open source are not the same

This is a pretty banal comment at this point. "Open source" is the term used in the LLM community; it's common and understood. Nobody is going to release petabytes of copyrighted training data, so the distinction between open source and open weights is a rather pointless one.

First you steal all the code, then you want to redefine the term? Is it never enough with you AI guys? Where's the humility, where's the good?

Sorry, too busy "stealing code" to answer right now.


It's still a pointed one.

"open source" keeps being redefined by people with wealth and power to restrict our computing rights.

Eventually it's just gonna be "proprietary Microsoft code that runs on Microsoft servers, but you can see a portion of the results."


"Open source" as a term has evolved due to its success. It wasn't some malicious attempt at redefining things from the technical elite. It was a natural shifting of language, as happens with all words, as it entered more common usage.

It's entirely reasonable that this colloquial understanding would be applied to new categories such as AI models. I'm sure it'll be applied to many other things that don't fit the OSD either. That's just language for you.


Tell this to the Allen Institute, the Apertus project, SmolLM, etc., etc., etc.

cries in rust interrupts

And you don't think those are pretty low-level?

If they're doing the same thing as the interrupts that the article is talking about, they are low-level.

If they aren't, then the comparison is by name only.


Yes, it’s not bad, although it’s not meant to be a chatbot. Post-training is limited, so it won’t feel as smooth as top-of-the-line models, of course. The number of supported languages is mind-boggling.

Focus was on open data, languages and auditability.

Their loss function is fancy; I'm not sure about the effects.


I can’t help but draw a connection with the numerous budget cuts from this admin, including the almost-crisis from last year with NIST.

