The same company intentionally driving minors towards this content (despite claiming to care about them) is also lobbying in secrecy for requiring all of us to scan our ID and face in order to use our phones and computers.
They don't care about child safety unless it gets bad enough to hurt their revenue. But they see that governments all over the world are pushing for some kind of age restriction, they know they are a prime target, and it is hard for them to push back against that.
The reason they are (not so secretly) lobbying for requiring us to ID ourselves at the device level is that they don't want to be the gatekeepers. They want to make creating an account as effortless as possible, and having to prove your age is a barrier that may turn off some people, including adults, who may instead turn to services that don't require age verification. By moving age verification into the OS, not only does the responsibility shift to the OS or hardware vendor, it also removes the disadvantage they have against services that don't require age verification.
If you read between the lines, you will see that they have the same stance: "put age verification at the OS level, so that people don't discriminate against us". They know they are not in a position to argue against "child safety" laws, so instead, they lobby for making it worse for everyone instead of just themselves.
Meta is like one giant cancer that grew a few small tumors of benign[1] nature, like some of their efforts in open source and open research (React, Llama, etc.).
Cancer is a great metaphor because it's a perversion of natural, healthy processes. So-called social media is nearly that, but actually grotesquely unhealthy.
People are dramatically unwell when they are not social, but that process, left unregulated, is also negative, up to and including being lethal.
Exactly. It started out as something good: see what friends and family are up to. But now: scroll infinite algorithmically placed or sponsored rage bait trying to trigger you into behaving the way that advances certain corporate or foreign interests at the expense of whatever was left of our already tattered social fabric and our collective mental or literal health.
Do you require everything you read to spell out everything for you point blank? Are you unable to connect dots?
The DARPA lifelog project ended the day Facebook was announced by a college dropout no one had ever heard of before. Facebook just happened to have the exact same goals / features as the lifelog project. Must just be a giant coincidence huh?
Oh yes, because intelligence agencies are known for broadcasting their moves to everyone.
I can guarantee you believe in a lot of things that you have no actual evidence of happening - just some perceived authority figure you trust for whatever reason, telling you it happened.
Also -
WHYY.org has received support through NewsMatch partner funds, which often includes contributions from large technology firms like Facebook (Meta) to support local journalism. These funds are generally used to match donations, helping stations like WHYY increase their financial sustainability and support public media.
Besides bloggers / youtubers who have written / talked about this, there's a single news story returned by Google, which I sourced. If there were other articles to source from, I would have. Given that our internet was created by DARPA and has always been under the control of intelligence agencies / governments, it's not shocking that there aren't a plethora of sources regarding Facebook emerging from DARPA.
Sure. I don't think it's a strong argument. You can control someone for a bit, but giving a 20-year-old that much power and resource over this long an amount of time is far too loose a leash to constitute a robust plan. If we're going full tin foil, then I think it's more likely he's literally a robot than a front man for some shadowy cabal.
Distracting from actual stories, like DARPA's lifelog program ending the same day Mark Zuckerberg announced Facebook to the world, with dumb videos from The Onion, is really doing the world a great service.
No it didn’t. That was just like the first free sample from the drug dealer. Give a “good” free service to rope them in, always with the next steps in mind.
I disagree. I feel like earlier social networks hadn't yet huffed the "lean startup" gas and weren't obsessed with engagement and thus were not yet trying to hook their users into an engagement cycle like where we are today.
I feel like the Myspace/Friendster and early Facebook were nowhere near as harmful (albeit for addiction, those sites were still vulnerable to grooming) as where we are today.
OG Facebook was perfectly fine. In your analogy it’d be more like someone replacing your Diet Coke with actual cocaine. Like, yeah Diet Coke isn’t great for you, but it’s not cocaine.
Being on "social media" is a fundamentally unsocial activity: you do it alone, it makes you lonely, and it separates you from others. Some people manage to bootstrap a social layer on top of the base medium, but most are being driven apart for profit.
I think you can tell approximately how old someone is by when they believe Eternal September started on the internet. Nobody believes it was when they started enjoying the internet. It was always when some other generation or service arrived after them.
The internet was not a calm and well behaved place before Facebook arrived. The original “Eternal September” was in the early 90s. Usenet, forums, Reddit, comment sections, and every other social part of the internet have been full of bad behavior long before Facebook came along.
So many words and you missed the most important one: "netiquette"
That's the whole point: the word exists precisely as a testament to something that used to exist but now doesn't.
Anybody old enough to remember the word when it was common use should realize that it would have been impossible for the term to be coined in 2026.
If you missed that part of the Internet (maybe you were too young or maybe you were focused on other things, like the vast majority of people in the 90s), that's totally fine, but plenty of us did experience it and remember it pretty clearly.
> Usenet, forums, Reddit, comment sections, and every other social part of the internet have been full of bad behavior long before Facebook came along.
You can tell approximately how old someone is by whether they have reached the "everything sucks" part of life yet or not.
Eternal September started before I was on the internet, but there have been several similar shifts since then.
It gets continually worse. Agentic AI is another Eternal September. For example, we now have dimwits sending dozens of unsolicited and unreviewed slop PRs to open source projects. Every search result is an affiliate marketing listicle obviously written by a robot.
Hence... "of the web." IRC is and always was a cesspool but at least they had heard of netiquette, and it was something you could choose to partake in - or not, for the lulz. Nobody said anything about being "calm and well behaved" in particular.
As a Millennial, I'm sad to say that it wasn't even older generations' fault, but our own (+Gen X). The tipping point was letting in normies who traded in photos and money instead of text and art.
Elitism and selectivity were actually features of the early Internet. High barriers to entry (tech savvy, literacy) ensured that there was a high signal to noise ratio, and thus you had, let's say, upper quartile participants concentrated in one (forum of) fora.
LLMs are now heralding the Eternal September of even software engineering, and now I am wondering where to hang up my Techpriest robes in search of more elite pastures.
I wonder if this is how the clergy felt once the vulgar were allowed to study scripture not in the original spiritual programming languages of Hebrew or Latin, but English.
> Elitism and selectivity were actually features of the early Internet. High barriers to entry (tech savvy, literacy) ensured that there was a high signal to noise ratio, and thus you had, let's say, upper quartile participants concentrated in one (forum of) fora.
I disagree. I'm of the Neopets/Pokemon forums generation. Elitism and selectivity were not what made that era a good balance between the caustic free-for-all we have now and the rich kid's playground from before. It was the technical and practical restrictions on what you could put in and get out of a web experience.
You couldn't upload thousands of thirst traps every month, because storage was limited. You couldn't summon another head of the dropshipping or affiliate marketing hydras with a few clicks, because the infrastructure didn't exist. You couldn't inundate users with dark patterns designed to extract every ounce of attention, data, and cash possible, because the rich web wasn't that rich yet.
You had to deal in text and reasonably-sized images on a CRT with a limited-bandwidth pipe feeding it all. Because of this, many of the techniques developed to transform so many other forms of media and so many other institutions into Capitalist hellscapes and high school, respectively, didn't work online. Until they did.
> I wonder if this is how the clergy felt once the vulgar were...
You meant the "vulgus". "Vulgar" has the same root, but a very different meaning.
This random thought is kinda disconnected from actual human history. "Not allowed to study Scripture" was not a thing: illiteracy was. There were people who knew how to read and people who didn't, that's it.
I'm trying hard (and failing) to visualize your mental image.
"Dear Father: it looks like the Bible has been translated to English by my dear brothers up at the monastery. I'm sure you understand why I can no longer be a priest"
Remember that you're living in the actual earth timeline, not the 40k one.
I mean, one can always get an older machine and code everything as holy binary chant, not only to impress the youngsters, but also to impose a level of distance from the "limited by LLMs."
FWIW, I like the analogy, despite seeing a benefit, when studying scripture, to knowing the original languages.
Ha, I think the great crimes and wrongs title goes to Angular. I became a front-end guy specifically to avoid all the OOP verbosity. I'm just trying to call some APIs and render some data on a web page. I don't need layers of abstraction to do that.
Anyways, is there a "just use vue" effort like there is with postgres :)
I also found Angular to be a nightmare. I enjoy Astro, Svelte, even Preact can be fun. There are many to try. My comment above was just a joke, but I'm getting downvoted.
Actually, Meta is spending millions to push the age verification requirement off to the app store providers, such as Google and Apple. It's an attempt to shield Meta from liability and transfer it to the app store providers.
Having clear laws about what's allowed and what isn't is a lot cheaper than getting repeatedly sued for hundreds of millions for not doing things there was never a clear legal requirement to do.
I'm not aware of any law that went through parliament that directly impacts installing apps. The OSA has already hit and didn't impact app stores. Can you link me the relevant legislation or Hansard debates?
>to push the age verification requirement off to the app store providers,
And it makes more sense: Apple and Google have your credit card, or, if you are a parent who bought a phone for your child, then at first boot it should be your job as a parent to set up a child account.
Even if they did, having a credit card is not proof of age.
> if you are a parent that bought soem phone for you child then at first boot up as a parent should be your job to setup a child account
Setting up a "child account" shouldn't involve setting some age field. Setting up a "child account" should involve restricting permissions.
Why leave it to the OS or a company to decide what is "age appropriate"? Leave it to the parent to decide what the child should or should not have access to. Extra bonus: that same "child account" can then also be used for other restricted purposes. Want a guest account which limits activity? Want an incognito account? Want a sandbox account? None of these should require setting some age.
This shit already happened years ago with consoles. I set up a child account and the games were restricted, as were other features.
I am not paid by a trillion-dollar company to decide if it should be a birthday input, or a dropdown where you select your political and religious convictions about what your child should see. Sony figured it out; if Apple pays me, I will spend more time writing a UX flow for them so that average people could set the accounts up, and the rest could ask their priest, cousins, or another person who can follow instructions to set up the account for them.
The giants should have solved this decades ago instead of waiting for religious fanatics to push for this as laws and get the governments involved; now you will get 25 different laws about this.
> at first boot up as a parent should be your job to setup a child account.
Something I would be 100% OK with is some regulation that at first boot, you have to present information about what parental controls are available on the device and ask if you'd like them enabled.
I haven't set up a phone in a hot minute, I only do it once every few years, is this something they already do?
I'd imagine there's a lot of cases where a parent buys a new phone and hands down the old one to their kid without enabling safety features. I don't know if there's a good way to help with that - maybe something like, whenever you go to set a new password, prompt "hey is this for a kid?" and go through the safety features again?
Just spitballing, that last one may not be a good idea, not really sure.
Exactly, I have not seen such a screen, but these giants have the budget to hire UX experts to design an initial setup that clearly asks whether this device is for a child, or whether it is for multiple users so you can make more accounts. To make the other commenter happy, they could also ask whether you want to see adult content and, in that case, set the same flags in the system.
Seems like such a simple solution, rather than each app and website having to figure out a way to do it.
Most sites are not going to implement this themselves.
I think they're in prime position to become a key broker of identity in the same way that a lot of people already log in with their meta or google account to unrelated websites.
They become very entrenched and get a ton of data that way.
As more and more people essentially lock themselves in with these identity brokers, though, I imagine it has a very stifling effect on speech. Imagine getting banned from those.
Isn't this conversation, not publishing scientific hypotheses, theories and findings?
If so, it is customarily permissible to use rhetoric and sarcasm to more strongly emphasize a point. Or, to leave the conclusion as an exercise for the reader.
By intentionally hiding their position (and simultaneously acting as though it is completely obvious) the OP shuts down any useful conversation that might follow. Do they think Meta will sell the user's data? Do they think different people are in charge of different policies at Meta leading to actions that appear to be in conflict with each other? Do they think they will use this information to train AI models? Do they think they will use this information to serve Ads?
There are many interesting ways that the conversation could have been carried forward, but there is no way to continue the conversation as the OP doesn't make it clear what they think.
The only thing I can say is: No I cannot figure it out, please tell me what you're trying to say here.
What’s the point in providing a rebuttal to these points (e.g. that Meta doesn’t actually sell data to anyone) if the OP can simply say “that’s not what I meant”?
They are taking a position that cannot be argued against or even discussed because they don’t make that position clear.
> providing a rebuttal to these points (e.g. that Meta doesn’t actually sell data to anyone)
So one of your suggestions of what the OP could mean was something you explicitly don’t think is true and would argue against? That sounds like a bad faith straw man set up.
Perhaps it’s just as well that the OP didn’t provide one specific reason to be nitpicked ad nauseam by an army of “well ackshually” missing the forest for the trees.
You could, as the HN guidelines suggest, argue in good faith and steel man. The distinction between “selling your data” and “profiting from your data” isn’t important for a high level discussion.
Can you truly not see through Meta’s intentions? There are entire published books, investigations, and whistleblowers to reference. Zuckerberg called people “dumb fucks” for trusting him with their data and has time and again proven to be a hypocrite who doesn’t care about anyone but himself.
Or, OP is not hiding their position and shutting down conversation — they are not imposing their position and are opening it up to discussion.
What prevents you from saying "Yes, and Xyz!!" and another poster "Yup, and Pdq, and Foo too!"
Or, maybe OP is just being a bit lazy, but again, it seems the context is conversation, not formal scientific inquiry where everything must be falsifiable?
I think they meant that Meta is offloading the cost (fines) of farming minors' data onto the operating systems. With an up-front cost of 2 billion dollars in lobbying, they can avoid paying 300m+ fees regularly.
I mean, their telemetry crap is on a lot of apps too. I remember someone DMing me something very niche on Discord, and by chance I opened up Facebook, it gave me ads for that very, very niche thing I have never even looked up on Google, or Facebook, it was like IMMEDIATE. I opened up Facebook by chance, and voila.
The other one was the time I was speaking to my brother-in-law, who had just paved his driveway. He said "I could have used airport grade tar, but thought it was too much," and his Nest security cam, which we were standing in front of, is the only thing I can think of. But the very next morning, I'm scrolling through Facebook, and sure enough, someone local is advertising airport grade tar. Why? I didn't google this, I only heard it from them.
There's some serious shenanigans going on with ad companies, and we just seem to handwave it around.
Coincidentally, I remember both experiences very very vividly, because this was the last time I used either platform in any meaningful capacity.
> The other one was the time I was speaking to my brother-in-law, who had just paved his driveway. He said "I could have used airport grade tar, but thought it was too much," and his Nest security cam, which we were standing in front of, is the only thing I can think of. But the very next morning, I'm scrolling through Facebook, and sure enough, someone local is advertising airport grade tar. Why? I didn't google this, I only heard it from them.
Option A: The Nest camera not only listened to the conversation and picked out "Airport Grade Tar" and decided it needed to show adverts about it to people, but the camera also identified you to the point it could isolate your FB account in order to serve you those adverts.
(I'm making some assumptions but...)
Option B: Your brother-in-law had done various searches for airport grade tar from his home (in order to know how expensive it was). You, whilst visiting his home, were on his Wifi and therefore shared the same external IP address. Your phone did enough activity whilst at his house (the FB app checked in to their servers in the background, or you used Messenger, etc.) to get the "thinking of buying airport grade tar" signal associated with his external IP address associated with your FB account that was temporarily on that IP.
I had a friend who was convinced that some device in his house was listening in on his conversations with his wife as he kept on getting adverts for things they'd been talking about buying the day before but he hadn't searched for. (But she was searching for it from their home wifi, which is why it appeared in his adverts afterwards.)
Option C: no cameras or crude wifi tracing needed; they know who you talk to / associate with based on location data and the full profile of both sides, and can estimate things like 'will have mentioned X' -> can dispatch that via heuristic like 'show ads for X thing that was also mentioned by someone adjacent on that social graph'.
That is, BiL was marked as 'spreader for airport grade tar' based on recent activity, marked as having been in contact with spreadee, and then spreadee was marked as having received the spreading. P(conversion) high, so the ad is shown.
It's just contact tracing, it works well and is really easy even without literally watching what goes on in interactions.
Basically these age attestation/verification laws are being pushed as a "save the children!" scenario. But if you read the laws - all they really do is shift responsibility around.
Currently, websites and apps are supposed to ensure they don't have kids under 13, or if they do - that they have the parents permission. That's federal law in the US.
These laws make the operating system or app store (depends on the particular law) responsible for being the age gate.
This doesn't stop the federal law from being enforced or anything, but the idea is apps/websites don't handle it directly, that's handled by the operating system or app store.
So now - companies like Meta can throw up their hands and say "hey, the operating system told us they were of age, not our fault." It also makes some things murkier. Now if Meta gets sued, can they bring Google/Apple/Microsoft in as some kind of co-defendant?
I think that murkiness is the point. They don't need to create the most bullet-proof set of regulations that 100% absolves them of all responsibility, they just need to create enough to save some money next time they get sued.
I can think of a ton of regulations we could create to better help protect kids. We could mandate that mobile phones, upon first setup, tell the user about parental controls that are available on the device and ask if they'd like to be enabled. Establish a baseline set of parental controls that need to be implemented and available by phone manufacturers, like an approval process that you need to go through to hit store shelves.
We could create educational programs. Remember being in school and having anti-drug shit come through the school? It could be like that but about social media (and also not like that because it wouldn't just be "social media is bad," hopefully).
Again all these laws do is take what should be Meta's burden, and make it everybody else's burden.
Forget about the stated reason for the laws. The fact is that it makes sense that people using a service are age-appropriate. And there is no market mechanism (I mean tort law) because of Section 230.
Now the easiest law change - one that wouldn't require anyone to change anything - would be to revoke Section 230. This would make service providers liable. Everything else is a band-aid. I doubt that this verdict will survive appeal (due to Section 230). But if it does, then again there is no need for any new regulations. The tort lawyers will solve the problem for us.
If we do have device age verification, then it still doesn't shield Meta. The lawyers will sue everyone involved, and disclosure will show whether Meta had data that would have shown that the user should have been blocked.
The purpose of age verification is to avoid all this. Of course the current proposals suck and won't achieve this. The market will not accept an approach that would work - which would be for anything with a screen or speaker to be permanently tied to an individual user. "OS verification" cannot succeed - it must be one-time hardware attestation. Even a factory reset wouldn't remove the user assignment.
> is also lobbying in secrecy for requiring all of us to scan our ID and face in order to use our phones and computers.
You’re conflating different things. The OS-level age setting proposals are not the same as scanning IDs and faces.
I’m anti age check legislation, too, but the misinformation is getting so bad that it’s starting to weaken the counter-arguments.
> Their stated reason? Child safety.
> Their actual reason? You can figure that out.
We’re commenting under an article about one $375M lawsuit over child safety and many more on the way. They are obviously being pressured for child safety by over zealous prosecutors. This is why they reversed course and removed end-to-end encryption from Instagram because it was brought up as a threat to child safety.
Also your “you can figure that out” implication doesn’t even make sense. The proposal to move age verification to the OS level would give Meta less information about the user, because the OS, not Meta apps, would be responsible for gating age content. I’m not agreeing with the proposal, but it’s easy to see that it would be more privacy-preserving than having to submit your ID to Meta.
> The proposal to move age verification to the OS level would give Meta less information about the user, because the OS, not Meta apps, would be responsible for gating age content.
I find it hard to believe that meta doesn't already have a pretty good age estimate for 95%+ of their users.
What offloading the responsibility to the app stores (or OS vendors) gives Meta is exactly that, offloading responsibility. In a future lawsuit, they can say that someone else provided them with incorrect information.
It is most likely not them, but they proxy for the US. Under another administration, they would use an NGO to advance the agenda. The goal is to face-scan the world.
To be fair, they're just an evil corporation making lemonade out of lemons. I'm sure they'd be happier pushing porn and nazism to hundreds of millions of underage users, but if certain governments want them to write all that bunk code to verify everyone's ID, they might as well make money off the data.
> But skills are not fundamentally different from *.instruction.md prompt in Copilot or AGENT.md and its variations.
One of the best patterns I've seen is having an /ai-notes folder with files like 'adding-integration-tests.md' that contain specialized knowledge suitable for specific tasks. These "skills" can then be inserted/linked into prompts where I think they are relevant.
But these skills can’t be static. For best results, I observe what knowledge would make the AI better at the skill the next time. Sometimes I ask the AI to propose new learnings to add to the relevant skill files, and I adopt the sensical ones while managing length carefully.
Skills are a great concept for specialized knowledge, but they really aren’t a groundbreaking idea. It’s just context engineering.
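For concreteness, the folder layout described above can be sketched like this; the file name matches the comment, but the contents and project conventions are purely hypothetical:

```shell
# Sketch of the /ai-notes "skills" pattern. The directory name comes
# from the comment above; the note contents are illustrative only.
mkdir -p ai-notes

cat > ai-notes/adding-integration-tests.md <<'EOF'
# Skill: adding integration tests
- New tests live in tests/integration/, one file per feature.
- Reuse existing fixtures; never hit real external services.
- Run the full integration suite locally before opening a PR.
EOF

# The relevant note is then linked into the agent's prompt, e.g.:
echo "Follow the conventions in ai-notes/adding-integration-tests.md"
```

The point of the pattern is that each file stays small and task-scoped, so only the relevant knowledge is pulled into context for a given task.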
This is exactly what I do. It works super well. Who would have thought that documenting your code helps both other developers and AI agents? (I'm being sarcastic.)
I would argue that many engineering "best practices" have become much more important much earlier in projects. Personally, I can deal with a lot of jank and lack of documentation in an early-stage codebase, but LLMs get lost so quickly, or they multiply the jank faster than anyone ever could have in the past, making it much, much worse for both LLMs and humans.
Documentation, variable naming, automated tests, specs, type checks, linting. Anything the agent can bang its proverbial head against in a loop for a while without involving you every step of the way.
This might be one of the best things about the current AI boom. The agents give quick, frequent, cheap feedback on how effective the comments, code structure, and documentation are at helping a "new" junior engineer get started.
I like to think I'm above average in terms of having design docs alongside my code, having meaningful comments, etc. But playing with agents recently has pointed out several ways I could be doing better.
If I see an LLM having trouble with a library, I can feed its transcript into another agent and ask for actionable feedback on how to make the library easier to use. Which of course gets fed into a third agent to implement. It works really well for me. Nothing more satisfying than a satisfied customer.
I've done something similar. I ask agents to use CLIs, then I give them an "exit survey" on their experience along with feedback on improvements. Feels pretty meta.
That comment didn't read like AI generated content to me. It made useful points and explained them well. I would not expect even the best of the current batch of LLMs to produce an argument that coherent.
This sentence in particular seems outside of what an LLM that was fed the linked article might produce:
> What's wild is that nothing here is exotic: subdomain enumeration, unauthenticated API, over-privileged token, minified JS leaking internals.
The user's comment history does read like generic LLM output. Look at the first lines of different comments:
> Interesting point about Cranelift! I've been following its development for a while, and it seems like there's always something new popping up.
> Interesting point about the color analysis! It kinda reminds me of how album art used to be such a significant part of music culture.
> Interesting point about the ESP32 and music playback! I've been tinkering with similar projects, and it’s wild how much potential these little devices have.
> We used to own tools that made us productive. Now we rent tools that make someone else profitable. Subscriptions are not about recurring value but recurring billing
> Meshtastic is interesting because it's basically "LoRa-first networking" instead of "internet with some radios attached." Most consumer radios are still stuck in the mental model of walkie-talkies, while Meshtastic treats RF as an IP-like transport layer you can script, automate, and extend. That flips the stack:
> This is the collision between two cultures that were never meant to share the same data: "move fast and duct-tape APIs together" startup engineering, and "if this leaks we ruin people's lives" legal/medical confidentiality.
The repeated prefixes ("Interesting point about ...!") and the classic it's-this-not-that LLM pattern are definitely triggering my LLM suspicions.
I suspect most of these cases aren't bots, they're users who put their thoughts, possibly in another language, into an LLM and ask it to form the comment for them. They like the text they see so they copy and paste it into HN.
Or maybe these are people who learned from a LLM that English is supposed to sound like this if you want to be permitted to communicate a.k.a. "to be taken into consideration"! Which is wrong and also kinda sucks, but also it sucks and is wrong for a kinda non-obvious reason.
Or, bear with me there, maybe things aren't so far downhill yet, these users just learned how English is supposed to sound, from the same place where the LLMs learned how English is supposed to sound! Which is just the Internet.
AI hype is already ridiculous; the whole "are you using an AI to write your posts for you" paranoia is even more absurd. So what if they are? Then they'd just be stupid, futile thoughts leading exactly nowhere. Just like most non-AI-generated thoughts, except perhaps the one which leads to the fridge.
Or maybe the 2 month old account posting repetitive comments and using the exact patterns common to AI generated comment is, actually, posting LLM generated content.
> So what if they are? Then they'd just be stupid, futile thoughts leading exactly nowhere.
FYI, spammers love LLM generated posting because it allows them to "season" accounts on sites like Hacker News and Reddit without much effort. Post enough plausible-sounding comments without getting caught and you have another account to use for your upvote army, which is a service you can now sell to desperate marketing people who promised their boss they'd get on the front page of HN. This was already a problem with manual accounts but it took a lot of work to generate the comments and content.
> I suspect most of these cases aren't bots, they're users who put their thoughts, possibly in another language, into an LLM and ask it to form the comment for them. They like the text they see so they copy and paste it into HN.
Yes, if this is LLM then it definitely wouldn't be zero-shot. I'm still on the fence myself as I've seen similar writing patterns with Asperger's (specifically what used to be called Asperger's; not general autism spectrum) but those comments don't appear to show any of the other tells to me, so I'm not particularly confident one way or the other.
That's ye olde memetic "immune system" of the "onlygroup" (encapsulated ingroup kept unaware it's just an ingroup). "It don't sound like how we're taught, so we have no idea what it mean or why it there! Go back to Uncanny Valley!"
It's always enlightening to remember where Hans Asperger worked, and under what sociocultural circumstances that absolutely proverbial syndrome was first conceived.
GP evidently has some very subtle sort of expectations as to what authentic human expression must look like, which however seem to extend only as far as things like word choice and word order. (If that's all you ever notice about words, congrats, you're either a replicant or have a bad case of "learned literacy in USA" syndrome.)
This makes me want to point out that neither the means nor the purpose of the kind of communication which GP seems to implicitly expect (from random strangers) are even considered to be a real thing in many places and by many people.
I do happen to find that sort of thing way more coughinterestingcough than the whole "howdy stranger, are you AI or just a pseud" routine that HN posters seem to get such a huge kick out of.
Sure looks like one of the most basic moves of ideological manipulation: how about we solve the Turing Test "the wrong way around" by reducing the tester's ability to tell apart human from machine output, instead of building a more convincing language machine? Yay, expectations subverted! (While, in reality, both happen simultaneously.)
Disclaimer: this post was written by a certified paperclip optimizer.
It's probably a list of bullet points or disjointed sentences fed to the LLM to clean up. Might be a non-English speaker using it to become fluent. I won't criticize it, but it's clearly LLM generated content.
That was literally the same thought that crossed my mind. I agree wholeheartedly, accusing everything and everyone of being AI is getting old fast. Part of me is happy that the skepticism takes hold quickly, but I don't think it's necessary for everyone to demonstrate that they are a good skeptic.
(and I suspect that plenty of people will remain credulous anyway, AI slop is going to be rough to deal with for the foreseeable future).
Spammers use AI comments to build reputation on a fleet of accounts for upvoting purposes.
That may or may not be what's happening with this account, but it's worth flagging accounts that generate a lot of questionable comments. If you look at that account's post history there's a lot of familiar LLM patterns and repeated post fragments.
Yeah, you have a point... the comment - and their other comments, on average - seem to fit quite a specific pattern. It's hard to really draw a line between policing style and actually recognising AI-written content, though.
What makes you think that? It would need some prompt engineering if so, since ChatGPT won't write like that (bad capitalization, lazy quoting) unless you ask it to.
We finally have a blog that no one (yet) has accused of being ai generated, so obviously we just have to start accusing comments of being ai. Can't read for more than 2 seconds on this site without someone yelling "ai!".
For what it's worth, even if the parent comment was directly submitted by chatgpt themselves, your comment brought significantly less value to the conversation.
It's the natural response. AI fans are routinely injecting themselves into every conversation here to somehow talk about AI ("I bet an AI tool would have found the issue faster") and AI is forcing itself onto every product. Comments dissing anything that sounds even remotely like AI are the logical response of someone who is fed up.
Every other headline and conversation having ai is super annoying.
But also, it's super annoying to sift through people saying "the word critical was used, this is obviously ai!". Not to mention it really fucking sucks when you're the person who wrote something and people start chanting "ai slop! ai slop!". Like, how am I going to prove it's not AI?
I can't wait until ai gets good enough that no one can tell the difference (or ai completely busts and disappears, although that's unlikely), and we can go back to just commenting about whether something was interesting or educational or whatever, instead of analyzing how many em-dashes someone used pre-2020 and extrapolating whether their latest post has one more em-dash than their average post so that we can get our pitchforks out and chase them away.
LLMs will never get good enough that no one can tell the difference, because the technology is fundamentally incapable of it, nor will it ever completely disappear, because the technology has real use cases that can be run at a massive profit.
Since LLMs are here to stay, what we actually need is for humans to get better at recognising LLM slop, and to stop allowing our communication spaces to be rotted by slop articles and slop comments. It's weird that people find this concept objectionable. It was historically a given that if a spambot posted a copy-pasted message, the comment would be flagged and removed. Now the spambot comments are randomly generated, and we're okay with it because it appears vaguely-but-not-actually-human-like. That conversations are devolving into this is actually the failure of HN moderation for allowing spambots to proliferate unscathed, rather than of the users calling out the most blatantly obvious cases.
Do you think the original comment posted by quapster was "slop" equivalent to a copy-paste spam bot?
The only spam I see in this chain is the flagged post by electric_muse.
It's actually kind of ironic you bring up copy-paste spam bots. Because people fucking love to copy-paste "ai slop" on every comment and article that uses any punctuation rarer than a period.
> Do you think the original comment posted by quapster was "slop" equivalent to a copy-paste spam bot?
Yes: the original comment is unequivocally slop that genuinely gives me a headache to read.
It's not just "using any punctuation rarer than a period": it's the overuse and misuse of punctuation that serves as a tell.
Humans don't needlessly use a colon in every single sentence they write: abusing punctuation like this is actually really fucking irritating.
Of course, it goes beyond the punctuation: there is zero substance to the actual output, either.
> What's wild is that nothing here is exotic: subdomain enumeration, unauthenticated API, over-privileged token, minified JS leaking internals.
> Least privilege, token scoping, and proper isolation are friction in the sales process, so they get bolted on later, if at all.
This stupid pattern of LLMs listing off jargon like they're buzzwords does not add to the conversation. Perhaps the usage of jargon lulls people into a false sense of believing that what is being said is deeply meaningful and intelligent. It is not. It is rot for your brain.
"it's not just x, it's y" is an ai pattern and you just said:
>"It's not just "using any punctuation rarer than a period": it's the overuse and misuse of punctuation that serves as a tell."
So, I'm actually pretty sure you're just copy-pasting my comments into chatgpt to generate troll-slop replies, and I'd rather not converse with obvious ai slop.
Congratulations, you successfully picked up on a pattern when I was intentionally mimicking the tone of the original spambot content to point out how annoying it was. Why are you incapable of doing this with the original spambot comment?
Cultural acceptance of conversation with AI should've come from actual AI that's indistinguishable from humans; being forced to swallow recognizable if not blatant LLM slop and turn a blind eye feels unfair.
For those looking to quickly understand scope of impact:
> According to Bloomberg and CNN, citing sources, SitusAMC sent data breach notifications to several financial giants, including JPMorgan Chase, Citigroup, and Morgan Stanley. SitusAMC also counts pension funds and state governments as customers, according to its website.
There are important contexts outside of machines you control where installing or running cli commands isn’t possible. In those cases, skills won’t help, but MCP will.
Hence why I said drastically, rather than totally. There are still a few edge cases where it is worthwhile, but they are small and shrinking, especially with services providing UIs with VMs/containers for the model to use increasingly being a thing.
Agreed. Only provide the servers and tools needed for that job.
It would be silly to provide every employee access to GitHub, regardless of whether they need it. It’s just distracting and unnecessary risk. Yet people are over-provisioning MCPs like you would install apps on a phone.
Principle of least access applies here just as it does anywhere else.
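As a minimal sketch of that idea (the tool names and the `provision` helper below are hypothetical illustrations, not a real MCP API): filter the catalog of available tools down to only the namespaces a given job needs before the agent ever sees them.

```python
# Hypothetical illustration of least-privilege tool provisioning.
# None of these tool names come from a real MCP server; they just
# show the idea of scoping what an agent can see.

AVAILABLE_TOOLS = {
    "github.create_issue",
    "github.merge_pr",
    "jira.read_ticket",
    "jira.update_ticket",
    "slack.post_message",
}

def provision(allowed_namespaces):
    """Return only the tools in namespaces this job actually needs."""
    return {
        tool for tool in AVAILABLE_TOOLS
        if tool.split(".", 1)[0] in allowed_namespaces
    }

# A ticket-triage job gets Jira tools and nothing else; GitHub stays
# invisible to the model rather than merely unused.
triage_tools = provision({"jira"})
```

The point of filtering before the model sees the list, rather than trusting it not to call out-of-scope tools, is the same as with employees: access that was never granted can't be misused.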
Which is really stupid. If I was going to an event and suddenly heard it was so dangerous there that the national guard had been deployed, I would not go to the event. Who would?
Guns, in one word. If you prefer longer answers, it's because police are not the rent-a-cop for private property.
There are definitely social situations where additional security is warranted, that should be clear to most Americans. That security has to come at the expense of those who finance contrived social situations on private property, though.
uh the police literally are the rent-a-cops - this entire thing was about him hiring off-duty cops to stand around, getting paid at cop overtime rates, with their guns, at his conference.
You’re probably underestimating how much credit is available to people. Having money issues? Keep paying your car while you borrow money from Klarna for your DoorDash chipotle.
I mean they hide it as best they can. At big restaurants like Applebee's you'll see deals like "2 for $28" that aren't actually priced at $28 on the app, so you can guesstimate the squeeze. Otherwise you kinda have to go straight to Starbucks or McDonald's using a mobile app to order your "usual," and compare "here's what it looks like if I use DoorDash, here's what it looks like if I go myself," to find that the actual delivery fee is some $20-25 per order. Even worse, I'm pretty sure that they test algorithms to try to selectively lower this for new customers, so that in the early days, when you're more aware of the cost, it seems like a steal.
Of course, you can arrive at the $20 just by thinking, "okay, I need someone to go do an errand for me, they'll have to drive to the restaurant, wait there for 15-20 minutes, and then bring it back... so it'll cost $15 for the hour of their time plus a few bucks of overhead for the platform plus a few bucks of messed-up-my-order insurance..."
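That back-of-envelope estimate can be written out explicitly; the dollar figures below are just the commenter's rough assumptions, not real DoorDash pricing:

```python
# Rough cost model for a one-off food-delivery errand, using the
# assumed numbers from the comment above (not real platform fees).
hourly_rate = 15.0        # ~an hour of someone's time to fetch the order
platform_overhead = 3.0   # "a few bucks" for the platform
order_insurance = 2.0     # "a few bucks" of messed-up-my-order insurance

true_delivery_cost = hourly_rate + platform_overhead + order_insurance
print(true_delivery_cost)  # lands right around the observed $20 fee
```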
Which gets us to 5 years from now, when the DoorDash killer comes out. It'll be called Kourier or something starting with a K, and it'll start by giving Target a way to call up some extra trained Target employees who are cross-trained in packaging orders for K. One person will pick up 10 carefully-packaged K-orders, take them all to the central delivery hub, and they'll get sorted into driverless cars that plot routes through some neighborhood with some 10 stops. It'll be marketed as a real Amazon-killer and fly under DoorDash's radar -- InstaCart might balk, but DoorDash won't. Until they reveal some pizza-delivery partnership, and suddenly within a year every restaurant has some K-employee working for them, whose job it is to batch orders down to the bikes that come by.
Sure, delivery times for Kourier will be 75, 80 minutes long at first. People won't mind because you pay $4 for delivery instead of $20. And Doordash/Amazon won't die, Amazon will just buy Kourier and DoorDash will focus on more rural locales.
I'll be disappointed if it isn't like Snow Crash (1992):
> The Deliverator, in his distracted state, has allowed himself to get pooned. As in harpooned. It is a big round padded electromagnet on the end of an arachnofiber cable. It has just thunked onto the back of the Deliverator's car, and stuck. Ten feet behind him, the owner of this cursed device is surfing, taking him for a ride, skateboarding along like a water skier behind a boat.
> In the rearview, flashes of orange and blue. The parasite is not just a punk out having a good time. It is a businessman making money. The orange and blue coverall, bulging all over with sintered armorgel padding, is the uniform of a Kourier. A Kourier from RadiKS, Radikal Kourier Systems. Like a bicycle messenger, but a hundred times more irritating because they don't pedal under their own power -- they just latch on and slow you down.
And while tipping is technically optional, it's de facto required. The driver will see the total pay for a delivery before they accept it, and if it's too low, they'll reject it, and DoorDash will offer the delivery to another driver. If you don't tip, then your delivery will be rejected until it reaches some driver that's gotten desperate. By that time, your food will likely have been already made and sitting and waiting for 30 minutes.
When the popular running theme of complaints is "it's impossible to do X because poor people all work 168 hours a week minimum", it's easy to excuse wasting your money to save time.
I think the most accurate part of your analogy is how fast the technology changes and renders yesterday’s product obsolete.
Just saw the Audi etron gt has amazing deals on used cars. Then I saw a new model coming out with better battery, more power, better range, and more features. Suddenly last year’s model is way less compelling.
True. At this point in time I'd only lease an EV. That being said, given that 100% of cars on the road won't be EVs by 2030, as some have tried to convince us they would be, I suspect the rate of innovation in EV land will slow as EV investment is greatly curtailed by the car companies.
Their stated reason? Child safety.
Their actual reason? You can figure that out.