CSAM exists on social media because the platforms are so large that it's not possible to moderate them effectively. To me this is a no-go. If a business is so large that it cannot respect the law, it needs to be shut down.
The correct way to organize social media is in a federated way. Each server holds on average only a few hundred or a few thousand people. Server moderators should be legally responsible for content on their server. CSAM on social media will be 100x suppressed because banning people is way easier on small servers.
Not many moderators will have to look at CSAM because the structure of the system makes it unappealing to even try sharing CSAM, knowing you will be immediately blocked.
Having tens of thousands of decentralized, independently moderated servers would result in an order of magnitude more CSAM being shared than having a few oligopolies. The abusers just have to find the weakest link, and that weakest link will have fewer resources than multi-trillion-dollar companies. You would also likely not hear many news stories about it, because those servers won't have the expertise to even detect it.
That's a tradeoff you can choose to make, but you need to enter into it with open eyes.
> Having tens of thousands of decentralized, independently moderated servers would result in an order of magnitude more CSAM being shared than having a few oligopolies.
It doesn't matter how many are shared but how many are viewed. On a small server, community policing works just fine; bad actors are easier and faster to block, and to top it off, the smaller reach of each server makes it unprofitable to target multiple servers, fish for their weak points, etc. The dirty jobs become unprofitable, which is what matters most.
With the help of AI, small players can do a better job at removing CSAM.
>That's a tradeoff you can choose to make, but you need to enter into it with open eyes.
No it's not. It's certainly not my choice. No one asked me if it's okay for Facebook to distribute CSAM because you insist it would be worse if it didn't.
I don't really care if you classify it as a choice or not. One set of actions results in more CSAM than others. Just because you don't like the implication of there being tradeoffs doesn't mean there aren't tradeoffs.
In what regard is it incorrect that a single, larger entity that is at least notionally committed to avoiding the existence of any specific type of content on its platform is more likely to successfully avoid the existence of that type of content than smaller entities with fewer resources?
Now consider that some of those smaller entities might not even be notionally interested in avoiding the existence of that specific type of content on their platform, and are small enough for regulators to be unaware of their existence.
What your opponent is saying is: there are mutually exclusive options A and B, A being widespread CSAM and B being somebody needing to look at CSAM in order to remove it.
Can you elaborate on what exactly is wrong there? Do you see the third alternative C and it's not the "whole choice"? Or are you saying A or B do not exist and therefore there's no choice? Please name C, or tell us why A or B don't exist (or aren't acceptable), or explain your view that doesn't fit into these options.
Some people are not okay with actively facilitating harm to people, even if inaction results in harm to other people. See: the trolley problem. This is totally okay, but the point made above is that
>That's a tradeoff you can choose to make
is not correct: it is a tradeoff that one specific person can choose to make, but not one that I or we can choose to make, because we don't control Facebook. Mark Zuckerberg controls Facebook. He alone can choose to make that tradeoff, or not, on behalf of society.
> Server moderators should be legally responsible for content on their server.
And therefore anything that is remotely questionable will be blocked. Not just kiddie porn. Pissed off a local business with a bad review? Blocked.
Child abusers are twisted people, and I really don’t care much what happens to them, but making it impossible for them to use the internet means sterilizing the whole thing.
>And therefore anything that is remotely questionable will be blocked. Not just kiddie porn. Pissed off a local business with a bad review? Blocked.
This is already the case. There is a lot of lawful, useful, medical or educational content that is actively censored on social media because it includes words or pictures of organs, while the same platforms actively encourage, and develop algorithms to push, underage girls (and possibly boys) posting pictures of themselves in sexual poses, attire and contexts.
Big tech and social media networks love and push CSAM, they just hide the genitals but the content really is the same.
> a lot of lawful, useful, medical or educational content
Like what? It’s all there on Wikipedia, and for all of Wiki’s faults, I have trouble imagining what kind of useful, educational, medical information you will find on social media that is better than that.
You are effectively saying that offline social life doesn't function. People get banned or removed from all sorts of informal and formal groups all the time, for completely illegitimate reasons. That's just human politics, embedded so deeply in our psychology that it will never go away. They simply move to different groups - and similarly, online they can move to a different federated server.
But that's not possible in today's social media oligopoly. An invisible algorithm will ban you, there is no way back, and there are few alternatives. Big Social Media is way worse from a sanitizing perspective than federated social media.
I have no deep problem with exclusion; as you say, that’s human nature and unfixable. Making mods personally legally liable for everything that appears on their board is just insane. How many minutes are acceptable for them to see and review content? Or does everything have to be pre-approved?
I know a local blog that pre-approves every comment. He lets a lot of stuff through, because he lets people be dumbasses. If he were personally liable, the conversation would get a lot quieter.
Also, if you've gone from zero to one of the biggest corporations in the country, and have billions to throw at the 'metaverse', I find it hard to believe that removing CSAM is where you struggle.
No. It's a legitimately difficult problem, because not all naked pictures of kids are illegal. The false-positive problem is bad for business, but also generally bad even if the big social media platform were benevolent.
Moderators need to actually understand the context of the picture/video, which requires knowledge of culture and language of the people sharing the pictures. It's really difficult to do that without hiring moderators from every culture in the world.
But small federated servers can often align along real world human social networks, so it's easier for the server admin to understand what should be removed.
The amount of CSAM online is completely out of control. There's already nation-level and sometimes international cooperation to catch known images with perceptual hashing (think: the opposite of cryptographic hashing, in that similar inputs produce similar hashes), as well as other automated and manual tools.
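To make that distinction concrete, here is a minimal sketch of one simple perceptual hash (average hash), assuming the Pillow library is available; the function names are mine, and production systems such as Microsoft's PhotoDNA are far more robust to cropping and re-encoding:

```python
# Minimal average-hash ("aHash") sketch, assuming Pillow is installed.
# Unlike a cryptographic hash, a slightly altered image produces a hash
# only a few bits away from the original's.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    # Shrink to a size x size grayscale image, then set one bit per
    # pixel: 1 if the pixel is brighter than the mean, else 0.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    # Count differing bits; a small distance means the two images are
    # very likely the same picture, perhaps re-encoded or resized.
    return bin(a ^ b).count("1")
```

A re-encoded or slightly cropped copy of a known image usually lands within a few bits of the original's hash, which is what lets a database of known-bad hashes survive trivial alterations.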
My impression is it would take Manhattan-Project levels of effort and funds to come close to "solving" this problem, especially without someone getting on a watchlist for having a telehealth-first primary care provider insurance plan and asking for advice on their toddler's chickenpox.
Human review? Meta already has small armies' worth of content moderators, who tend to burn out with psychological problems and have a suicide rate where you're probably better off going to fight in a real war. (This includes workers hired by Sama in Kenya, to link back to the OP.)
I will reluctantly grant Meta that they're up against a really hard problem here.
Yeah, I agree with you. Of course, it's not Meta's fault that the CSAM exists in the first place. And while filtering it at Meta's scale is solvable in principle, solving it fundamentally requires changing how the platform works, and would likely require a lot more money to be spent.
Isn't this more about disincentivizing the posting of it in the first place by increasing the chances of getting banned? Once you have to remove it, it's too late.
> Server moderators should be legally responsible for content on their server.
So if you want to send someone to jail, just talk your way into joining their server, upload some illegal content, and report them for it?
> Not many moderators will have to look at CSAM because the structure of the system makes it unappealing to even try sharing CSAM, knowing you will be immediately blocked.
Why would someone join a server with active moderation if they wanted to share CSAM with their social media friends?
They would seek out one of those servers that was set up specifically for those groups, where it was known to be a safe space.
This is what many people don't get about federated networks: The people in those little servers DGAF if you block them. They want to be surrounded by their likeminded friends away from the rules of some bigger service like Facebook or Twitter. Federated social media is the perfect platform for them because they can find someone who set up a server in some other country with their own idea of rules and join that, not be subject to the regulations of mainstream social media.
Right, and you have other users on the fediverse who notice that server leaking and, if the content is bad enough, report the service to an authority. Having all of the pedophiles and other creeps on a tiny subset of servers, isolated islands of them; well, that ought to make enforcement easier.
It also makes it relatively easy to avoid, as server admins share blocklists. I know a dozen servers offhand that I'd block if I ran another fediverse server.
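For illustration, here is a hypothetical sketch of merging blocklists shared by other admins into a local one; the file names and the one-domain-per-line format are assumptions for the example, not any particular server software's real import format:

```python
# Hypothetical sketch: merge blocklists shared by other admins into a
# local defederation list. One domain per line, '#' for comments; the
# format and file names are made up for this example.
from pathlib import Path

def load_blocklist(path: Path) -> set[str]:
    domains = set()
    for line in path.read_text().splitlines():
        line = line.strip().lower()
        if line and not line.startswith("#"):
            domains.add(line)
    return domains

shared = [Path("friend-admin.txt"), Path("trust-group.txt")]
merged = set().union(*(load_blocklist(p) for p in shared))
Path("local-blocklist.txt").write_text("\n".join(sorted(merged)) + "\n")
```

Keeping in sync with a trust group this way is cheap, which is part of why defederation spreads quickly once a bad server is identified.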
The Fosstodon fediverse server doesn't have this issue, for example.
I replied this way because, the way you wrote it, it sounds like an indictment of a system that's designed, above all else, to keep advertisers from getting user profiles.
The problem is the people who participate in this (the illegal and immoral), and not "the network."
Yep. If you cannot both safely and legally provide the thing you are selling, you are no longer a legitimate company; you are a criminal enterprise profiting off of exploitation.
Sure, then they can go demand such standards for social media platforms, including an expected amount per N posts, just as car companies are not expected to have fatality rates of zero.
The fact is that simple scale means that there will always be something, no matter how abhorrent. Small scale doesn't change this, it just concentrates it.
Do car companies sell cars without air bags, or seat belts? What about cars that haven't been crash tested? What happens to them if they don't do this do you think?
Would you drive a car optimized for profit that didn't have those safety features? How about on a highway? Daily?
Here it's said that it's the users' fault. I disagree. Completely. Staying on topic, many of these companies have laid off the very employees who tried to prevent things like this.
When Ford DGAF with the Pinto (and GM with the Corvair), like tech companies do not GAF now, they deservedly got this same level of contempt and demand for oversight. A dude named Ralph Nader went on a huge crusade about it. And they got a ton more oversight, safety requirements, etc. put on them.
I voted for Ralph Nader a few times, until he stopped appearing on ballots for whatever reason. For this reason, and many others. I don't remember any negative press about him, either. Maybe he got out when mudslinging became de facto in elections.
I am not sold on the federated thing to solve CSAM or similar issues.
Actually, companies should be bullied about privacy and copyright so they are unable to share any content at scale with third parties. Then they'd have to solve it on their own and be forced to realize their business model is shit.
>CSAM on social media will be 100x suppressed because banning people is way easier on small servers.
No it isn't. Small servers often don't have paid security or moderation staff, are run anonymously, and have no profit motive that can even be used to incentivize them against hosting illegal content.
That's visible when it comes to porn. There are a million bootleg porn sites on the internet that show off illegal content. The only site that was ever forced to curate its content was Pornhub, because it is sufficiently large, operates in a jurisdiction that has laws, and can be held accountable. From a content-moderation standpoint, going after a million web forums is an absolute pain in the ass compared to going after Facebook.
This is the first argument any decentralization advocate brings up (and they're correct to do so): censorship is harder and evasion of law enforcement easier when dealing with a network of independent actors.
The one thing I can throw out here and add to this conversation is that I think the government simply does not care, either. It's mainly only in response to mass public outrage, or when someone is a political target, that it gets dealt with at the law enforcement level.
Anecdotally, when I was a young adult I was a volunteer moderator for a large forum. We got reports of CSAM several times a month and had a process for escalating and reporting it to the FBI IC3 - we retained a lot of information about the users that posted it.
One of the administrators of the website mentioned to me that over the years since the inception of the forum, they'd reported almost a thousand incidents of CSAM distribution - and the FBI followed up with them to get information less than 10 times in total.
The FBI is interested in busting perverts in closets. That's often how they work their way up the "supply chain" when it comes to CSAM. Consumers lead them to distributors, who lead them to producers.
A fair point. But it still seems reasonable that only about 1% of suspect posts lead to a formal inquiry. Doesn’t mean they aren’t taking the report into account. You have to figure that they already have leads on most of them.
Do we really have to give the benefit of the doubt to the agency that was literally running one of the largest CSAM distribution outlets in the world for years as a honeypot?
If you want to argue that the FBI is a fundamentally flawed agency that on balance is a net negative, I won't fight you that hard. But during the civil rights struggle, they were the only force that could be trusted at all.
Yes, that was 60 years ago. No one involved at that time is still there - and in fact, most of them have passed. I don't know why you think there's a shred of relevance there.
When I ran a fediverse server for myself and 3 people, but allowed public signups if someone came by, it was very easy to ban people, and very easy to null-route entire swaths of the fediverse, because I didn't want their content on my service.
That's more what I got from that pull quote. I know a company that has hundreds of individual forums, and those are all moderated quickly and correctly (last I heard). They're moderated so effectively that they often get DDoSed from Russian IPs for banning users who make scam posts from that country.
I feel like for Project 1 at least, the old dashboard is better than the new one.
The problem with a set of mutually conflicting laws like this is that good designers are able to intuitively understand which ones to ignore and which ones to use for a particular project.
Yes. But it consumes at least 10x-100x more resources to run a web app than to run a comparable desktop app (written in a sufficiently low level language).
The impact on people's time, money and on the environment are proportional.
> But it consumes at least 10x-100x more resources to run a web app than to run a comparable desktop app (written in a sufficiently low level language)
Does it? Have you compared a web app written in a sufficiently low level language with a desktop app?
Yes. I can run entire 3D games.... ten of them, in the memory footprint of your average browser. Even fairly decent-looking ones, not just your Doom or Quake!
And if we're talking about simple GUI apps, you can run them in 10 megabytes or maybe even less. It's cheating a bit as the OS libraries are already loaded - but they're loaded anyway if you use the browser too, so it's not like you can shave off of that.
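If you want to compare footprints yourself rather than eyeball it, here's a rough sketch using the psutil library (the process names below are just examples); it sums resident set size across all of an app's processes, since browsers split one session across many:

```python
# Rough sketch for comparing memory footprints, assuming psutil is
# installed. Browsers split work across many processes, so sum the
# resident set size (RSS) of every process matching the app's name.
import psutil

def total_rss_mb(name_fragment: str) -> float:
    total = 0
    for proc in psutil.process_iter(["name", "memory_info"]):
        name = proc.info["name"] or ""
        mem = proc.info["memory_info"]  # may be None if access is denied
        if mem and name_fragment.lower() in name.lower():
            total += mem.rss
    return total / (1024 * 1024)

# Example process names; substitute whatever you want to compare.
for app in ["firefox", "gimp"]:
    print(f"{app}: {total_rss_mb(app):.0f} MiB RSS")
```

One caveat: RSS double-counts memory shared between processes, including those already-loaded OS libraries, so treat the numbers as rough upper bounds.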
> Yes. I can run entire 3D games.... ten of them, in the memory footprint of your average browser.
What about QML, which uses Web technologies like CSS, JS and even basic HTML? The whole KDE Plasma 6 desktop is built around these technologies now, and I (and many others) consider it light and high-performance.
If you saddle those technologies with the full browser environment, then yes, it will get larger, but nothing requires you to do this, just as nothing requires providing your app as a full-fat Fedora install when a distroless container would have sufficed.
Plain JavaScript can be very fast while making relatively low resource demands, and the same is true of HTML and CSS. Many "plain desktop-native" applications end up reinventing their own variants of HTML and CSS in the course of designing the UI anyway.
It's better, but it's still quite bloated, to be honest. Linux is generally more memory-hungry than Windows because of how modular it is, and having no Win32 equivalent really hurts. Although they've started doing UI in React Native over there too...
Qt is much lighter than your Chromium-based stacks but all the waste kind of adds up.
"just as nothing requires providing your app as a full-fat Fedora install when a distroless container would have sufficed"
Containers are hungrier than running stuff on bare metal...
Yeah, React Native is apparently how Claude Code operates (even in the terminal), so it wouldn't surprise me to see it being useful in a native GUI context as well, if we can get more bindings than Skia.
> Containers are hungrier than running stuff on bare metal...
Containers are tremendously lightweight compared to VM. You might as well point out that running a full multiuser security-protected OS like Linux is hungrier than running on bare metal with DOS too. It's just as true, and even proportionally as true.
In any event a full Fedora container with all packages installed is going to be tremendously larger than a distroless hello-world "built" around Alpine, for instance, even though they both use container technologies. Same applies to Web technologies, you can certainly go and easily add a lot of waste using them but they are not themselves inherently wasteful.
I believe Firefox uses separate processes per tab, and most of them are over 100MB per page. And that's understandable when you know that each page is the equivalent of a game engine with its own attached editor.
A desktop app may consume more, but it's heavily focused on one thing, so a photo editor doesn't need to bring in a whole sound subsystem and a live programming system.
> You have to take a topic you find interesting and read all possible related work in it
This is definitely the wrong way of going about a research project, and I have rarely seen anyone approach research projects this way. You should read two or at most three papers and build upon them. You only do a deep review of the research literature later in the project, once you have some results and you have started writing them down.
The usual justification is that if you don't do at least a breadth-first literature review, you can get burned by missing a paper that already does substantially what you do in your work. I've heard of an extreme case where it happened a week before someone went to defend their dissertation!
Excuse my naivety, but isn't it good if the same results get proven in slightly different ways? This is effectively replication, but instead of just re-running the experiments, you also replicate the thought process by taking a slightly different approach.
It would be good (especially given the replication crisis), but historically, to earn a PhD, especially at a top-tier institution, the criterion is conducting original research that produces new knowledge or unique insights.
Replicating existing results doesn't meet that criterion, so unknowingly repeating someone's work is an existential crisis for PhD students. It can mean that you worked for 4-6 years on something the committee then can't or won't grant a doctorate for, effectively forcing you to start over.
Theoretically, your advisor is supposed to help prevent this as well by guiding you in good directions, but not all advisors are created equal.
The problem is that what the “hallowed institutions” are trying to do is extremely ridiculous: turn the kind of work that scientific geniuses did into something that can be replicated by following a formula.
It’s as if a committee of middle managers got together and said, “how can we replicate and scale the work of people like Einstein?”
> The problem is that what the “hallowed institutions” are trying to do is extremely ridiculous: turn the kind of work that scientific geniuses did into something that can be replicated by following a formula.
> It’s as if a committee of middle managers got together and said, “how can we replicate and scale the work of people like Einstein?”
Or are they trying to require enough rigor and discipline that, out of 100,000 people who want to be the next Einstein, the process washes out the 99,000 who aren't willing or able to do more than throw out half-baked 'creative' ideas and expect the world to pick them up and run with them?
There's only finite attention and money for funding research, so you gotta do SOMETHING to filter out the larpers who want to take it and faff around.
I think at this point the system has eaten its own tail a bit, but there's good reason to require some level of "show me" before getting given the money to run your own research.
For humanity? Yes, it's generally good. For that particular researcher's career? Not really. Who wants to pay for research into something that's already known?
My imagination was leaning more toward the educational side than the research side of university. I see how that wouldn't be appreciated by a patron, but when you get research grants, isn't the topic discussed before the research is started and paid for? That is also kind of the point of clearing topics with the chair-holding professor, who is expected to be experienced enough in the subject to know where the knowledge needs to be expanded.
Unless you're already an expert in the topic, a literature search is literally step 1, since you have to check whether your idea has already been done before.
That's where your supervisor comes in. In most cases, they should be an expert in the field, and guide you towards a useful and novel problem.
Moreover, I am not suggesting you don't look at other papers at all. But Google Scholar and some quick skimming of the abstracts and papers you find should suffice to check whether someone has already done the work. If you start fully reading more than a handful of papers, your ideas get locked in by what others have done, and it becomes way harder to produce something novel.
Wage theft is the largest form of theft by a wide margin. Everything from not paying people at all for contracted work, forcing people to work overtime without additional pay, structuring contracts/agreements in terms of bonuses that can never be attained with the insane performance requirements, to paying people late.
To be fair, in most states you don't even have to sue to recover back wages. You just file a report with the state labor board, who are empowered to bring legal action on your behalf.
Any technology from before your grandparents' time, and often your parents', is usually perceived as "not fancy", because those elders can't tell you in your childhood what life was like before that technology. In your lived experience, that technology was always there, and reading history later on doesn't change your emotional experience.
Disagree. There's lots of products and goods that have become less fancy as a result of changes in labor/material cost as industrialization ran its course and the old way is considered the fancy way.
Wood furniture joined with glue and pegs rather than inserts and screws. Solid wood furniture at all. Leather and natural fibers gave way to plastics. Ornate castings gave way to simple stampings and simple castings (where things are still cast).
I guess if we expand it worldwide that makes sense, though in a discussion about 96GB of RAM it feels like an apples to oranges comparison to bring in the entirety of the world. That is including a whole lot of people who probably couldn't afford the RAM or a car even if they saved most of their income for a decade.