Hiring is now by filter. Corporations do not try to hire good people; they just avoid hiring questionable people. In other words, they are looking for the lowest common denominator.
If you do not think this is true, then ask yourself whether the company is attempting to use AI. THAT IS WHAT THEY WANT AND VALUE. The safer and easier a hire you are, the better off you will be.
So yes. You were probably hired because you are not a super genius and because you don't have a fancy company name. Not despite it, but because of it.
The question I have is why do I now think many corporations are "too stupid to succeed"? I know they will not fail, but the panicky rush for the supposed safety of AI is stunning.
This measures fatherhood in terms of time spent with children. I question whether that metric is of any value whatsoever. Is a farmer a better farmer because he/she spends hours in the field? Or is the correct measure of a farmer the crops?
This article, the place it has on Hacker News, and the quality of the "comments" raise serious questions for me about Hacker News as a whole: the moderation, the readers, and the mechanism.
My complaint is not that this kind of thing exists. My complaint is that something better does not.
> Is a farmer a better farmer because he/she spends hours in the field? Or is the correct measure of a farmer the crops?
At the risk of stating the obvious, crops do not have the ability to notice whether or not the farmer spends time with them. If you think that a child won't notice that one of their parents doesn't spend time with them and will be affected by it, I don't know what to tell you.
To make the analogy proportional (according to the article), the difference would be something like "do they notice the difference between 1h 40m and 5 hours" (i.e. 3x more). My money would be on yes, they'd very much notice.
When it comes to kids, quantity has a quality all its own. Yes, there are better and worse ways to spend time with kids, but between engagement, enrichment, play, laundry, cooking, feeding, changing diapers, etc, there's just an immense amount of time to fill and work to do. By these metrics, doing the dishes probably isn't counted as "parenting", but since it lets your partner spend time with the kid, or rest and recuperate, it's a good proxy.
If you don't believe me, fold a load of laundry the next time you visit a friend with little kids. Or play with their kid for half an hour so the parent can let their guard down for a bit. It has an incredible impact.
Yes yes the goal of life is to flourish and this metric doesn't measure flourishing directly so what's the point? And indeed is the fact that we talk about observable metrics rather than whatever else I had in mind not an indictment of this forum, nay, society at large?
Meta said the contracting "did not meet (Meta's) standards". I am sure that is true. Meta's "standard" is not to reveal the illegal, immoral, and unethical things Meta does, no matter what the harm.
Maybe a company with those standards should not get our business. Oops, no wait, maybe they mean the Friedman Doctrine standards? In that case they are entitled to do any and every thing to make a profit. No matter what the harm.
I used to work for Meta. I quit largely because of intense frustrations with the company. Meta has made a lot of mistakes, overlooked a lot of harms, and made a lot of short-sighted, selfish choices. Many things about the world are worse than they could be because of choices Meta has made.
So when I say that they really do have a zero tolerance policy for anyone using their internal systems to violate user privacy, it's not because I'm eager to defend them. It's just true (at least, it was when I was there). There are internal systems dedicated to making sure you have access to what you need to do your job, and absolutely nothing else. All content you interact with through internal tools is monitored and logged. If you get caught trying to use whatever access your job gives you for anything other than doing your job, security immediately escorts you out of the building. This is drilled into new hires early and often. For everything Meta gets wrong, they really do take this seriously.
These contractors were hired to view this data. Your defense of Meta here doesn't make sense. Meta fired them for speaking out about the data Meta collects, not because they saw the data they were hired to look at.
Meta didn't fire individual independent contractors, they terminated a contract with a vendor. It's possible they did so because some of the vendor's employees spoke out but we don't know the real reason.
(I do think these smart glasses are super creepy and I'm not defending Meta's data collection practices.)
We know the course of events. We have brains and can reason. You really expect Meta to come out and say "Yep, we fired them because they blew the whistle"?
> I'm not defending Meta's data collection practices
No, but you certainly seem to be over here quibbling about epistemology in defense of Meta.
The problem is that your comment and the one you're responding to can both be true: Just because the rules are heavily enforced does not mean the right rules are in place, starting with the fact that Meta is collecting this data to begin with.
> starting with the fact that Meta is collecting this data to begin with.
But that can't be the problem. They're collecting the data that users send them. To avoid collecting it despite the expressed wishes of the user, they'd need to be able to recognize it as untouchable.
And recognizing the data is the exact problem that this African firm was hired to help with. What do you want Meta to do?
> To avoid collecting it despite the expressed wishes of the user, they'd need to be able to recognize it as untouchable.
> And recognizing the data is the exact problem that this African firm was hired to help with. What do you want Meta to do?
This is written as if logically exhaustive, but it misses the very obvious alternative that none of these videos should have been reviewed by a human at all (aka no reason to "recognize it as untouchable"; they're all untouchable).
If you want to get stricter and talk about collecting at all, Meta already has that solution too, by leaving the video in the user's camera roll. Let the user manually add the video to the Meta AI app or whatever if they want to share it with others there.
> This is written as if logically exhaustive, but it misses the very obvious alternative that none of these videos should have been reviewed by a human at all (aka no reason to "recognize it as untouchable"; they're all untouchable).
No, taking that approach would mean that when someone sends you data that you aren't supposed to collect, you collect it anyway. This is the opposite of what was suggested above.
> No, taking that approach would mean that when someone sends you data that you aren't supposed to collect, you collect it anyway. This is the opposite of what was suggested above.
That was in reference to the original story, that human annotation is happening on videos that no one knew were getting reviewed. If you want to talk about not collecting at all, well:
> If you want to get stricter and talk about collecting at all, Meta already has that solution too, by leaving the video in the user's camera roll. Let the user manually add the video to the Meta AI app or whatever if they want to share it with others there.
Ok, let’s see that consent form and how explicitly it states that random call center people will possibly look at anything you record. I’ll bet you a crisp $50 it was a form designed to be as click-through-worthy as possible, being sure to not trigger the “wait, should I do this?” reflex in users, and also not loudly disclosing that you could still use the device without agreeing, if you even can, while still technically “””disclosing””” this information. The tech world has turned consent into a fucking joke.
Right. The whole point is that click-through consent forms get users' "clear" "consent" legally, but not morally. They're deliberately opaque about the implications (ask 10 users if they consider recording a video on a device voluntarily 'sharing' it with anybody and I'll bet 9 will say no), are pretty inscrutable to regular people, are designed to not raise suspicions like a social engineering attack, often mean not being able to use the product they just bought if they don't consent (which is manipulative as hell when you're talking about inessential functionality like telemetry), and extremely consequential. The only evidence you need for that is how pissed off people get when they find out what these companies actually do with that consent.
There's no allegation that these workers abused their access. The allegation is that their routine work reviewing footage included private content. The revelation is that USERS are using Meta glasses non-consensually.
> Many things about the world are worse than they could be because of choices Meta has made.
If Facebook were designed with a different set of incentives that prioritized the user, fostered positive engagement, and better respected individual's privacy and data sovereignty - setting a better standard for the whole industry - I feel there wouldn't be all this fuss today about banning social media accounts.
Indeed, on this one point, Meta has higher standards than the NSA used to have - Snowden mentioned that employees tracked their current wives/girlfriends so often it unofficially got the codename LOVEINT.
Same for "Meta reads your E2E whatsapp messages". Meta does many things, is probably massively net negative for civilisation, but it doesn't do that.
Anecdotal of course, but I heard that this wasn't at all the case circa 2006 and that (then) FB employees would routinely read private messages and such. Obviously it wasn't a big company yet and probably didn't have those policies yet... (clearly the policies are there for a reason...)
That’s my recollection too - there were some high profile cases and so institutional safeguards were established. They very well may be at the forefront of it - however, it’s a side issue to what’s being discussed.
As someone who worked for a contractor which had Meta as a client, I disagree.
All advertiser support agents were given super-read on all profiles & pages, and I never once observed a CSR being questioned on their use of this access in any way.
> I used to work for Meta. I quit largely because of intense frustrations with the company. Meta has made a lot of mistakes, overlooked a lot of harms, and made a lot of short-sighted, selfish choices. Many things about the world are worse than they could be because of choices Meta has made.
You're still on the koolaid, as many replies here accurately point out. Saying it's not because you're eager to defend them is lying to yourself, because you're smart enough to think of most of these replies yourself. Primarily the fact that these are contractors whose entire job is to watch smart glasses footage, and the point you're bringing up - even if we take it at face value - is completely irrelevant to this post.
If you truly want to atone for your sins, you have a long way to go. I don't blame you for having worked there; I've worked at places that are only a little better than Meta (which is hard considering Meta is at the absolute bottom of the entire ladder, including Peter Thiel companies, thanks to Meta's sheer scale of carnage). But it's time to completely come to terms with the reality, rather than stopping halfway to try and feel better about your resume.
Yea but no. Meta is a defense contractor that hires out to 3rd parties exactly to do this. So you guys don't get to do that, but a lot of other people do. I hope that helped you sleep at night while you were there. But yea, it all gets bought and sold at the end of the day.
The irony is Meta wants to implement verification to protect kids. Meanwhile it's doing everything it can to exploit them at every single level, for profit and for the love of the game. Billions of dollars and the world's most advanced computers, all dedicated to it.
> At the time of the publication, Meta admitted subcontracted workers might sometimes review content filmed on its smart glasses when people shared it with Meta AI.
They just got fired for "piercing the veil". They committed the sin of bringing attention to the invasion of privacy.
If you don’t disable the glasses they could continue to share content. The article describes the glasses being left on a dresser and then sharing content of people without their consent, which could easily parallel into showing a sexual encounter or other privacy-sensitive scenarios.
Sure, and the same is true with my iPhone or my Olympus. Except the former encrypts the video and the latter isn't internet-connected.
The problem here (other than Meta being Meta) is people assuming Meta isn't permanently operating in bad faith. I'm just surprised anybody into tech to the extent they'd buy first-gen VR glasses would be surprised at Meta doing Meta things. That's all, I guess.
Unfortunately in today’s world where organizations are larger than many a country’s GDP, they really only have to face responsibility towards shareholders and maximizing profits is the thing they usually care about.
That's not what the Friedman Doctrine is, technically. It is that management should obey moral, ethical, and legal frameworks in the operation of the business for the benefit of its investors; and specifically NOT take actions which are outside of that narrow scope.
Does that include trying to influence moral, ethical, and legal frameworks to the benefit of the investors as well? Because if it does it is kind of a moot point.
Yes, although as the Koch Brothers point out in their book: you have to play by the rules that exist, not the rules you want.
If you read, e.g., Buffett, he makes the point that a manager donating to a political cause, whether the Heritage Foundation or, God forbid, something as far right as the SPLC, makes that donation with money that otherwise accrues to the shareholders. The manager therefore creates an agency problem, where he might pursue his own interests at the expense of the owners.
If they are aligned, the manager can retain the earnings and create a dividend for the owners, such that they can then make the donation directly. If they are not aligned with the owners, they are redistributing wealth.
I am not surprised that the Left advocates for backdoor wealth redistribution, but I would prefer they be honest about it.
> I am not surprised that the Left advocates for backdoor wealth redistribution, but I would prefer they be honest about it.
I'm pretty sure it's not just the Left team that advocates for bribes (sorry, lobbying) to politicians. I don't think that's a very commonly held understanding of wealth redistribution either... but this argument you present isn't very coherent, which is somewhat expected, so I guess keep on keeping on.
Is it illegal or immoral? Having Meta review this material has to be approved by users, and it has their consent.
There was an example in the article where a user’s glasses kept recording the user’s wife after he took them off. That’s bad but on the user, not Facebook.
Seems similar to a situation where someone takes nudes of someone without their consent and then sends them off to a lab to be printed. The lab isn’t doing anything illegal or unethical printing them when they ask the user “are these legal” and the user replies “yes.” Unless you want to stop photo printers from ever printing nudes, I think the responsibility is on the user, not the firm.
You are not helpless in these situations. You have a legal right to take action, appearing pro se, so it costs you almost nothing. Our legal system has degenerated into a medieval class system of trial by combat. Corporations can sue you; small corporations and users do not have a symmetric ability. It is like challenging a (dark) knight with armor and a very sharp sword to combat. You will lose. But here is the thing: if people start challenging, it is going to cost them a lot of money to field that knight. Think of this like drone warfare against Russian tanks. Be the drone. If GoDaddy has to field a lawyer for stuff like this, they will have the financial motivation to provide support.
While you could use small claims court, you have to be careful about your ability to appeal and to obtain evidence. In this case you are clearly aggrieved and AI should be able to help you draft a cease-and-desist letter.
Oh, and I have to include a disclaimer that this is not legal advice, that you should pay lots of money to get advice, etc., or some dark knight will show up at MY door.
Do not be helpless. You have the right to take legal action. Knowing how to file a case pro se is a useful skill that every citizen should have. (Oops, that is not legal advice either!)
The moral of this story is: it is human nature that when we have something, we do not want to lose it. This is an entirely different paradigm from how we act when we do not have something. It explains why the wealthy are so toxic. Their only goal in life is not to lose what they have.
I worked at a well-respected technical company and was given the task of evaluating a small company that we could acquire. I looked at the technology - something anyone could put together in a day. I looked at the business model. It was that you get free storage if you get a friend to sign up for free storage!!
I told the company that it had no technology and a business model that made no sense. They bought the company. Why? Because the target company told them that other companies were interested - and they were.
They did not want to miss the boat and lose what they had. Nothing came from this acquired company. Meanwhile the fundamental technology was disrupted by something new and the company fell apart. End of story. This is common.
So AI? This is about not missing the boat. Someplace, somewhere there is value in AI, but for now, if you have missed the boat you are probably better off. So no, this is not (as the current top comment says) about "they couldn't sell their software". This is about a very real reason why companies try to not miss the boat rather than innovate.
[ASIDE] And I cannot help but laugh at the Clojure reference with the statement "two things are simple if they are not intertwined". I have always been interested in Clojure, but I never go there because it is not "simple". It is intertwined with Java, which I know all too well and do not love. Java was the language of choice at this same company and I wasted too many months of my life bowing before that cumbersome language.
Commenting on the aside: that was my first reaction as well (years ago). But really you can treat it mostly as having a mature runtime and freebies and get a lot out of the language. Many who use and like Clojure, don’t necessarily like Java the language, or have similar reservations like you.
When I first read about transducers I was wowed. For example, if I want to walk all the files on my computer and find the duplicate photos in the whole file system, transducers provide a conveyor-belt approach. Whether there are savings in terms of memory or anything else, maybe. But the big win for me was to think about the problem as pipes instead of loops. And then if you can add conditionals and branches it is even easier to think about. At least I find it so.
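To sketch roughly what I mean in Clojure (the extension regex and the `file-hash` helper are my own placeholders, not anything real; a proper version would hash the raw bytes):

```clojure
(require '[clojure.java.io :as io])

;; Placeholder content hash; a real version would hash the file's bytes
;; (e.g. MD5/SHA) instead of slurping it as text.
(defn file-hash [^java.io.File f]
  (hash (slurp f)))

;; The "conveyor belt": each stage is a transducer, composed with comp,
;; and no intermediate collections are built between stages.
(def photo-pipeline
  (comp (filter #(.isFile ^java.io.File %))
        (filter #(re-find #"(?i)\.(jpe?g|png)$" (.getName ^java.io.File %)))
        (map (fn [f] [(file-hash f) (.getPath ^java.io.File f)]))))

;; file-seq walks the tree, the pipeline filters and maps in one pass,
;; and group-by surfaces the duplicates.
(defn duplicate-photos [root]
  (->> (into [] photo-pipeline (file-seq (io/file root)))
       (group-by first)
       (filter (fn [[_ entries]] (> (count entries) 1)))
       (map (fn [[h entries]] {:hash h :paths (mapv second entries)}))))

;; (duplicate-photos "/home/me/Pictures")
```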
I tried to implement transducers in JavaScript using yield and generators, and that worked. That was before async/await, but now you can just `await readdir("/")`; I'm unclear as to whether transducers offer significant advantages over async/await.
[[Note: I have a personal grudge against Java and since Clojure requires Java I just find myself unable to go down that road]]
I think, like with the rest of Clojure, none of this is "revolutionary" in itself. Clojure doesn't try to be revolutionary, it's a bunch of existing ideas implemented together in a cohesive whole that can be used to build real complex systems (Rich Hickey said so himself).
Transducers are not new or revolutionary. The ideas have been around for a long time, I still remember using SERIES in Common Lisp to get more performance without creating intermediate data structures. You can probably decompose transducers into several ideas put together, and each one of those can be reproduced in another way in another language. What makes them nice in Clojure is, like the rest of Clojure, the fact that they form a cohesive whole with the rest of the language and the standard library.
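To illustrate that last point with a toy example of my own (nothing here beyond clojure.core): the same composed transducer plugs into several different contexts unchanged.

```clojure
;; One transducer, defined once, independent of any source or sink.
(def xform
  (comp (filter even?)
        (map #(* % %))
        (take 3)))

;; The standard library lets the same xform drive different processes:
(into [] xform (range 20))        ;; => [0 4 16]  eager collection into a vector
(sequence xform (range 20))       ;; => (0 4 16)  lazy sequence
(transduce xform + 0 (range 20))  ;; => 20        reduction with no intermediate collection
(eduction xform (range 20))       ;; reducible view, recomputed each time it is consumed
```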
Yes, this is the one I wrote about. I used it quite a bit a long time ago to get more performance. I remember vaguely that the performance was indeed there, but the package wasn't that easy to use and errors were hard to debug.
I believe at one point you could run ClojureScript without Java, but either I was wrong or that version is no longer available. As far as I can tell both Clojure and ClojureScript require Java as of now.
I was hoping that at some point one of those would get to be self-compiling and so would not require Java, but that seems not to be the case?
I started reading Byte when I had no way to understand what it was talking about. There were technical terms that I simply had no reference for. What the heck is an assembler?
I suppose it was an example of immersion language learning because after devouring the magazine for months it started making sense. I knew it was about something I wanted to know.
I also have used DO for years, and was very happy with the quality of their service, until I looked at the alternatives' prices. They are not as easy to use, but offer much better performance at much lower prices.
The introduction lost me. To quote: "Japan’s vast railway network", but it does not address the mouse in the room. Japan is approximately the size of California with a population density that is three times that of California. I would argue that a comparison of rail systems without addressing those critical issues may be interesting but isn't really informative. The issues are complex.
France has a density (pop/km²) of 122; similar-density countries include Poland, Azerbaijan, Sierra Leone, and Egypt (how consistent are the rail systems across those countries?). The US has a density of 37.
California specifically has a density of 94 (about 23% lower), which puts it near Spain, Timor-Leste, Moldova, and Cuba. California is doing OK for its position.