Hacker News | jamilton's comments

"Remote Application

The Remote Uber and Hybrid are like a roulette computer in the cloud – it can be applied from anywhere with internet access. You don’t even need to enter a casino. You can have others play for you, who pay you part of their winnings. You determine who accesses your computer and when. The Hybrid computer even allows you to watch your teams play live with a hidden camera."

This whole thing sounds sketchy, but this is particularly sketchy.


That seems like a pretty reasonable subdivision to me. Culturally there are differences in what it means to be attractive as a male or female, so it follows that the effects could be different.

> Culturally

Surely you mean biologically


"Attractive" is something evolved as well though, so that just passes the buck. It's culturally determined as well of course.

Also interesting to consider how much "compute" humans have to spend to learn something like that. Like, do we need to see more examples when learning from pictures of cats and dogs than when seeing them in person? How many more examples? What if we're seeing them all in sequence, or spread out across hours or days?

I've probably seen... at least a dozen pictures of aardvarks and anteaters, and maybe even seen one of them at the zoo, but I don't think I could reliably remember which was which without a reminder.


If you see one picture of a zebra, fly to Africa, see a real zebra, you recognize it as a zebra. But zebras are really unmistakable.

If you see a picture of an oryx and a picture of a kudu, maybe you remember the shape of their horns and a picture is enough.

Enter waterbucks and steenboks. That starts to require a little more training.

Go all the way from mammals to insects. Bees and wasps and ants are still in the one-picture-is-enough category. But what species of ant do those on the wall of my house belong to?

I believe that ease of recognition depends on how much things stand out on their own. Anyway, we do use a fundamentally different way of training than neural nets, because we don't rebuild ourselves from scratch. Then again, birds and planes fly in totally different ways, but both fly. Their ways of flying are appropriate for different tasks: reaching a branch, or carrying people to Africa to look at zebras.


Humans can learn to recognize the difference between male and female newborn chickens. I'm not sure you can train an AI to do that, since we humans don't know how we tell the difference; we just learn how by practicing enough. It is a skill any human can learn quite quickly. It isn't hard, we just don't know how it works.

What levers are there, really? Waymo has a monopoly and it seems like they will for a while, so they have a lot of power, but all I really see them doing is making it expensive. Anything that makes the experience worse takes away from their ability to take market share away from Uber/Lyft.

Ads in the car.

Forced “safety breaks” due to the newly proven dangers of sitting in a car for more than 20 minutes. Taking place at our safety partner McDonald's.

Deliberately taking certain routes and encouraging you to stop at partner stores.

Making you pay rent for the self-driving.

Increasing the subscription costs continuously.


An obvious issue with the metaphor that comes to mind is that if you consider yourself to have a pretty good life, to be overall happy and satisfied, but you think it's possible to have an objectively much better life, then you'd rank yourself relatively low. And vice versa, if you think your life sucks but it could be much worse you'd rank yourself relatively high.

But, that is still giving a happiness score.

If the society/culture you are living within is well off, but swamped with cravings for something better, then you are less happy.

This study isn't trying to measure how materially well off you are; it's measuring happiness. So if you are unsatisfied and unhappy even with your big house, that still says something.


It’s also a culture score. Objectively we should all be very unhappy because we’re not all billionaires and can’t do whatever we want but culture tempers at what point you’re content.

The kind of a person who thinks they would be substantially happier as a billionaire would probably be unhappy as a billionaire. You get used to what you have, but there is always more wealth / status / power / influence to be had.

Same problem as rating your pain on the pain scale: is 10 the worst pain I've experienced, or the worst I can imagine? Because I've got a... very vivid imagination. And still, that's the best we can do. I blame an imperfect universe.

No it is not the best we can do. Like, just ask "are you happy?" instead of some convoluted scale.

Like, if there is no consensus on what the scale means, the answers will be too culturally dependent and random between individuals.

In my experience doing surveys, "was the food good?" after, say, a conference is way easier to interpret than scale answers.


I don’t think that assumption is being made, why do you think that? In terms of metaphor, training a model could be considered both knowledge acquired after birth and its evolution. But I don’t think it’s particularly useful to stay thinking in metaphors.

Technically, any market that's about someone doing something by a certain time can be an assassination contract, if you think the market will enforce it that way. They can't do it if they're dead.


I would assume it was manually coded.


They're all pretty common AI-isms.

