Hacker News: japoco's comments

This is probably just because the coins aren't actually fair. If the coin is slightly biased towards heads, the first throw is more likely to land on heads, and so are all subsequent throws. Same for tails.


That's the opposite of what the paper says. If the coin were biased toward heads, you'd expect it to land on heads more often regardless of which side it starts on. The coins land on the side they start on more often.


No. First of all, due to imperfections in the manufacture of real coins, there are actually no fair coins. Also, a bias in the probability affects the first throw as well as all the rest. If you split your dataset into first throws vs. the rest of the throws, you're going to see they are correlated.


I think you're missing the fact that you don't have to chain coin flips literally right after another.

As the other commenter said, in between coin flips, use a highly secure PRNG to orient the coin randomly. This would correct for your bias (if true).


You're missing the point.

A coin that is biased towards heads is one that would more often land on heads regardless of how you hold it when you start the flip.

The study finding is that every coin is more likely to land on heads if you start it with heads facing up, and more likely to land on tails if you start it that way instead. This bias, while small, is greater than the typical observed bias due to imperfections in manufacturing.

It's not about the "first throw" vs the "rest of the throws". It's about how you hold the coin when you go to flip it. That's what they mean by "started".


That's not the problem. You can test that by using a highly secure random number generator, e.g. /dev/random in Linux, to select the initial side. Keep track of that initial side, record the side it lands on. This paper shows a same-side bias, not a heads bias.
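A minimal sketch of the protocol described above, as a simulation. All names here are illustrative, and the 0.51 same-side probability is an assumed figure for demonstration, not taken from the paper: the starting side is drawn from Python's cryptographically secure `secrets` module, and the physical flip is replaced by a seeded pseudo-random draw.

```python
import random
import secrets

SAME_SIDE_PROB = 0.51  # assumed same-side bias, for illustration only


def flip(start_heads: bool, rng: random.Random) -> bool:
    """Return True if the simulated coin lands heads, given its starting side."""
    lands_same_side = rng.random() < SAME_SIDE_PROB
    return start_heads if lands_same_side else not start_heads


def run_trials(n: int, seed: int = 0) -> float:
    """Randomize the starting side securely, record landings, return same-side rate."""
    rng = random.Random(seed)  # deterministic stand-in for the physical flip
    same = 0
    for _ in range(n):
        start_heads = secrets.randbits(1) == 1  # secure choice of starting side
        if flip(start_heads, rng) == start_heads:
            same += 1
    return same / n


print(run_trials(100_000))  # hovers near 0.51, not 0.50
```

Because the starting side is chosen uniformly at random, this setup shows a same-side bias while the overall heads rate stays near 50%, which is the distinction the comment is drawing.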


A same side bias is either a heads bias or a tails bias.


How? I described how to randomize the initial side. Boolean true for heads, boolean false for tails, for example. Keep pulling those from the Kernel's secure RNG.


It's not, it's a bias towards which side the coin started on.


Which is either heads or tails.


A coin with a heads bias is more likely to land on heads no matter how it's thrown.

A coin with a same-side bias is more likely to land on heads if it's thrown with heads facing up, and more likely to land on tails if thrown with tails facing up.
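The contrast can be made concrete with a toy calculation (the 0.51 figure is an assumed number for illustration): a heads-biased coin ignores the starting side, while a same-side-biased coin favors whichever side starts up, so it shows no net heads bias once the starting side is randomized.

```python
def p_heads_biased_coin(start_heads: bool) -> float:
    # Heads-biased coin: landing probability ignores the starting side.
    return 0.51


def p_same_side_biased_coin(start_heads: bool) -> float:
    # Same-side-biased coin: favors whichever side faces up at the start.
    return 0.51 if start_heads else 1 - 0.51


# Averaged over a fair 50/50 choice of starting side, the same-side-biased
# coin has no overall preference for heads.
avg_heads = 0.5 * (p_same_side_biased_coin(True) + p_same_side_biased_coin(False))
print(round(avg_heads, 10))  # 0.5: no net heads bias despite the same-side effect
```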


If you take a specific coin and find that when you prepare it to be flipped with heads showing up it is more likely to land heads up, and that when you prepare it to be flipped tails up it is more likely to land tails up, it seems confusing to call that coin 'heads-biased' or 'tails-biased'.


I was pretty intrigued by Aphantasia a while ago, as I can’t picture anything at all with my eyes closed. Then I asked all my friends and none of them could either, apparently. So I’m wondering what “picturing” means in the definition of aphantasia? With my eyes closed all I see is pitch black, but I can “imagine” myself seeing a red apple even with my eyes open, I don’t actually see anything though.


Consider another sense, like hearing. Many people experience "earworms" where a song gets stuck in their head and plays repeatedly. They know it's not actually playing since there's no "external" sound but they can hear it "internally".

"Picturing" something in your head is the same, just with the sense of vision instead of the sense of hearing.


Actually seeing with your eyes would (I think) be a form of synesthesia. Being able to imagine a red apple is "normal". Not being able to imagine a red apple is aphantasia ("imagine" in the sense of a "visual" imagination, not in the sense of being able to conjecture the existence of an apple with particular qualities).


Does it follow that people with aphantasia (edit: "aphantasics", per the article) would be unable to draw a realistic-looking apple from scratch? If not, then how do scientists show someone has aphantasia? Is it falsifiable?


Prof Joel Pearson has developed three distinct objective tests to measure aphantasia. Here is a talk about it: https://youtu.be/tA_4HNaKsS0


Interesting, thanks. I'll have to watch it when I find the chance.


In addition to the other replies: No, aphantasics (nice word!) aren't unable to do it, much like almost anyone can become amateur-level competent at almost anything if they put in enough effort.

But it's a matter of talent, and you're missing a big component. That can be made up for in other ways, though I think it'd be hard to reach the peak.


Not at all - you can still see the paper and know what an apple is supposed to look like. Describing a face or drawing a scene from memory is very hard though.


That doesn't really make sense to me. What does it mean to "know what an apple looks like" without being able to imagine it? How would that be any different from knowing what a face looks like without being able to imagine it? Do note I said realistic apple [1], not just a cartoonish drawing, so I don't just mean "a squished circle"...

[1] Example: https://drawpj.com/wp-content/uploads/2018/05/hyperrealistic...


Why would I have to visualize to know whether or not something is an apple? I can recognize one on sight without having to match it up with a visualization in my head, so I can start from the right shape and add details until it becomes an apple. No visualization required at all. Obviously it's quicker and easier to use a model or reference picture, but not required.


I guess I don't see how that'd be different from drawing a face? Start with the right shape and add details until it becomes a face?


I can draw a generic face, but not a specific one unless I have a model or picture. If I had to give a description of someone, even family members or close friends, I would be hopeless other than very basic things like relative height, hair and skin color.


Aphantasia and face blindness are correlated it seems like.


In my experience with aphantasia, I struggle with the ability to describe people based on their physical attributes (my parents for example) but I think it's distinct from face blindness. I can easily recognize people that I know purely based on their faces, but I don't believe that people with face blindness could. I just can't visualize what they look like mentally.


That's so fascinating, thanks. Does aphantasia give you any trouble in your daily life? Or does it end up being a non-issue?


It's a non-issue. I never even realized that it was a thing until several years ago, when I was listening to a podcast that involved discussing mental monologues and imagery, thinking "WTF are these people talking about?!", and then doing some research. I had previously always understood things like "mind's eye" and inner voice/conscience as metaphors or some kind of mystical superstition.


Same here. I never realized until I read an article about it, well into my forties. I read the late Wittgenstein when I was twenty, and I also thought that thing about "the meaning of a dog is a mental image of a dog" was a metaphor. He quotes this somewhere to criticize it, iirc.


> I was listening to a podcast that involved discussing mental monologues and imagery and thinking "WTF are these people talking about?!"

But you claim to not have an inner voice?

Boys, I think we've got one.


I don't think in words unless I'm reading or writing, and "thinking 'WTF are these people talking about?!'" is just a metaphor for incredulity (how else am I supposed to get that feeling across the internet, through text?). I especially don't have an independent, always-on commentator talking to me in my head all the time, which is what I gather "inner monologue" is.


Inner monologue is pretty much exactly that: the ability to internalize sound and voice that is not hallucinatory or accusatory. It's literally, at a very simple level, being able to think of yourself saying something and, as you think it, hear it. On the other end of the spectrum you can make basically any sound or voice audible in your head at will.

I am on that far end of the spectrum where I could just make anything happen in my head visually or auditorily.


I don't think you have it if you can imagine something.

I don't think it's meant to be in that dark space / visual eye space.


It's definitely not a black-and-white thing but a (flexible) scale: a noticeable variation in intensity can be felt when practicing an activity that demands intense visual focus on a specific object (e.g. painting); a stronger-than-usual visual image can then be recalled effortlessly, at least for a few days.


I don't think it has anything to do with your eyes being open/closed, or even to do with your eyes at all, unless it's describing something different to what I assume. It's about mental images and visualization, not your field of vision itself.


The problem is asking people to close their eyes. Most visualizers don't need to close their eyes to visualize, and many state that they can visualize even better with them open. Everyone sees some form of black/Eigengrau when they close their eyes.


ask this questionnaire to a range of people, including some visual artists / designers:

close your eyes, think of a family member, who is it, where are they, what are they wearing, can you see details about the clothing, can you see details in the background, is there motion, if you open your eyes can you still see it

there will be some very strong yeses in there if you sample people in visual professions


Even designers who can visualize report different representations and experiences: https://aphantasia.com/article/strategies/visualizing-the-in...


Then you don't have Aphantasia. Very few people are claiming they literally see things, they can just conjure up a mental model of something by thinking about it. The weirdness is that some people (those with Aphantasia) are claiming that they can't even do that...


I think art is much easier for LLM-style AI models than writing. To make a nice picture you just need to place pixels near each other in a way that looks good, and we all know these models are phenomenal at this. Good text, on the other hand, is not just text that has a good flow and fits the prompt. It must follow a line of thought, and LLMs don't do that by design; we could argue whether they have that capability as an emergent one, but I don't believe that at all.


Someone on twitter pointed out that their dating methodology is faulty. Carbon dating makes sense only if used on organic materials in a human context. For example, dating charred remains in a hearth with human artifacts in its vicinity. It seems like the authors of the paper just took a sample from the ground and dated the organic material in it, which doesn't make much sense. I can't find the original tweet atm but I hope I've been clear enough.


Carbon dating is just fine in non-human contexts. It's commonly used to date non-anthropogenic extinctions, for example. You do need to have a theory about the formation processes that led to the sample and how they relate the sample to the topic under study though.

The dating in this paper is definitely questionable. For one, the extent of their analysis seems to have been taking samples, sending them to the lab (which could have widely varying error checking, I haven't worked with this one specifically), and using stock date calibration software. Unfortunately, they're sampling an area known to be volcanic (which tends to produce older-than-true dates), with lots of water (matter transports through soil), across a difficult boundary (the Holocene), and a lot of vegetative intrusion (another common error source). They attempt to dismiss the latter by saying it can only make dates younger, which isn't even true, only typical.

The headline would be a tough argument to make even if their evidence was good given the prior history here, but they don't seem to have put even basic effort into it.


You are severely underestimating how good vibes from a lot of people are at giving good estimates.


They are actually only good if the members of that crowd have some sort of empirical experience with the problem they are being asked to solve. Guess the number of coins in a jar? People know coins and have experience packing things into a limited volume, so the crowd has a hope of being wise. Guess an obscure materials-science and physics result? Not a chance; the crowd is worthless.


> You are severely underestimating how good vibes from a lot of people are at giving good estimates

Does anyone else remember how people were selling and buying doge coin at 70 cents based on good vibes? No? Ok.


There has to be _some_ level of knowledge to base it off though, this is just hope.


Lying needs intent. ChatGPT does not think therefore it doesn’t lie in that sense.


That's like saying robots don't murder - they just kill.


Which is actually a very good analogy. A lot of things can kill you, but only a human can be a murderer.


In movies and written fiction, "intelligent" robots, anthropomorphized animals, elves, dwarves, etc. can all commit murder when given the attributes of humans.

We don't have real things with all human attributes, but we're getting closer, and as we do, "needs to be a human" will get thinner as an explanation of what can or can't commit an act of murder, deception and so forth.


And pit bulls, but I digress. The debate gets lost in translation when we start having a "what do words mean" debate.


This is an interesting discussion. The ideas of philosophy meet the practical meaning of words here.

You can reasonably say a database doesn't lie. It's just a tool, everyone agrees it's a tool and if you get the wrong answer, most people would agree it's your fault for making the wrong query or using the wrong data.

But the difference between ChatGPT and a database is that ChatGPT will support its assertions. It will say things that support its position - not just fake references but an entire line of argument.

Of course, all of this simply duplicates/simulates what humans do in discussions. You can call it a "simulated lie" if you don't like the idea of it really lying. But I claim that in normal usage people will take this as "real" lying, and ultimately that functional meaning is what "higher", more philosophical definitions will have to accept.


Somewhat unrelated, but I think philosophy will be instrumental in the development of actual AI. To make artificial intelligence, you need to know what intelligence is, and that is a philosophical question.


Merriam-Webster gives two definitions for the verb "lie". The first requires intent, the second does not:

> to create a false or misleading impression

> Statistics sometimes lie.

> The mirror never lies.


As with text, the current form of AI (generative AI) will be very good at creating music that sounds good, not good music.


I agree, but one lesson I've learned from fifty years of music obsession is that people mostly don't like good music.


Ok fair, but most of the work of music creation is in making it sound good. I suspect the right combination of generative AI plus a little human guidance at the conceptual level should result in music that is good, with minimal effort. And to think I wasted all those years improving my piano ;-)


>unlike in Asia where the huge books actually use winning accounts to improve their line and make more money

This is true but it should have a big footnote. Disregarding Asian bookmakers that like to void winning bets from time to time for no apparent reason, Pinnacle, which is the book you’re referring to, lets winning players bet at such low limits that making money off them is an excruciating process.


I was more thinking of SBO. They do have low limits on unpopular events, but syndicates are doing $1m/match with them on soccer...they are prepared to take risk.

You have to understand that the limit is a function of the volume they take after you set their line. That is how the economy works. Your profit is paid out of their profit from squares after the line is set. So if they aren't doing volume after the line moves, then they are just going to leave the limit low.


Chrome on iOS is basically a WebKit skin, so it's still Safari (at least for now; apparently both Google and Mozilla are developing ports of their browser engines for iOS).


This is not what is happening here; the USDC peg to the dollar has nothing to do with this. The problem is that Binance converts every stablecoin you deposit there to BUSD, so if you want to withdraw, say, USDC or USDT, they'd have to convert it back, which is apparently what they're having trouble doing.


> the USDC peg to the dollar has nothing to do with this.

I never said it did.

> The problem is that Binance converts every stablecoin you deposit there to BUSD

The real problem is that Binance can do whatever *they* want with your deposits --- including refuse to return them.

The real problem is that the crypto market is about as far removed from *trustless* or *decentralized* as is imaginable or possible.

The real problem is that people who say they don't trust government will readily trust FTX or BlockFi or Voyager or Celsius or Binance or Bitfinex --- all of which have been shown to be far less worthy of it in my opinion.

