There are an awful lot of referrer links in this article. That's not a bad thing by itself, but it always makes me ask the question - did this person write the article they wanted to write and then add referrer links, or did they decide to put in a load of referrer links and then build an article around them?
Is this true, though? Does a judge's salary depend on him not understanding the Fourth Amendment? Does a policeman's salary depend on him not understanding appropriate search warrant tactics? I understand most people disagree with law enforcement's decisions in this case (I do too).
But the only people who seem to refuse to understand the circumstances are those whose righteousness depends on portraying the police as cartoon villains.
That article is wrong on quite a few key points, and misleading on others.
> Mark will deduct the fair value of his gift to his foundation from his taxable income in the year he makes the donation.
No, he won't. The new foundation is an LLC, not a charitable foundation, so he's not eligible to take a tax deduction on it. And even if he were -
1. Giving away 99% of your wealth to save tax on the income from the other 1% is a really dumb way to save money (see the quick sketch below), and
2. They don't have very much taxable income anyway - just their salaries, and any capital gains they realise if they sell any Facebook stock.
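To put rough numbers on point 1 - a back-of-the-envelope sketch, assuming a hypothetical 39.6% top marginal rate and ignoring deduction caps entirely:

    # Back-of-the-envelope for point 1. All numbers are hypothetical,
    # and real-world AGI caps on deductions would only make this worse.
    wealth = 100.0
    donated = 0.99 * wealth            # the "99% pledge"
    top_marginal_rate = 0.396          # hypothetical top income tax rate
    max_tax_saved = donated * top_marginal_rate
    print(f"given away: {donated:.1f}")                # 99.0
    print(f"tax saved, at most: {max_tax_saved:.1f}")  # 39.2
    # Net position: down ~59.8 - "saving money" this way is a net loss.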
> Mark Zuckerberg will transfer ownership of his Facebook stock without paying capital gains taxes.
Well, yes - you pay capital gains taxes when you sell something and realise a profit. Since he is not selling his shares, he is not realising a profit, and therefore is not liable to pay taxes on them. Nothing wrong with that!
Presumably you think that the people running the study didn't think of this?
Here's a quote from a New York Times article about the project -
> In the second year of the tournament, Tetlock and collaborators skimmed off the top 2 percent of forecasters across experimental conditions, identifying 60 top performers and randomly assigning them into five teams of 12 each. These “super forecasters” also delivered a far-above-average performance in Year 2. Apparently, forecasting skill can not only be taught, it can be replicated.
So the answer to the question "What is the probability any one member of the group correctly guesses the result of the next coin toss?" appears to be "reasonably high".
My interpretation, in layman's terms, of what this paper has proved -
Take an infinite sequence of +1s and -1s, for example:
1, 1, -1, 1, -1, -1, -1, 1, 1, -1, ...
You get an evenly spaced subsequence by starting on the nth element, and picking every nth element after that (and stopping after finitely many terms). So, for example, we could pick every 2nd element of this sequence and get
1 (stopping after 1 term)
1, 1 (stopping after 2 terms)
1, 1, -1 (stopping after 3 terms)
1, 1, -1, 1 (stopping after 4 terms)
1, 1, -1, 1, -1 (stopping after 5 terms)
The discrepancy of an evenly spaced subsequence is obtained by adding together all the members of the subsequence and taking the absolute value. So the discrepancies of the sequences above are 1, 2, 1, 2, and 1.
The challenge is to find an evenly spaced subsequence with as large a discrepancy as possible. For example, in a given sequence, can you find an evenly spaced subsequence with a discrepancy of 10? Of 100? Of a million?
The paper has (apparently) proved that for any sequence, there is no upper limit to the discrepancies of evenly spaced subsequences, i.e. no matter how large the discrepancy of a subsequence you have found, there is always one larger.
The amazing thing about this result, to me, is the fact that it holds for any sequence of +1 and -1. Even if you try to engineer a sequence whose subsequences all have very small discrepancy, in some sense "there isn't enough room". You are always doomed to come up with a sequence containing evenly-spaced subsequences of arbitrarily large discrepancy.
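Here's a minimal sketch of these definitions in Python (the function name and the 1-based indexing convention are mine, not the paper's):

    # `seq` is a list of +1/-1. The subsequence takes every d'th element,
    # stopping after `length` terms; indices are 1-based in the maths,
    # hence seq[d*k - 1].
    def discrepancy(seq, d, length):
        return abs(sum(seq[d * k - 1] for k in range(1, length + 1)))

    seq = [1, 1, -1, 1, -1, -1, -1, 1, 1, -1]
    # Every 2nd element, stopping after 1..5 terms - reproduces the
    # worked example above:
    print([discrepancy(seq, 2, n) for n in range(1, 6)])  # [1, 2, 1, 2, 1]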
It seems it would come down to an under-sampling problem. You should be able to weave through the sequence any way you want, but if you weave through at a frequency different from (i.e. lower than) the frequency needed to sample the small discrepancy, you will begin to measure artifacts of the "original signal" (i.e. your small-discrepancy subsequences).
Interestingly, with Joe Biden in the range (10%, 13.5%) to win the nomination, and (7.7%, 8.3%) to win the Presidency, the lowest his electability could be is
7.7% / 13.5% = 57%
and the highest it could be is
8.3% / 10.0% = 83%
so the market seems to be pricing a probability in the range of (57%, 83%) for Biden to be elected president if he won the nomination - compared to Hillary Clinton's range of (56.9%, 57.7%).
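For concreteness, that arithmetic as a quick script (prices as quoted above; the variable names are mine):

    # Implied P(wins presidency | wins nomination) bounds for Biden,
    # using the market price ranges quoted above.
    nom_low, nom_high = 0.10, 0.135     # to win the nomination
    pres_low, pres_high = 0.077, 0.083  # to win the presidency

    low = pres_low / nom_high    # most pessimistic combination
    high = pres_high / nom_low   # most optimistic combination
    print(f"implied electability: {low:.0%} to {high:.0%}")  # 57% to 83%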
I can think of a few explanations -
1. Biden really is a lot more electable than Hillary Clinton
2. Both candidates have electability at the low end of their range (around 57%).
3. The market is wrong, i.e. they are systematically underrating Biden's chance of winning the nomination (and overrating Clinton's) or overrating Biden's chance of winning the election (and underrating Clinton's) or both.
There is no arbitrage, but if you believe 3, there might be a good profit to be made in expectation by backing Biden to win the nomination, but Clinton to win the presidency (you wouldn't hold it through to 2016, but take the bet off as soon as the odds come back to something that looks more plausible).
I haven't done the analysis to see if it's still worth it after trading costs, but maybe someone else wants to.
If Biden defeats the presumptive nominee, he must be running an exceptionally good campaign, which would itself make him more electable. That is, by winning the nomination Clinton would merely meet expectations, whereas Biden would exceed them.
This makes a lot of sense. I see Biden as much more electable than Clinton mainly because Clinton will draw out the 'hate Clinton' crowd to vote against her. I think the DNC is also realizing this, hence all the talk of Biden jumping in to begin with.
The most interesting question, to me, is the one about which words you know the meaning of.
About half of them aren't real words. I assume this question is used partly as a gauge of vocabulary (how many of the real words do you recognize) and partly of honesty (how many of the fake words do you claim to recognize).
Kinda wish you had waited until after the results were published to mention it. Plenty of people will read the comments first, so by talking about it now you're actively harming the very experiment you're so impressed by.
That only applies to people who saw this on Hacker News; the survey link is on xkcd's front page today, and I imagine there are many xkcd readers who don't come here.
Certainly some jargon is typical in spoken environments but rarely necessary in written contexts, and vice versa. Not that I have any proof about the specific words in the survey, but given at least one example of curious slang ("fleek"), I wouldn't put it past Randall to have tried to find some.
Also, the fun thing about pronounceable neologisms is that even if Randall made them up, there's a curious tendency, in English at least, for people to actually start using some of them.
I checked them after I'd submitted the survey. The only real words that I hadn't ticked were "regolith" (I was almost sure it was a real word, but I didn't know the meaning of it), "phoropter" (I believed it could be a piece of engineering terminology, but again didn't know the meaning) and "peristeronic" and "apricity" (I would have given better than evens that these were made up).
Tribution and Revergent are likely plays on removing or substituting the con- prefix (contribution, convergent). If they are not part of some jargon, they will be. Similarly, given the morphological construction of Unitory (-tory is the Latin agency suffix), I can certainly believe it has some jargon usage.
Trephony could be an inflected form of this noun: http://www.merriam-webster.com/dictionary/trephone That suggests to me that it may already be a biosciences jargon term.
I would argue as a descriptivist that revergent is a legit English word - something that was previously divergent that is now tending towards convergence.
I'm not sure if it has been removed from most dictionaries, but apricity is commonly accepted as "the warmth felt from sunlight". Wiktionary lists it as obsolete, though.
I saw a Reddit post once about how Google was releasing Cromcast.
I immediately pictured a device with an HDMI interface that continuously forces your TV to change to that input, turn up the volume, and then repeatedly play short videos from different scenes, of Arnold Schwarzenegger as Conan the Barbarian, yelling "CROM!"
I don't know why, but I was able to recognize the non-words instantly (with one exception) even though I didn't know the meaning of every single "real" one. (Peristeronic, etc.)
As a non-native speaker, this was a hard question to be honest on. I recognize the word "rife", I can use it in a sentence, but _do I know what it means_?
I wondered the same (also a non-native speaker), but I figured if I can use the word correctly (to my knowledge) in a sentence, it means I know what it means. Even if I can't succinctly describe its meaning in English or Dutch. That's a job for dictionary-editors :-)
The problem with "rife" is that it's mainly used in the context of "rife with", where it means something like "full of". Then it gets confusing because it's not really correct to say "rife" means "full". Or, "rife means full changing with to of", which is just word salad.
I put yes for this because I've heard the funny sounding slang phrase 'on fleek' (similar to on point) before, but I'm not sure if it actually is an OED word.
You must accept that you will sound like a fool just for using this word. You can use it for anything you want to show satisfaction/approval for: React.js on fleek, eyebrows on fleek, uptime on fleek.
Remember that a hedge fund only sees 20% of the profit that it generates on behalf of its investors, and a large part of that goes into staffing and infrastructure costs, not to mention that quant traders like to be paid sizable bonuses (and therefore would not want to work on a trade with a small upside).
I find it extremely unlikely (almost inconceivable, in fact) that a hedge fund would divert 10 researchers to work on a trade with $20m of potential upside.
"Standard" hedge fund compensation is 2-and-20 (2% of funds under management and 20% of gains), so a $2B fund would yield $40M in the 2% management fee. That's the "keep the lights on money".
Ten people, not ten researchers - but that would be towards the upper limit. Point is, $20M is nothing to sneer at, even for a billion-dollar hedge fund.
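For concreteness, the arithmetic behind both comments (all figures are the hypothetical ones above):

    # 2-and-20 on a $2B book, and the fund's cut of a $20M trade.
    aum = 2_000_000_000                  # $2B under management
    management_fee = 0.02 * aum          # the "keep the lights on" money
    trade_pnl = 20_000_000               # the hypothetical $20M upside
    performance_fee = 0.20 * trade_pnl   # what the fund itself keeps
    print(f"management fee:   ${management_fee:,.0f}")   # $40,000,000
    print(f"cut of the trade: ${performance_fee:,.0f}")  # $4,000,000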
> If you back test over the past 5 years then you are only testing your model against a huge bull market.
If your model is long as often as it is short (either cross-sectionally or in a time series sense) then this is less likely to be a problem.
A far bigger source of error for inexperienced researchers is incorrectly accounting (or not accounting at all) for trading costs, financing costs, roll costs, liquidity constraints, data delays, market impact, etc.
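To illustrate the kind of adjustment I mean, here's a toy sketch - every number in it (returns, turnover, cost levels) is made up:

    import numpy as np

    # Gross daily backtest returns, before any costs (made-up numbers).
    gross = np.array([0.004, -0.002, 0.006, 0.001, -0.003])
    # Fraction of the book traded each day.
    turnover = np.array([0.5, 0.3, 0.8, 0.2, 0.6])

    cost_per_turnover = 0.0010   # ~10 bps round-trip, a guess
    financing_drag = 0.00002     # daily funding/borrow cost, a guess

    net = gross - turnover * cost_per_turnover - financing_drag
    print(f"gross: {gross.sum():.4f}, net: {net.sum():.4f}")
    # Costs eat a large chunk of the gross P&L even in this mild example;
    # a strategy that looks fine gross can easily be unprofitable net.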