What on earth? I know we all love a good MS pile-on, but if this contest weren't being run by Microsoft, it'd be an academic shared task: the results would be written up and published at a workshop of some conference, probably ACL or EMNLP (which will probably still happen, actually), for no prize money, and most or all of the authors would open-source their research code anyway.
And this isn't even substantially different from putting together OSS to do the job; even GPLed code lets MSR "use, review, assess, test, and otherwise analyze" it.
EDIT: To be even clearer, the idea of a shared task, run a bit like a contest (with a validation corpus, a secret test corpus, and a designated winner at the end), is really well established in the Natural Language Processing community, and the usual expectation is that you publish at the end. When I say in my first paragraph that this would be "an academic shared task", I'm not speculating. The remarkable thing about the MSR contest is not the publication requirement, but that they're paying out money at all.
Either that, or it might get built at one of these newfangled "start-ups". Good luck buying, for $10,000, a company that has a spellchecker better than Microsoft's.
A sufficiently smart spell-checker would be indistinguishable from a translation aid, a sentiment analyzer, a voice-recognition classifier, etc. Basically, whatever tech is required to make spell-checkers better has uses beyond spell-checking, and those might be valuable.
A sufficiently smart spell-checker would be indistinguishable from an artificial intelligence. A company producing one of those things would make billions, but not by selling a spell-checker.
I didn't mean it would be indistinguishable from all of those things (because it would be an AGI and could act as any of them if it wished), but rather that it would likely have to incorporate one or more of them to spell better. I have a feeling sentiment analysis alone would be a pretty good next step for spelling/grammar-checking.
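To make that concrete, here's a toy sketch of what I mean (entirely made up, nothing like MSR's actual system or a production checker): a real-word confusion such as bitter/better sails past any dictionary lookup, but even a crude sentiment score over the rest of the sentence can break the tie. The confusion set and sentiment lexicon below are hypothetical few-entry stand-ins for real resources.

    # Toy real-word spelling correction via a sentiment lexicon.
    # CONFUSION and SENTIMENT are made-up stand-ins for real resources.
    CONFUSION = {"bitter": {"bitter", "better"}, "better": {"bitter", "better"}}
    SENTIMENT = {"love": 1, "great": 1, "awful": -1, "bitter": -1, "better": 1}

    def context_polarity(tokens, skip):
        # Sum lexicon scores over every word except the one being corrected.
        return sum(SENTIMENT.get(t, 0) for i, t in enumerate(tokens) if i != skip)

    def correct(tokens):
        out = list(tokens)
        for i, tok in enumerate(tokens):
            candidates = CONFUSION.get(tok)
            if not candidates:
                continue
            polarity = context_polarity(tokens, i)
            # Prefer the candidate whose own polarity agrees with the context;
            # note this cuts both ways, and a neutral context decides nothing.
            out[i] = max(candidates, key=lambda c: SENTIMENT.get(c, 0) * polarity)
        return out

    print(" ".join(correct("i love how much bitter this spellchecker is".split())))
    # -> i love how much better this spellchecker is

A real system would obviously swap the toy lexicon for a trained classifier and fold the score into a language-model probability, but the shape of the problem is the same, which is the point: the pieces you'd build have value well outside spell-checking.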