Traditional OCR tools are good at transcribing typed notes (e.g. PDFs) into editable docs, but do poorly with handwriting. The best OCR I've seen can make a handwritten note searchable (e.g. Evernote), but it still doesn't transcribe it into editable form.
A lot of academic work on transcribing images of handwritten notes into text has surfaced over the last couple of years (mostly using neural networks), and we decided to apply it.
I was at LensCrafters the other day and had to fill out a paper form that someone then keyed into a computer by hand, so I definitely see the need there.
Our goal is to get handwriting OCR accurate enough to work in enterprise settings. 90% may be good enough for consumers, but I wouldn't want to put anyone's health on the line over a transcription error.
Currently, it's a combination of the two, mainly because people often take notes hastily, so word-based recognition coupled with spell check lets you fix things on the fly. However, this also produces bizarre outputs sometimes, so we're still figuring out what an optimal output looks like.
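To illustrate the word-plus-spell-check idea described above, here is a minimal sketch (not their actual pipeline): take word-level recognizer output and snap unknown words to the closest vocabulary entry by string similarity. The vocabulary and the simulated OCR output are made-up examples.

```python
# Minimal sketch: post-correct word-level OCR output with a vocabulary
# lookup. The vocabulary and recognizer output below are hypothetical.
from difflib import get_close_matches

VOCAB = {"meeting", "notes", "tomorrow", "budget", "review"}

def correct_word(word: str, vocab: set = VOCAB) -> str:
    """Return the word itself if known, else the closest vocabulary match."""
    if word in vocab:
        return word
    matches = get_close_matches(word, vocab, n=1, cutoff=0.6)
    return matches[0] if matches else word  # keep original if nothing is close

# Simulated word-level OCR output with typical shape-confusion errors
# (q/g, vv/w), followed by the corrected result.
raw = ["meetinq", "notes", "tomorrovv", "budqet"]
print([correct_word(w) for w in raw])
```

The trade-off the comment mentions shows up here too: if the true word isn't in the vocabulary, the corrector can confidently "fix" it into the wrong word, which is one source of the bizarre outputs.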
Adding digital handwriting support is a great idea. I actually think other companies do it pretty well, which is why we didn't go down that route. The reason is that they use a different type of algorithm that learns, in part, from handwriting velocity and gives you edit access as you write, which isn't possible if you've taken the notes in a normal notebook.
We decided to start with plain notebook text mainly because it seemed like no one else had solved this problem to our satisfaction yet.