
It would surprise me if it made this error consistently. It may well correct you (and contradict itself) if you later used the word incorrectly. A human tutor might be less likely to make such a mistake in the first place, but if they did, they would probably repeat it more consistently.

As usual with language models, I think the key is to learn to live with their limitations as well as their strengths.



