Hacker News | zenati_'s comments

Interesting.


So shall we deduce that Mistral's reasoning capabilities are very bad?


I don't think this implies that. Even a human with strong reasoning capabilities could be made to believe they were something they're not, if that is all they were ever taught. This isn't a matter of reasoning.


According to one of the AI YouTubers, the Mistral Large LLM actually scored a perfect score on their logic benchmarks, which is pretty good. All LLMs are prone to some suggestion or confusion. I wouldn't judge whether it's logical or not based on an assumption drawn from one response.


What do you think?


