I don't think this implies that. Even a human being with strong reasoning capabilities could be made to believe they were something they're not if that's all they were ever taught. This isn't a matter of reasoning.
According to one of the AI YouTubers, the Mistral Large LLM actually got a perfect score on their logic benchmarks, which is pretty good. All LLMs are prone to some suggestion or confusion. I wouldn't judge whether it's logical or not based on an assumption drawn from a single response.