> If you present a simple chess board to an LLM or a complex board to an LLM and ask it to generate the next move, it always responds in the same amount of time.
Is that true, especially if you ask it to think step-by-step?
I would think the model has certain associations for simple/common board states and different ones for complex/uncommon states, and when you ask it to think step-by-step it will explain the associations it has with a particular state. That "chattiness" may lead it to use more computation for complex boards.
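A rough way to check that would be to compare completion-token counts for a simple vs. a complex position when the model is told to think step by step, since generation compute scales roughly with the number of tokens produced. A minimal sketch, assuming the OpenAI Python client; the model name and the two FEN strings are illustrative placeholders, not from the thread:

```python
# Sketch: does "think step by step" produce longer (and so more compute-heavy)
# answers for a complex position than for a simple one?
from openai import OpenAI

client = OpenAI()

positions = {
    "simple (early opening)": "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq - 0 1",
    "complex (messy middlegame)": "r2q1rk1/1b2bppp/p2ppn2/1p6/3BP3/2NB4/PPP2PPP/R2Q1RK1 w - - 0 12",
}

for label, fen in positions.items():
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{
            "role": "user",
            "content": f"Position (FEN): {fen}\n"
                       "Think step by step, then give the best move.",
        }],
    )
    # completion_tokens is a rough proxy for compute spent on this answer
    print(label, "->", resp.usage.completion_tokens, "completion tokens")
```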
> > If you present a simple chess board to an LLM or a complex board to an LLM and ask it to generate the next move, it always responds in the same amount of time.
> Is that true, especially if you ask it to think step-by-step?
That's fair -- there's a lot of room to grow in this area.
If the LLM has been trained to operate with a running internal monologue, then I believe it will perform better. This definitely needs to be explored more -- from what little I understand of this research, the results are sporadically promising, but I don't think I've yet seen something like ReAct (or other, similar structures) made to work consistently.
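For reference, the basic shape of a ReAct-style loop is just a growing transcript in which the model interleaves thoughts and actions, and each tool observation is appended back in before the next step. A minimal sketch, assuming the OpenAI Python client; the model name, the single `lookup` tool, and the prompt format are placeholders rather than the original paper's setup:

```python
# Minimal ReAct-style loop: Thought/Action lines from the model, an
# Observation line from a tool, repeated until a Final answer appears.
from openai import OpenAI

client = OpenAI()

def lookup(query: str) -> str:
    """Placeholder tool; a real agent might call a chess engine or search."""
    return f"(no data found for {query!r})"

def react(question: str, max_steps: int = 5) -> str:
    transcript = (
        "Answer the question. Interleave lines of the form\n"
        "Thought: <reasoning>\nAction: lookup[<query>]\n"
        "and finish with a line 'Final: <answer>'.\n\n"
        f"Question: {question}\n"
    )
    for _ in range(max_steps):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": transcript}],
            stop=["Observation:"],  # the tool, not the model, supplies observations
        )
        step = resp.choices[0].message.content
        transcript += step + "\n"
        if "Final:" in step:
            return step.split("Final:", 1)[1].strip()
        if "Action: lookup[" in step:
            query = step.split("Action: lookup[", 1)[1].split("]", 1)[0]
            transcript += f"Observation: {lookup(query)}\n"
    return "(no answer within step budget)"
```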