My headcanon is that "boredom" and "fear" are transition probabilities in a Markov chain. Since it's implied the machine society is not all-knowing, its members must reconcile uncertainty somehow, and weighting state transitions is a natural way to do that.
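Something like this toy sketch is what I mean. To be clear, the state names and numbers are all invented by me, not anything from the story: affective labels as states, with transition probabilities standing in for unresolved uncertainty.

```python
import random

# Headcanon sketch: "emotions" as states in a Markov chain whose
# transition probabilities encode how the machine resolves uncertainty.
# All state names and probabilities below are invented for illustration.
TRANSITIONS = {
    # P(next state | current state); each row sums to 1.0
    "engaged": {"engaged": 0.7, "bored": 0.2, "afraid": 0.1},
    "bored":   {"engaged": 0.5, "bored": 0.4, "afraid": 0.1},
    "afraid":  {"engaged": 0.3, "bored": 0.1, "afraid": 0.6},
}
STATES = list(TRANSITIONS)

def step(state: str, rng: random.Random) -> str:
    """Sample the next affective state given the current one."""
    probs = TRANSITIONS[state]
    return rng.choices(list(probs), weights=list(probs.values()))[0]

rng = random.Random(0)
trajectory = ["engaged"]
for _ in range(10):
    trajectory.append(step(trajectory[-1], rng))
print(trajectory)
```

The point isn't the specific numbers, just that "boredom" and "fear" fall out of the weights rather than being symbolic flags.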
Sure, but I'm still not convinced it would realistically function. All the data in this scenario is, by definition, synthetic. The machine could certainly identify gaps in its "experience" between prediction and outcome, but what it predicts is limited by what it already represents, so anything genuinely novel in its environment would likely confound it.
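Here's a toy version of what I mean (all numbers made up, nothing to do with the story): a model fit only on its own "synthetic" regime predicts that regime fine, but when the world follows a rule it has never represented, it can only extrapolate the rule it already has, and the prediction-outcome gap blows up.

```python
import random
import statistics

rng = random.Random(1)

# "Synthetic" training world: the only relation the model has ever seen.
train_x = [rng.uniform(0, 1) for _ in range(200)]
train_y = [2.0 * x + rng.gauss(0, 0.05) for x in train_x]

# Ordinary least-squares line fit by hand (stdlib only).
mx, my = statistics.fmean(train_x), statistics.fmean(train_y)
slope = (sum((x - mx) * (y - my) for x, y in zip(train_x, train_y))
         / sum((x - mx) ** 2 for x in train_x))
intercept = my - slope * mx

def predict(x: float) -> float:
    return slope * x + intercept

def mean_abs_error(xs, ys) -> float:
    return statistics.fmean(abs(predict(x) - y) for x, y in zip(xs, ys))

# In-distribution: same rule, fresh samples -> small prediction gap.
in_x = [rng.uniform(0, 1) for _ in range(100)]
in_err = mean_abs_error(in_x, [2.0 * x for x in in_x])

# Novel regime: the environment follows a different rule the model has
# never represented, so the gap between prediction and outcome is huge.
novel_x = [rng.uniform(0, 1) for _ in range(100)]
novel_err = mean_abs_error(novel_x, [5.0 - 3.0 * x for x in novel_x])

print(in_err, novel_err)
```

It can measure the gap, sure, but measuring it isn't the same as being able to close it.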
It's a cool sci-fi story, but I don't think it holds up as the plausible scenario it seems to be going for.