I think you have misunderstood the application, possibly because of the way the parent framed it.
This is not a project to interpret MRI data; it is a project to apply ML to accelerated scanning, i.e. inferring data that is not actually measured.
So it's a real problem: if a systematic bias attenuates some signals that would be interesting, there will be nothing there for a radiologist (or other ML system) to work with.
Think of this as more of an "algorithmic super-resolution" approach.
I would probably use the term "data-driven machine hallucination", which is pretty awesome, though I can see why radiologists would be wary of such an approach.
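To make the concern concrete, here is a minimal sketch (my own toy example, not from any real MRI pipeline) of why unmeasured data can make a feature vanish entirely. A 1D "scan" contains a broad anatomical feature and a narrow spike; an accelerated acquisition keeps only a fraction of the frequency-domain samples, and a naive zero-filled reconstruction strongly attenuates the narrow feature while the broad one survives:

```python
import numpy as np

# Toy 1D "scan": a broad anatomical feature plus a narrow lesion-like spike.
n = 256
signal = np.zeros(n)
signal[100:140] = 1.0   # broad feature
signal[200] = 1.0       # narrow feature of interest

# Fully sampled frequency-domain ("k-space") measurement.
kspace = np.fft.fft(signal)

# Accelerated scan: keep only the lowest 25% of frequencies.
# The rest of the data is simply never measured.
mask = np.zeros(n, dtype=bool)
keep = n // 8
mask[:keep] = mask[-keep:] = True

# Naive zero-filled reconstruction from the undersampled data.
recon = np.fft.ifft(np.where(mask, kspace, 0)).real

# The broad feature is largely intact; the narrow spike is heavily attenuated.
print(f"broad feature: {recon[120]:.2f}, narrow spike: {recon[200]:.2f}")
```

An ML reconstruction would fill in the missing frequencies with statistically plausible content instead of zeros, which is exactly the point of the debate above: the narrow spike may be restored, or it may be "smoothed away" if it looks atypical to the model.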