
I will reiterate my comment on these types of projects. The regulatory pathway established by the FDA for these types of products is woefully inadequate, and they are very hard to properly validate.

I think any application of deep convolutional neural networks should be alongside a radiologist. If we speed up scans and make up for it with convnets, it is very hard (practically speaking: impossible) to properly validate that they will not hallucinate away rare abnormalities. It will also be impossible for radiologists to spot errors like this in the wild, because of the reduction in quality of the scan.

What happens when the scanners change their behavior in some subtle way that is unaccounted for by FastMRI? It could start erasing a ton of subtle abnormalities and this would not be possible to check for since the original scan will be lower quality.



>It could start erasing a ton of subtle abnormalities and this would not be possible to check for since the original scan will be lower quality.

Radiologists are notoriously conservative for that very reason. Dreamy-eyed image processing and computer vision researchers have been trying to get radiologists to abandon some of their caution for many decades. All in vain. "Hard pass", they say most of the time. The dream usually doesn't get as far as their malpractice insurers, but it certainly dies there if it makes it that far.


> I think any application of deep convolutional neural networks should be alongside a radiologist. If we speed up scans and make up for it with convnets, it is very hard (practically speaking: impossible) to properly validate that they will not hallucinate away rare abnormalities. It will also be impossible for radiologists to spot errors like this in the wild, because of the reduction in quality of the scan.

The benchmark for this technology is not perfection. The benchmark is human radiologists. Yes, this technology will miss things; so do humans. But if it's performing better than the humans, we should prefer it, even if it's not perfect.


>"The benchmark is human radiologists."

I think you have misunderstood the application, possibly because of the way the parent framed it.

This is not a project to interpret MRI data, it is a project to apply ML to accelerated scanning, i.e. inferring data that is not actually measured.

So it's a real problem: if a systematic bias attenuates some signals that would be interesting, there will be nothing there for a radiologist (or other ML system) to act on.

Think of this as more of an "algorithmic super-resolution" approach.
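To make the "inferring data that is not actually measured" point concrete, here is a toy numpy sketch (my own illustration, not FastMRI's actual pipeline): accelerated MRI skips k-space lines, and a naive zero-filled reconstruction shows exactly what was measured; anything beyond that is inference.

```python
import numpy as np

# Toy "anatomy": a bright square on a dark background.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

# Fully sampled k-space (what an unaccelerated scan measures).
kspace = np.fft.fftshift(np.fft.fft2(img))

# 4x acceleration: keep only every 4th phase-encode line.
mask = np.zeros_like(kspace)
mask[::4, :] = 1
undersampled = kspace * mask

# Naive zero-filled reconstruction: aliased, because 75% of the
# data was never acquired. Any ML reconstruction has to "fill in"
# exactly that missing 75%.
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
err = np.abs(recon - img).mean()
print(f"mean reconstruction error: {err:.3f}")
```

The nonzero error is the gap the network is asked to hallucinate across, which is why a systematic bias in that inference is invisible downstream.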


I would probably use the term "data-driven machine hallucination" - which is pretty awesome. Though, I can see why radiologists would be wary of such an approach.


I will not say I understand it fully, but when an MRI acquires an image, the raw measurement is a rich k-space dataset. This data is then reduced to a single number (i.e. an intensity) for each voxel in the MR image. Usually the k-space data is discarded after it is used to compute the voxel intensities.
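That reduction can be sketched in a few lines of numpy (a toy example of the general idea, not actual scanner code): the measured k-space is complex-valued, but what typically gets stored is one real number per voxel.

```python
import numpy as np

rng = np.random.default_rng(0)
# Pretend this is one slice of measured k-space (complex samples).
kspace = rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32))

# Inverse FFT gives a complex-valued image...
complex_image = np.fft.ifft2(kspace)

# ...but what is archived is typically just the magnitude; the phase
# information is thrown away along with the raw k-space data.
intensity = np.abs(complex_image)

print(intensity.shape, intensity.dtype)
```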


This is already theoretically a bit of a problem with conventional acceleration, which is arguably manageable through understanding of the physics.

I agree the validation and regulatory path is really problematic for less common presentations.

It's probably more interesting to use for artifact detection / QC issues, especially if you can do it quickly enough to initiate a re-scan. Artifact removal is more problematic, but still interesting. Acceleration is tricky, though, for the reasons you mention.


The theoretical problems with deep convnets are far worse than with traditional reconstruction algorithms. They have a far greater capacity to hallucinate or behave unexpectedly on new inputs (see GANs, or the ImageNet-C dataset, where simply adding a little motion blur nuked ResNet performance).
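The motion-blur corruption from that benchmark is easy to sketch (a hedged toy version, not the ImageNet-C implementation): the perturbation is tiny in pixel terms, yet it is exactly the kind of input shift a model validated only on clean scans may never have seen.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))  # stand-in for a clean input image

# 5-tap horizontal motion-blur kernel (uniform averaging).
kernel = np.ones(5) / 5.0

# Apply the blur independently along each row.
blurred = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, mode="same"), 1, img)

# The per-pixel change is small...
delta = np.abs(blurred - img).mean()
print(f"mean |delta|: {delta:.3f}")
# ...but nothing guarantees a convnet behaves sensibly on `blurred`
# if it was only ever validated on inputs like `img`.
```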


We are agreeing, I think.


Got it! I’ve heard points similar to yours raised in defense of these sorts of projects: “What about sparse reconstruction? Why aren’t you worried about that?”


I've come across this issue in some tools in neuroimaging which allow you to make sub-voxel, super-resolution volumetric inferences based on a trained model. This is all fine and good if your experimental population matches the training population, but that is rarely the case in disease/disorder/developmental neuroscience. If you can't see it in your data, then it's only there if the assumptions hold.


most places call this type of thing "clinical decision support". nobody in their right mind wants to remove the human doctors from the process... yet.


And yet, for the reasons I stated, this is exactly what FastMRI aims to do. Speed up the scan. There will be no way for Radiologists to oversee the reconstruction and make sure subtle abnormalities are preserved.


Ideally a proper DNN reconstruction would learn the mapping from the raw-space to image-space. See, for example: https://www.nature.com/articles/nature25988 .

There is just too much redundancy in MRI data, and initiatives such as FastMRI are fundamental for us to learn what the limits are of feasible acceleration. Also, some MRI scans take forever and cannot be used in vulnerable populations because of, e.g., breath holds, the need to stand still, etc. The image quality, perhaps counter-intuitively, in some situations improves with acceleration.
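As a hedged toy sketch of what "learning the mapping from raw-space to image-space" can mean (my own construction, much simpler than the neural network in the linked paper): because the true k-space-to-image mapping is linear, even a plain least-squares fit can recover it from example pairs.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8  # tiny 8x8 images keep the least-squares problem small

# Training pairs: random "images" and their k-space (the forward model).
imgs = rng.random((200, n * n))
ks = np.array([np.fft.fft2(im.reshape(n, n)).ravel() for im in imgs])

# Stack real/imag parts so the regression is real-valued.
X = np.hstack([ks.real, ks.imag])             # shape (200, 2*n*n)
W, *_ = np.linalg.lstsq(X, imgs, rcond=None)  # "learned" inverse mapping

# On a fresh image, the learned map should approximate the inverse FFT.
test_img = rng.random(n * n)
test_k = np.fft.fft2(test_img.reshape(n, n)).ravel()
pred = np.hstack([test_k.real, test_k.imag]) @ W
print(f"max error vs ground truth: {np.abs(pred - test_img).max():.2e}")
```

A real accelerated-MRI reconstruction is harder precisely because the undersampled problem is no longer invertible, which is where the learned prior, and the concerns upthread, come in.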


Can you explain why mapping from raw space gets rid of any of the concerns I raised?

It’s interesting research for sure. I hope it stays far away from actual clinical use for a while, for the reasons I highlighted. I’d like to see convnets work alongside radiologists for a while and prove robustness to scanner changes in the wild before we start shoving them deep in the stack where radiologists can’t review what’s happening.


Radiologists don’t usually oversee reconstruction currently even when it’s real-time (whatever that means in MR, generally it means immediately after acquisition).


Exactly and that’s why this is not a good first place for convnets to be used in the workflow of a radiologist. They should be working alongside the radiologist, not somewhere where it’s impossible for the radiologist to examine what happened.


I think the point is that radiologists and technicians never see the k-space data, just the image reconstruction of each voxel's intensity.

So I am not sure the risks are the same as, say, convnets reconstructing large brain structures.


Yes we do (I’m an MR tech), and it’s a part of troubleshooting on GE scanners. There is also a weird bug on a release I use of the Philips platform that allows a visualisation too. Working with the data in this form is not an everyday thing, but it is useful.

K-space data is also saved for reconstructions and processing later on, though everyone prefers to avoid that as it’s horrible and lots of storage is required.

I’ve also worked at a university site where the raw data was collected and used on a daily basis, but that is presumably less common.


I meant radiologist in the sense of a physician who inspects MR images for abnormalities.


I'd even want to use similar tech on a personal level. I can't do a brain scan myself, but I can take a picture of a mole. I'm at high risk for skin cancer, but I honestly wouldn't go see a doctor just for a change in a skin lesion that I'm not sure about. There are too many, and even with an above-average knowledge of what to look for, I have little confidence in my own observation. But if a neural network could tell me it's higher risk, I'd for sure not wait until my next physical. Gotta be careful people who don't understand the risk & statistics don't depend on negative diagnoses too heavily, though.


I think you’ve misunderstood. This is not for diagnosis. This is so that they can run a crappier lower quality scan and then “enhance” it using neural nets.


No, I'm replying to the parent comment, which is about clinical decision support vs. removing doctors from the decision process. I don't believe they were commenting on FastMRI vs. full MRI.


Haha "yet". Removing human doctors from the process is sci-fi, and not really 'hard' sci-fi either, as it stands now. Lawyers and CEOs will be far easier to automate.

Rather than dreaming about that, the focus should definitely be on "clinical decision support", i.e. "something useful that will save a radiologist some time and won't just get in the way". Not too many examples of that exist right now. Even speech-to-text is not a solved problem in their domain.



