>If the problem at hand is better solved using a black box (in terms of accuracy, precision, robustness, etc.)
It's been a while since I read her work, but IIRC one of the positions she argues for, which I find plausible, is that interpretable models can be performance-competitive. For example, it could be that black-box methods only outperform because they've been more heavily researched, and that if we put comparable research effort into interpretable methods, we could reach parity. I also mentioned a few reasons why we might expect interpretable models to perform better a priori in this comment: https://news.ycombinator.com/item?id=28838321
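This isn't an existence proof of anything at ImageNet scale, but a rough sketch of the parity claim on tabular data (assuming scikit-learn; the dataset, models, and hyperparameters are purely illustrative, not anything Rudin specifically proposes): an interpretable linear model often matches, or beats, a black-box ensemble given the same training data.

```python
# Illustrative comparison: interpretable model vs. black box on a small
# tabular task (scikit-learn's bundled breast-cancer dataset).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Interpretable: a linear model whose per-feature coefficients can be read off.
interpretable = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Black box: an ensemble of hundreds of trees with no single readable rule.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)

acc_interp = cross_val_score(interpretable, X, y, cv=5).mean()
acc_bb = cross_val_score(black_box, X, y, cv=5).mean()
print(f"logistic regression: {acc_interp:.3f}  random forest: {acc_bb:.3f}")
```

On this dataset the two land within a couple of points of each other in cross-validated accuracy. That obviously says nothing about vision or speech, which is exactly the gap the sibling comment is pointing at, but it shows the parity hypothesis isn't empty in at least some domains.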
I'd find Rudin's argument a lot more convincing if she offered an existence proof: say, an SVM or random forest (or hybrid) trained on the same number of examples that equals AlexNet's 2012 performance on the ImageNet ILSVRC (or matches DNNs in another domain where they are SOTA).
Until that can be done, I think few outside academia will invest time or money in non-DNN alternatives in the hope of competing with today's even stronger DNN variants. There's a decade of evidence now that DNNs are incontestable discriminators in numerous domains, relative to pre-2012 ML technology anyway.
>There's a decade of evidence now that DNNs are incontestable discriminators in numerous domains, relative to pre-2012 ML technology anyway.
Do we know that this is due to the inherent superiority of DNNs, or just a virtuous cycle in which success drives increased investment, which drives more success?