The members of the GraphBLAS forum have discussed this chip a couple of times. There's a lot of research on making deep neural networks sparser, not just by pruning a dense matrix, but by starting from a sparse matrix structure de novo. Lincoln Laboratory's Dr. Jeremy Kepner has a good paper on RadiX-Net mixed-radix topologies that achieve good learning ability with far fewer parameters and lower memory requirements. Cited in the paper was a network constructed with these techniques that simulated the size and sparsity of the human brain:
https://arxiv.org/pdf/1905.00416.pdf

It would be cool to see the GraphBLAS API ported to this chip, which, from what I can tell, comes with sparse matrix processing units. As networks become bigger, deeper, but sparser, a chip like this will have some demonstrable advantages over dense numeric processors like GPUs.