Robustness and maintainability are as much a product of the minds working on the code as of the code itself. That said, for me this was not even a conceivable enterprise, much less one that could be completed by one person or a few people, without the approach I have taken.
I am interested in this partly because I feel that future hardware will increasingly reward code written in this style over other styles, and I want to make it possible for people, especially non-computer scientists, to learn to produce good code and to think about problems more rigorously. As problem sizes increase, that makes this sort of approach important for scaling.
As for speed, one nice but currently unexplored feature of the compiler as written above is that its entire algorithmic (Big-O) complexity is computable, even trivially derivable, from the code itself, because the compiler is just a chain of function compositions over primitives with well-known complexities. A planned future feature is to produce Big-O complexity analysis for code written in this style, helping developers spot performance regression points earlier, since the compiler could then assist symbolically rather than only through test cases.
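To make the idea concrete, here is a hypothetical Python sketch (not the Co-dfns implementation) of how complexity can be derived mechanically when a compiler is a sequential composition of passes, each a primitive with a known polynomial cost in the input size:

```python
# Sketch: if a compiler pipeline is a chain of passes with known costs,
# the pipeline's asymptotic bound falls out of the composition itself.
# Pass names and degrees below are illustrative, not from Co-dfns.

from dataclasses import dataclass

@dataclass
class Pass:
    name: str
    degree: int  # polynomial degree in input size n, e.g. 1 for O(n)

def pipeline_complexity(passes):
    """Sequential composition sums the costs of the passes, so the
    asymptotic bound is the maximum polynomial degree among them."""
    worst = max(p.degree for p in passes)
    return "O(n)" if worst == 1 else f"O(n^{worst})"

passes = [Pass("tokenize", 1), Pass("parse", 1), Pass("lift-functions", 1)]
print(pipeline_complexity(passes))  # every pass is linear, so the whole is
```

A real analysis would track more than a single degree (nested loop structure, multiple size parameters), but the principle is the same: when the program is a composition of well-characterized primitives, the bound is computed rather than guessed.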
I have thought a bit about how easy it would be to compile code like this to an FPGA, and I think there are a lot of possibilities here, but I've not been able to make time to study it thoroughly. My current scope is limited to GPUs and CPUs of the more standard ilk.
Rewriting the kernel of an operating system would be very challenging if you expected good performance on the GPU. GPUs are not designed to make context switching of the sort found in Linux easy or fast. At the very least, you are highly likely to run into divergence problems.
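As a hypothetical, greatly simplified model of the divergence problem (this is an illustration, not Co-dfns code): a GPU warp executes its lanes in lockstep, so when the lanes disagree on a branch, both sides of the branch run serially with the inactive lanes masked off.

```python
# Simplified SIMT divergence model: count the serialized passes a warp
# needs for a single two-way branch. 1 if every lane takes the same path,
# 2 if the branch diverges and both paths must be executed.

def warp_branch_steps(lane_predicates):
    """lane_predicates: one boolean branch outcome per lane in the warp."""
    return len(set(lane_predicates))

uniform = [True] * 32                        # all lanes agree: one pass
divergent = [i % 2 == 0 for i in range(32)]  # lanes disagree: both paths run
```

Kernel-style control flow (schedulers, syscall dispatch) is full of exactly this kind of data-dependent branching, which is why the penalty compounds quickly on a GPU.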
However, people have explored implementing operating systems on top of these concepts (see kOS). I expect those to be very practical, and it would be an interesting future project to bootstrap Co-dfns programs to run natively on bare metal.
A more interesting initial idea would be to explore rewriting the base userland of OpenBSD or Ubuntu in something like Co-dfns. This would raise interesting challenges around file systems and the interaction between GPUs and the file system, but it is at least conceivable that many userland utilities would become easier to write and maintain with this approach. However, it would require additional infrastructure support that Co-dfns currently lacks. I imagine the first step would be to identify the core file system APIs that would need some reasonable analogue in Co-dfns, and then to implement the code on the CPU first. I would expect the code to be simpler, and potentially to make better use of the CPU, but I am not sure you would see fundamental performance improvements without scaling up to the point where the speed of the filesystem is no longer the limiting factor.
Thanks heaps for the detailed reply! The Big-O complexity analysis 'for free' sounds pretty interesting.
I'm wondering if Co-dfns would be a natural fit for algorithmic design of programs through deep learning. We're already seeing some attempts at using an AlphaGo-like architecture (marrying traditional AI such as Monte Carlo tree search with modern deep learning to learn useful heuristics; in the case of a recent paper on program design I can't find right now, they combined the network with an SMT solver).
If that's something of interest to you, get in touch with me via email and we can see if we can't get Co-dfns supporting what you need to make it a reality! There are a number of things that could make this nice in Co-dfns, including the possibility of using idioms instead of library calls, potentially making adaptation of algorithms easier.