Why is overcommit a problem? A program is unlikely to use all the memory it allocates, or it may only use it much later. Refusing to overcommit would be wasteful: you'd need a ton of RAM that never actually gets used, because a lot of programs allocate more than they will probably ever need. And it would be inefficient, costly, and error-prone to use dynamic memory allocation for everything.
The cause of your browser crash is not overcommit, it's simply that you don't have enough memory. If you disable overcommit (something you can do on Linux) you would get the same crash earlier, before you had allocated (not necessarily used) 100% of your RAM, because really no software handles the dynamic allocation failure condition, i.e. malloc returning NULL, which you can't handle reasonably anyway.
Null pointers are not a mistake: how else do you signal the absence of a value? How do you signal the failure of a function that returns a pointer without having to return a struct with a pointer and an error code (which is inefficient since the return value doesn't fit a single register)? null makes perfect sense as a value meaning "this pointer doesn't point to something valid".
Microsoft saying that fork() was a mistake... well, of course, because Windows doesn't have it. fork was a good idea, and that's why it's still used these days. Of course there have been evolutions: on Linux there is the clone system call (fork survives mostly for compatibility reasons; the glibc fork is implemented on top of clone). But the concept of creating a process by cloning the resources of the parent is something that always seemed very elegant to me.
In reality fork (if I remember correctly; I don't have that much experience programming on Windows) doesn't exist on Windows, and the only way to create a new process of the same program is to launch the executable again and pass parameters on the command line, which is not great for efficiency and can have its own problems (for example, the executable may have been deleted or renamed while the program was running). Windows also has no concept of exec, though I think exec can be emulated in software (while fork can't).
To me it makes perfect sense to separate the concepts of creating a new process (fork/clone) and loading an executable from disk (exec). It gives a lot of flexibility at a cost that is not that high (and there are ways to avoid even that cost, such as vfork, variations of the clone system call, or higher-level APIs such as posix_spawn).
I think much of the confusion around nulls stems from the fact that in mainstream languages pointers are overloaded for two purposes: for passing values by reference, and for optionality.
Nearly every pointer bug is caused by the programmer wanting one of these two properties, and not considering the consequences of the other.
Non-nullable references and pass-by-value optionals can replace many usages of pointers.
Yes, and they are just two usages of pointers. The fact is that, whatever you call it (null pointer, nullable reference, optional), a language has to provide some concept of "a reference that may not refer to a valid object".
>How do you signal the failure of a function that returns a pointer without having to return a struct with a pointer and an error code (which is inefficient since the return value doesn't fit a single register)?
Rust does this with the Result and Option "enums", which are internally implemented as tagged unions. As far as I understand, the only overhead of this representation is the space taken by the tag plus any padding required for alignment.
It also helps that references in Rust are not nullable and working with pointers is fairly rare, so the type system can do a lot of heavy lifting for you rather than putting null checks all over the place. When you have &T you never have to worry about handling null in the first place!
The inventor, Tony Hoare, famously called them his "billion-dollar mistake". The better way to do it is with nullable types (which could internally represent null as 0 as a performance optimization). This is something Rust gets right.
Nullable types... they have the same problems as null pointers: if you don't care about handling the case where they are empty, the program will crash; and if you do handle it, you could handle null pointers the same way. Well, they have nicer syntax, and that's it. How much Rust code is full of `.unwrap()` because programmers are lazy and don't want to check each Option to see if it's valid? Or simply don't care, since having the program crash on an unexpected condition is not the end of the world.
The Rust code using `.unwrap()` is explicitly testing for a missing value and signaling a well-defined error when the prerequisites are not met. Contrast this with dereferencing a null pointer in C, where doing so results in undefined behavior.
More importantly, in Rust you don't have to allow the value to be missing. What Rust has and C does not is not nullable pointer types, but non-nullable ones: in C every pointer is potentially null, or dangling, or referencing incorrectly aliased shared memory, etc. Barring a programming error in code marked `unsafe`, or a compiler bug, a plain reference in Rust not wrapped in Option<T> can't possibly be null (or invalid, or mutable through other references), so you don't need to check for that, and your program is still guaranteed not to crash when you use it.
Nullable/option types are explicit. Every time you ignore null, you have to make a conscious choice to do so, and it's prominent in the source code forever after.
The problem with null pointers is that you have to remember to check for null. For OO languages specifically, the other problem is that null pointers violate the Liskov substitution principle.