
Sure there is: when I’m parsing or serializing something it’s pretty important I know what size something is.


The size of `byte` is fixed at 8 bits, `short` at 16, `int` at 32, and `long` at 64. There is no ambiguity or uncertainty about it.


At that point, why not just call them i8/u8, i16/u16, i32/u32, i64/u64? If the sizes are fixed anyway, why come up with different names for them, especially when almost every time you'd want to select a different integer type, it's specifically because of how many bits wide it is? (Otherwise, surely you would just use the machine word size?)


Good question.

1. After 5 minutes, you know what sizes they are, and don't need reminding.

2. Easier to touch type.

3. They're just aesthetically more pleasing to the eye.

4. The names aren't really different; they follow the most-used (by far) sizes in C.

5. It's easier to say and hear them. I can say "int" when talking code with someone, instead of "eye-thirty-two".

6. I'm guessing it may be easier for a visually impaired coder with a screen reader.


Just be honest with yourself and say that you subjectively like it better that way, as it grew on you.

There is nothing wrong with that reasoning. Also, there will never be a language that is perfect in every conceivable way; this is such a minor difference that anyone choosing a language over this alone is not being reasonable.


A language is nearly all about subjective choices.




