
I've never really liked the short/int/long definitions. They're just out there to confuse you, especially if you've written your share of assembly.

For as long as I can remember I've just used size_t, uintptr_t, uint32_t/int32_t (or any of the 16/32/64 variants), exactly because I want to be explicit about the machine word sizes I'll be dealing with. Before that, I always used similar (u32/i32, LONG/ULONG, ...) platform-specific typedefs on proprietary systems too.

For all practical purposes, int/unsigned int has been at least 32 bits since the early 90's (well, on modern platforms) but why use those if you can explicitly declare how many bits you actually need.

(I've bumped into a few archaic platforms where stdint headers weren't present but it's easy to just add a few build-specific typedefs somewhere in that case.)



Well, I know historically it was because there were potential performance issues with using explicit sizes - int was basically the guarantee that you would be working in the CPU's native word size, and hence at its most efficient: no need to shift and mask bits to get the value you wanted. Obviously this leads straight to a bunch of relatively subtle bugs, but I guess for some applications the speed-vs-safety tradeoff was worth it.


On 8-bit machines it's the reverse: an int being 16 bits minimum requires more operations to handle than an 8-bit number. Pass an 'int' and you push two things on the stack, etc. That causes your code size to balloon noticeably.


Many 8-bit processors such as AVR have instructions that work with 16-bit numbers (stored in register pairs). So that's not always the case.


That only applies to adds, subtracts, and register moves. 16-bit Boolean operations, shifts/rotates, multiplies, and loads/stores still need to be done with multiple instructions.


I did a little mucking around with some AVR code of mine. Sometimes going from a uint8_t to a uint16_t saves a couple of bytes; sometimes it adds a dozen.

In one case, changing an index in a for loop to an int, code went from 34024 bytes to 34018 (saving six bytes). But changing uint8_t i, j, k; to uint16_t i, j, k; compiled to 34068 bytes, a gain of 44 bytes.


C++11 has improved the situation somewhat by making it possible to explicitly state what you want from your numeric types: http://en.cppreference.com/w/cpp/types/integer


<stdint.h> was actually introduced by C99 and included in C++11 with other C99 changes.


Let's also mention how utterly moronic it is to type three or four entire English words for one type definition. Do you really want to type `unsigned long long int x = 4` all the time? No, you should never type that. It should always be `uint64_t x = 4`. (That's also ignoring the mental indirection required for every line of code that experienced programmers take for granted. Try explaining to a new programmer why 'long' changes storage size across platforms, or why 'long long' and 'long' are usually, but not always, the same size. Be explicit, not English.)

Basically, if you are still typing "unsigned long long" or even "long" in 2015 in a modern environment, please stop. But—you may say—we want long to be 32 bits on 32-bit platforms and 64 bits on 64-bit platforms! No, you don't. That makes your system difficult to reason about. Plus, you'll probably start casting your inputs to printf() instead of using proper printf type macros, which breaks things even further. Adopt proper types, use proper types, stop programming like it's 1975. Good luck.


The biggest frustration with those types is printf:

    "Num: %" PRIuFAST32 " found\n"
Is quite annoying to type instead of:

    "Num: %d found\n"


I always found it odd that there's no printf extension to just automatically print the correct integer size. GCC is already smart enough to tell me if %d is wrong for a long; why can't I just do %N and have it print out the correct length for the argument?


It's not even necessary for the compiler to do it: writing a type-safe wrapper is pretty easy in C++11 and a decent approximation is doable in pure C99 (ask me if you want a sample). Yet using the ugly PRI macros seems common even in C++ projects. It's understandable why one wouldn't want to use ostream type formatting, but at least write a better printf...

(Type-safe at runtime, that is, via automatically passing a type info argument along with each format argument; C++ is braindead enough that doing the check at compile time is only somewhat possible with C++14 - as far as I can tell, it only works if you define the format outside of a function - and probably fairly slow, since that involves a template instantiation per character in the format. Oh well.)


> and a decent approximation is doable in pure C99 (ask me if you want a sample).

I'd like to see that.

I can see some pretty straightforward ways to do it with C11 and _Generic, but I don't see how C99 helps.


With C++14 you don't need to define the format string outside functions, but it does get a little hairy. Here's a half-assed example: http://coliru.stacked-crooked.com/a/3f34563e9a85af51


Because printf is just a function, and doesn't have access to the type information at the call-site.


Just a hypothetical idea: what if the standard allowed a compiler to modify the format string during compilation, replacing e.g. %N with the appropriate conversion specification, subject to proving that a given format string is a plain old literal that is not touched anywhere else?


Although compilers will warn about it, it's still possible to generate a format string dynamically at runtime. This might be done if you want different formats for the same fixed argument types (though there are probably better/safer ways).


This is exactly what I was suggesting. The compiler definitely has that information.


It's a good idea, but then you are tied to one compiler.

So, what if Clang implements it, but not GCC? Or what if Clang and GCC implement it, but not the Sun or Intel compilers? Or what about all the GCC copies every board maker forks when they create something custom?

It's tricky relying on non-standard in-compiler behavior (though, I guess that's what compiler flags are for).


Can't a macro handle it?


Maybe it's there for those who don't care about that sort of "detail". If all you care about is passing around some small numbers, int does the job :)!



