What is the difference between
#define CONSTANT_1 (256u)
#define CONSTANT_2 (0XFFFFu)
and
#define CONSTANT_1 (256)
#define CONSTANT_2 (0XFFFF)
When do I really need to add the u suffix, and what problems can I run into if I don't?
I am most interested in example expressions where one usage can go wrong while the other cannot.
The trailing u makes the constant have unsigned type. For the examples given, this is probably unnecessary and may have surprising consequences: a comparison such as -1 < CONSTANT_1 evaluates to false with the unsigned definition. The reason for this surprising result is that the comparison is performed using unsigned arithmetic, -1 being implicitly converted to unsigned int with the value UINT_MAX. Enabling extra warnings will save the day on modern compilers (-Wall -Werror for gcc and clang).

256u has type unsigned int, whereas 256 has type int. The other example is more subtle: 0xFFFFu has type unsigned int, and 0xFFFF has type int, except on systems where int has just 16 bits, in which case it has type unsigned int.

Some industry standards such as MISRA-C mandate such constant typing, a counterproductive recommendation in my humble opinion.