While I was testing a floating-point-to-string function, I initialized a double variable with a float literal, and the result I got when printing the variable wasn't what I expected. Here are the values, bit patterns, and print results:
    float  f  = 123400000000000000000000000.0f; // 01101010'11001100'00100101'11101111                                     // Prints: 1.234e+26
    double d  = 123400000000000000000000000.0f; // 01000101'01011001'10000100'10111101'11100000'00000000'00000000'00000000 // Prints: 1.234000017665421e+26
    double d2 = 123400000000000000000000000.0;  // 01000101'01011001'10000100'10111101'11011001'11011110'11111110'11010110 // Prints: 1.234e+26
We can see that the second value's bit pattern is the float's mantissa with zeros appended, which results in the different printout. This is expected and OK.
My question is: is there a way to get a warning from MSVC, Clang, and GCC when initializing a double variable with a float literal? I searched a bit for MSVC but couldn't find one.
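In case no such compiler flag exists, one workaround I considered is enforcing it in code instead. This is only a sketch; `exact_double` is a hypothetical helper I made up, not a standard facility. It rejects anything that is not already a `double` at compile time, so a float literal becomes a hard error rather than a silent widening:

```cpp
#include <type_traits>

// Hypothetical guard: accepts only arguments whose type is exactly double,
// so passing a float literal fails to compile instead of silently converting.
template <typename T>
constexpr double exact_double(T v) {
    static_assert(std::is_same_v<T, double>,
                  "initializer must be a double literal, not a float literal");
    return v;
}

double ok = exact_double(123400000000000000000000000.0);     // compiles
// double bad = exact_double(123400000000000000000000000.0f); // compile error
```

This turns the warning I'm asking for into an error, which may be stricter than wanted, but it works on all three compilers without any flags.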