
## How can I parse floating point numbers in C?

I'm writing a simple C parser and wondering how you parse a floating point number like these: `15.75, 1.575E1, 1575e-2, -2.5e-3, 25E-4`

Also, how can you read the 'e' part of the number?

You read the string character by character, building an integer first; once you reach a `'.'` or `'e'` character you know you've crossed into the decimal or exponent part and start building a different integer. Once you've extracted the individual parts of the string as integers, you can construct the float from them.
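As a rough illustration of that approach, a minimal sketch might look like the following. This is a naive version only: the fraction and exponent loops accumulate rounding error, so results are close but not correctly rounded - the papers below explain how to do it exactly.

```c
/* Naive sketch: split the string into sign, integer, fraction and
   exponent parts, accumulating digits as we go. Not correctly
   rounded - for illustration only. */
static double parse_float(const char *s)
{
    double sign = 1.0;
    if (*s == '-')      { sign = -1.0; s++; }
    else if (*s == '+') { s++; }

    /* integer part */
    double value = 0.0;
    while (*s >= '0' && *s <= '9')
        value = value * 10.0 + (*s++ - '0');

    /* fraction part after '.' */
    if (*s == '.') {
        s++;
        double scale = 0.1;
        while (*s >= '0' && *s <= '9') {
            value += (*s++ - '0') * scale;
            scale *= 0.1;
        }
    }

    /* exponent part after 'e' or 'E' */
    if (*s == 'e' || *s == 'E') {
        s++;
        int esign = 1, exp = 0;
        if (*s == '-')      { esign = -1; s++; }
        else if (*s == '+') { s++; }
        while (*s >= '0' && *s <= '9')
            exp = exp * 10 + (*s++ - '0');
        while (exp-- > 0)
            value = (esign > 0) ? value * 10.0 : value / 10.0;
    }
    return sign * value;
}
```

With this sketch, `parse_float("1575e-2")` and `parse_float("15.75")` both come out within a rounding error of 15.75.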

Be careful with large numbers - they can easily overflow your calculations and produce an incorrect float. Make sure you have a good understanding of how floats are represented in the IEEE 754 standard so you can compute them properly. Here are some papers discussing the problems that come up when implementing this and how to do it properly:

- How to Read Floating Point Numbers Accurately
- Correctly Rounded Binary-Decimal and Decimal-Binary Conversions

Edited by Mārtiņš Možeiko on

If an "e" exists, the left-hand side should be multiplied by 10 raised to the power of the right-hand side. Try to use as few accumulated rounding errors as possible when working with the double type, and use int64_t or uint64_t where possible.

To parse an integer, multiply the previous result by ten and add the current character's digit, obtained by subtracting '0' from the character code. If an odd number of negation signs was found, negate the integer before returning. If you find a decimal point along the way, switch into a mode where each new digit is scaled down by another factor of ten instead.
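The digit-accumulation step might look like this sketch (the helper name and signature are illustrative, not from any particular codebase):

```c
/* Illustrative helper: parse an optionally signed integer prefix by
   multiplying the running result by ten and adding each digit
   (character code minus '0'). An odd number of '-' signs negates
   the result. Advances the caller's cursor past the number. */
static long parse_int(const char **p)
{
    const char *s = *p;
    int negations = 0;
    while (*s == '-' || *s == '+') {
        if (*s == '-') negations++;
        s++;
    }
    long value = 0;
    while (*s >= '0' && *s <= '9')
        value = value * 10 + (*s++ - '0');
    *p = s;
    return (negations & 1) ? -value : value;
}
```

Taking the cursor by pointer lets the float parser call this once for each part (integer, fraction digits, exponent) and continue from where the digits end.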

Floating-point representations are not portable across different computers, so don't expect them to be exact. If you only need determinism, you can let int64_t be interpreted with the lower 32 bits as decimals to implement your own fixed-point decimal type.
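A minimal sketch of that 32.32 fixed-point idea, with illustrative names (the multiply shown deliberately drops low-order bits to stay within 64 bits; an exact version would need a 128-bit intermediate, and arithmetic right shift on negative values is assumed):

```c
#include <stdint.h>

/* 32.32 fixed point: upper 32 bits integer part, lower 32 bits
   fraction. All names are illustrative, not from any library. */
typedef int64_t fix32;

#define FIX32_ONE ((int64_t)1 << 32)

static fix32 fix32_from_int(int32_t i)   { return (int64_t)i << 32; }
static fix32 fix32_add(fix32 a, fix32 b) { return a + b; }

static fix32 fix32_mul(fix32 a, fix32 b)
{
    /* Pre-shift both operands so the product fits in 64 bits;
       this discards the lowest 16 fraction bits of each operand. */
    return (a >> 16) * (b >> 16);
}

static double fix32_to_double(fix32 a)   { return (double)a / FIX32_ONE; }
```

Because every operation is plain integer arithmetic, the results are bit-identical on every machine, which is the determinism being suggested here.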

For geometry, one can also create fraction types using a struct of two int64_t numbers, but these have to be reduced by the greatest common divisor once in a while, and the numerators and denominators will eventually fill up with prime factors that don't match the other side, until the type overflows because nothing can be reduced anymore. While they hold, though, fraction types allow exact comparisons that make geometric operations possible that otherwise aren't.
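A sketch of such a fraction type, with illustrative names (reduction by the greatest common divisor keeps the numbers small for a while, but as noted, products of unrelated primes can still overflow int64_t eventually):

```c
#include <stdint.h>

/* Illustrative fraction type: num/den as two int64_t values. */
typedef struct { int64_t num, den; } frac;

/* Euclid's algorithm on absolute values. */
static int64_t gcd64(int64_t a, int64_t b)
{
    if (a < 0) a = -a;
    if (b < 0) b = -b;
    while (b) { int64_t t = a % b; a = b; b = t; }
    return a;
}

/* Divide out the greatest common divisor to delay overflow. */
static frac frac_reduce(frac f)
{
    int64_t g = gcd64(f.num, f.den);
    if (g > 1) { f.num /= g; f.den /= g; }
    return f;
}

static frac frac_mul(frac a, frac b)
{
    frac r = { a.num * b.num, a.den * b.den };
    return frac_reduce(r);
}

/* Exact equality by cross-multiplication (can itself overflow
   when the components are large). */
static int frac_eq(frac a, frac b)
{
    return a.num * b.den == b.num * a.den;
}
```

For example, `frac_mul` of 1/2 and 2/3 reduces to 1/3 exactly, with no rounding at any point.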

Edited by Dawoodoz on

> Try to use as few accumulated rounding errors as possible when working with the double type

How?

> you can let int64_t be interpreted with the lower 32 bits as decimals to implement your own fixed-point decimal type

How then can I convert from int64 to float? Doesn't dividing out the lower 32 bits still lead to precision error? Or do you mean to use the fixed-point type all the time and never convert?

Replying to Dawoodoz (#26704)

> Floating-point representations are not portable across different computers, so don't expect them to be exact.

This is not true. Unless you're developing for ancient pre-IEEE 754 computers (which nowadays nobody is), the floating-point representation and operations will be bitwise exact across all modern systems. That includes Intel, AMD, ARM, and even GPUs.

Replying to Dawoodoz (#26704)

I've tried reading the two papers you gave, but I'm not a native speaker and there is a lot of notation that I don't understand. Can you give me a simple overview of some of the problems and implementations?

Replying to mmozeiko (#26699)

Game engines rarely implement this kind of functionality manually. They typically use the sscanf/strtof/strtod functions from the C runtime.

So if you want example implementations to read, look at some existing C runtime source. For example musl: https://git.musl-libc.org/cgit/musl/tree/src/internal/floatscan.c (start in the `__floatscan` function). musl usually does not have the fastest or best implementation, but it typically implements things in a small amount of code.

Another place for more modern C++ code is in llvm libc: https://github.com/llvm/llvm-project/blob/main/libc/src/__support/str_to_float.h#L894-L898

It uses the algorithm from the paper Number Parsing at a Gigabyte per Second, which is probably faster than the code in musl. Video from the author of the paper: https://www.youtube.com/watch?v=AVXgvlMeIm4

A shorter overview of this algorithm is available here: https://nigeltao.github.io/blog/2020/eisel-lemire.html and this entry is also interesting: https://nigeltao.github.io/blog/2020/parse-number-f64-simple.html

Both articles link to C and C++ code at the bottom.

Edited by Mārtiņš Možeiko on
Replying to Shastic (#26710)

Wow, thanks so much for all the links! I just finished a temporary, low-precision parser, so it's time to check these out.

Replying to mmozeiko (#26712)

I had my physics engine break down on AMD CPUs with forces truncated to zero, because it relied on the 80-bit floating-point precision of Intel processors. There is also talk among scientists about replacing the old floating-point standard that we currently use. Just as char might be 32 bits long in the future, float and double may have completely different representations the next time a new CPU is released. Unless the format is explicitly specified in the C++ standard with plans for emulating the type on future hardware, do not carelessly assume that it will always remain the same. Convention by history is not a substitute for properly written code according to standards.

Edited by Dawoodoz on
Replying to mmozeiko (#26707)

Yes, that is part of the "ancient" computers that nobody develops for anymore. The 80-bit float has not been a thing since everybody moved to 64-bit x86, 15 or so years ago. And it also does not exist natively on many architectures other than x86. In most places it is emulated, either with reduced precision, with double-double arithmetic, or in other ways.

What I was talking about was 32-bit and 64-bit floats in IEEE 754. Those are standardized, work exactly the same as they did 20 years ago, and will continue to work in the future. A lot of software is written with assumptions about how they work, so a CPU or GPU cannot change their behavior - that would break too much software - so newer ISAs will need to emulate them if they don't support them natively.

Edited by Mārtiņš Možeiko on
Replying to Dawoodoz (#26714)

So where does the C++ standard say that IEEE 754 must always be used in all future versions of C++? Having such a limitation would either be a huge roadblock to future innovation, or be broken yet again when optoelectronics require a new representation containing uncertainty intervals or such. Developers made many false assumptions about having reached a stable state throughout history, and history is still in the making. Eventually hardware will no longer support the old 64-bit double registers, because our programs will be the "ancient" stuff nobody needs to support anymore.

My parents also have a computer still running its original Windows 95 installation, and I write new code with Windows XP as the minimum requirement. The percentage of Windows users still on Windows XP today is comparable to the number of people using any version of Linux.

Edited by Dawoodoz on
Replying to mmozeiko (#26716)

What are you talking about? Obviously I don't know what code we'll be writing 50 years in the future. All I said is (quote):

> floating-point representation and operations will be bitwise exact across all modern systems

Current systems - all of them, including new ones like RISC-V - support and implement IEEE 754. So saying "don't use it" is pretty much the same as saying "don't use two's complement for integers".

What the newer C++ standards are doing is exactly the opposite of what you're proposing. If anything, they are actually specializing to what hardware supports instead of moving away from it. For example, C++ atomics in C++11, or two's complement in C++20. And an incredible amount of hardware that supports these things has been released, and it does not look like they are going away soon - scientific computing, gaming hardware, embedded hardware, all of them support IEEE 754 (and two's complement for integers).

Edited by Mārtiņš Možeiko on
Replying to Dawoodoz (#26717)

Assuming two's complement for signed integers will give you style warnings in modern C++ compilers. Not only because it's invalid C++, but also because such hacks make the code harder to read for beginner developers who don't know how two's complement works. Most people who use bitwise operations on signed integers also get incorrect results when numbers get big or negative.

Edited by Dawoodoz on
Replying to mmozeiko (#26718)

You're talking a lot about C++ standards, but you have not kept up with the latest ones. C++20 requires two's complement storage for signed integers. It is not invalid C++ anymore. This was accepted into the C++20 standard: https://wg21.link/p0907r3 and https://wg21.link/P1236R1
And a similar change is coming to C in C23.

Edited by Mārtiņš Možeiko on
Replying to Dawoodoz (#26719)