Using generic_128 for long double yields incorrect outputs on most platforms. The API appears to want you to do this:
long double value = /* ... */;
char buf[64];  /* a caller-owned buffer large enough for the formatted result */
int length = generic_to_chars(long_double_to_fd128(value), buf);
However, long_double_to_fd128 is implemented like this:
uint128_t bits = 0;
memcpy(&bits, &d, sizeof(long double));
which appears to assume that long double is an 80-bit extended float. The C standard makes no such guarantee; it only requires that long double be at least as precise as double. It is perfectly valid for long double to simply be a 64-bit double, and on nearly all non-Linux platforms that is the case.
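For reference, the representation varies by platform. This minimal diagnostic (not part of the library, just a sketch using the standard <float.h> macros) shows what a given compiler actually uses for long double:

#include <float.h>
#include <stdio.h>

int main(void) {
  /* LDBL_MANT_DIG is 64 for x87 80-bit extended, 53 when long double
     is just a 64-bit double, and 113 for IEEE binary128. */
  printf("sizeof(long double) = %zu, LDBL_MANT_DIG = %d\n",
         sizeof(long double), LDBL_MANT_DIG);
  return 0;
}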
Indeed, on macOS, formatting the long double value 202 with generic_128 prints 1.6918505631274746047E-4932.
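A self-contained repro (a sketch; the include path, buffer size, and manual null-termination are my assumptions, not taken from the library's docs):

#include <stdio.h>
#include "ryu_generic_128.h"  /* assumed location of the generic_128 declarations */

int main(void) {
  long double value = 202.0L;
  char buf[64];  /* assumed large enough for any generic_128 output */
  int len = generic_to_chars(long_double_to_fd128(value), buf);
  buf[len] = '\0';  /* terminate defensively, in case the API does not */
  /* On an affected platform this prints 1.6918505631274746047E-4932
     instead of a representation of 202. */
  printf("%s\n", buf);
  return 0;
}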
This is a massive silent gotcha.