Add support for half type for the HTTP RESTful API #1753

Open · wants to merge 1 commit into master
38 changes: 38 additions & 0 deletions tensorflow_serving/util/json_tensor.cc
@@ -876,6 +876,37 @@ bool IsNamedTensorBytes(const string& name, const TensorProto& tensor) {
absl::EndsWith(name, kBytesTensorNameSuffix);
}

// Reinterprets the bits of a 32-bit integer as an IEEE-754 float.
// (Uses std::memcpy rather than a union to avoid type-punning issues;
// requires <cstring>.)
float intBitsToFloat(int32_t x) {
  float f;
  std::memcpy(&f, &x, sizeof(f));
  return f;
}

// Decodes the raw bit pattern of an IEEE-754 half-precision (float16) value
// into a 32-bit float. (Despite the name, this converts *from* float16.)
float toFloat16(int bits) {
  int mant = bits & 0x03ff;  // 10-bit mantissa
  int exp = bits & 0x7c00;   // 5-bit exponent

  if (exp == 0x7c00) {       // Inf/NaN: map to the float32 Inf/NaN exponent.
    exp = 0x3fc00;
  } else if (exp != 0) {     // Normalized value: rebias the exponent (127 - 15).
    exp += 0x1c000;
    if (mant == 0 && exp > 0x1c400) {  // Smooth transition to Inf.
      return intBitsToFloat((bits & 0x8000) << 16 | exp << 13 | 0x3ff);
    }
  } else if (mant != 0) {    // Subnormal: renormalize the mantissa.
    exp = 0x1c400;
    do {
      mant <<= 1;    // mantissa * 2
      exp -= 0x400;  // exponent - 1
    } while ((mant & 0x400) == 0);
    mant &= 0x3ff;   // Discard the now-implicit leading bit.
  }
  // else +/-0 -> +/-0
  // Reassemble: sign << (31 - 15), (exponent | mantissa) << (23 - 10).
  return intBitsToFloat((bits & 0x8000) << 16 | (exp | mant) << 13);
}

Status AddSingleValueAndAdvance(const TensorProto& tensor, bool string_as_bytes,
RapidJsonWriter* writer, int* offset) {
@@ -896,6 +927,13 @@ Status AddSingleValueAndAdvance(const TensorProto& tensor, bool string_as_bytes,
success = writer->Int(tensor.int_val(*offset));
break;

case DT_HALF: {
  // half_val() stores the raw float16 bit pattern in an int32 field.
  const int src = tensor.half_val(*offset);
  const float dst = toFloat16(src);
Collaborator commented:
Don't roll your own float16 conversion (it's hard to get that right). Instead, use the standard @FP16 library that is already used in TF; specifically, use the fp16_ieee_from_fp32_value() API (from fp16.h) and write the output as a decimal number.

Example code:
https://github.com/tensorflow/tensorflow/blob/2ba6502de549c20c7498f133792cf3223eabc274/tensorflow/lite/delegates/gpu/common/convert.cc#L303

You can refer to the fp16 library via the @FP16 Bazel target in the BUILD file.

Also, your change is incomplete: you need to handle the input conversion (JSON -> tensor) for DT_HALF as well. Please update the AddValueToTensor() method.

Finally, add unit tests for your code.

Author replied:
Thank you, I'll try.

  success = WriteDecimal(writer, dst);
  break;
}

case DT_STRING: {
const string& str = tensor.string_val(*offset);
if (string_as_bytes) {
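
For reference, here is a minimal sketch of the FP16-based conversion the reviewer suggests, assuming the @FP16 library's fp16.h is available: it provides fp16_ieee_to_fp32_value() (half bits -> float) for the output direction and fp16_ieee_from_fp32_value() (float -> half bits) for the JSON -> tensor direction. The helper names HalfBitsToFloat and FloatToHalfBits below are illustrative, not part of the existing code; half_val() and WriteDecimal() come from the diff above.

#include <cstdint>

#include <fp16.h>  // provided by the @FP16 Bazel target

// Output direction (tensor -> JSON). TensorProto stores float16 values as the
// raw bit pattern in the repeated int32 half_val field, so take the low
// 16 bits and expand them to float32 before writing a decimal number.
inline float HalfBitsToFloat(int32_t half_bits) {
  return fp16_ieee_to_fp32_value(static_cast<uint16_t>(half_bits));
}

// Input direction (JSON -> tensor), for the AddValueToTensor() update the
// reviewer asks for: narrow the parsed JSON number to float, then pack it
// into the float16 bit pattern stored in half_val.
inline int32_t FloatToHalfBits(double json_number) {
  return static_cast<int32_t>(
      fp16_ieee_from_fp32_value(static_cast<float>(json_number)));
}

With helpers like these, the DT_HALF case in AddSingleValueAndAdvance() would reduce to something like
success = WriteDecimal(writer, HalfBitsToFloat(tensor.half_val(*offset)));
and the hand-rolled intBitsToFloat()/toFloat16() pair could be dropped.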