Model works on float32 instead of uint8 #10760

Open
@naarkhoo

Description

Hi,

I am in the process of training an SSD model based on ssd_mobilenet_v2_320x320_coco17_tpu, and I noticed the model works on float32 rather than uint8. How can I make that change?
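For context, moving a float32 model to uint8 is quantization; in TensorFlow this is typically done after training with `tf.lite.TFLiteConverter` plus a representative dataset, rather than by editing the model itself. The sketch below (plain Python, illustrative values only, not taken from the model above) shows the underlying affine mapping `real = scale * (q - zero_point)` that uint8 quantization schemes such as TFLite's use:

```python
# Minimal sketch of asymmetric uint8 quantization:
#   real_value = scale * (quantized_value - zero_point)
# The range (-0.5, 1.5) and the sample inputs are illustrative assumptions.

def quant_params(rmin, rmax, qmin=0, qmax=255):
    """Compute scale and zero point mapping [rmin, rmax] onto [qmin, qmax]."""
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)  # range must contain 0.0
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = int(round(qmin - rmin / scale))
    return scale, zero_point

def quantize(xs, scale, zero_point, qmin=0, qmax=255):
    # Round to the nearest integer, then clamp into the uint8 range.
    return [min(qmax, max(qmin, int(round(x / scale + zero_point)))) for x in xs]

def dequantize(qs, scale, zero_point):
    return [scale * (q - zero_point) for q in qs]

scale, zp = quant_params(-0.5, 1.5)          # zp lands at 64 for this range
qs = quantize([-0.5, 0.0, 0.5, 1.5], scale, zp)
xs = dequantize(qs, scale, zp)               # round-trip error is at most one step
```

Each round-trip value differs from the original by at most one quantization step (`scale`), which is the accuracy cost you trade for smaller, faster uint8 inference.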

I would also appreciate pointers to other tricks for making the model run faster at inference time, for example a larger kernel size, a shallower model, or some score threshold? I feel these recommendations/explanations would be helpful when it comes to optimization.

here is the link to the colab notebook https://drive.google.com/file/d/1iqUgeabbTgfixehGomDoj5eHGfHd8Lvt/view?usp=sharing
