Update stop sign detector to run without EdgeTPU #953
Comments
It looks like TensorFlow has changed their approach to object detection; see https://www.tensorflow.org/lite/examples/object_detection/overview. So we might want to use the TensorFlow Object Detection API: https://github.com/tensorflow/models/tree/master/research/object_detection. There are now better models than MobileNet; a little research on the state of the art may yield higher accuracy than MobileNet can provide.
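For reference, the TFLite object-detection overview linked above wraps inference in the tflite-support Task Library. A minimal sketch of that flow (not the current donkeycar part; the model and image file names are placeholders, and the Task Library requires a model with embedded metadata, e.g. one from the TFLite model zoo):

```python
# Rough sketch of the TFLite Task Library detection flow from the overview page above.
# Assumes `pip install tflite-support` and a detection model with embedded metadata;
# file names here are placeholders, not donkeycar's actual paths.
from tflite_support.task import vision

detector = vision.ObjectDetector.create_from_file("detection_model_with_metadata.tflite")

image = vision.TensorImage.create_from_file("camera_frame.jpg")
result = detector.detect(image)

for detection in result.detections:
    top = detection.categories[0]
    print(top.category_name, top.score, detection.bounding_box)
```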
Note on performance: remember that we are also running the normal autopilot model, so the performance described above will be lower in practice because we would be running multiple models. No one has done this yet, so we don't know what the actual performance would be.
Ok, now Google has deprecated https://github.com/tensorflow/models/tree/master/research/object_detection and recommends https://github.com/tensorflow/models/tree/master/official/vision instead. Google sucks. This looks harder to use, so perhaps we should just go back to the original MobileNet suggestion in the initial issue description.
A user on Discord has also used this TensorFlow Lite example for people detection on small images (160x120) on the Raspberry Pi.
The StopSignDetector runs a canned version of MobileNet compiled for the EdgeTPU. This means that anyone who wants to use the stop sign detector must have an EdgeTPU. We should generalize the stop sign detector so it can also run the model on an RPi or a Nano.
Current code is using this model:
https://github.com/google-coral/edgetpu/raw/master/test_data/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
That model will recognize a stop sign without any additional training.
So we could use that same SSD MobileNet v2 COCO model in its more generic tflite variation, then modify the code to run it on the RPi; see https://github.com/google-coral/edgetpu/blob/master/test_data/ssd_mobilenet_v2_coco_quant_postprocess.tflite.
See https://medium.datadriveninvestor.com/mobile-object-detector-with-tensorflow-lite-9e2c278922d0
and https://www.tensorflow.org/lite/examples/object_detection/overview for some related info.
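As a rough sketch of the RPi path (untested; the image path and the stop-sign class index are assumptions, the index being taken from the coco_labels.txt that ships with the Coral examples), running the non-EdgeTPU model with the plain TFLite interpreter would look something like this:

```python
# Untested sketch: run ssd_mobilenet_v2_coco_quant_postprocess.tflite (the non-EdgeTPU
# variant linked above) with the plain TFLite interpreter on an RPi.
# The image path and the stop-sign class index are assumptions.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter  # or tf.lite.Interpreter with full TF

interpreter = Interpreter(model_path="ssd_mobilenet_v2_coco_quant_postprocess.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# This SSD model expects a 300x300 uint8 RGB image.
_, height, width, _ = input_details[0]['shape']
img = Image.open("camera_frame.jpg").convert("RGB").resize((width, height))
interpreter.set_tensor(input_details[0]['index'],
                       np.expand_dims(np.asarray(img, dtype=np.uint8), axis=0))
interpreter.invoke()

# Standard SSD postprocess outputs: boxes, class indices, scores, detection count.
boxes = interpreter.get_tensor(output_details[0]['index'])[0]
classes = interpreter.get_tensor(output_details[1]['index'])[0]
scores = interpreter.get_tensor(output_details[2]['index'])[0]

STOP_SIGN_CLASS = 12  # assumption: index of "stop sign" in the Coral coco_labels.txt
for box, cls, score in zip(boxes, classes, scores):
    if int(cls) == STOP_SIGN_CLASS and score > 0.5:
        print("stop sign detected:", score, box)
```

The same structure as the current EdgeTPU-only part could then pick the interpreter at construction time based on whether an EdgeTPU is present.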
Further, we could then compile the TensorFlow model using the Nvidia compiler to get a file that will run fast on the Nano's GPU.
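If we go the Nano route, one possibility is the TF-TRT converter; this is a hedged sketch that assumes we can export the detector as a TensorFlow SavedModel first (it does not operate on .tflite files, and the directory names here are hypothetical):

```python
# Hedged sketch of TF-TRT conversion targeting the Nano's GPU. Assumes the detector
# is available as a TensorFlow SavedModel; directory names are hypothetical.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="ssd_mobilenet_v2_coco_saved_model")
converter.convert()
converter.save("ssd_mobilenet_v2_coco_trt")
```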