Created and coded by:
Behiç KILINÇKAYA (https://github.com/BehicKlncky) & Sencer YÜCEL (https://github.com/senceryucel)
It takes a .jpg image and crops it into the desired number of equal parts (the variable that sets the number of crops is CROP_COUNT). Then, for every mini-frame (cropped photo), the TFLite model runs, and the classification result is compared with the ground truth (.json annotation). The performance of the model is calculated based on this comparison; a rough sketch of the flow is given below.
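A minimal sketch of that crop-and-classify loop, under assumptions not taken from the repository: Pillow for image handling, `tf.lite.Interpreter` for inference, a square grid layout for the crops, a float-input model, and a hypothetical `"labels"` key holding one ground-truth class per tile. The real script and annotation format are the ones in this repository.

```python
import json
import math

import numpy as np
import tensorflow as tf
from PIL import Image


def crop_into_tiles(image: Image.Image, crop_count: int):
    """Split the image into crop_count equal tiles, row by row (assumes a square grid)."""
    grid = int(math.sqrt(crop_count))
    tile_w, tile_h = image.width // grid, image.height // grid
    for row in range(grid):
        for col in range(grid):
            left, top = col * tile_w, row * tile_h
            yield image.crop((left, top, left + tile_w, top + tile_h))


def classify_tile(interpreter: tf.lite.Interpreter, tile: Image.Image) -> int:
    """Run the TFLite model on a single tile and return the argmax class."""
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    _, height, width, _ = inp["shape"]
    # Assumes a float model normalized to [0, 1]; adjust for quantized models.
    data = np.asarray(tile.resize((width, height)), dtype=np.float32) / 255.0
    interpreter.set_tensor(inp["index"], data[np.newaxis, ...])
    interpreter.invoke()
    return int(np.argmax(interpreter.get_tensor(out["index"])))


def evaluate(image_path: str, annotation_path: str, model_path: str,
             crop_count: int = 16) -> float:
    """Compare per-tile predictions with ground-truth labels; return the fraction correct."""
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    with open(annotation_path) as f:
        # "labels" is a hypothetical key: one ground-truth class per tile.
        ground_truth = json.load(f)["labels"]
    image = Image.open(image_path).convert("RGB")
    predictions = [classify_tile(interpreter, tile)
                   for tile in crop_into_tiles(image, crop_count)]
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / crop_count
```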
- A test dataset of .jpg images and their annotations in .json; an example of the .json format is in the repository [Recommended: 100+ photos covering different scenarios (easy-medium-hard)].
PATH_TO_JSON = "PATH_TO_ANNOTATION_JSON_FILE"
PATH_TO_DATASET = "PATH_TO_YOUR_DATASET"
PATH_TO_CROPPED_PHOTOS_TO_SAVE = "PATH_TO_CROPPED_PHOTOS_TO_SAVE"
PATH_TO_TFLITE_MODEL = "PATH_TO_MODEL.tflite"
PATH_TO_SAVE_INFERENCE_RESULTS = "INFERENCE_RESULTS.txt"
CROP_COUNT = 16
What CROP_COUNT does is described in the Algorithm part above; see the sketch below for how these settings fit together.
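A minimal usage sketch under assumed conventions: loop over the .jpg files in PATH_TO_DATASET, score each one with the hypothetical `evaluate()` helper sketched in the Algorithm part, and append the result to PATH_TO_SAVE_INFERENCE_RESULTS. The single-JSON annotation lookup is simplified; the actual annotation format is the one shipped in this repository.

```python
import glob
import os

for image_path in sorted(glob.glob(os.path.join(PATH_TO_DATASET, "*.jpg"))):
    accuracy = evaluate(image_path, PATH_TO_JSON, PATH_TO_TFLITE_MODEL,
                        crop_count=CROP_COUNT)
    with open(PATH_TO_SAVE_INFERENCE_RESULTS, "a") as results:
        results.write(f"{os.path.basename(image_path)}: {accuracy:.3f}\n")
```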