7. Vision and Learning
SETTING UP ZED CAMERA -
Step 1 - Download CUDA 10.0 for your system (deb version) from this link.
Step 2 - Install CUDA with the help of this guide
Step 3 - Download the correct ZED SDK version for your system from this link
Step 4 - Installation complete. Try the bundled sample applications to confirm the camera works.
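On Linux the ZED SDK ships as a `.run` installer. A minimal sketch of Steps 3-4, assuming an SDK 2.x installer for Ubuntu 18.04 with CUDA 10.0 (the filename is an assumption - substitute the exact file you downloaded):

```shell
# Hypothetical filename - use the installer that matches your Ubuntu/CUDA version.
chmod +x ZED_SDK_Ubuntu18_cuda10.0_v2.8.3.run
./ZED_SDK_Ubuntu18_cuda10.0_v2.8.3.run

# The sample applications are installed under /usr/local/zed/tools by default;
# list them and run one to confirm the camera is detected.
ls /usr/local/zed/tools
```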
OBJECT RECOGNITION THROUGH ZED CAMERA - PC VERSION (follow this link)
Step 1 - Install Python 3
Step 2 - Install OpenCV
Step 3 - Install TensorFlow
Step 4 - Download the ZED Python API
Step 5 - Copy the object_detection.py file into the /models/research directory
Step 6 - Install COCO
Step 7 - Download link for the different models
Step 8 - There are different models given in the code. Choose any one and download it from the above link
Step 9 - Copy the downloaded model into the same directory as the one written in the ob.py file. You'll get errors if the paths don't match; you'll have to change the path in three places (lines 183, 190, and 202)
Step 10 - Installation complete. Run the object_detection.py file.
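A quick way to catch the path mismatches from Step 9 before launching is to check the model files up front. The names below follow the usual TensorFlow object-detection layout and are assumptions - match them to the paths actually written in your ob.py:

```python
import os

# Hypothetical layout - set MODEL_NAME to the model you downloaded in Step 8.
MODEL_NAME = "ssd_mobilenet_v1_coco_2017_11_17"
PATH_TO_CKPT = os.path.join(MODEL_NAME, "frozen_inference_graph.pb")
PATH_TO_LABELS = os.path.join("data", "mscoco_label_map.pbtxt")

def check_paths(*paths):
    """Return the paths that are missing, so path mismatches surface
    before the model tries to load them."""
    return [p for p in paths if not os.path.exists(p)]

missing = check_paths(PATH_TO_CKPT, PATH_TO_LABELS)
if missing:
    print("Fix these paths before running object_detection.py:", missing)
```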
OBJECT RECOGNITION THROUGH ZED CAMERA - TX2 VERSION (follow this link)
Step 1 - All the steps are the same as the PC version, with one minor change: while installing protoc you have to compile protobuf from source, because the Jetson TX2 has a different architecture than a PC. Follow this [link](https://askubuntu.com/questions/1072683/how-can-i-install-protoc-on-ubuntu-16-04) and do exactly as it says. This will take around one hour.
Step 2 - Done
**LANE DETECTION**
Step 1 - Follow the steps in this link
Step 2 - Make sure the paths in the code from which the image/video is read are correct
Step 3 - Done!
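The linked lane-detection code is OpenCV-based; the underlying idea (lane markings produce strong horizontal intensity gradients) can be sketched in plain NumPy on a synthetic image:

```python
import numpy as np

def edge_columns(gray, threshold=50):
    """Return column indices where the horizontal intensity gradient is
    strong - a crude stand-in for the Canny edge step in the real code."""
    grad = np.abs(np.diff(gray.astype(float), axis=1))  # horizontal gradient
    strong = grad.max(axis=0) > threshold               # strongest edge per column
    return np.flatnonzero(strong)

# Synthetic road image: dark background with a bright "lane line" at columns 40-42.
img = np.zeros((100, 100), dtype=np.uint8)
img[:, 40:43] = 255

print(edge_columns(img))  # -> [39 42], the left and right edges of the line
```

The real pipeline adds blurring, a region-of-interest mask, and a Hough transform to join edge pixels into lines, but this is the gradient step everything else builds on.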
SETTING UP ENVIRONMENT - RECOMMENDED
(Conda makes it very easy to install dependencies and different libraries. A lot of problems came up while installing packages individually, but after installing conda the dependencies were easy to set up.)
Step 1 - Install conda
Step 2 - Verify the installation of conda by opening a new terminal. If (base) appears at the start of the prompt, the installation is complete
Step 3 - Install python if not already present
Python 3.7 was installed using the method given in this link
The Python version can be checked by typing into the terminal -> python --version
It will likely show Python 2.x (2.7 or 2.6)
Set the latest Python version (in this case Python 3.7.3) as the default by following these steps -
a) check the installed versions: ls /usr/bin/python* (your Python version should show up in the list; otherwise, reinstall it correctly)
b) add the alias to ~/.bashrc: alias python='/usr/bin/pythonxx'
c) reload the shell config: . ~/.bashrc
d) check the Python version again: python --version
This should set your chosen Python version as the default in the environment
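Steps (a)-(d) above as one shell sketch; `/usr/bin/python3.7` is an assumption, so replace it with whatever the `ls` on your machine shows:

```shell
ls /usr/bin/python*                                      # (a) find your version
echo "alias python='/usr/bin/python3.7'" >> ~/.bashrc    # (b) persist the alias
. ~/.bashrc                                              # (c) reload shell config
python --version                                         # (d) should now report 3.7.x
```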
Step 4 - Install OpenCV in conda using this command - conda install -c conda-forge opencv
Step 5 - Install PyTorch by choosing the appropriate options for your Python version and CUDA version (in this case Python 3.7, CUDA 10.0) from this link
Step 6 - If, while running any code, an error pops up saying "xx module not found" (for example "no module named socketio"), just google "conda socketio" and you will find the install command in the first link.
SELF DRIVING CAR ON UDACITY SIMULATOR - Version 1 [NOT GOOD ENOUGH]
Step 1 - Follow the steps in the given link
Step 2 - Install Unity 2018.1.6f1(64-bit) on Ubuntu from this link
Step 3 - Download GIT LFS
Step 4 - Go to this link
Step 5 - Clone the repository using the git lfs command
Step 6 - Open LauncherScene.unity from the directory /home/inspired/self-driving-car-sim/Assets/1_SelfDrivingCar/Scenes
Step 7 - Click on record and save the files in a folder named 'data'
Step 8 - Press R or click on record to start collecting data. Collect data for 3-4 laps
Step 9 - Stop recording and let Unity finish capturing the frames
Step 10 - Download/copy the code self_driving_car.py from this link
Step 11 - Make sure the address of the images in the stored CSV file is the same as the actual location of the images. (I hit an error pointing to line 33 of the code; the terminal said "NoneType object is not subscriptable". This happened because the image address stored in the CSV file differed from where the images actually were, so the code could not load the images and returned current_images as None.)
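A small checker like the sketch below (assuming the usual Udacity CSV layout: center, left, and right image paths in the first three columns) catches this mismatch before training starts:

```python
import csv
import os

def find_missing_images(csv_path):
    """Scan a driving_log-style CSV and return image paths that don't
    exist on disk - the cause of the 'NoneType object is not
    subscriptable' error described above."""
    missing = []
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            for p in row[:3]:          # center, left, right image columns
                p = p.strip()
                if p and not os.path.isfile(p):
                    missing.append(p)
    return missing
```

Run it as `find_missing_images("data/driving_log.csv")` (the CSV name is whatever the simulator wrote for you); an empty list means the paths line up.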
Step 12 - Run the code in Visual Studio Code
Step 13 - Remember that the validation loss should be greater than, but not much greater than, the training loss for the model to work well. I changed the number of epochs in the code from 22 to 18 because that gave the desired results. (In my model -> Validation Loss = 0.145, Training Loss = 0.127). If you are not getting the desired results, try running the self_driving_car.py file a few more times, or if that doesn't work, change the number of epochs.
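The Step 13 rule of thumb can be wrapped in a tiny helper; the 15% margin here is my own arbitrary threshold, not something from the original code:

```python
def looks_ok(train_loss, val_loss, margin=0.15):
    """Heuristic from Step 13: validation loss should exceed training
    loss, but not by more than `margin` (heavy overfitting).
    Returns True when the pair looks acceptable."""
    return train_loss <= val_loss <= train_loss * (1 + margin)

# The values reported above: Training Loss = 0.127, Validation Loss = 0.145.
print(looks_ok(0.127, 0.145))  # -> True (0.145 <= 0.127 * 1.15 ~= 0.146)
```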
Step 14 - Run the drive.py file from the above GitHub link. It will give some errors like "xx module not found". You know what to do for these kinds of errors now.
Step 15 - Download/copy the code from the model.py file from the same GitHub link. Make sure all the .py files and the model.h5 file are in the same directory.
Step 16 - After completing the training, it's time for testing. Run this command - python drive.py model.h5. The terminal window should say something like -
NOT RECORDING THIS RUN
wsgi starting on....
Step 17 - Next, open unity and load the self_driving_car project and open the LauncherScene.unity file and select autonomous mode. You should see the car running on its own.
NOTE: When I was using my own dataset and the number of epochs was 18, the car sometimes completed 1 lap and then went off track. This was maybe because my dataset was not good enough (blame my poor gaming skills) or because the number of epochs was too low. So, I downloaded the dataset provided by Udacity and increased the number of epochs from 18 to 25. After training, the car stayed on the road but wobbled a bit. On the first try, the car went off the road after completing 3 laps. I then increased the set_speed variable from 9 to 10 in my driv.py file and tried again with this configuration. Result: the car ran properly without leaving the track for more than 25 minutes; after that I closed Unity, so I don't know how much longer it would have run. Change the set_speed variable to see at which speed your model works best. If you get an error like ErrNo 98: address already in use, just kill the terminal and open a new one.
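The set_speed variable typically feeds a simple PI throttle controller inside the drive script. A minimal sketch of that idea (the class name and gains are assumptions, not the exact Udacity code):

```python
class SimplePI:
    """Minimal PI speed controller, similar in spirit to the one in
    drive.py: throttle grows when the measured speed is below the
    target (set_speed) and shrinks when it is above."""
    def __init__(self, kp=0.1, ki=0.002):
        self.kp, self.ki = kp, ki
        self.set_point = 0.0
        self.integral = 0.0

    def set_desired(self, speed):
        self.set_point = speed

    def update(self, measured):
        error = self.set_point - measured
        self.integral += error
        return self.kp * error + self.ki * self.integral

controller = SimplePI()
controller.set_desired(10)       # the set_speed tweak described in the note
print(controller.update(9.0))    # positive throttle: the car is too slow
```

Raising set_speed shifts the target the controller chases, which is why the change above affected how steadily the car held the track.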
FILES : model1.py -> my CNN model file
driv.py -> file for running the car
drive.py -> file that I used to train and create the model
model.h5 -> file created after successfully training the model
ALL FILES ARE IN THE 'drive' FOLDER
SELF DRIVING CAR ON UDACITY SIMULATOR - Version 2 [BETTER VERSION]
- All the steps are the same as above except a few.
- The change is in the model file and the drive file, which are taken from this link
- Remove the previous TensorFlow and install tensorflow-gpu in conda using this command - conda install -c anaconda tensorflow-gpu
- Run the model file to train your neural network
- The model takes a long time to train, possibly because it isn't actually using the GPU (it took more than 6 hours to train just 7 epochs)
- Run the drive.py file with the model-00x.h5 file that is created
- You'll see the car running properly at about twice the speed of the previous model, but it will sway a lot in the beginning
FILES : model2.py and drive.py in the directory /home/inspired/drive/Naoki
OBJECT RECOGNITION THROUGH JETSON TX2 ONBOARD CAMERA - BETTER FPS
Step 1 - Clone this repository https://github.com/jkjung-avt/tf_trt_models
Step 2 - Copy the object_detection folder from the research directory into the directory where you cloned the above repository (you can skip this step by changing a few lines in the camera_tf_trt.py file)
Step 3 - Download ssdlite_mobilenet_v2_coco from this link. You can choose any other model and see which gives better detection and FPS
Step 4 - Make changes in the camera_tf_trt.py file to use your model, and change the model name wherever required (2 or 3 lines under the #constants comment)
Step 5 - Run the file