Cool project! Last year I built a self-driving robot for my bachelor's thesis. Instead of building an end-to-end deep learning pipeline, I used two neural nets: one trained with a genetic algorithm to drive the robot based on ultrasonic sensors, and another for object detection and recognition. Based on detected items (like road signs), the robot took different actions.
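Not my actual thesis code, but a toy sketch of the driving half: a tiny feedforward net maps ultrasonic readings to wheel speeds, and a GA evolves its weights. The fitness function here is a made-up stand-in; the real one scored how far the robot drove without hitting anything.

```python
import random
import numpy as np

N_SENSORS, N_HIDDEN, N_OUT = 3, 4, 2   # 3 ultrasonic inputs, 2 wheel speeds
GENOME_LEN = N_SENSORS * N_HIDDEN + N_HIDDEN * N_OUT

def drive(genome, sensors):
    # Tiny feedforward net: ultrasonic distances in, wheel speeds out.
    w1 = genome[:N_SENSORS * N_HIDDEN].reshape(N_SENSORS, N_HIDDEN)
    w2 = genome[N_SENSORS * N_HIDDEN:].reshape(N_HIDDEN, N_OUT)
    return np.tanh(np.tanh(sensors @ w1) @ w2)   # left/right in [-1, 1]

# Toy scenarios: normalized (left, front, right) distances, small = close.
SCENARIOS = np.array([[0.2, 1.0, 1.0],    # obstacle on the left
                      [1.0, 1.0, 0.2]])   # obstacle on the right

def fitness(genome):
    # Stand-in fitness: reward steering away from the nearer obstacle
    # (left wheel faster = turn right). The real score was distance
    # driven without a collision.
    score = 0.0
    for s in SCENARIOS:
        left, right = drive(genome, s)
        score += (left - right) if s[0] < s[2] else (right - left)
    return score

pop = [np.random.randn(GENOME_LEN) for _ in range(50)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                      # keep the best controllers
    # Refill the population with mutated copies of the elite.
    pop = elite + [e + 0.1 * np.random.randn(GENOME_LEN)
                   for e in random.choices(elite, k=40)]
print("best fitness:", max(fitness(g) for g in pop))
```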
Nice project and cool video! (Our video is not so well produced :P.) Ours was a summer project; we did it this January. The evolution of our project will probably be something like yours, since we didn't apply any object detection techniques.
Nice work. I worked on a similar project where I built an adaptive cruise control prototype. I used a camera and a CNN to estimate the distance to the car ahead of mine. From there you can adjust your cruise speed and hopefully maintain a set following distance.
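The control side is simple once you have a distance estimate. A toy sketch of that part only (the camera+CNN distance estimator is the hard bit and isn't shown here, and all the constants are made up):

```python
TARGET_GAP_M = 30.0   # desired following distance, metres
KP = 0.5              # proportional gain: (m/s) change per metre of gap error
MAX_SPEED = 30.0      # cruise ceiling, m/s

def adjust_speed(current_speed: float, distance_ahead: float) -> float:
    # distance_ahead is the CNN's estimate of the gap to the car in front.
    # Positive error = too far back, so speed up; negative = back off.
    error = distance_ahead - TARGET_GAP_M
    new_speed = current_speed + KP * error
    return max(0.0, min(MAX_SPEED, new_speed))

# e.g. cruising at 25 m/s with only a 20 m gap -> ease off to 20 m/s
print(adjust_speed(25.0, 20.0))
```

A real system would rate-limit the change and use something better than pure proportional control, but that's the core loop.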
There are clearly some ethical/safety issues with testing the whole thing at once while driving on the road. Some better applications would be an alert for following too close or drifting out of lane. Anyway, I find self-driving cars exceptionally interesting, and I enjoyed reading your write-up.
> Some better applications would be an alert for following too close or drifting out of lane.
One problem I have with the 'adaptive cruise control' in my car is that it has no situational awareness. It constantly speeds up and slows down in heavy highway traffic, which is quite jarring. What I want instead is integration with the overall traffic situation, adapting the speed to it automatically so the car doesn't zoom up to 75 mph and then come to a stop.
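Something like the following is what I mean. A toy sketch, assuming the car can observe the lead vehicle's recent speeds: aim for a moving average of traffic speed and rate-limit acceleration, instead of always chasing the fixed set-speed and braking hard.

```python
from collections import deque

class SmoothedACC:
    def __init__(self, set_speed: float, max_accel: float = 1.0):
        self.set_speed = set_speed        # driver's chosen ceiling, m/s
        self.max_accel = max_accel        # max speed change per tick, m/s
        self.traffic = deque(maxlen=30)   # recent lead-car speeds

    def step(self, current: float, lead_speed: float) -> float:
        self.traffic.append(lead_speed)
        # Aim for the average traffic speed, never above the set-speed.
        desired = min(self.set_speed, sum(self.traffic) / len(self.traffic))
        # Rate-limit so we don't zoom up and then slam the brakes.
        delta = max(-self.max_accel, min(self.max_accel, desired - current))
        return current + delta
```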
Great project! It's really interesting that you relied solely on a regular camera for depth perception. Way less expensive than working with RGBD images.
I built something similar for my senior design project. We made a game of Pacman using two little rovers running FreeRTOS with RN-131C wifi modules on PIC boards, plus an overhead PixyCam connected to a Raspberry Pi. The rovers had a color sensor on the front and back that let them track a black line on white paper. The overhead camera fed Pacman and ghost positions to the (identical) rovers, which were commanded by simply telling them what to do at the next intersection and what speed to move at. We did most of the heavy lifting on the Raspberry Pi, simply because it's quicker to write A* in Python than in FreeRTOS C. Once Pacman was seen driving over a colored dot on the map, the ghost would be commanded to run from Pacman. If the camera saw the two rovers touch, the game was over.
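Not our actual code, but the Pi-side planner was roughly this shape: A* over the intersection grid with a Manhattan-distance heuristic (`walls` here is a hypothetical set of blocked cells):

```python
import heapq

def astar(start, goal, walls, width, height):
    def h(p):  # Manhattan-distance heuristic to the goal intersection
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), start)]
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            break
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height) or nxt in walls:
                continue
            new_cost = cost[cur] + 1
            if nxt not in cost or new_cost < cost[nxt]:
                cost[nxt] = new_cost
                came_from[nxt] = cur
                heapq.heappush(frontier, (new_cost + h(nxt), nxt))
    if goal not in came_from:
        return []  # boxed in, no route
    # Walk back from the goal to recover the path.
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]

# e.g. a 5x5 map with a short wall segment
print(astar((0, 0), (4, 4), {(1, 0), (1, 1), (1, 2)}, 5, 5))
```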
Modularity was pretty key to reducing the work for that project. The rovers ran identical code and had no knowledge of whether they were a ghost or Pacman, and the command router didn't care whether the instructions it received came from an AI or a user GUI.
The platform was limited to what the professor provided; we only bought the color sensor arrays. Other teams went with simpler "games" but much more complex sensor processing on the rover. Those teams had much more trouble getting their designs to work, essentially needing two separate code bases.
Half of us were taking a class on AI and the other half were taking network application design, so the game seemed like an easy way to reuse code from those classes.
FormulaPi is a robot car racing series along similar lines. Submit your own code and race against teams from around the world. I've participated in the last couple of seasons and it's been great fun.
Video: https://www.youtube.com/watch?v=cUXh7iP3hoQ Code: https://github.com/kazepilot
https://www.youtube.com/watch?v=ECbU_EvyUqM
https://github.com/iinc/acc
https://www.formulapi.com