This week the first batch of students started the Self-Driving Car Engineer Nanodegree on the online platform Udacity. In this three-part course, which takes nine months, participants learn about the algorithms and methods used to program autonomous vehicles. It is the first course of its kind and has drawn a lot of attention.
My application for the course was accepted, and after paying 800 dollars for the first three months (Term 1) I was assigned to the second batch, which will start the program on December 12th. Each batch seems to have 300 students. In this series I will post regular updates about what I learn in the course.
Today we received the students’ handbook for preparation. The handbook covers the frequently asked questions. The Udacity course team consists of 15 people, who either teach, lead the Open Source project, or take care of the student community.
The students come from every corner of the globe. In the students-only Facebook group, all the participants are introducing themselves, including their location. I have seen students from Mexico, the USA, Canada, Austria, Germany, Guatemala, Australia, and Colombia. Students are asked to share where they are located so that they can form local teams to collaborate on projects.
Each student has an assigned mentor, who will check in with them on a weekly basis to see how they are doing. An internal Slack forum allows students to ask questions, discuss issues, and make announcements.
The effort for the course is estimated at 10 hours per week, which is pretty intensive. Each student project will be graded after its deadline. Only students who have submitted their projects on time are invited to attend the hiring event with the hiring partners. In total, 14 companies have partnered with Udacity to hire alumni.
The course content is split into three parts, each of which lasts three months.
Term 1: Computer Vision and Deep Learning
In this term, you’ll become an expert in applying Computer Vision and Deep Learning to automotive problems. You will teach the car to detect lane lines, predict steering angles, and more, all based on just camera data!
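To give a feel for the kind of camera-only processing this term covers, here is a minimal lane line sketch I put together myself (not course material): it finds straight line segments in a road image with OpenCV’s Canny edge detector and a probabilistic Hough transform. The file name and all parameter values are placeholders I picked for illustration.

```python
# Toy lane line sketch (my own example, not course material):
# detect straight line segments in a road image using Canny edges
# and a probabilistic Hough transform.
import cv2
import numpy as np

def detect_lane_segments(image_path):
    img = cv2.imread(image_path)                  # BGR image from disk
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # reduce noise before edge detection
    edges = cv2.Canny(blurred, 50, 150)           # thresholds picked by hand
    # keep only the lower half of the image, where the road usually is
    mask = np.zeros_like(edges)
    h, w = edges.shape
    mask[h // 2:, :] = 255
    masked = cv2.bitwise_and(edges, mask)
    # probabilistic Hough transform returns line segments as (x1, y1, x2, y2)
    lines = cv2.HoughLinesP(masked, rho=1, theta=np.pi / 180, threshold=30,
                            minLineLength=40, maxLineGap=20)
    return lines if lines is not None else []

if __name__ == "__main__":
    # "road.jpg" is a placeholder path, not a file from the course
    for x1, y1, x2, y2 in (seg[0] for seg in detect_lane_segments("road.jpg")):
        print(f"segment from ({x1}, {y1}) to ({x2}, {y2})")
```

A real pipeline would also fit the segments into left and right lane lines and smooth them over time, but this already shows the basic camera-to-geometry step.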
More details about Term 1 were published by the course owner in this blog.
Term 2: Sensor Fusion, Localization, and Control
In this term, you’ll learn how to use an array of sensor data to perceive the environment and control the vehicle. You’ll evaluate sensor data from cameras, radar, lidar, and GPS, and use it in closed-loop controllers that actuate the vehicle.
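As a small taste of the control side, here is a toy PID controller I sketched myself (my own illustration, not course code): it turns a cross-track error, the distance from the lane center, into a steering command. The gains and the one-line plant model are made-up values for demonstration only.

```python
# Toy PID controller sketch (my own illustration, not course code):
# it turns a cross-track error (distance from the lane center) into a steering command.
class PIDController:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def control(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # steer against the error: proportional, integral, and derivative terms
        return -(self.kp * error + self.ki * self.integral + self.kd * derivative)

# Crude demonstration: the steering command nudges a made-up cross-track error toward zero.
pid = PIDController(kp=2.0, ki=0.01, kd=0.5)   # gains picked by hand, purely illustrative
cte = 1.0                                      # start one unit off the lane center
for step in range(10):
    steering = pid.control(cte, dt=0.1)
    cte += 0.1 * steering                      # extremely simplified plant model
    print(f"step {step}: steering={steering:+.3f}, cte={cte:.3f}")
```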
Term 3: Path Planning, Concentrations, and Systems
In this term, you’ll learn how to plan where the vehicle should go, how the vehicle systems work together to get it there, and you’ll perform a deep-dive into a concentration of your choice.
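To illustrate what path planning means at its simplest, here is a tiny sketch of my own (again, not course material): a breadth-first search that finds the shortest route through a small occupancy grid. Real planners work in continuous space with vehicle dynamics and cost functions, but the core idea of searching for a feasible route is the same.

```python
# Tiny path planning sketch (my own toy example, not course material):
# breadth-first search for the shortest path on a small occupancy grid,
# where 0 is free space and 1 is an obstacle.
from collections import deque

def plan_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # walk back from the goal to reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no path found

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (2, 0)))   # route around the blocked middle row
```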
My course starts on December 12th, and the submission deadline for the first project is a week later on December 19th. So much stress right before Christmas!