This is a project I did on Behavioral Cloning for Autonomous Driving using a Convolutional Neural Network (CNN). If you are interested in the code, it is here: Project github repo.
📝 Project Overview
Behavioral cloning is a deep learning approach in which a neural network learns to mimic human behavior, in this case driving a car based on visual input. This project uses a LeNet-style convolutional neural network (CNN) architecture to predict steering angles from images captured by the front-facing cameras of a self-driving car.
📌 Goal: Train a CNN to predict steering angles from road images.
📌 Tech Stack: Keras, TensorFlow, Python, OpenCV
📌 Data: ~4825 images from a simulated environment.
📸 Training data
I used all three camera images to train my model. Increasing the steering angle correction factor from 0.2 to 0.4 noticeably improved the driving. While recording data I tried to keep the car in the middle of the road, but I also let it drift to the edge and recover before a crash occurred, so the model would learn recovery behavior. I recorded two laps of center-lane driving and one lap of the track in the opposite direction, which gave me around 4825 data points for training.
Center view | Left view | Right view
---|---|---
![]() | ![]() | ![]()
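The steering correction described above can be sketched as follows. This is a minimal illustration of applying the project's correction factor of 0.4 to the left and right camera labels; the function name is mine, not from the project code.

```python
# Illustrative sketch: deriving steering labels for all three camera views
# from the recorded center-camera angle, using the project's correction of 0.4.
CORRECTION = 0.4

def corrected_angles(center_angle, correction=CORRECTION):
    """Return (center, left, right) steering labels for one sample.

    The left camera sees the road shifted right, so its label steers
    right (+correction); the right camera's label steers left (-correction).
    """
    return center_angle, center_angle + correction, center_angle - correction

# Example: a straight-road frame recorded with a center angle of 0.0
center, left, right = corrected_angles(0.0)
```

This effectively triples the training data per recorded frame while teaching the model to steer back toward the lane center.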
🏗 Project Architecture
Project’s workflow:
📊 Key Components & Features
🔹 Component | 🔍 Details |
---|---|
📸 Data Collection | 3 Camera views (Left, Center, Right) to improve robustness. |
✂ Preprocessing | Cropping sky & hood, normalizing pixel values. |
🎭 Data Augmentation | Flipping, brightness adjustment, adding shadows. |
🏗 Model Architecture | CNN → 6 Conv layers + 4 Fully Connected layers. |
🛠 Training Strategy | Optimizer: Adam; loss function: MSE |
🔄 Overfitting Reduction | Dropout layers, pooling, increased dataset. |
🏁 Final Model Performance | Successfully completed a lap without drifting! |
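The augmentation steps named in the table (flipping, brightness adjustment, adding shadows) can be sketched with plain NumPy. The function names and the shadow-band approach are illustrative assumptions, not the project's exact implementation.

```python
import numpy as np

def flip(image, angle):
    """Mirror the frame left-right and negate the steering angle."""
    return image[:, ::-1, :], -angle

def adjust_brightness(image, factor):
    """Scale pixel intensities, clipping to the valid 0-255 range."""
    return np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def add_shadow(image, x0, x1, strength=0.5):
    """Darken a vertical band between columns x0 and x1 to mimic a cast shadow."""
    shadowed = image.astype(np.float32)
    shadowed[:, x0:x1, :] *= strength
    return shadowed.astype(np.uint8)

# Example on a uniform gray frame at the simulator's 160x320 resolution
img = np.full((160, 320, 3), 100, dtype=np.uint8)
flipped, new_angle = flip(img, 0.25)
```

Flipping in particular balances the dataset between left and right turns, which helps on a track that is mostly driven in one direction.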
🏗 LeNet CNN Model Architecture
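A minimal LeNet-style Keras model in the spirit of the pipeline described above. The layer sizes here follow classic LeNet and are illustrative assumptions, not the project's exact layer counts; the cropping margins (70 px sky, 25 px hood) are also assumed values.

```python
# Sketch of a LeNet-style steering-angle regressor, assuming 160x320x3 input.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Lambda, Cropping2D, Conv2D,
                                     MaxPooling2D, Flatten, Dense, Dropout)

def build_model():
    model = Sequential([
        # Normalize pixel values to [-0.5, 0.5] inside the graph
        Lambda(lambda x: x / 255.0 - 0.5, input_shape=(160, 320, 3)),
        # Crop the sky (top) and the car hood (bottom); margins are assumptions
        Cropping2D(cropping=((70, 25), (0, 0))),
        Conv2D(6, (5, 5), activation='relu'),
        MaxPooling2D(),
        Conv2D(16, (5, 5), activation='relu'),
        MaxPooling2D(),
        Flatten(),
        Dense(120, activation='relu'),
        Dropout(0.5),  # overfitting reduction, as listed in the table above
        Dense(84, activation='relu'),
        Dense(1)       # single regression output: the steering angle
    ])
    # Training strategy from the table: Adam optimizer, MSE loss
    model.compile(optimizer='adam', loss='mse')
    return model
```

Because steering prediction is a regression task, the network ends in a single linear unit rather than a softmax classifier.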
📈 Model Training & Performance
🔹 Training Set Size | ⚠ MSE Loss | 🚗 Performance |
---|---|---|
2048 images | ❌ High | Drifts off track |
4825 images | ✅ Lower | Completes lap 🚀 |
🏁 Final Results
✅ Model successfully drives autonomously without manual intervention.
✅ Improved performance using data augmentation + dropout layers.
✅ MSE Loss reduced significantly with more training data.
🚀 Key Observations
1️⃣ Small Dataset (2048 samples)
• 📉 Training loss drops quickly and sharply → overfitting risk! 🚨
• 📈 Validation loss stays flat or slightly increases → poor generalization

2️⃣ Large Dataset (4825 samples)
• 📉 Training loss decreases steadily
• 📉 Validation loss also decreases → better generalization 🏆
• Training and validation loss converge more smoothly → less overfitting
MSE with small data | MSE with large data (final solution)
---|---
![]() | ![]()
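Loss curves like those above can be produced from the history that Keras returns from training. The numbers in the stub below are illustrative, loosely based on the validation-loss figures reported in this write-up, not actual logged results.

```python
# Sketch: plotting training vs. validation MSE from a Keras-style history dict.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Stubbed history; in practice this comes from model.fit(...).history
history = {"loss":     [0.090, 0.060, 0.050, 0.045],
           "val_loss": [0.060, 0.055, 0.050, 0.045]}

plt.plot(history["loss"], label="training loss")
plt.plot(history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("MSE")
plt.legend()
plt.savefig("loss_curves.png")
```

Watching the gap between the two curves is the quickest way to spot the overfitting pattern described in the small-dataset case.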
📌 What these results mean
✅ Overfitting Reduction – increasing the data reduced the training–validation gap
✅ Better Generalization – validation loss decreased, meaning the model performs well on unseen data
✅ Final Model Stability – training and validation loss align, meaning the model is learning correctly
✅ Performance Boost – initial validation loss ~0.06, final validation loss ~0.045 → ~25% improvement! 📉
🔮 Next Steps
🔹 Test on real-world driving datasets 🚗
🔹 Optimize with Reinforcement Learning 🤖
🔹 Deploy on Embedded Hardware (Jetson Nano, Raspberry Pi)