Advanced Driver Assistance & Intelligent Systems Lab
The Advanced Driver Assistance & Intelligent Systems Lab focuses on developing innovative technologies to enhance vehicle safety, efficiency, and driver experience. Our research areas include adaptive cruise control, lane departure warning, forward collision warning, traffic signal recognition, tire pressure monitoring, night vision, pedestrian detection, parking assistance, automatic emergency braking, driver behavior monitoring, blind spot detection, electronic stability control, and alcohol interlock systems. We also work on intelligent systems using machine learning, sensor fusion, real-time data processing, vehicle-to-everything communication, and autonomous driving. Our lab collaborates with industry partners, academic institutions, and government agencies to push the boundaries of intelligent transportation systems. We provide state-of-the-art facilities and a collaborative environment for researchers, engineers, and students to create innovative solutions.
SUBJECT EXPERTS
Prof. Rajiv Ranjan Gupta
rajivranjan.gupta@pilani.bits-pilani.ac.in
Dr. Shree Prasad M
shreeprasad.m@pilani.bits-pilani.ac.in
Dr. Suparna Chakraborty
suparna.chakraborty@pilani.bits-pilani.ac.in
Dr. P. Priyalatha
p.priyalatha@wilp.bits-pilani.ac.in
FACILITIES
The Advanced Driver Assistance Systems (ADAS) lab immerses students in the key technologies behind modern intelligent vehicles. Students gain hands-on experience with sensor integration, including ultrasonic sensors, LiDAR, and radar, to understand real-time environmental perception. The lab emphasizes computer vision and machine learning, with practical projects in object detection, tracking, and image processing using convolutional neural networks (CNNs). Autonomous driving concepts are studied through simulations and control systems, where students learn about longitudinal and lateral vehicle control. The lab also introduces natural language processing and automation mechanisms, demonstrating the role of AI in analyzing data and automating complex tasks.
Hardware Components: Autoclaves (for sterilization), GPU-based high-end system, 3D LiDAR, FMCW radar, Arduino UNO/Mega, Ultrasonic sensor, Red/Green/Blue LEDs, Jumper wires, Jetson Xavier, Logitech steering wheel, STM32 microcontroller, Linear actuator, MD20A motor driver, Camera, Breadboards, L298N motor driver, RC car, Infineon microcontroller, Vector toolchain, Lauterbach debugger, LCD, I2C converter, Perfume dispenser, Relays
Software Components: PyCharm IDE, Python 3, Ultralytics, OpenCV, Pandas, cvzone, PyTorch framework, Jupyter Notebook, mmWave Demo Visualizer, TbridgeDrivers, Arduino IDE, Roboflow, PlatformIO, Conda, EasyOCR, PIL, Matplotlib, Robot Operating System 2 (ROS 2), TensorFlow, Keras, scikit-learn, AURIX Development Studio, C++, Open3D, CARLA simulator
PROJECT DETAILS
Driver Facial Emotion Recognition
A model that analyzes the driver's facial expressions and adjusts the car's interior settings based on the detected emotional state (a minimal code sketch follows the learning outcomes below).
Hardware and Software Requirements:
Arduino Uno Microcontroller, GSM 900A, LCD Display, I2C Converter, LEDs, Perfume Dispenser, High-End System
Learning Outcomes:
Learn how to apply CNNs to automatically detect and classify driver emotions (e.g., fatigue, distraction) from facial features in real time.
Understand the role of AI in enhancing road safety by developing systems that monitor driver emotions to prevent accidents and improve driving behaviour.
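The following minimal Python sketch illustrates one way such a loop could be wired together, assuming a hypothetical TorchScript model file emotion_cnn.pt trained on 48x48 grayscale face crops, an assumed label order, and an assumed serial port for the Arduino; it is an illustration of the approach, not the lab's implementation.

    # Sketch: detect the driver's face, classify its emotion with a trained
    # CNN, and signal the Arduino that drives the LEDs/perfume dispenser.
    # Model file, input size, labels, and serial port are all assumptions.
    import cv2
    import serial
    import torch

    LABELS = ["angry", "happy", "neutral", "sad"]    # assumed label order
    model = torch.jit.load("emotion_cnn.pt").eval()  # hypothetical model
    faces = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    arduino = serial.Serial("/dev/ttyUSB0", 9600)    # assumed port

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in faces.detectMultiScale(gray, 1.3, 5):
            crop = cv2.resize(gray[y:y+h, x:x+w], (48, 48)) / 255.0
            tensor = torch.tensor(crop, dtype=torch.float32).view(1, 1, 48, 48)
            with torch.no_grad():
                emotion = LABELS[model(tensor).argmax(dim=1).item()]
            arduino.write(emotion.encode())  # Arduino maps emotion to outputs
        cv2.imshow("Driver", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
    cap.release()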
Intelligent Actuator Control
This project trains a custom convolutional neural network (CNN) to classify white boxes into two categories, "Defect" and "Perfect." When the model detects a "Defect," it activates a linear actuator, controlled by an STM32 microcontroller, to move the box out of the frame for sorting. Boxes classified as "Perfect" remain in place, simulating an automated quality control system. A minimal sketch of the detect-then-actuate loop follows the learning outcomes below.
Hardware and Software Requirements:
A system equipped with high-end GPUs, STM32 microcontroller, Linear actuator, Camera, Ultralytics, OpenCV, PlatformIO (STM32 IDE), Annotation tool
Learning Outcomes:
Gain practical experience in training a CNN based model on a custom dataset.
Learn to integrate object detection with hardware control for real-time automation.
Understand the workflow of setting up a quality control system using computer vision and microcontroller-driven actuators.
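As an illustration, the sketch below pairs an Ultralytics classification model with a serial link to the STM32; the weights file defect_cls.pt, the port and baud rate, and the one-byte "extend actuator" command are assumptions, since the actual firmware protocol is project-specific.

    # Sketch: classify each frame as Defect/Perfect and, on a defect,
    # send a serial command so the STM32 extends the linear actuator.
    import cv2
    import serial
    from ultralytics import YOLO

    model = YOLO("defect_cls.pt")                  # hypothetical classifier
    stm32 = serial.Serial("/dev/ttyACM0", 115200)  # assumed port and baud

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = model(frame, verbose=False)[0]
        label = result.names[result.probs.top1]    # "Defect" or "Perfect"
        if label == "Defect":
            stm32.write(b"E")   # assumed command: push the box out of frame
        cv2.imshow("Quality Control", frame)
        if cv2.waitKey(1) == 27:
            break
    cap.release()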
License Plate Detection
This project captures live video from a camera, processes each frame to detect license plates, and uses a pre-trained Optical Character Recognition (OCR) model to read and display the license plate numbers in real time (a minimal sketch follows the learning outcomes below).
Hardware and Software Requirements:
High-end system, Camera, OpenCV, EasyOCR, PyTorch, NumPy
Learning Outcomes:
Understanding the concept of Optical Character Recognition, how it works, and its applications in real-world scenarios like license plate detection.
Familiarity with OpenCV for capturing and processing live video input, handling frames, and integrating real-time text detection.
Knowledge of how OCR models detect and localize text in images and video frames, and how they output recognized characters.
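A minimal version of the pipeline, using the EasyOCR and OpenCV packages listed above, might look like the sketch below; the 0.5 confidence threshold is an assumption, and filtering detections down to valid plate formats is omitted for brevity.

    # Sketch: read text from live video with EasyOCR and overlay results.
    import cv2
    import easyocr

    reader = easyocr.Reader(["en"])   # loads the pre-trained OCR model
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # readtext returns (bounding box, text, confidence) triples
        for box, text, conf in reader.readtext(frame):
            if conf > 0.5:  # assumed confidence threshold
                top_left = tuple(map(int, box[0]))
                cv2.putText(frame, text, top_left,
                            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.imshow("License Plates", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
    cap.release()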
SRGAN Implementation
This project employs a Super-Resolution Generative Adversarial Network (SRGAN) to produce a higher-resolution image from a low-resolution input. The model was trained to generate a 128×128 image from a 32×32 input; a sketch of the perceptual loss at its core follows the learning outcomes below.
Hardware and Software Requirements:
High-End System, PyTorch, PIL, NumPy, Matplotlib, VS Code, Linux
Learning Outcomes:
GAN Basics: Learn the core GAN architecture with a generator (creates data) and a discriminator (evaluates data). Focus on concepts like adversarial loss, minimax game, and the balance between the two networks.
Pre-trained Networks: SRGAN uses a pre-trained VGG network for perceptual loss. Learn to integrate pre-trained models (e.g., VGG) in frameworks like PyTorch or TensorFlow for feature extraction.
CNNs: Since SRGAN works with image data, understanding convolutional layers, filters, strides, padding, and pooling is crucial.
Neural Network Basics: Know about fully connected layers, activation functions (e.g., ReLU, LeakyReLU), and loss functions (e.g., MSE, Binary Cross-Entropy).
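To make the pre-trained-network idea concrete, the PyTorch sketch below implements a VGG-based perceptual loss; truncating VGG19's feature stack at index 36 is one common choice in SRGAN implementations, and the random tensors merely stand in for generator output and ground truth.

    # Sketch: perceptual loss that compares VGG19 feature maps of the
    # super-resolved (sr) and ground-truth (hr) images instead of pixels.
    import torch
    import torch.nn as nn
    from torchvision.models import vgg19, VGG19_Weights

    class PerceptualLoss(nn.Module):
        def __init__(self):
            super().__init__()
            vgg = vgg19(weights=VGG19_Weights.DEFAULT).features[:36].eval()
            for p in vgg.parameters():
                p.requires_grad = False   # VGG is a frozen feature extractor
            self.vgg = vgg
            self.mse = nn.MSELoss()

        def forward(self, sr, hr):
            return self.mse(self.vgg(sr), self.vgg(hr))

    loss_fn = PerceptualLoss()
    sr = torch.rand(1, 3, 128, 128)   # stand-in for generator output
    hr = torch.rand(1, 3, 128, 128)   # stand-in for the ground truth
    print(loss_fn(sr, hr).item())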
Traffic Optimization
This project utilizes YOLO, a powerful object detection model maintained by Ultralytics, to detect vehicles in a video stream and count them as they cross a defined threshold line. The results, including vehicle IDs and timestamps, are recorded dynamically and saved to an Excel sheet for further analysis. A minimal counting sketch follows the learning outcomes below.
Hardware and Software Requirements:
High-end system, Ultralytics, OpenCV, NumPy, Pandas, cvzone
Learning Outcomes:
Learn the fundamentals of object detection using YOLO (You Only Look Once), a state-of-the-art model family in computer vision.
Learn how to integrate object tracking algorithms with detection models to track objects across video frames, enabling practical applications like traffic monitoring, surveillance, and analytics.
Learn image processing techniques such as masking.
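The sketch below shows one way to assemble the pipeline with Ultralytics' built-in tracker and pandas; the weights file, video path, and counting-line position are assumptions, and writing the Excel file requires the openpyxl package.

    # Sketch: track vehicles, count each track ID the first time its
    # centre crosses a horizontal line, and log IDs with timestamps.
    from datetime import datetime
    import pandas as pd
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")   # assumed pre-trained weights
    LINE_Y = 400                 # assumed counting threshold in pixels
    counted, rows = set(), []

    for result in model.track(source="traffic.mp4", stream=True, verbose=False):
        if result.boxes.id is None:
            continue   # tracker produced no IDs for this frame
        for box, tid in zip(result.boxes.xyxy, result.boxes.id.int().tolist()):
            cy = float((box[1] + box[3]) / 2)   # bounding-box centre y
            if cy > LINE_Y and tid not in counted:
                counted.add(tid)
                rows.append({"vehicle_id": tid,
                             "timestamp": datetime.now().isoformat()})

    pd.DataFrame(rows).to_excel("vehicle_counts.xlsx", index=False)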
Voice Powered Car Manual Assistant
This project involves building an intelligent assistant that can load car manuals in PDF format, perform semantic searches to find relevant sections based on voice queries, and read the summarized results out loud using text-to-speech. The project leverages the SBERT model for efficient text embedding and similarity comparison, making it suitable for retrieving information from lengthy documents with natural language queries (a minimal sketch of the query loop follows the learning outcomes below).
Hardware and Software Requirements:
Standard laptop/PC with at least 8 GB RAM for basic NLP tasks, Microphone for voice input, Speaker, Python 3.x
Libraries: PyPDF2 (for PDF processing), sentence-transformers (for SBERT-based semantic search), pyttsx3 (for text-to-speech), speech_recognition (for voice input)
Learning Outcomes:
Understand and implement semantic search using sentence transformers.
Integrate text-to-speech (TTS) and speech recognition for natural language interaction.
Work with PDF text extraction and transformer-based models for NLP.
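A minimal version of the query loop, using the libraries listed above, might look like the sketch below; the manual filename is a placeholder, all-MiniLM-L6-v2 is one common SBERT checkpoint, and recognize_google relies on a free web API, so internet access is assumed.

    # Sketch: embed manual paragraphs with SBERT, match a spoken query by
    # cosine similarity, and read the best-matching paragraph aloud.
    import pyttsx3
    import speech_recognition as sr
    from PyPDF2 import PdfReader
    from sentence_transformers import SentenceTransformer, util

    pages = PdfReader("car_manual.pdf").pages   # placeholder manual
    paragraphs = [p.strip() for page in pages
                  for p in (page.extract_text() or "").split("\n\n") if p.strip()]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    corpus_emb = model.encode(paragraphs, convert_to_tensor=True)

    recognizer, tts = sr.Recognizer(), pyttsx3.init()
    with sr.Microphone() as mic:
        print("Ask a question about the manual...")
        audio = recognizer.listen(mic)
    query = recognizer.recognize_google(audio)

    hits = util.semantic_search(model.encode(query, convert_to_tensor=True),
                                corpus_emb, top_k=1)
    tts.say(paragraphs[hits[0][0]["corpus_id"]])
    tts.runAndWait()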
STUDENTS WORKING ON THE PROJECTS
Piyush Kumar - f20212979@hyderabad.bits-pilani.ac.in
Vansh Maheshwari - f20212534@hyderabad.bits-pilani.ac.in
Saketh Eunny - f20213106@hyderabad.bits-pilani.ac.in