Advanced Driver Assistance & Intelligent Systems Lab

The Advanced Driver Assistance & Intelligent Systems Lab focuses on developing innovative technologies to enhance vehicle safety, efficiency, and driver experience. Our research areas include adaptive cruise control, lane departure warning, forward collision warning, traffic signal recognition, tire pressure monitoring, night vision, pedestrian detection, parking assistance, automatic emergency braking, driver behavior monitoring, blind spot detection, electronic stability control, and alcohol interlock systems. We also work on intelligent systems using machine learning, sensor fusion, real-time data processing, vehicle-to-everything communication, and autonomous driving. Our lab collaborates with industry partners, academic institutions, and government agencies to push the boundaries of intelligent transportation systems. We provide state-of-the-art facilities and a collaborative environment for researchers, engineers, and students to create innovative solutions.

SUBJECT EXPERTS

Prof. Rajiv Ranjan Gupta

rajivranjan.gupta@pilani.bits-pilani.ac.in

Dr. Shree Prasad M


shreeprasad.m@pilani.bits-pilani.ac.in

Dr. Suparna Chakraborty

suparna.chakraborty@pilani.bits-pilani.ac.in

Dr. P. Priyalatha


p.priyalatha@wilp.bits-pilani.ac.in

FACILITIES

The Advanced Driver Assistance Systems (ADAS) lab immerses students in the key technologies behind modern intelligent vehicles. Students gain hands-on experience with sensor integration, including ultrasonic sensors, LiDAR, and radar, to understand real-time environmental perception. The lab emphasizes computer vision and machine learning, with practical projects in object detection, tracking, and image processing using convolutional neural network (CNN) models. Autonomous driving concepts are studied through simulations and control systems, where students learn about longitudinal and lateral vehicle control. The lab also introduces natural language processing and automation mechanisms, demonstrating the role of AI in analyzing data and automating complex tasks.
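The longitudinal-control exercises mentioned above can be illustrated with a minimal sketch: a PID speed controller driving a simple point-mass vehicle model toward a target speed. The gains, actuator limits, and drag term below are illustrative assumptions, not values used in the lab.

```python
# Minimal sketch of longitudinal (speed) control: a PID loop commands
# acceleration to bring a point-mass "vehicle" to a target speed.
# Gains, actuator limits, and the drag term are illustrative assumptions.

class PIDController:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured):
        error = target - measured
        # accumulate the integral with a simple anti-windup clamp
        self.integral = max(-10.0, min(10.0, self.integral + error * self.dt))
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def simulate_cruise(target_speed=25.0, steps=200, dt=0.1):
    """Simulate a cruise-control loop and return the final speed (m/s)."""
    pid = PIDController(kp=1.0, ki=0.02, kd=0.1, dt=dt)
    speed = 0.0
    for _ in range(steps):
        accel = pid.step(target_speed, speed)
        accel = max(-3.0, min(3.0, accel))        # actuator saturation (m/s^2)
        speed += accel * dt - 0.02 * speed * dt   # simple aerodynamic drag
    return speed
```

In the lab, the same loop structure would run inside a simulator such as CARLA, with the measured speed coming from the simulated vehicle instead of this toy model.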

Hardware Components: Autoclaves (for sterilization), GPU-based high-end system, 3D LiDAR, FMCW radar, Arduino UNO/Mega, Ultrasonic sensor, CARLA simulator, Red/Green/Blue LEDs, Jumper wires, Jetson Xavier, Logitech steering wheel, STM32 microcontroller, Linear actuator, MD20A motor driver, Camera, Breadboards, L298N motor driver, RC car, Infineon microcontroller, Vector toolchain, Lauterbach debugger, LCD, I2C converter, Perfume dispenser, Relays

Software Components: PyCharm IDE, Python 3, Ultralytics, OpenCV, Pandas, cvzone, PyTorch framework, Jupyter Notebook, mmWave Demo Visualizer, TbrigdeDrivers, Arduino IDE, Roboflow, PlatformIO, Conda, EasyOCR, PIL, Matplotlib, Robot Operating System 2 (ROS 2), TensorFlow, Keras, scikit-learn, AURIX Development Studio, C++, Open3D

EQUIPMENT AVAILABLE

PROJECT DETAILS

Driver Facial Emotion Recognition

A model that analyzes the driver's facial expressions and adjusts the car's interior settings based on their emotional state.
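As a sketch of how the recognized emotion could drive the cabin hardware, the hypothetical mapping below turns an emotion label (which would come from the facial-expression model) into commands for the LEDs, perfume dispenser, and LCD. The labels and actions are illustrative assumptions, not the project's actual configuration.

```python
# Hypothetical mapping from a recognized emotion label to cabin actions.
# In the project the label comes from the facial-expression model; here the
# classifier is out of scope and only the actuation mapping is shown.

CABIN_ACTIONS = {
    "angry":   {"led": "blue",  "perfume": True,  "lcd": "Take a deep breath"},
    "sad":     {"led": "green", "perfume": True,  "lcd": "Playing calm music"},
    "happy":   {"led": "green", "perfume": False, "lcd": "Drive safe!"},
    "neutral": {"led": "off",   "perfume": False, "lcd": ""},
}

def adjust_interior(emotion):
    """Return actuator commands (LED colour, perfume dispenser, LCD text)."""
    # fall back to neutral settings for unrecognized labels
    return CABIN_ACTIONS.get(emotion, CABIN_ACTIONS["neutral"])
```

On the real hardware, each command would be forwarded to the Arduino Uno, which switches the LEDs, relays, and perfume dispenser and writes the message to the LCD over I2C.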


Hardware and Software Requirements:


Arduino Uno Microcontroller, GSM 900A, LCD Display, I2C Converter, LEDs, Perfume Dispenser, High-End System 


Learning Outcomes: 

Intelligent Actuator Control

This project trains a custom convolutional neural network (CNN) to classify white boxes into two categories, "Defect" and "Perfect". When the model detects a "Defect", it activates a linear actuator, controlled by an STM32 microcontroller, to move the box out of the frame for sorting. Boxes classified as "Perfect" remain in place, simulating an automated quality-control system.
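The sorting decision itself can be sketched independently of the model and the hardware. In the snippet below, the `decide_actuation` helper, the confidence threshold, and the command strings sent to the STM32 are assumptions for illustration; the serial write is left out so the logic runs standalone.

```python
# Sketch of the sorting decision: a CNN classification result drives a
# linear actuator via an STM32. Labels, the confidence threshold, and the
# command bytes are illustrative assumptions; serial I/O is omitted.

def decide_actuation(label, confidence, threshold=0.6):
    """Return the command byte-string for the STM32 controlling the actuator."""
    if label == "Defect" and confidence >= threshold:
        return b"PUSH\n"   # extend the linear actuator to eject the box
    return b"HOLD\n"       # leave the box in place

def sort_boxes(detections):
    """detections: list of (label, confidence) pairs, one per observed box."""
    return [decide_actuation(lbl, conf) for lbl, conf in detections]
```

In the full system, each returned command would be written to the STM32 over a serial link (e.g. with `pyserial`), and the microcontroller firmware would translate it into actuator motion.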


Hardware and Software Requirements:


High-end GPU system, STM32 microcontroller, Linear actuator, Camera, Ultralytics, OpenCV, PlatformIO (STM32 IDE), Annotation tool


Learning Outcomes: 


License Plate Detection

This project captures live video from a camera, processes each frame to detect license plates, and uses a pre-trained Optical Character Recognition (OCR) model to read and display the license plate numbers in real-time.
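One step worth sketching is the post-processing of raw OCR output: OCR models can return text containing spaces and stray symbols, so a cleanup pass keeps only plate-like strings. The length rule and confidence cutoff below are illustrative assumptions, not a universal plate format.

```python
import re

# Hypothetical cleanup of raw OCR reads: strip non-alphanumeric characters
# and keep only candidates that look like a plate (assumed here to be
# 6-10 uppercase alphanumerics) with sufficient confidence.

PLATE_RE = re.compile(r"^[A-Z0-9]{6,10}$")

def clean_ocr_reads(reads, min_conf=0.5):
    """reads: list of (text, confidence) pairs from the OCR model."""
    plates = []
    for text, conf in reads:
        candidate = re.sub(r"[^A-Z0-9]", "", text.upper())
        if conf > min_conf and PLATE_RE.match(candidate):
            plates.append(candidate)
    return plates
```

In the full pipeline, `reads` would come from the OCR step applied to plate regions detected in each video frame, and the cleaned strings would be overlaid on the live feed.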


Hardware and Software Requirements:


High-end system, Camera, OpenCV, EasyOCR, PyTorch, NumPy


Learning Outcomes: 


SRGAN Implementation


This project employs a Super-Resolution Generative Adversarial Network (SRGAN) to reconstruct a higher-resolution image from a low-resolution input. The model was trained to generate a 128×128 image from a 32×32 input.
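SRGAN-style generators typically upscale with sub-pixel (pixel-shuffle) layers: a convolution produces r² groups of channels that are rearranged into an r-times larger spatial grid. The NumPy sketch below shows only that rearrangement, with shapes chosen to match the 32×32 → 128×128 setting above; it is an illustration of the operation, not the trained model.

```python
import numpy as np

# Sub-pixel (pixel-shuffle) rearrangement used for upscaling in SRGAN-style
# generators: (C*r*r, H, W) feature maps become a (C, H*r, W*r) image.

def pixel_shuffle(x, r):
    """x: array of shape (C*r*r, H, W) -> array of shape (C, H*r, W*r)."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)       # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)     # interleave: (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# One x4 shuffle (or two x2 shuffles) takes 32x32 features to 128x128,
# matching the upscaling factor described above.
lr_features = np.random.rand(3 * 16, 32, 32)   # 3 output channels * 4^2
sr = pixel_shuffle(lr_features, 4)             # shape (3, 128, 128)
```

In the actual PyTorch model this step corresponds to `nn.PixelShuffle`, preceded by a convolution that produces the extra channels.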


Hardware and Software Requirements:


High-End System, PyTorch, PIL, NumPy, Matplotlib, VS Code, Linux


Learning Outcomes: 


Traffic Optimization


This project utilizes YOLO, a powerful object detection model developed by Ultralytics, to detect vehicles in a video stream and count them as they cross a defined threshold line. The results, including vehicle IDs and timestamps, are recorded dynamically and saved in an Excel sheet for further analysis.
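The counting step can be sketched independently of the detector: given per-frame (track ID, centre-y) pairs from YOLO plus a tracker, a vehicle is counted once when its centre crosses the threshold line. The line position and the input format below are assumptions for illustration.

```python
# Sketch of line-crossing counting. Input: one dict per frame mapping
# track_id -> centre-y (pixels), as a tracker over YOLO detections would
# provide. The line position is an illustrative assumption.

LINE_Y = 400   # y-coordinate of the counting line (pixels)

def count_crossings(frames, line_y=LINE_Y):
    """Count vehicles whose centre crosses line_y moving downward, once each."""
    last_y, counted = {}, set()
    for detections in frames:
        for tid, cy in detections.items():
            prev = last_y.get(tid)
            # crossed if the centre was above the line and is now at/below it
            if prev is not None and prev < line_y <= cy and tid not in counted:
                counted.add(tid)
            last_y[tid] = cy
    return len(counted)
```

In the full project, each crossing event would also be timestamped and appended to an Excel sheet (e.g. via Pandas) along with the vehicle's track ID.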


Hardware and Software Requirements:


High-end system, Ultralytics, OpenCV, NumPy, Pandas, cvzone


Learning Outcomes: 


Voice Powered Car Manual Assistant

This project involves building an intelligent assistant that can load car manuals in PDF format, perform semantic searches to find relevant sections based on voice queries, and read the summarized results out loud using text-to-speech. The project leverages the SBERT model for efficient text embedding and similarity comparison, making it suitable for retrieving information from lengthy documents with natural language queries.
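The retrieval step can be sketched with a toy embedding in place of SBERT: embed each manual section and the query, then return the section with the highest cosine similarity. The bag-of-words `embed` stand-in and the sample manual text below are illustrative assumptions; the real project would use `sentence-transformers` embeddings instead.

```python
import numpy as np

# Semantic-search retrieval sketch: rank manual sections by cosine
# similarity to the query. embed() is a toy bag-of-words stand-in for
# SBERT; the sample manual sentences are invented for illustration.

def embed(text, vocab):
    """Bag-of-words vector of `text` over a fixed vocabulary."""
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def best_section(query, sections):
    """Return the section most similar to the query by cosine similarity."""
    vocab = sorted({w for s in sections + [query] for w in s.lower().split()})
    q = embed(query, vocab)
    best, best_sim = None, -1.0
    for s in sections:
        v = embed(s, vocab)
        denom = np.linalg.norm(q) * np.linalg.norm(v)
        sim = float(q @ v / denom) if denom else 0.0
        if sim > best_sim:
            best, best_sim = s, sim
    return best

manual = [
    "to change a flat tyre use the jack under the sill",
    "engine oil should be checked every month",
]
answer = best_section("how do i check the engine oil", manual)
```

In the full assistant, the query would arrive via `speech_recognition`, the similarity search would run over SBERT embeddings of PDF-extracted sections, and the chosen section would be read out with `pyttsx3`.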


Hardware and Software Requirements:


Standard laptop/PC with at least 8 GB RAM for basic NLP tasks, Microphone for voice input, Speaker, Python 3.x

Libraries: PyPDF2 (PDF processing), sentence-transformers (SBERT-based semantic search), pyttsx3 (text-to-speech), speech_recognition (voice input)


Learning Outcomes: 


STUDENTS WORKING ON THE PROJECTS