This project aims to develop an autonomous driving system for BeamNG.drive that uses real-time object detection for basic driving tasks such as lane following, steering, accelerating, and braking. Screenshots captured from the game are used to train a deep learning model that detects road lines and controls a vehicle in real time via simulated keypresses.
- Project Overview
- Setup and Installation
- Data Collection
- Training the Object Detection Model
- Real-Time Vehicle Control
- Contributing
- License
This project is divided into three main stages:
- Data Collection: Capture images from BeamNG.drive to create a dataset for training an object detection model.
- Model Training: Train a deep learning model using the collected images to detect street lines and road elements.
- Real-Time Vehicle Control: Utilize the trained model to control the vehicle in BeamNG.drive via keypresses, based on detected objects in the game environment.
- BeamNG.drive: Ensure BeamNG.drive is installed on your system.
- Python 3.9 or higher: Required for running scripts.
- Virtual Environment: Recommended for managing dependencies.
- NVIDIA GPU: Optional, but recommended for training the object detection model.
Clone the Repository:

```bash
git clone https://github.com/yourusername/beamng-autonomous-driving.git
cd beamng-autonomous-driving
```
Create a Virtual Environment:

```bash
python3 -m venv .venv
source .venv/bin/activate  # On Windows use `.venv\Scripts\activate`
```
Install Required Packages:

```bash
pip install -r requirements.txt
```
Install Detectron2:

Follow the official Detectron2 installation guide or use:

```bash
pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu118/torch2.0/index.html
```
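To verify the installation (assuming PyTorch is already installed), a quick sanity check like the following can be run:

```python
# Quick sanity check that Detectron2 and its PyTorch backend import correctly.
import torch
import detectron2

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("detectron2:", detectron2.__version__)
```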
Additional Dependencies:

Install the required system libraries (on Debian/Ubuntu):

```bash
sudo apt update
sudo apt install libgl1-mesa-glx libglib2.0-0 libsm6 libxrender1 libxext6
```
To collect data for training:
Run the Data Collection Script:

```bash
python data_collection.py
```
Before running the script, make sure there is an empty directory called `captured_images` in the root folder of this project. The script captures screenshots of the BeamNG.drive window when specific key combinations are pressed: `Ctrl + Shift + S` starts saving a picture every 1.5 seconds, and `Ctrl + Shift + R` changes the image number from which saving continues. Images are saved to the `captured_images` folder. If needed, the saving interval can be changed in `data_collection.py`.
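The following is only a simplified sketch of how such a capture loop could work, not the actual `data_collection.py`; it assumes the `mss` and `keyboard` packages and that the game runs on the primary monitor:

```python
# Simplified periodic screenshot-capture sketch (not the actual data_collection.py).
import time

import keyboard              # assumed dependency for global hotkeys
from mss import mss
from mss.tools import to_png

SAVE_INTERVAL = 1.5          # seconds between saved frames (adjust as needed)
OUTPUT_DIR = "captured_images"

capturing = False            # toggled by the hotkey below
frame_index = 0

def toggle_capture():
    global capturing
    capturing = not capturing

keyboard.add_hotkey("ctrl+shift+s", toggle_capture)

with mss() as sct:
    while True:
        if capturing:
            shot = sct.grab(sct.monitors[1])               # primary monitor
            path = f"{OUTPUT_DIR}/image_{frame_index:05d}.png"
            to_png(shot.rgb, shot.size, output=path)       # write frame as PNG
            frame_index += 1
        time.sleep(SAVE_INTERVAL)
```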
Organize and Annotate Images:

Use tools like LabelImg or automated annotation scripts to label the images with road line data.
Prepare the Dataset:

Ensure that the collected and annotated images are organized into a training dataset.
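If the annotations are exported in COCO format, one way to make them available to Detectron2 is to register them by name; the dataset name and `annotations.json` path below are placeholders, not files in this repository:

```python
# Illustrative only: registers a COCO-format annotation file with Detectron2
# so it can be referenced by name during training.
from detectron2.data.datasets import register_coco_instances

register_coco_instances(
    "beamng_road_lines_train",   # dataset name used later in the training config
    {},                          # extra metadata (none needed here)
    "annotations.json",          # COCO annotation file produced by labeling
    "captured_images",           # directory containing the screenshots
)
```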
Train the Model:

**Note:** this part of the project is under development and will be added later.

Run the training script using TensorFlow:

```bash
python train_model.py
```

This script uses TensorFlow to train an object detection model with the prepared dataset.
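Since `train_model.py` is not yet available, the sketch below is purely illustrative and uses Detectron2 (which the setup installs) rather than TensorFlow; the dataset name, class count, and solver settings are assumptions:

```python
# Illustrative training sketch with Detectron2 (not the actual train_model.py).
# Assumes the "beamng_road_lines_train" dataset registered earlier and a single
# "road_line" class.
import os

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("beamng_road_lines_train",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 1000            # increase for a real training run
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1   # e.g. a single "road_line" class
cfg.OUTPUT_DIR = "./output"

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```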
Evaluate and Test:

Test the trained model on a validation set to ensure it can accurately detect road lines.
Run the Control Script:

Use the trained model to control the vehicle in BeamNG.drive:

```bash
python ai_driving.py
```

This script captures images from the game, uses the model to detect road lines, and sends keypresses to control the vehicle.
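The core loop might look roughly like the simplified sketch below (not the actual `ai_driving.py`); it assumes `mss` for screen capture, a hypothetical `detect_road_lines()` function wrapping the trained model, and `pydirectinput` for game-compatible keypresses:

```python
# Simplified capture -> detect -> keypress loop (not the actual ai_driving.py).
import time

import numpy as np
import pydirectinput
from mss import mss

def detect_road_lines(frame):
    """Placeholder for model inference; should return a steering value
    in [-1, 1] derived from the detected road lines."""
    raise NotImplementedError

with mss() as sct:
    while True:
        frame = np.array(sct.grab(sct.monitors[1]))  # BGRA screenshot of the game
        steering = detect_road_lines(frame)

        # Translate the steering value into simple key taps.
        if steering < -0.1:
            pydirectinput.press("a")      # steer left
        elif steering > 0.1:
            pydirectinput.press("d")      # steer right

        pydirectinput.keyDown("w")        # keep accelerating briefly
        time.sleep(0.05)
        pydirectinput.keyUp("w")
```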
Adjust Parameters:

Adjust parameters like frame rate, detection threshold, and control sensitivity in `realtime_control.py` as needed.
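Such parameters are typically exposed as constants near the top of the script; the names below are hypothetical and may not match the actual file:

```python
# Hypothetical tuning constants; the real names in realtime_control.py may differ.
FRAME_RATE = 20              # screenshots processed per second
DETECTION_THRESHOLD = 0.5    # minimum confidence for a road-line detection
STEERING_SENSITIVITY = 0.8   # scales how strongly detections affect steering
```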
Contributions are welcome! Please fork the repository, create a new branch, and submit a pull request.
This project is licensed under the MIT License. See the LICENSE file for details.