- Instructions
- Summary
- File Description
- Dataset
- Modeling Process
- Screenshots
- Model results
- Effect of Imbalance
- Run the following commands in the project's root directory to set up your database and model.
- To run the ETL pipeline that cleans the data and stores it in the database:
python data/process_data.py data/disaster_messages.csv data/disaster_categories.csv data/DisasterResponse.db
- To run the ML pipeline that trains the classifier and saves it:
python models/train_classifier.py data/DisasterResponse.db models/classifier.pkl
- Run the following command in the app's directory to run your web app:
python run.py
- Go to http://0.0.0.0:3001/
In this project, I've analyzed disaster data from Figure Eight to build a model for an API that classifies disaster messages.
The data set contains real messages that were sent during disaster events. I've created a machine learning pipeline to categorize these events so that messages can be sent to an appropriate disaster relief agency.
This project includes a web app inside the app folder. Using it, an emergency worker can input a new message and get classification results across several categories. The web app also displays visualizations of the data.
- ETL Pipeline Preparation.ipynb: Notebook contains ETL Pipeline.
- ML Pipeline Preparation.ipynb: Notebook contains ML Pipeline.
- etl.db: Database produced by the ETL pipeline.
- categories.csv: Categories data set.
- messages.csv: Messages data set.
- classifier.pkl: Trained model pickle file.
- train_classifier.py: Python file for model training.
- transformation.py: Helper file for train_classifier.py.
- disaster_categories.csv: Disaster Categories data set.
- disaster_messages.csv: Disaster Messages data set.
- process_data.py: Python ETL script.
- app: Flask Web App folder.
- run.py: Flask Web App main script.
- img: Image folder.
- requirements.txt: Text file containing list of packages used.
- LICENSE: Project LICENSE file.
This disaster data is from Figure Eight. The dataset has two files: messages.csv and categories.csv.
- The two datasets were first merged into a single DataFrame, df, on the id column.
- Categories were split into separate category columns.
- Category values were converted to binary values (0 or 1).
- The original categories column in df was replaced with the new category columns.
- Duplicates were removed based on the message column.
- df was exported to the etl.db database (a minimal pandas sketch of these steps follows).
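A minimal pandas sketch of these steps, assuming the standard Figure Eight column layout (id, message, categories) and a hypothetical messages table name; the actual process_data.py may differ:

```python
import pandas as pd
from sqlalchemy import create_engine

# Load and merge the two datasets on their shared id column.
messages = pd.read_csv('messages.csv')
categories = pd.read_csv('categories.csv')
df = messages.merge(categories, on='id')

# Split the semicolon-separated categories string
# (e.g. "related-1;request-0;...") into one column per category.
cats = df['categories'].str.split(';', expand=True)
cats.columns = [value.split('-')[0] for value in cats.iloc[0]]

# Keep only the trailing digit of each value and convert it to an int.
for col in cats.columns:
    cats[col] = cats[col].str[-1].astype(int)

# Replace the original categories column with the new category columns.
df = pd.concat([df.drop(columns='categories'), cats], axis=1)

# Remove duplicates based on the message column.
df = df.drop_duplicates(subset='message')

# Export the cleaned DataFrame to the etl.db database
# ('messages' is an assumed table name).
engine = create_engine('sqlite:///etl.db')
df.to_sql('messages', engine, index=False, if_exists='replace')
```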
- Wrote a tokenization function to process the text data.
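A minimal sketch of such a tokenizer, assuming the usual NLTK normalize/tokenize/lemmatize approach; the project's actual function may differ:

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

nltk.download(['punkt', 'wordnet', 'stopwords'], quiet=True)

def tokenize(text):
    """Normalize, tokenize, and lemmatize a raw message string."""
    # Lowercase and replace anything that is not a letter or digit.
    text = re.sub(r'[^a-zA-Z0-9]', ' ', text.lower())
    lemmatizer = WordNetLemmatizer()
    return [lemmatizer.lemmatize(token) for token in word_tokenize(text)
            if token not in stopwords.words('english')]
```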
- Built a machine learning pipeline using TfidfVectorizer, RandomForestClassifier, and Pipeline.
- Split the data into training and test sets.
- Trained and evaluated a baseline RandomForestClassifier using the pipeline (a sketch of the pipeline, split, and evaluation follows).
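A hedged sketch of the pipeline, split, and baseline evaluation, continuing from the ETL sketch above. The dropped id, original, and genre columns are assumptions about the cleaned data, and the step name 'clf' is chosen to match the tuned parameter names reported below:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Assumed inputs: X is the raw message text, y the 0/1 category columns.
X = df['message']
y = df.drop(columns=['id', 'message', 'original', 'genre'])

# TF-IDF features feeding a random forest; RandomForestClassifier
# handles multi-label targets natively.
pipeline = Pipeline([
    ('tfidf', TfidfVectorizer(tokenizer=tokenize)),
    ('clf', RandomForestClassifier(random_state=42)),
])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
print(classification_report(y_test, y_pred, target_names=y.columns))
```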
- Then, using hyperparameter tuning with 5-fold cross-validation, fitted 100 models to find the best random forest model for predicting disaster response categories (a sketch of the search follows). The best Random Forest parameters were:
{'clf__criterion': 'entropy', 'clf__max_depth': 40, 'clf__max_features': 'auto', 'clf__random_state': 42}
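The exact search grid isn't recorded here; the sketch below reuses the pipeline and training split from the sketch above and assumes a grid of 20 parameter combinations, which with 5-fold cross-validation gives the 100 reported fits:

```python
from sklearn.model_selection import GridSearchCV

# Hypothetical grid: 2 x 5 x 2 x 1 = 20 candidates, times 5 folds = 100 fits.
# Note: 'auto' for max_features requires an older scikit-learn release;
# newer releases accept 'sqrt' instead.
param_grid = {
    'clf__criterion': ['gini', 'entropy'],
    'clf__max_depth': [10, 20, 30, 40, None],
    'clf__max_features': ['auto', 'sqrt'],
    'clf__random_state': [42],
}

search = GridSearchCV(pipeline, param_grid=param_grid, cv=5, n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_)
```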
- Using this best model, we built train_classifier.py.