Toxic-Comment-Classification

The model performs multi-label classification, assigning each comment zero or more of the six toxicity labels used in the challenge (toxic, severe_toxic, obscene, threat, insult, identity_hate).

The dataset used: https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge

The submission generated by this code achieves a private leaderboard score of 0.97747 and a public score of 0.97584.
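For orientation, a minimal multi-label baseline on this kind of data can be sketched as TF-IDF features with one logistic regression per label. This is an illustrative sketch with toy data, not the repository's actual model or its leaderboard-scoring pipeline.

```python
# Minimal multi-label baseline sketch (NOT the repo's actual model):
# TF-IDF features + one logistic regression per toxicity label.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

# Toy stand-in for the Jigsaw data: each comment maps to a 0/1 vector
# over the six labels (toxic, severe_toxic, obscene, threat, insult, identity_hate).
comments = [
    "you are awful",
    "have a nice day",
    "i will hurt you",
    "great point, thanks",
]
y = np.array([
    [1, 1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 0, 1, 1, 0, 1],
    [0, 0, 0, 0, 0, 0],
])

model = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(comments, y)

# One independent probability per label per comment.
probs = model.predict_proba(comments)
print(probs.shape)  # (4, 6)
```

The competition is scored by mean column-wise ROC AUC, which is why per-label probabilities (rather than hard class predictions) are what a submission needs.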

The output files and other resources can be found here: https://drive.google.com/drive/folders/14yZO6mAVwBk0vqPN2KJC4ME4OKrpE_CN?usp=share_link

Helpful Resources:

https://towardsdatascience.com/cleaning-preprocessing-text-data-by-building-nlp-pipeline-853148add68a

https://machinelearningmastery.com/clean-text-machine-learning-python/

https://www.analyticsvidhya.com/blog/2022/01/text-cleaning-methods-in-nlp/
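In the spirit of the cleaning articles above, a small standard-library-only preprocessing function might look like the following. It is a sketch of typical steps (lowercasing, URL/number removal, punctuation stripping), not the exact preprocessing used in this repository.

```python
# Small text-cleaning sketch using only the standard library
# (illustrative; not the repository's exact preprocessing).
import re
import string

def clean_comment(text: str) -> str:
    text = text.lower()                        # normalize case
    text = re.sub(r"https?://\S+", " ", text)  # drop URLs
    text = re.sub(r"\d+", " ", text)           # drop numbers
    # drop ASCII punctuation
    text = text.translate(str.maketrans("", "", string.punctuation))
    text = re.sub(r"\s+", " ", text).strip()   # collapse whitespace
    return text

print(clean_comment("Check THIS out!!! https://example.com 123"))
# -> check this out
```

Cleaned text like this is what would then be fed to the vectorizer/model.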
