
Toxic Comment Classification

Social media platforms and online forums give individuals the means to express their opinions on a wide range of issues, but these comments can contain explicit language that hurts readers. Such comments are commonly classified into six categories: toxic, severe toxic, obscene, threat, insult, and identity hate. To protect users from being exposed to offensive language, companies have started flagging comments and blocking users who are found guilty of using unpleasant language.[1] Various ML algorithms have been designed to tackle this problem. This project follows the toxic comment classification tutorial published on Medium; the link is in the reference section. As future work, the model could be hosted on AWS or any other cloud service platform.
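The referenced article builds an LSTM classifier in Keras that outputs an independent probability for each of the six labels. The sketch below shows the general shape of such a model; the layer sizes, vocabulary limit, and sequence length are illustrative assumptions, not the article's exact configuration.

```python
# A minimal multi-label LSTM classifier sketch (hyperparameters are
# assumptions, not the exact values used in the referenced article).
import numpy as np
from tensorflow.keras.layers import (Dense, Embedding, GlobalMaxPooling1D,
                                     Input, LSTM)
from tensorflow.keras.models import Model

MAX_WORDS = 20000   # vocabulary size for the tokenizer (assumed)
MAX_LEN = 100       # padded comment length (assumed)
NUM_LABELS = 6      # toxic, severe toxic, obscene, threat, insult, identity hate

inp = Input(shape=(MAX_LEN,))
x = Embedding(MAX_WORDS, 128)(inp)           # learn word embeddings
x = LSTM(64, return_sequences=True)(x)       # model the word sequence
x = GlobalMaxPooling1D()(x)                  # pool over time steps
out = Dense(NUM_LABELS, activation="sigmoid")(x)  # one probability per label

model = Model(inp, out)
# Sigmoid + binary cross-entropy lets each label fire independently,
# since one comment can be, e.g., both toxic and insulting.
model.compile(loss="binary_crossentropy", optimizer="adam")

# Predict on a dummy batch of two padded comments.
probs = model.predict(np.zeros((2, MAX_LEN), dtype="int32"))
print(probs.shape)  # (2, 6)
```

In practice the raw comments would first be tokenized and padded to `MAX_LEN` (e.g. with Keras's `Tokenizer` and `pad_sequences`) before training on the labeled dataset.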

Reference

  1. Toxic Comment Classification using LSTM and LSTM-CNN. https://towardsdatascience.com/toxic-comment-classification-using-lstm-and-lstm-cnn-db945d6b7986
