Abusive Content Classifier
Protect your forums or portals from abusive and offensive language. This API identifies offensive language with 98% accuracy and helps you fight online abuse and spam.
The classifier returns a confidence score for each of two classes: Offensive and Non-Offensive.
High accuracy in classifying the kind of abusive content commonly found in social media conversations and chatbot applications; the model achieves 98% accuracy on our internal test set.
The Komprehend abuse detection solution is built for the most demanding requirements and is already in use across industries, from social media monitoring to chatbots.
The Komprehend abuse detection API supports private cloud deployment via Docker containers as well as on-premise deployment, ensuring no data leakage.
It uses a Long Short-Term Memory (LSTM) network to classify text into different categories. LSTMs model sentences as a chain of forget-remember decisions based on context. The model is trained separately on social media data and on news data, so it can handle both casual and formal language, and we have also trained it on custom datasets for various clients.
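The forget-remember chain mentioned above can be sketched with a single LSTM cell step. This is a minimal, illustrative implementation in NumPy, not the production model: the weight layout, the `score_offensive` pooling (last hidden state fed to a sigmoid head), and all dimensions are assumptions for demonstration, and the weights below are random rather than trained.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step over a token embedding x.
    W stacks the four gate matrices, shape (4*H, H+D); b has shape (4*H,)."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x]) + b
    f = sigmoid(z[:H])            # forget gate: how much old context to keep
    i = sigmoid(z[H:2 * H])       # input gate: how much new info to remember
    o = sigmoid(z[2 * H:3 * H])   # output gate
    g = np.tanh(z[3 * H:])        # candidate memory
    c = f * c_prev + i * g        # forget-remember decision on the cell state
    h = o * np.tanh(c)
    return h, c

def score_offensive(token_ids, embeddings, W, b, w_out):
    """Run the cell over a token sequence; sigmoid head gives P(offensive)."""
    H = W.shape[0] // 4
    h, c = np.zeros(H), np.zeros(H)
    for t in token_ids:
        h, c = lstm_step(embeddings[t], h, c, W, b)
    return float(sigmoid(w_out @ h))

# Toy demo with random, untrained weights (hypothetical sizes)
rng = np.random.default_rng(0)
D, H, V = 8, 16, 50                      # embedding dim, hidden dim, vocab size
embeddings = rng.normal(size=(V, D))
W = rng.normal(scale=0.1, size=(4 * H, H + D))
b = np.zeros(4 * H)
w_out = rng.normal(scale=0.1, size=H)
p = score_offensive([3, 17, 42], embeddings, W, b, w_out)
```

In a real system the weights would be learned from labeled social media or news text, and the final score thresholded to pick Offensive versus Non-Offensive.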
The current best practice of letting users flag inappropriate content is unreliable and time-consuming: it requires a team of human moderators to check each flagged item and act on it. With our abuse classifier, forum operators can moderate content automatically and ban users who are repeat offenders.
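One way to wire the classifier into such a moderation loop is a simple strike counter that removes flagged posts and bans repeat offenders. This is a hypothetical sketch: the `Moderator` class, the 0.5 score threshold, and the strike limit are illustrative choices, and `classify` stands in for a call to the abuse detection API.

```python
from collections import Counter

class Moderator:
    """Auto-moderation loop: remove offensive posts, ban repeat offenders.
    `classify` is any callable returning P(offensive) for a text, e.g. an
    abuse-API call; threshold and strike limit are hypothetical policy knobs."""

    def __init__(self, classify, limit=3, threshold=0.5):
        self.classify = classify
        self.limit = limit
        self.threshold = threshold
        self.strikes = Counter()
        self.banned = set()

    def handle(self, user, text):
        if user in self.banned:
            return "rejected"                    # banned users cannot post
        if self.classify(text) >= self.threshold:
            self.strikes[user] += 1              # record the offense
            if self.strikes[user] >= self.limit:
                self.banned.add(user)
                return "banned"                  # repeat offender
            return "removed"                     # single offensive post
        return "published"

# Toy stand-in classifier for the demo: flags posts containing "idiot"
def toy_classify(text):
    return 1.0 if "idiot" in text.lower() else 0.0

mod = Moderator(toy_classify, limit=2)
```

A real deployment would persist strikes per user and likely route borderline scores to a human review queue instead of acting on them automatically.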
You can use the Abusive Content Classifier to keep the comments sections of blogs, forums, and similar sites free of inappropriate content. News media websites covering sensitive topics such as immigration, terrorism, and unemployment struggle to keep their discussions safe and abuse-free; such media houses can benefit from moderating this content automatically and thereby protect their brand integrity.