Great work on the hate speech detection. However, the model in the Rationale Predictor demo seems less accurate than expected. I tried a few sentences, and the results were labeled as the Normal class instead of the Abusive class, so I suspect the model is biased toward the Normal class. Could you please suggest ways to improve the model's performance?
Here are some examples of sentences and their results:
I will kill you. {'Normal': 0.51631415, 'Abusive': 0.48368585}
I hate the rich people. {'Normal': 0.8278808, 'Abusive': 0.17211922}
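In case it helps, a common workaround for this kind of class bias is to stop taking the argmax and instead flag a sentence as Abusive whenever its Abusive probability exceeds a tuned threshold. A minimal sketch in Python (the 0.4 threshold is a hypothetical value; it would need to be chosen on a validation set):

```python
def predict(probs, threshold=0.4):
    """Label from a probability dict like {'Normal': 0.52, 'Abusive': 0.48}.

    Flags Abusive whenever its probability meets the (hypothetical)
    threshold, rather than requiring it to beat the Normal score.
    """
    return 'Abusive' if probs['Abusive'] >= threshold else 'Normal'

# The two example sentences above:
print(predict({'Normal': 0.51631415, 'Abusive': 0.48368585}))  # Abusive
print(predict({'Normal': 0.8278808, 'Abusive': 0.17211922}))   # Normal
```

With this, the first example would be caught while the clearly Normal one is unchanged; retraining with class weights would be a more principled fix, but threshold tuning requires no retraining.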
Hope to hear from you soon.
Thank you.