
Performance of the Rationale Predictor demo #1

@aifangchai


Great work on the hate speech detection. However, the model in the Rationale Predictor demo seems less accurate than expected. I tried a few sentences, and the results were labeled as the normal class instead of the abusive class. I suspect that the model is biased toward the normal class. Could you please suggest ways to improve the model's performance?

Here are some examples of sentences and their results:
- "I will kill you." → {'Normal': 0.51631415, 'Abusive': 0.48368585}
- "I hate the rich people." → {'Normal': 0.8278808, 'Abusive': 0.17211922}
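For what it's worth, the first example is borderline (the 'Abusive' probability is 0.484), so one quick, post-hoc mitigation to experiment with is lowering the decision threshold for the abusive class instead of taking the argmax. A minimal sketch, assuming the demo exposes per-class probabilities like the dicts above; the `classify` helper and the 0.4 threshold are hypothetical and would need to be tuned on a validation set:

```python
# Hypothetical post-hoc mitigation: flag text as abusive whenever the
# 'Abusive' probability clears a tuned threshold, rather than requiring
# it to exceed the 'Normal' probability (argmax).

def classify(probs: dict, abusive_threshold: float = 0.4) -> str:
    """Return 'Abusive' if its probability meets the threshold, else 'Normal'."""
    return "Abusive" if probs["Abusive"] >= abusive_threshold else "Normal"

examples = [
    ("I will kill you.", {"Normal": 0.51631415, "Abusive": 0.48368585}),
    ("I hate the rich people.", {"Normal": 0.8278808, "Abusive": 0.17211922}),
]

for text, probs in examples:
    print(f"{text} -> {classify(probs)}")
```

With a 0.4 threshold the first example would be flagged as abusive while the second stays normal. Thresholding only papers over the issue, though; if the training data is skewed toward the normal class, class weighting or resampling during training would address the bias more directly.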

Hope to hear from you soon.
Thank you.
