
Stagnant training accuracy and loss? #31

Description

@ayay-netizen

First off, I want to say that the idea behind this project is amazing. Thank you for creating and releasing it.

However, I'm struggling to use it properly.

First, I tried running it locally on my Windows x64 machine.
I followed the instructions for the muse_p3 example and used the example commands in the README. There were no errors, but the training accuracy and loss stayed relatively stagnant and never rose much above a low percentage. I then changed the model_type to a 3DCNN and subsequently to an LSTM to see whether those models would interpret the data better, but got similar behavior. I had assumed the example data you included would show the model reaching a high level of competency, so I felt something was off.
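To make "stagnant" concrete, here is a minimal sketch with made-up per-epoch loss values (not taken from my actual runs) showing the kind of curve I'm seeing, where the loss barely moves across epochs:

```python
# Hypothetical per-epoch training losses -- illustrative numbers only,
# not from the real muse_p3 runs.
losses = [0.693, 0.691, 0.692, 0.690, 0.691, 0.692]

# Total improvement from first to last epoch is essentially zero.
drop = losses[0] - losses[-1]
print(f"total loss drop over {len(losses)} epochs: {drop:.3f}")
```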

Assuming something was wrong with my setup, I then went to the Colab example. Since I am interested in MUSE data, I followed the link for the MUSE Jupyter notebook. Running all the cells on Colab with the defaults, I got the same behavior: validation accuracy barely went over 50%. Is this really intended? I also tried increasing the number of epochs to 350 in the Colab notebook, but to no avail.
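For context on why ~50% worries me: if the task is binary and the classes are roughly balanced (an assumption on my part, since I haven't inspected the example labels), a model that learns nothing already scores about 50% by always predicting the majority class. A quick chance-level sanity check, using synthetic labels standing in for the validation split:

```python
import numpy as np

# Hypothetical labels standing in for the validation split -- binary and
# roughly balanced, which is an assumption, not the real example data.
rng = np.random.default_rng(0)
y_val = rng.integers(0, 2, size=200)

# Majority-class baseline: predict the most common label everywhere.
majority = np.bincount(y_val).argmax()
baseline_acc = np.mean(y_val == majority)
print(f"chance-level baseline accuracy: {baseline_acc:.2f}")
```

If the trained models can't beat this baseline on the bundled example data, that would suggest something in my setup (or the defaults) is keeping them from learning.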

Please give me guidance on how I can get better results with this incredible pipeline.

Thank you!
