• Deestan@lemmy.world · 11 months ago

    A full 100% sounds weird. It means perfect agreement with the ASD assessment, which itself isn’t bulletproof. Weird in a way that suggests mistakes in the data, e.g. all ASD pictures taken on the same day and carrying a date timestamp, “ASD” written into the metadata or filename, or different lighting in different labs.
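    A quick way to test for that kind of leak (just a sketch; the folder layout and label source here are made up) is to see whether a simple classifier can hit high accuracy from file metadata alone, without ever looking at the pixels:

    ```python
    # Leak probe sketch: train on *metadata only* -- capture date, file size,
    # filename -- and check whether that alone separates the classes.
    # The "dataset/<class>/*.jpg" layout and label names are hypothetical.
    from pathlib import Path
    import datetime

    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    def metadata_features(path: Path) -> list[float]:
        stat = path.stat()
        mtime = datetime.datetime.fromtimestamp(stat.st_mtime)
        return [
            mtime.year, mtime.month, mtime.day,    # all ASD pictures shot the same day?
            stat.st_size,                          # systematic size/compression differences
            float("asd" in path.name.lower()),     # label literally written in the filename
        ]

    paths = sorted(Path("dataset").glob("*/*.jpg"))
    X = [metadata_features(p) for p in paths]
    y = [p.parent.name for p in paths]             # class taken from the folder name

    probe = DecisionTreeClassifier(max_depth=3)
    scores = cross_val_score(probe, X, y, cv=5)
    print("metadata-only accuracy:", scores.mean())  # near 1.0 => labels leak through metadata
    ```

    If that number is anywhere near the headline figure, the model never needed the images at all.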

    I didn’t see any immediate problems in the published paper, but if these were my results I’d be too worried to publish them.

    • sosodev@lemmy.world · 11 months ago

      It sounds like the model is overfitting, or the evaluation itself is leaking. A 100% score on the held-out test set almost always means the test data isn’t truly independent of the training data, so the model aces the benchmark but flops in the real world.
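      For example, if several images from the same subject end up in both the train and test splits, a plain random split reports a near-perfect score that a subject-wise split can’t reproduce. A toy sketch of that effect on synthetic data (made-up subject IDs and features):

      ```python
      # Leaky vs. subject-wise cross-validation on synthetic data: features carry
      # a strong per-subject fingerprint but only a weak label signal.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score, GroupKFold, KFold

      rng = np.random.default_rng(0)
      n_subjects, imgs_per_subject = 60, 5
      subject_ids = np.repeat(np.arange(n_subjects), imgs_per_subject)
      labels = np.repeat(rng.integers(0, 2, n_subjects), imgs_per_subject)

      fingerprint = rng.normal(size=(n_subjects, 20))[subject_ids]   # identity, not diagnosis
      X = fingerprint + 0.1 * labels[:, None] + rng.normal(size=fingerprint.shape)

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      random_cv = cross_val_score(clf, X, labels, cv=KFold(5, shuffle=True, random_state=0))
      subject_cv = cross_val_score(clf, X, labels, cv=GroupKFold(5), groups=subject_ids)

      print("random split:  ", random_cv.mean())    # inflated -- the model memorizes subjects
      print("subject split: ", subject_cv.mean())   # closer to performance on new subjects
      ```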

      I don’t think we should put much weight on this news article. It’s just more overblown hype for the sake of clicks.