Train Test Split in Deep Learning

  • Fairly empty article. There’s a lot you can say about validation vs. test sets - for instance, what do you do if you’re happy with the validation results but then unhappy with the test set results? And how do you avoid “contamination” (overfitting your hyperparameters to the test set)? See the sketch after this list for the basic three-way split.

    Have a secret backup test set you hide from your team? Only look at the test set results once every full moon? Personally, I hire a stranger to look at the results, and they can only communicate the outcome via a 15-second interpretive dance.
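
For reference, here is a minimal sketch of the train/validation/test split being discussed, assuming scikit-learn and made-up toy data (the shapes and ratios are hypothetical): the test set is carved off first and, ideally, only consulted once all hyperparameter tuning against the validation set is finished.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-in data (hypothetical shapes, not from the article).
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)

# Carve off the test set first and lock it away: it should only be
# touched once, after all hyperparameter tuning is done.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.15, random_state=42
)

# Split the remainder into train and validation; the validation set is
# what you iterate against while tuning.
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.15 / 0.85, random_state=42
)

print(len(X_train), len(X_val), len(X_test))  # roughly a 70/15/15 split
```

If you do peek at the test set and then keep tuning, it effectively becomes a second validation set, which is exactly the contamination the notes above are joking about.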