Questions tagged [validation]

For questions related to validating a machine learning model, which is distinct from testing a model (testing is done after training). Validation is usually performed during training, either for early stopping (i.e. to detect when the model starts over-fitting) or for hyper-parameter optimization. However, people often use the terms "validation" and "testing" interchangeably (whether intentionally or not), so context needs to be taken into account.
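The training/validation/testing distinction above can be sketched as a simple three-way split in plain Python (the 60/20/20 proportions below are illustrative assumptions, not a rule):

```python
# A minimal sketch of a three-way split.
# Assumes the samples are already shuffled; proportions are illustrative only.
def split_dataset(samples, val_frac=0.2, test_frac=0.2):
    """Split a (pre-shuffled) list into train/validation/test parts."""
    n = len(samples)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = samples[:n_test]               # held out until the very end
    val = samples[n_test:n_test + n_val]  # used during training (early stopping, tuning)
    train = samples[n_test + n_val:]      # used to fit the model
    return train, val, test

train, val, test = split_dataset(list(range(100)))
# len(train) == 60, len(val) == 20, len(test) == 20
```

The key point the tag description makes is the role of each part: only the validation part may influence training decisions; the test part is reserved for the final, unbiased estimate.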

10 questions
11
votes
3 answers

Should I choose a model with the smallest loss or highest accuracy?

I have two machine learning models (I use LSTMs) that give different results on the validation set (~100 samples): Model A: accuracy ~91%, loss ~0.01; Model B: accuracy ~83%, loss ~0.003. The size and the speed of both models are almost the…
malioboro
  • 2,859
  • 3
  • 23
  • 47
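The tension in this question, where the model with the higher accuracy also has the higher loss, is easy to reproduce with toy numbers (the probabilities below are made up for illustration, not taken from the question's models):

```python
import math

def accuracy(probs, labels):
    """Fraction of samples where thresholding at 0.5 matches the binary label."""
    return sum((p >= 0.5) == bool(y) for p, y in zip(probs, labels)) / len(labels)

def log_loss(probs, labels):
    """Mean binary cross-entropy."""
    return -sum(math.log(p if y else 1 - p) for p, y in zip(probs, labels)) / len(labels)

labels = [1, 1, 1, 1]
model_a = [0.51, 0.51, 0.51, 0.51]  # always barely right: perfect accuracy, large loss
model_b = [0.99, 0.99, 0.99, 0.49]  # confident, with one narrow miss: lower accuracy, smaller loss

# accuracy: model_a = 1.0 > model_b = 0.75, yet log_loss: model_a ≈ 0.67 > model_b ≈ 0.19
```

Which model to prefer depends on what matters downstream: calibrated probabilities favour the lower loss, while hard classification decisions favour the higher accuracy.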
2
votes
0 answers

Which evaluation metrics should be used in training, validation and testing of a model?

Which specific performance evaluation metrics are used in training, validation, and testing, and why? My thinking is that error metrics (RMSE, MAE, MSE) are used in validation, while testing should use a wide variety of metrics? I don't think performance is…
1
vote
1 answer

Is the validation set still considered a validation set when it affects model weights?

As the title says: a validation set that affects the weights. But it might not be what you think. The training set affects the weights sample by sample via backpropagation, in steps like these: forward (inference) a train sample input to the…
1
vote
2 answers

How to choose validation data?

To train a deep learning model, I work with a dataset that its constructors divided into train and test parts. I'm stuck on how to select data for validation: from the train part or from the test part? It seems that dividing the test part…
user153245
  • 195
  • 9
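A common answer to this kind of question is to carve the validation set out of the train part and leave the test part untouched. A minimal sketch (the 15% fraction and the fixed seed are illustrative assumptions):

```python
import random

def carve_validation(train_samples, val_frac=0.15, seed=0):
    """Split a validation set off the training data; the test part is never touched."""
    samples = list(train_samples)
    random.Random(seed).shuffle(samples)      # shuffle a copy; the original stays intact
    n_val = int(len(samples) * val_frac)
    return samples[n_val:], samples[:n_val]   # (reduced train set, validation set)

train_part = list(range(200))                 # stand-in for the dataset's train split
new_train, val = carve_validation(train_part)
# len(new_train) == 170, len(val) == 30, and the two sets are disjoint
```

Taking validation data from the test part instead would leak information used for tuning into the final evaluation, making the test score optimistically biased.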
1
vote
0 answers

Why is my validation accuracy fluctuating between two inverse values?

I am currently going through the FastAI course and, to practise, I wanted to code from scratch a neural network that classifies the FashionMNIST dataset. Lately, I've been running into an issue where I get a consistent validation accuracy score of…
DerOeko
  • 13
  • 3
0
votes
0 answers

My validation set gives worse results, even when I use the train set as validation. What may cause that?

I have a question about how Keras handles validation. I have a pre-trained model (ResNet34 with a U-Net architecture) and want to train it on a custom dataset for binary segmentation. I created a validation set of 60 images, and when I tried to set an…
0
votes
1 answer

Low validation loss from the first epoch?

The validation loss is low from the first epoch and then decreases only slightly. What does this actually mean? Does it indicate that the model can effectively and quickly identify patterns for this task? I can see that the model works in…
RT.
  • 101
0
votes
1 answer

Is the model order of a model class (for example, the polynomial regression class) a hyperparameter or a tuning parameter?

We know that in ML we have tuning parameters and hyperparameters. Is the model order of a model class (for example, the polynomial regression class) a hyperparameter or a tuning parameter?
DSPinfinity
  • 1,223
  • 4
  • 10
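Whatever it is called, the model order is commonly selected the same way as other hyper-parameters: fit one model per candidate degree on the train set and pick the degree with the lowest validation error. A self-contained sketch on synthetic quadratic data, using a naive pure-Python least-squares fit (all numbers are made up for illustration):

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations (naive Gaussian elimination)."""
    n = degree + 1
    # Normal equations A c = b for the Vandermonde design matrix.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for i in range(n):                       # forward elimination (no pivoting)
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * c for a, c in zip(A[j], A[i])]
            b[j] -= f * b[i]
    coeffs = [0.0] * n                       # back substitution
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j] for j in range(i + 1, n))) / A[i][i]
    return coeffs

def mse(coeffs, xs, ys):
    preds = [sum(c * x ** i for i, c in enumerate(coeffs)) for x in xs]
    return sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)

# Quadratic data with a deterministic "noise" wiggle; degree chosen on the validation split.
train_x = list(range(8))
train_y = [x * x + (0.1 if x % 2 else -0.1) for x in train_x]
val_x = [0.5, 2.5, 4.5]
val_y = [x * x for x in val_x]
best = min(range(1, 5), key=lambda d: mse(polyfit(train_x, train_y, d), val_x, val_y))
# a degree-1 fit underfits badly; degrees >= 2 track the quadratic closely
```

Because the degree is fixed before fitting and chosen on held-out data, it behaves exactly like the hyper-parameters the tag description says validation is used to tune.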
0
votes
0 answers

Why would my neural network have either an accuracy of 90% or 10% on the validation data, given a random initialization?

I'm making a custom neural network framework (in C++, if that is of any help). When I train the model on MNIST, depending on how happy the network is feeling, it'll give me either 90%+ accuracy or get stuck at 9-10% (on the validation set). I shuffle…
-1
votes
1 answer

How to decide on the optimal model?

I have split the available database into 70% training, 15% validation, and 15% test, using holdout validation. I have trained the model and got the following results: training accuracy 100%, validation accuracy 97.83%, test accuracy 96.74%. In…