My work: I train a binary SVM classifier and test it on several test-data files, each containing data for a period of time. Within each file I apply some domain-specific logic to the set of predicted labels, and another piece of logic to obtain probability-like quantities: the probability of predicting the positive class on positive-class test data (TP rate) and on negative-class test data (FP rate). I then plot an ROC curve of these rates over an explicitly defined range of threshold values. I do this for different training experiments and judge performance by the AUC, i.e. the area under the ROC curve.
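To make the setup concrete, here is a minimal sketch of the thresholding step, assuming the SVM's raw decision values are available (e.g. from `decision_function`). The scores and labels below are made-up illustrative data, not from my actual experiments:

```python
import numpy as np

# Hypothetical SVM decision scores and true labels (1 = positive class).
scores = np.array([-1.2, -0.4, 0.1, 0.3, 0.8, 1.5])
labels = np.array([0, 0, 0, 1, 1, 1])

# Explicitly defined threshold range, swept over the score interval.
thresholds = np.linspace(scores.min(), scores.max(), 50)

P = labels.sum()               # total positive examples
N = len(labels) - P            # total negative examples
tpr = np.empty_like(thresholds)
fpr = np.empty_like(thresholds)
for i, t in enumerate(thresholds):
    pred_pos = scores >= t     # classify as positive above the threshold
    tpr[i] = (pred_pos & (labels == 1)).sum() / P  # TP rate at this threshold
    fpr[i] = (pred_pos & (labels == 0)).sum() / N  # FP rate at this threshold

# Thresholds ascend, so fpr/tpr descend; reverse to integrate left-to-right.
f, t_ = fpr[::-1], tpr[::-1]
auc = np.sum((f[1:] - f[:-1]) * (t_[1:] + t_[:-1]) / 2)  # trapezoidal AUC
```

With the perfectly separable toy data above, the AUC comes out as 1.0; in practice the curve traces out the TP/FP trade-off as the threshold moves.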
My questions are:
In an individual ROC graph, in the context of an SVM, it doesn't make sense to 'prefer' a particular threshold that gives me a satisfactory (TP, FP) pair, right? After all, the trained SVM will behave the way it was trained on any new feature vector in a real application. Does that mean I should compare training experiments by the TP rate at FP = 0 on the ROC curve? Or by the (TP, FP) pair at threshold = 0? Or is my current approach of just comparing the total AUC enough? The first two questions occurred to me when I started wondering what the purpose of plotting and inspecting the curve over different thresholds actually is.