Why are different evaluation methods needed? For many real-world datasets, simply measuring overall accuracy will not produce the most useful models. This is usually the case when the dataset has an imbalanced class distribution. For example, consider a test dataset with 900 positive instances and 100 negative instances. If a model puts every instance into the positive class, it gets 90% correct. However, the model is useless, as it gives no information about the negative class.

The geometric mean is one way to give each class equal prominence, irrespective of the frequency of instances of that class. The geometric mean first computes the accuracy of each class separately, multiplies these accuracies together, and takes the nth root (where n is the number of classes). In the example above, the geometric mean comes out as zero, the worst possible score, because none of the negative instances are correctly classified by the model. Precision and recall are evaluation measures which highlight the performance of one class, usually the minority class.
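The calculation can be sketched in plain Ruby. Note that `geometric_mean` here is a hypothetical helper written for illustration; it is not part of the svm_toolkit API:

```ruby
# Hypothetical helper, for illustration only: compute the geometric
# mean of the per-class accuracies from actual/predicted label lists.
def geometric_mean(actual, predicted)
  correct = Hash.new(0)
  total   = Hash.new(0)
  actual.zip(predicted).each do |a, p|
    total[a] += 1
    correct[a] += 1 if p == a
  end
  per_class = total.keys.map { |c| correct[c].to_f / total[c] }
  per_class.reduce(:*) ** (1.0 / per_class.size)
end

# Scaled-down version of the example above: a model that labels
# everything positive scores 90% accuracy but a geometric mean of 0,
# because the accuracy on the negative class is 0.
actual    = [:pos] * 9 + [:neg]
predicted = [:pos] * 10
geometric_mean(actual, predicted)   # => 0.0
```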

The cross-validation process must select a model using an evaluation technique appropriate to the domain if the model is going to be useful in later tests. For example, in information retrieval the best choice may be to find a model with the best recall of the minority class, as the model is more likely to pick out all members of the class of interest, even if these are mixed with many instances from the other class.

Four evaluation types are currently provided with the svm_toolkit:

- Evaluator::OverallAccuracy, to compute the proportion of correctly classified instances.
- Evaluator::GeometricMean, to compute the geometric mean of the accuracy of each separate class.
- Evaluator::ClassPrecision(class), to compute the precision of the model for the given class.
- Evaluator::ClassRecall(class), to compute the recall of the model for the given class.
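To make the precision/recall distinction concrete: precision asks how many of the instances predicted as a class really belong to it, while recall asks how many of the class's actual instances were found. A sketch in plain Ruby (hypothetical helpers, not the toolkit's implementation):

```ruby
# Hypothetical helper, for illustration only.
# Precision: of the instances predicted as klass, how many are correct?
def precision(actual, predicted, klass)
  chosen = predicted.each_index.select { |i| predicted[i] == klass }
  return 0.0 if chosen.empty?
  chosen.count { |i| actual[i] == klass }.to_f / chosen.size
end

# Recall: of the instances actually in klass, how many were found?
def recall(actual, predicted, klass)
  members = actual.each_index.select { |i| actual[i] == klass }
  return 0.0 if members.empty?
  members.count { |i| predicted[i] == klass }.to_f / members.size
end

# The all-positive model from the earlier example, scaled down:
actual    = [:pos] * 9 + [:neg]
predicted = [:pos] * 10
precision(actual, predicted, :pos)   # => 0.9
recall(actual, predicted, :pos)      # => 1.0
recall(actual, predicted, :neg)      # => 0.0
```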

```ruby
# evaluate based on overall accuracy
puts model.evaluate_dataset(Dataset, :evaluator => Evaluator::OverallAccuracy)

# evaluate based on geometric mean of accuracies for separate classes
puts model.evaluate_dataset(Dataset, :evaluator => Evaluator::GeometricMean)

# evaluate based on precision of class labelled 1
puts model.evaluate_dataset(Dataset, :evaluator => Evaluator::ClassPrecision(1))

# evaluate based on recall of class labelled 1
puts model.evaluate_dataset(Dataset, :evaluator => Evaluator::ClassRecall(1))
```

User-Defined Evaluators

It is easy to define a new evaluator class. The class must support the following methods:

- add_result(actual, prediction), which is called with the actual and predicted classification for every instance in the dataset.
- value, to retrieve a number giving the performance of the model.
- better_than?(eval), to compare this evaluation with a given one (which may be nil), returning true if the model yielding this evaluation should be kept over the given one.

As an example, here is the definition of the overall-accuracy evaluator:

```ruby
# Measures accuracy as the percentage of instances
# correctly classified out of all the available instances.
class OverallAccuracy
  attr_reader :num_correct

  def initialize
    @num_correct = 0
    @total = 0
  end

  # As each instance result is added, store the total number
  # of instances and the number of correct predictions.
  def add_result(actual, prediction)
    @total += 1
    @num_correct += 1 if prediction == actual
  end

  # This evaluation is better than the given one if the
  # given one is nil, or if this accuracy is higher.
  def better_than? other
    other.nil? or self.value > other.value
  end

  # Return the accuracy as a percentage.
  def value
    if @total.zero?
      0.0
    else
      100.0 * @num_correct / @total
    end
  end

  def to_s
    "Overall accuracy: #{value}%"
  end
end
```

Finding the Best Model

The process of finding a good model with a radial-basis function (RBF) kernel is fairly straightforward. Once the training and cross-validation sets have been constructed, and lists of cost and gamma values to search over have been defined, the best model can be found using:
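The cost and gamma lists are conventionally exponentially spaced grids. The ranges below are illustrative assumptions, not values prescribed by svm_toolkit:

```ruby
# Illustrative exponential grids for the search; the ranges here
# are assumptions, chosen as a typical starting point.
Costs  = (-2..6).map { |i| 2.0 ** i }
Gammas = (-6..2).map { |i| 2.0 ** i }
```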

```ruby
best_model = Svm.cross_validation_search(
  TrainingData, CrossSet,
  Costs, Gammas,
  :show_plot => true,
  :evaluator => Evaluator::GeometricMean
)
```

The :show_plot parameter is given if you want a contour plot of the grid-search results. The :evaluator parameter gives the class name of the evaluation technique to use when evaluating the models on the cross-validation set.