Model evaluation: quantifying the quality of predictions
Estimators have a score method providing a default evaluation criterion for the problem they are designed to solve. This is not discussed on this page, but in each estimator's documentation. Model selection and evaluation tools, such as model_selection.GridSearchCV and model_selection.cross_val_score, take a scoring parameter that controls what metric they apply to the estimators evaluated. All scorer objects follow the convention that higher return values are better than lower return values. Thus metrics which measure the distance between the model and the data, like metrics.mean_squared_error, are available as neg_mean_squared_error, which returns the negated value of the metric. The scorer objects for those functions are stored in the dictionary sklearn.metrics.SCORERS.
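The higher-is-better convention can be seen with a short sketch; the dataset and estimator here (make_regression, Ridge) are illustrative choices, not prescribed by the text:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Illustrative data and model; any regressor would do.
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = Ridge()

# Error metrics are exposed under neg_* scoring names so that higher
# is still better: the scorer negates the mean squared error.
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
print(scores)  # five non-positive values; closer to 0 is better
```

Because the scorer negates the error, the same `scoring` string works unchanged inside GridSearchCV, which always maximizes the score.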
Metrics available for various machine learning tasks are detailed in the sections below. Many metrics are not given names to be used as scoring values, sometimes because they require additional parameters, such as fbeta_score. In such cases, you need to generate an appropriate scoring object. The simplest way to generate a callable object for scoring is with make_scorer, which converts metrics into callables that can be used for model evaluation. If the python function you provide returns a loss, its output is negated by the scorer object, conforming to the cross-validation convention that scorers return higher values for better models. Again, by convention higher numbers are better, so if your scorer returns a loss, that value should be negated. When passing multiple metrics as a dict, note that the dict values can either be scorer functions or one of the predefined metric strings. Currently only those scorer functions that return a single score can be passed inside the dict.
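A brief sketch of both points above: wrapping a parameterized metric with make_scorer, then mixing that scorer with a predefined string in a multimetric dict. The dataset and classifier (make_classification, LogisticRegression) are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.model_selection import cross_validate

# Illustrative data and model.
X, y = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression(max_iter=1000)

# fbeta_score needs the extra `beta` parameter, so it has no
# predefined string name; make_scorer binds the parameter.
ftwo_scorer = make_scorer(fbeta_score, beta=2)

# Dict values may be predefined strings or scorer callables,
# each returning a single score.
results = cross_validate(
    clf, X, y, cv=3,
    scoring={"accuracy": "accuracy", "f2": ftwo_scorer},
)
print(results["test_accuracy"], results["test_f2"])
```

cross_validate reports one `test_<name>` array per entry in the scoring dict, which is why each scorer must return a single score.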