PERMETRICS Library

permetrics.utils package

permetrics.evaluator module

class permetrics.evaluator.Evaluator(y_true=None, y_pred=None, **kwargs)[source]

Bases: object

This is the base class for all performance metrics

EPSILON = 1e-10
SUPPORT = {}
get_metric_by_name(metric_name=<class 'str'>, paras=None) → dict[source]

Get a single metric by name, with that metric's parameters supplied as a dictionary

Parameters
  • metric_name (str) – Name of the metric to compute

  • paras (dict) – Dictionary of hyper-parameters for that metric

Returns

{ metric_name: value }

Return type

result (dict)

get_metrics_by_dict(metrics_dict: dict) → dict[source]

Get the results of several metrics, with metric names and parameters wrapped in a dictionary

For example:

{
    “RMSE”: {“multi_output”: multi_output},
    “MAE”: {“multi_output”: multi_output}
}

Parameters

metrics_dict (dict) – keys are metric names, values are dicts of that metric's parameters

Returns

e.g., { “RMSE”: 0.3524, “MAE”: 0.445263 }

Return type

results (dict)

get_metrics_by_list_names(list_metric_names=<class 'list'>, list_paras=None) → dict[source]

Get the results of a list of metrics by their names and parameters

Parameters
  • list_metric_names (list) – e.g., [“RMSE”, “MAE”, “MAPE”]

  • list_paras (list) – e.g., [ {“multi_output”: “raw_values”}, {“multi_output”: “raw_values”}, {“multi_output”: [2, 3]} ]

Returns

e.g., { “RMSE”: 0.25, “MAE”: [0.3, 0.6], “MAPE”: 0.15 }

Return type

results (dict)
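The return shape described above — a dict mapping metric names to values — can be reproduced with a minimal pure-Python sketch. RMSE and MAE below are the standard textbook formulas, not permetrics internals; the library itself vectorizes over numpy arrays and applies multi_output handling:

```python
import math

def rmse(y_true, y_pred):
    # Root Mean Squared Error over paired samples
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    # Mean Absolute Error over paired samples
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Assemble the {metric_name: value} result shape the docs describe
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]
results = {"RMSE": rmse(y_true, y_pred), "MAE": mae(y_true, y_pred)}
```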

get_output_result(result=None, n_out=None, multi_output=None, force_finite=None, finite_value=None)[source]

Get the final output result based on the selected parameters

Parameters
  • result – The raw result from metric

  • n_out – The number of columns in y_true or y_pred

  • multi_output – “raw_values” returns the multi-output result; a list of weights returns a single weighted output; anything else returns the mean result

  • force_finite – Whether to force the result to be a finite number

  • finite_value – The value used to replace infinite or NaN results.

Returns

Final output results based on selected parameter

Return type

final_result
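The documented multi_output behaviour can be illustrated with a small stand-in. The `aggregate` helper below is hypothetical, written only to mirror the three documented cases; it is not the library's actual function:

```python
def aggregate(result, multi_output):
    # "raw_values" -> keep per-column results; a weight list -> single
    # weighted value; anything else -> plain mean (mirrors the documented cases)
    if multi_output == "raw_values":
        return list(result)
    if isinstance(multi_output, (list, tuple)):
        return sum(r * w for r, w in zip(result, multi_output))
    return sum(result) / len(result)

per_column = [0.2, 0.4, 0.6]  # e.g. one error value per output column
```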

get_processed_data(y_true=None, y_pred=None)[source]
set_keyword_arguments(kwargs)[source]

permetrics.regression module

class permetrics.regression.RegressionMetric(y_true=None, y_pred=None, **kwargs)[source]

Bases: permetrics.evaluator.Evaluator

Defines a RegressionMetric class that holds all regression metrics (for both regression and time-series problems)

Parameters
  • y_true (tuple, list, np.ndarray, default = None) – The ground truth values.

  • y_pred (tuple, list, np.ndarray, default = None) – The prediction values.

A10(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

A10 index (A10): Best possible score is 1.0, bigger value is better. Range = [0, 1]

Notes

  • The a10-index is an engineering index for evaluating artificial intelligence models: it reports the fraction of samples whose predictions fall within ±10% of the experimental values

  • https://www.mdpi.com/2076-3417/9/18/3715/htm

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

A10 metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
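A minimal pure-Python sketch of the a10-index as described in the notes: the fraction of predictions within ±10% of the true value. The library's implementation is vectorized and adds multi_output and finite-value handling on top of this:

```python
def a10_index(y_true, y_pred):
    # Fraction of samples whose prediction falls within ±10% of the true value
    hits = sum(1 for t, p in zip(y_true, y_pred) if 0.9 <= p / t <= 1.1)
    return hits / len(y_true)
```

For example, with predictions [105, 230, 295, 480] against truths [100, 200, 300, 400], only the first and third fall within ±10%, giving 0.5.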

A20(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

A20 index (A20): Best possible score is 1.0, bigger value is better. Range = [0, 1]

Notes

  • The a20-index reports the fraction of samples whose predictions fall within ±20% of the experimental values

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

A20 metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

A30(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

A30 index (A30): Best possible score is 1.0, bigger value is better. Range = [0, 1]

Note: the a30-index reports the fraction of samples whose predictions fall within ±30% of the experimental values

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

A30 metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

ACOD(y_true=None, y_pred=None, X_shape=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Adjusted Coefficient of Determination (ACOD/AR2): Best possible score is 1.0, bigger value is better. Range = (-inf, 1]


Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • X_shape (tuple, list, np.ndarray) – The shape of the X_train dataset

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

AR2 metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
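A sketch of the standard adjusted-R² formula, 1 − (1 − R²)(n − 1)/(n − p − 1), where the number of predictors p would come from X_shape. This is a hypothetical single-column helper for illustration, not the library's code:

```python
def adjusted_r2(y_true, y_pred, n_features):
    n = len(y_true)
    mean_t = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    # Penalise R2 by the number of predictors (taken from X_shape in the library)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_features - 1)
```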

AE(y_true=None, y_pred=None, **kwargs)

Absolute Error (AE): Best possible score is 0.0, smaller value is better. Range = (-inf, +inf). Note: computes the absolute error between two numbers, or element-wise between a pair of lists, tuples, or numpy arrays.

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

Returns

AE metric

Return type

result (np.ndarray)

APCC(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Absolute Pearson’s Correlation Coefficient (APCC or AR): Best possible score is 1.0, bigger value is better. Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

AR metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

AR(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Absolute Pearson’s Correlation Coefficient (APCC or AR): Best possible score is 1.0, bigger value is better. Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

AR metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
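A pure-Python sketch of the absolute Pearson correlation for a single column (the library handles multi-output and finite-value replacement on top of this standard formula):

```python
import math

def abs_pearson(y_true, y_pred):
    # |Pearson r|: absolute value of covariance over the product of deviations
    n = len(y_true)
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in y_true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in y_pred))
    return abs(cov / (st * sp))
```

Note that a perfectly anti-correlated prediction also scores 1.0, which is why the range is [0, 1].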

AR2(y_true=None, y_pred=None, X_shape=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Adjusted Coefficient of Determination (ACOD/AR2): Best possible score is 1.0, bigger value is better. Range = (-inf, 1]


Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • X_shape (tuple, list, np.ndarray) – The shape of the X_train dataset

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

AR2 metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

CE(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=-1.0, **kwargs)

Cross Entropy (CE): Range = (-inf, 0]. There is no general guideline for interpreting this value on its own.


Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = -1.0)

Returns

CE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

CI(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Confidence Index (CI), also known as Performance Index (PI): Best possible score is 1.0, bigger value is better. Range = (-inf, 1]

Notes

  • Reference evapotranspiration for Londrina, Paraná, Brazil: performance of different estimation methods

  • > 0.85, Excellent

  • 0.76-0.85, Very good

  • 0.66-0.75, Good

  • 0.61-0.65, Satisfactory

  • 0.51-0.60, Poor

  • 0.41-0.50, Bad

  • < 0.40, Very bad

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

CI (PI) metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

COD(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Coefficient of Determination (COD/R2): Best possible score is 1.0, bigger value is better. Range = (-inf, 1]


Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

R2 metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
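For a single column, COD/R2 is the familiar 1 − SS_res/SS_tot; a minimal sketch (the library's version is vectorized and adds multi_output handling):

```python
def r2(y_true, y_pred):
    # Coefficient of determination: 1 - residual sum of squares / total sum of squares
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```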

COR(y_true=None, y_pred=None, sample=False, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)
Correlation (COR): Best possible value = 1, bigger value is better. Range = [-1, +1]
  • measures the strength of the relationship between variables

  • is the scaled measure of covariance. It is dimensionless.

  • the correlation coefficient is always a pure value and not measured in any units.

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • sample (bool) – If True, compute the sample statistic; otherwise the population statistic

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

COR metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

COV(y_true=None, y_pred=None, sample=False, multi_output='raw_values', force_finite=True, finite_value=-10.0, **kwargs)
Covariance (COV): There is no best value, bigger value is better. Range = (-inf, +inf)
  • is a measure of the relationship between two random variables

  • evaluates how much – to what extent – the variables change together

  • does not assess the dependency between variables

  • Positive covariance: Indicates that two variables tend to move in the same direction.

  • Negative covariance: Reveals that two variables tend to move in inverse directions.

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • sample (bool) – If True, compute the sample statistic; otherwise the population statistic

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = -10.0)

Returns

COV metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
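A sketch of the sample/population distinction controlled by the `sample` flag (divide by n − 1 versus n), using the standard covariance formula:

```python
def covariance(x, y, sample=False):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    total = sum((a - mx) * (b - my) for a, b in zip(x, y))
    # sample=True divides by n - 1 (unbiased estimator); population divides by n
    return total / (n - 1) if sample else total / n
```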

CRM(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=-1.0, **kwargs)

Coefficient of Residual Mass (CRM): Best possible value = 0.0 (values closer to 0 are better). Range = (-inf, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = -1.0)

Returns

CRM metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

DRV(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=10.0, **kwargs)

Deviation of Runoff Volume (DRV): Best possible score is 1.0 (values closer to 1 are better). Range = [0, +inf). Link: https://rstudio-pubs-static.s3.amazonaws.com/433152_56d00c1e29724829bad5fc4fd8c8ebff.html

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 10.0)

Returns

DRV metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

EC(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Efficiency Coefficient (EC): Best possible value = 1, bigger value is better. Range = (-inf, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

EC metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
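EC as documented (best = 1, unbounded below) matches the Nash–Sutcliffe form; a single-column sketch under that assumption — the library's exact implementation may differ:

```python
def efficiency_coefficient(y_true, y_pred):
    # 1 - (sum of squared errors) / (sum of squared deviations from the mean);
    # predicting the mean of y_true scores exactly 0
    mean_t = sum(y_true) / len(y_true)
    num = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    den = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - num / den
```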

EVS(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Explained Variance Score (EVS). Best possible score is 1.0, greater value is better. Range = (-inf, 1.0]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

EVS metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

GINI(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Gini coefficient (GINI): Best possible score is 1, bigger value is better. Range = [0, 1]


Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

Gini metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

GINI_WIKI(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Gini coefficient (GINI_WIKI): Best possible score is 1, bigger value is better. Range = [0, 1]

Notes

  • This version follows the Wikipedia page and may be the correct formulation

  • https://en.wikipedia.org/wiki/Gini_coefficient

  • Gini coefficient can theoretically range from 0 (complete equality) to 1 (complete inequality)

  • It is sometimes expressed as a percentage ranging between 0 and 100.

  • If negative values are possible, then the Gini coefficient could theoretically be more than 1.

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

Gini metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
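A sketch of the Wikipedia mean-absolute-difference formula for the Gini coefficient of a single series; how permetrics combines y_true and y_pred is an implementation detail not shown here:

```python
def gini_wiki(values):
    # Relative mean absolute difference: sum|xi - xj| / (2 * n^2 * mean)
    n = len(values)
    mean_v = sum(values) / n
    mad = sum(abs(a - b) for a in values for b in values)
    return mad / (2.0 * n * n * mean_v)
```

A uniform series gives 0 (complete equality); concentrating everything in one element pushes the value toward 1.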

JSD(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)

Jensen-Shannon Divergence (JSD): Best possible score is 0.0 (identical distributions), smaller value is better. Range = [0, +inf). Link: https://machinelearningmastery.com/divergence-between-probability-distributions/

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

JSD metric (bits) for single column or multiple columns

Return type

result (float, int, np.ndarray)
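JSD in bits (as the documented return value hints) is the symmetrized KL divergence of each distribution against their mixture; a sketch for two discrete distributions, assuming log base 2:

```python
import math

def kl_bits(p, q):
    # Kullback-Leibler divergence in bits (log base 2); terms with p_i = 0 vanish
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd_bits(p, q):
    # Average KL of p and q against the mixture m = (p + q) / 2
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]
    return 0.5 * kl_bits(p, m) + 0.5 * kl_bits(q, m)
```

Unlike plain KL, JSD is symmetric and, in bits, bounded by 1 for two distributions.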

KGE(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Kling-Gupta Efficiency (KGE): Best possible score is 1, bigger value is better. Range = (-inf, 1]. Link: https://rstudio-pubs-static.s3.amazonaws.com/433152_56d00c1e29724829bad5fc4fd8c8ebff.html

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

KGE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

KLD(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=-1.0, **kwargs)

Kullback-Leibler Divergence (KLD): Best possible score is 0.0. Range = (-inf, +inf). Link: https://machinelearningmastery.com/divergence-between-probability-distributions/

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = -1.0)

Returns

KLD metric (bits) for single column or multiple columns

Return type

result (float, int, np.ndarray)

MAAPE(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)

Mean Arctangent Absolute Percentage Error (MAAPE): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MAAPE metric for single column or multiple columns (radian values)

Return type

result (float, int, np.ndarray)
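A sketch of MAAPE for one column: the arctangent bounds each term even when y_true contains values near zero, and the result is in radians as the return description notes:

```python
import math

def maape(y_true, y_pred):
    # Mean of arctan(|relative error|); each term lies in [0, pi/2) radians
    return sum(math.atan(abs((t - p) / t)) for t, p in zip(y_true, y_pred)) / len(y_true)
```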

MAE(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)

Mean Absolute Error (MAE): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MAE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

MAPE(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)

Mean Absolute Percentage Error (MAPE): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MAPE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

MASE(y_true=None, y_pred=None, m=1, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)

Mean Absolute Scaled Error (MASE): Best possible score is 0.0, smaller value is better. Range = [0, +inf). Link: https://en.wikipedia.org/wiki/Mean_absolute_scaled_error

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • m (int) – m = 1 for non-seasonal data, m > 1 for seasonal data

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MASE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
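A single-column sketch of MASE per the linked Wikipedia definition: the model's MAE scaled by the in-sample MAE of the naive seasonal forecast y[t] = y[t−m]:

```python
def mase(y_true, y_pred, m=1):
    n = len(y_true)
    mae_model = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    # Scale by the MAE of the naive (seasonal) forecast y[t] = y[t-m]
    mae_naive = sum(abs(y_true[i] - y_true[i - m]) for i in range(m, n)) / (n - m)
    return mae_model / mae_naive
```

Values below 1 mean the model beats the naive forecast; values above 1 mean it does worse.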

MBE(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)

Mean Bias Error (MBE): Best possible score is 0.0. Range = (-inf, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MBE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

ME(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)

Max Error (ME): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

ME metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

MPE(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)

Mean Percentage Error (MPE): Best possible score is 0.0. Range = (-inf, +inf). Link: https://www.dataquest.io/blog/understanding-regression-error-metrics/

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MPE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

MRB(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)

Mean Relative Error (MRE) - Mean Relative Bias (MRB): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MRE (MRB) metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

MRE(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)

Mean Relative Error (MRE) - Mean Relative Bias (MRB): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MRE (MRB) metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

MSE(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)

Mean Squared Error (MSE): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MSE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

MSLE(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)

Mean Squared Log Error (MSLE): Best possible score is 0.0, smaller value is better. Range = [0, +inf) Link: https://peltarion.com/knowledge-center/documentation/modeling-view/build-an-ai-model/loss-functions/mean-squared-logarithmic-error-(msle)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 1.0)

Returns

MSLE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
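A minimal sketch of MSLE follows; the log1p (log(1 + x)) convention below matches scikit-learn, and whether permetrics uses log or log1p internally is an assumption here.

```python
import numpy as np

def msle(y_true, y_pred):
    # Mean of squared differences between log-transformed values.
    # log1p keeps the transform defined at zero; a plain-log variant
    # would require strictly positive inputs.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2)
```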

MedAE(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)

Median Absolute Error (MedAE): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 1.0)

Returns

MedAE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

NNSE(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Normalized Nash-Sutcliffe Efficiency (NNSE): Best possible score is 1.0, bigger value is better. Range = [0, 1] Link: https://agrimetsoft.com/calculators/Nash%20Sutcliffe%20model%20Efficiency%20coefficient

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 0.0)

Returns

NNSE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
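The normalization that maps NSE's unbounded range onto [0, 1] is conventionally NNSE = 1 / (2 - NSE); assuming permetrics follows that convention, a minimal sketch:

```python
import numpy as np

def nse(y_true, y_pred):
    # Nash-Sutcliffe Efficiency: 1 - SS_res / SS_tot, range (-inf, 1]
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def nnse(y_true, y_pred):
    # Squeeze (-inf, 1] into (0, 1]: NSE = 1 -> NNSE = 1, NSE = 0 -> NNSE = 0.5
    return 1.0 / (2.0 - nse(y_true, y_pred))
```

Predicting the mean of y_true everywhere gives NSE = 0 and hence NNSE = 0.5.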

NRMSE(y_true=None, y_pred=None, model=0, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)

Normalized Root Mean Square Error (NRMSE): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Link: https://medium.com/microsoftazure/how-to-better-evaluate-the-goodness-of-fit-of-regressions-990dbf1c0091

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • model (int) – The normalization mode for RMSE (Optional, default = 0, valid values = [0, 1, 2, 3])

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 1.0)

Returns

NRMSE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

NSE(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Nash-Sutcliffe Efficiency (NSE): Best possible score is 1.0, bigger value is better. Range = (-inf, 1] Link: https://agrimetsoft.com/calculators/Nash%20Sutcliffe%20model%20Efficiency%20coefficient

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 0.0)

Returns

NSE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

OI(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Overall Index (OI): Best possible value = 1, bigger value is better. Range = (-inf, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 0.0)

Returns

OI metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

PCC(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=-1.0, **kwargs)

Pearson’s Correlation Coefficient (PCC or R): Best possible score is 1.0, bigger value is better. Range = [-1, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = -1.0)

Returns

R metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

PCD(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Prediction of Change in Direction (PCD): Best possible score is 1.0, bigger value is better. Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 0.0)

Returns

PCD metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

R(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=-1.0, **kwargs)

Pearson’s Correlation Coefficient (PCC or R): Best possible score is 1.0, bigger value is better. Range = [-1, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = -1.0)

Returns

R metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
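A quick sketch of Pearson's R using NumPy's correlation matrix; note that R measures linear association, not accuracy, so predictions with a constant offset still score close to 1.0.

```python
import numpy as np

def pearson_r(y_true, y_pred):
    # Off-diagonal entry of the 2x2 correlation matrix
    return np.corrcoef(np.asarray(y_true, dtype=float),
                       np.asarray(y_pred, dtype=float))[0, 1]

# Perfectly linear but biased predictions: R is still (numerically) 1.0
r_offset = pearson_r([1.0, 2.0, 3.0, 4.0], [2.0, 3.0, 4.0, 5.0])
```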

R2(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Coefficient of Determination (COD/R2): Best possible score is 1.0, bigger value is better. Range = (-inf, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 0.0)

Returns

R2 metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

R2S(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

(Pearson’s Correlation Index)^2 = R^2 = R2S = RSQ (R squared): Best possible score is 1.0, bigger value is better. Range = [0, 1]

Notes

  • Do not confuse R2S with R2 (the Coefficient of Determination); they are different metrics

  • Many online tutorials (articles, Wikipedia, …) and even the scikit-learn library label R2S and R2 inconsistently

  • R^2 = R2S = R squared is (Pearson’s Correlation Index)^2

  • Meanwhile, R2 is the Coefficient of Determination

  • https://en.wikipedia.org/wiki/Pearson_correlation_coefficient

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 0.0)

Returns

R2s metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
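The distinction in the notes above can be seen numerically: predictions with a constant bias keep a perfect squared correlation (R2S), while the Coefficient of Determination (R2) drops.

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = y_true + 1.0  # perfectly linear, but biased by +1

# R2S: squared Pearson correlation -- blind to the constant bias
r2s = np.corrcoef(y_true, y_pred)[0, 1] ** 2

# R2 (Coefficient of Determination) -- penalizes the bias
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(round(r2s, 4), round(r2, 4))  # 1.0 0.2
```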

RAE(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Relative Absolute Error (RAE): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 0.0)

Returns

RAE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

RB(y_true=None, y_pred=None, **kwargs)

Relative Error (RE) - Relative Bias (RB): Best possible score is 0.0, smaller value is better. Range = (-inf, +inf) Note: Computes the relative error between two numbers, or element-wise between a pair of lists, tuples, or numpy arrays.

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

Returns

RE metric

Return type

result (np.ndarray)

RE(y_true=None, y_pred=None, **kwargs)

Relative Error (RE): Best possible score is 0.0, smaller value is better. Range = (-inf, +inf) Note: Computes the relative error between two numbers, or element-wise between a pair of lists, tuples, or numpy arrays.

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

Returns

RE metric

Return type

result (np.ndarray)

RMSE(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)

Root Mean Squared Error (RMSE): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 1.0)

Returns

RMSE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
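RMSE is the square root of MSE, which restores the units of the target variable; a minimal sketch:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Square root of the mean squared residual; same units as y_true
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))
```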

RSE(y_true=None, y_pred=None, n_paras=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)

Residual Standard Error (RSE): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • n_paras (int) – The number of model’s parameters

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 1.0)

Returns

RSE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
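One common definition of the Residual Standard Error divides the residual sum of squares by the residual degrees of freedom (n - n_paras); permetrics may apply a different correction, so treat this sketch as illustrative of the role of n_paras rather than as the library's exact formula.

```python
import numpy as np

def rse(y_true, y_pred, n_paras):
    # sqrt(SS_res / (n - n_paras)): fewer residual degrees of freedom
    # (i.e. more model parameters) inflate the error estimate
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    return np.sqrt(ss_res / (y_true.size - n_paras))
```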

RSQ(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

(Pearson’s Correlation Index)^2 = R^2 = R2S = RSQ (R squared): Best possible score is 1.0, bigger value is better. Range = [0, 1]

Notes

  • Do not confuse R2S with R2 (the Coefficient of Determination); they are different metrics

  • Many online tutorials (articles, Wikipedia, …) and even the scikit-learn library label R2S and R2 inconsistently

  • R^2 = R2S = R squared is (Pearson’s Correlation Index)^2

  • Meanwhile, R2 is the Coefficient of Determination

  • https://en.wikipedia.org/wiki/Pearson_correlation_coefficient

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 0.0)

Returns

R2s metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

SE(y_true=None, y_pred=None, **kwargs)

Squared Error (SE): Best possible score is 0.0, smaller value is better. Range = [0, +inf) Note: Computes the squared error between two numbers, or element-wise between a pair of lists, tuples, or numpy arrays.

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

Returns

SE metric

Return type

result (np.ndarray)

SLE(y_true=None, y_pred=None, **kwargs)

Squared Log Error (SLE): Best possible score is 0.0, smaller value is better. Range = [0, +inf) Note: Computes the squared log error between two numbers, or element-wise between a pair of lists, tuples, or numpy arrays.

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

Returns

SLE metric

Return type

result (np.ndarray)

SMAPE(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)

Symmetric Mean Absolute Percentage Error (SMAPE): Best possible score is 0.0, smaller value is better. Range = [0, 1]. Multiply the result by 100 to express it as a percentage.

Link: https://en.wikipedia.org/wiki/Symmetric_mean_absolute_percentage_error

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 1.0)

Returns

SMAPE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
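Given the documented range [0, 1], the formula presumably omits the factor of 2 used in the Wikipedia formulation (whose range is [0, 2]); a sketch under that assumption:

```python
import numpy as np

def smape(y_true, y_pred):
    # |error| scaled by the sum of absolute magnitudes; range [0, 1]
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred)))
```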

SUPPORT = {'A10': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'A20': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'A30': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'ACOD': {'best': '1', 'range': '(-inf, 1]', 'type': 'max'}, 'AE': {'best': '0', 'range': '(-inf, +inf)', 'type': 'unknown'}, 'APCC': {'best': '1', 'range': '[-1, 1]', 'type': 'max'}, 'AR': {'best': '1', 'range': '[-1, 1]', 'type': 'max'}, 'AR2': {'best': '1', 'range': '(-inf, 1]', 'type': 'max'}, 'CE': {'best': 'unknown', 'range': '(-inf, 0]', 'type': 'unknown'}, 'CI': {'best': '1', 'range': '(-inf, 1]', 'type': 'max'}, 'COD': {'best': '1', 'range': '(-inf, 1]', 'type': 'max'}, 'COR': {'best': '1', 'range': '[-1, 1]', 'type': 'max'}, 'COV': {'best': 'no best', 'range': '(-inf, +inf)', 'type': 'max'}, 'CRM': {'best': '0', 'range': '(-inf, +inf)', 'type': 'min'}, 'DRV': {'best': '1', 'range': '[1, +inf)', 'type': 'min'}, 'EC': {'best': '1', 'range': '(-inf, 1]', 'type': 'max'}, 'EVS': {'best': '1', 'range': '(-inf, 1]', 'type': 'max'}, 'GINI': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'GINI_WIKI': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'JSD': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'KGE': {'best': '1', 'range': '(-inf, 1]', 'type': 'max'}, 'KLD': {'best': '0', 'range': '(-inf, +inf)', 'type': 'unknown'}, 'MAAPE': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'MAE': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'MAPE': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'MASE': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'MBE': {'best': '0', 'range': '(-inf, +inf)', 'type': 'unknown'}, 'ME': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'MPE': {'best': '0', 'range': '(-inf, +inf)', 'type': 'unknown'}, 'MRB': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'MRE': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'MSE': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'MSLE': {'best': '0', 'range': '[0, +inf)', 'type': 
'min'}, 'MedAE': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'NNSE': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'NRMSE': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'NSE': {'best': '1', 'range': '(-inf, 1]', 'type': 'max'}, 'OI': {'best': '1', 'range': '(-inf, 1]', 'type': 'max'}, 'PCC': {'best': '1', 'range': '[-1, 1]', 'type': 'max'}, 'PCD': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'R': {'best': '1', 'range': '[-1, 1]', 'type': 'max'}, 'R2': {'best': '1', 'range': '(-inf, 1]', 'type': 'max'}, 'R2S': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'RAE': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'RB': {'best': '0', 'range': '(-inf, +inf)', 'type': 'unknown'}, 'RE': {'best': '0', 'range': '(-inf, +inf)', 'type': 'unknown'}, 'RMSE': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'RSE': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'RSQ': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'SE': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'SLE': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'SMAPE': {'best': '0', 'range': '[0, 1]', 'type': 'min'}, 'VAF': {'best': '100', 'range': '(-inf, 100%)', 'type': 'max'}, 'WI': {'best': '1', 'range': '[0, 1]', 'type': 'max'}}
VAF(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Variance Accounted For between 2 signals (VAF): Best possible score is 100% (identical signal), bigger value is better. Range = (-inf, 100%] Link: https://www.dcsc.tudelft.nl/~jwvanwingerden/lti/doc/html/vaf.html

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 0.0)

Returns

VAF metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
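Following the definition from the linked TU Delft page, VAF compares the variance of the residual signal with the variance of the true signal:

```python
import numpy as np

def vaf(y_true, y_pred):
    # 100 * (1 - var(residual) / var(y_true)); 100% means identical shape
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * (1.0 - np.var(y_true - y_pred) / np.var(y_true))
```

Because only the variance of the residual matters, a prediction offset by a constant still scores 100%.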

WI(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)

Willmott Index (WI): Best possible score is 1.0, bigger value is better. Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 0.0)

Returns

WI metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

a10_index(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

A10 index (A10): Best possible score is 1.0, bigger value is better. Range = [0, 1]

Notes

  • The a10-index is an engineering index for evaluating artificial intelligence models: it reports the fraction of samples whose predictions fall within ±10% of the experimental values

  • https://www.mdpi.com/2076-3417/9/18/3715/htm

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 0.0)

Returns

A10 metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
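Based on the notes above, the a10-index is the fraction of prediction/truth ratios within [0.9, 1.1]; whether the boundaries are inclusive is an assumption of this sketch.

```python
import numpy as np

def a10_index(y_true, y_pred):
    # Fraction of samples whose prediction is within +/-10% of the truth
    ratio = np.asarray(y_pred, dtype=float) / np.asarray(y_true, dtype=float)
    return float(np.mean((ratio >= 0.9) & (ratio <= 1.1)))

print(a10_index([10.0, 20.0, 30.0, 40.0], [10.5, 25.0, 29.0, 43.0]))  # 0.75
```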

a20_index(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

A20 index (A20): Best possible score is 1.0, bigger value is better. Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 0.0)

Returns

A20 metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

a30_index(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

A30 index (A30): Best possible score is 1.0, bigger value is better. Range = [0, 1]

Note: the a30-index reports the fraction of samples whose predictions fall within ±30% of the experimental values

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 0.0)

Returns

A30 metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

absolute_pearson_correlation_coefficient(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

Absolute Pearson’s Correlation Coefficient (APCC or AR): Best possible score is 1.0, bigger value is better. Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 0.0)

Returns

AR metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

adjusted_coefficient_of_determination(y_true=None, y_pred=None, X_shape=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

Adjusted Coefficient of Determination (ACOD/AR2): Best possible score is 1.0, bigger value is better. Range = (-inf, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • X_shape (tuple, list, np.ndarray) – The shape of X_train dataset

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 0.0)

Returns

AR2 metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

coefficient_of_determination(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

Coefficient of Determination (COD/R2): Best possible score is 1.0, bigger value is better. Range = (-inf, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 0.0)

Returns

R2 metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

coefficient_of_residual_mass(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=-1.0, **kwargs)[source]

Coefficient of Residual Mass (CRM): Best possible value = 0.0, smaller value is better. Range = (-inf, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = -1.0)

Returns

CRM metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
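CRM is commonly defined as the relative difference between the total observed and total predicted mass; the sign convention (true minus predicted) is an assumption of this sketch.

```python
import numpy as np

def crm(y_true, y_pred):
    # Relative difference of the totals: 0.0 when the masses balance,
    # positive when the model under-predicts overall (assumed convention)
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return (np.sum(y_true) - np.sum(y_pred)) / np.sum(y_true)
```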

confidence_index(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

Confidence Index (or Performance Index): CI (PI): Best possible score is 1.0, bigger value is better. Range = (-inf, 1]

Notes

  • Reference evapotranspiration for Londrina, Paraná, Brazil: performance of different estimation methods

  • > 0.85, Excellent

  • 0.76-0.85, Very good

  • 0.66-0.75, Good

  • 0.61-0.65, Satisfactory

  • 0.51-0.60, Poor

  • 0.41-0.50, Bad

  • < 0.40, Very bad

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 0.0)

Returns

CI (PI) metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

correlation(y_true=None, y_pred=None, sample=False, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]
Correlation (COR): Best possible value = 1, bigger value is better. Range = [-1, +1]
  • Measures the strength of the relationship between variables

  • Is the scaled measure of covariance; it is dimensionless

  • The correlation coefficient is a pure number, not measured in any units

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • sample (bool) – If True, use the sample covariance; otherwise the population covariance (Optional, default = False)

  • multi_output – Can be “raw_values” or a list of weights for the output columns, e.g. [0.5, 0.2, 0.3] for 3 columns (Optional, default = “raw_values”)

  • force_finite (bool) – If the result is NaN or Inf, it will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace an Inf or NaN result (Optional, default = 0.0)

Returns

COR metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

covariance(y_true=None, y_pred=None, sample=False, multi_output='raw_values', force_finite=True, finite_value=-10.0, **kwargs)[source]
Covariance (COV): There is no best value; a bigger value is better. Range = (-inf, +inf)
  • A measure of the relationship between two random variables

  • Evaluates the extent to which the variables change together

  • Does not assess the dependency between the variables

  • Positive covariance: indicates that the two variables tend to move in the same direction

  • Negative covariance: reveals that the two variables tend to move in inverse directions

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • sample (bool) – sample covariance or population covariance. See the website above for more details

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = -10.0)

Returns

COV metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
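
The sample flag only changes the divisor of the centered cross-product sum: (n - 1) for the sample covariance, n for the population covariance. A minimal single-column sketch (the helper name cov is hypothetical):

```python
# Sample vs. population covariance: same centered cross-product sum,
# divided by (n - 1) when sample=True and by n otherwise.
def cov(y_true, y_pred, sample=False):
    n = len(y_true)
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    s = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    return s / (n - 1) if sample else s / n
```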

cross_entropy(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=-1.0, **kwargs)[source]

Cross Entropy (CE): Range = (-inf, 0]. No general statement can be made about its best value


Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = -1.0)

Returns

CE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

deviation_of_runoff_volume(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=10.0, **kwargs)[source]

Deviation of Runoff Volume (DRV): Best possible score is 1.0 (values closer to 1.0 are better). Range = [0, +inf) Link: https://rstudio-pubs-static.s3.amazonaws.com/433152_56d00c1e29724829bad5fc4fd8c8ebff.html

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 10.0)

Returns

DRV metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

efficiency_coefficient(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

Efficiency Coefficient (EC): Best possible value = 1, bigger value is better. Range = (-inf, +1]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

EC metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

explained_variance_score(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

Explained Variance Score (EVS). Best possible score is 1.0, greater value is better. Range = (-inf, 1.0]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

EVS metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
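
EVS is commonly defined as 1 - Var(y_true - y_pred) / Var(y_true). A single-column sketch under that assumption (helper name evs is hypothetical) also shows the usual caveat: a constant bias in the predictions does not lower the score, because only the variance of the error is measured.

```python
# Illustrative EVS = 1 - Var(y_true - y_pred) / Var(y_true) for one column.
def evs(y_true, y_pred):
    n = len(y_true)
    err = [t - p for t, p in zip(y_true, y_pred)]
    me = sum(err) / n
    var_err = sum((e - me) ** 2 for e in err) / n
    mt = sum(y_true) / n
    var_true = sum((t - mt) ** 2 for t in y_true) / n
    return 1.0 - var_err / var_true
```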

get_processed_data(y_true=None, y_pred=None, **kwargs)[source]
Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

Returns
  • y_true_final – y_true used in the evaluation process

  • y_pred_final – y_pred used in the evaluation process

  • n_out – the number of outputs

static get_support(name=None, verbose=True)[source]
gini_coefficient(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

Gini coefficient (GINI): Best possible score is 1, bigger value is better. Range = [0, 1]


Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

Gini metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

gini_coefficient_wiki(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

Gini coefficient (GINI_WIKI): Best possible score is 1, bigger value is better. Range = [0, 1]

Notes

  • This version follows the Wikipedia page and may be the more faithful formulation

  • https://en.wikipedia.org/wiki/Gini_coefficient

  • Gini coefficient can theoretically range from 0 (complete equality) to 1 (complete inequality)

  • It is sometimes expressed as a percentage ranging between 0 and 100.

  • If negative values are possible, then the Gini coefficient could theoretically be more than 1.

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

Gini metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

jensen_shannon_divergence(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)[source]

Jensen-Shannon Divergence (JSD): Best possible score is 0.0 (identical distributions), smaller value is better. Range = [0, +inf) Link: https://machinelearningmastery.com/divergence-between-probability-distributions/

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

JSD metric (bits) for single column or multiple columns

Return type

result (float, int, np.ndarray)

kling_gupta_efficiency(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

Kling-Gupta Efficiency (KGE): Best possible score is 1, bigger value is better. Range = (-inf, 1] Link: https://rstudio-pubs-static.s3.amazonaws.com/433152_56d00c1e29724829bad5fc4fd8c8ebff.html

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

KGE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
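
The original KGE (Gupta et al., 2009) is usually written as 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2), where r is the Pearson correlation, alpha the ratio of standard deviations, and beta the ratio of means. The sketch below assumes that formulation for a single column; the library may implement a variant (e.g. the 2012 form using the coefficient of variation).

```python
from math import sqrt

# Illustrative KGE (2009 form) for one output column.
def kge(y_true, y_pred):
    n = len(y_true)
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    st = sqrt(sum((t - mt) ** 2 for t in y_true) / n)
    sp = sqrt(sum((p - mp) ** 2 for p in y_pred) / n)
    r = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred)) / (n * st * sp)
    alpha, beta = sp / st, mp / mt   # std ratio and mean ratio
    return 1.0 - sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)
```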

kullback_leibler_divergence(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=-1.0, **kwargs)[source]

Kullback-Leibler Divergence (KLD): Best possible score is 0.0. Range = (-inf, +inf) Link: https://machinelearningmastery.com/divergence-between-probability-distributions/

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = -1.0)

Returns

KLD metric (bits) for single column or multiple columns

Return type

result (float, int, np.ndarray)
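
Since the return value is documented in bits, the divergence uses base-2 logarithms. A minimal sketch under the assumption of strictly positive probability vectors (the library's handling of zeros and normalization may differ; the helper name kld is hypothetical):

```python
from math import log2

# KL divergence in bits: sum over outcomes of p * log2(p / q).
def kld(p_true, p_pred):
    return sum(p * log2(p / q) for p, q in zip(p_true, p_pred))
```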

max_error(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)[source]

Max Error (ME): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

ME metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

mean_absolute_error(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)[source]

Mean Absolute Error (MAE): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MAE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
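
The multi_output parameter is commonly interpreted as sketched below for a per-column metric such as MAE on a 2-column problem: "raw_values" keeps one score per column, while a weight list collapses the columns into one weighted score. The library's internal column handling (see get_output_result) may differ in detail; both helper names here are hypothetical.

```python
# Per-column MAE over rows of a 2-D dataset (lists of rows).
def mae_per_column(y_true, y_pred):
    n_rows, n_cols = len(y_true), len(y_true[0])
    return [sum(abs(y_true[i][j] - y_pred[i][j]) for i in range(n_rows)) / n_rows
            for j in range(n_cols)]

# "raw_values" -> one score per column; a weight list -> single weighted score.
def apply_multi_output(raw, multi_output="raw_values"):
    if multi_output == "raw_values":
        return raw
    return sum(r * w for r, w in zip(raw, multi_output))
```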

mean_absolute_percentage_error(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)[source]

Mean Absolute Percentage Error (MAPE): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MAPE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

mean_absolute_scaled_error(y_true=None, y_pred=None, m=1, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)[source]

Mean Absolute Scaled Error (MASE): Best possible score is 0.0, smaller value is better. Range = [0, +inf) Link: https://en.wikipedia.org/wiki/Mean_absolute_scaled_error

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • m (int) – m = 1 for non-seasonal data, m > 1 for seasonal data

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MASE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
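
Per the Wikipedia link, MASE scales the model's MAE by the in-sample MAE of the seasonal naive forecast y[t - m] (m = 1 for non-seasonal data). A single-column sketch (helper name mase is hypothetical):

```python
# MASE: model MAE divided by the MAE of the naive forecast y[t - m].
def mase(y_true, y_pred, m=1):
    n = len(y_true)
    mae_model = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mae_naive = sum(abs(y_true[i] - y_true[i - m]) for i in range(m, n)) / (n - m)
    return mae_model / mae_naive
```

A score of 1.0 means the model is no better than the naive forecast; below 1.0 means it is better.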

mean_arctangent_absolute_percentage_error(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)[source]

Mean Arctangent Absolute Percentage Error (MAAPE): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MAAPE metric for single column or multiple columns (radian values)

Return type

result (float, int, np.ndarray)
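
MAAPE passes each absolute percentage error |(t - p) / t| through arctan, which bounds every term and explains the "radian values" note in the return description. A single-column sketch under that standard definition (helper name maape is hypothetical):

```python
from math import atan

# MAAPE: mean of arctan(|(t - p) / t|); each term lies in [0, pi/2) radians.
def maape(y_true, y_pred):
    return sum(atan(abs((t - p) / t)) for t, p in zip(y_true, y_pred)) / len(y_true)
```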

mean_bias_error(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)[source]

Mean Bias Error (MBE): Best possible score is 0.0. Range = (-inf, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MBE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

mean_percentage_error(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)[source]

Mean Percentage Error (MPE): Best possible score is 0.0. Range = (-inf, +inf) Link: https://www.dataquest.io/blog/understanding-regression-error-metrics/

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MPE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

mean_relative_bias(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)

Mean Relative Error (MRE) - Mean Relative Bias (MRB): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MRE (MRB) metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

mean_relative_error(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)[source]

Mean Relative Error (MRE) - Mean Relative Bias (MRB): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MRE (MRB) metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

mean_squared_error(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)[source]

Mean Squared Error (MSE): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MSE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

mean_squared_log_error(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)[source]

Mean Squared Log Error (MSLE): Best possible score is 0.0, smaller value is better. Range = [0, +inf) Link: https://peltarion.com/knowledge-center/documentation/modeling-view/build-an-ai-model/loss-functions/mean-squared-logarithmic-error-(msle)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MSLE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
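
MSLE is commonly computed in the log1p form, mean((log(1 + t) - log(1 + p))^2); whether the library adds 1 before taking the log is an assumption here. A single-column sketch (helper name msle is hypothetical):

```python
from math import log1p

# MSLE using log1p: mean squared difference of log(1 + y) terms.
def msle(y_true, y_pred):
    return sum((log1p(t) - log1p(p)) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
```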

median_absolute_error(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)[source]

Median Absolute Error (MedAE): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

MedAE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

nash_sutcliffe_efficiency(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

Nash-Sutcliffe Efficiency (NSE): Best possible score is 1.0, bigger value is better. Range = (-inf, 1] Link: https://agrimetsoft.com/calculators/Nash%20Sutcliffe%20model%20Efficiency%20coefficient

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

NSE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
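
NSE compares the model's squared errors against the variance of the observations: NSE = 1 - sum((t - p)^2) / sum((t - mean(y_true))^2). A model that always predicts the mean of y_true scores exactly 0, and worse models go negative. A single-column sketch (helper name nse is hypothetical):

```python
# NSE = 1 - sum of squared errors / sum of squared deviations from the mean.
def nse(y_true, y_pred):
    mt = sum(y_true) / len(y_true)
    num = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    den = sum((t - mt) ** 2 for t in y_true)
    return 1.0 - num / den
```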

normalized_nash_sutcliffe_efficiency(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

Normalized Nash-Sutcliffe Efficiency (NNSE): Best possible score is 1.0, bigger value is better. Range = [0, 1] Link: https://agrimetsoft.com/calculators/Nash%20Sutcliffe%20model%20Efficiency%20coefficient

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

NNSE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

normalized_root_mean_square_error(y_true=None, y_pred=None, model=0, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)[source]

Normalized Root Mean Square Error (NRMSE): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Link: https://medium.com/microsoftazure/how-to-better-evaluate-the-goodness-of-fit-of-regressions-990dbf1c0091

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • model (int) – Normalize RMSE in different ways (Optional, default = 0, valid values = [0, 1, 2, 3])

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

NRMSE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

overall_index(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

Overall Index (OI): Best possible value = 1, bigger value is better. Range = (-inf, +1]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

OI metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

pearson_correlation_coefficient(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=-1.0, **kwargs)[source]

Pearson’s Correlation Coefficient (PCC or R): Best possible score is 1.0, bigger value is better. Range = [-1, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = -1.0)

Returns

R metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

pearson_correlation_coefficient_square(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

(Pearson’s Correlation Coefficient)^2 = R^2 = R2S = RSQ (R squared): Best possible score is 1.0, bigger value is better. Range = [0, 1]

Notes

  • Do not confuse R2s with R2 (the Coefficient of Determination); they are different.

  • Many online tutorials (articles, Wikipedia, …), and even the scikit-learn library, use the notations R2s and R2 incorrectly.

  • R^2 = R2s = R squared should denote (Pearson’s Correlation Coefficient)^2

  • Meanwhile, R2 = Coefficient of Determination

  • https://en.wikipedia.org/wiki/Pearson_correlation_coefficient

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

R2s metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
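
The distinction drawn in the notes can be made concrete: R2s is the square of Pearson's correlation coefficient, while R2 (coefficient of determination) compares squared errors against the variance of y_true. A perfectly correlated but biased prediction separates the two cleanly. The helper names below are hypothetical sketches of the standard definitions:

```python
from math import sqrt

def pearson_r(y_true, y_pred):
    n = len(y_true)
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    return cov / sqrt(sum((t - mt) ** 2 for t in y_true)
                      * sum((p - mp) ** 2 for p in y_pred))

def r2s(y_true, y_pred):
    # square of Pearson's correlation coefficient
    return pearson_r(y_true, y_pred) ** 2

def r2(y_true, y_pred):
    # coefficient of determination: 1 - SSE / total sum of squares
    mt = sum(y_true) / len(y_true)
    return 1.0 - (sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                  / sum((t - mt) ** 2 for t in y_true))
```

For predictions shifted by a constant offset, R2s stays at 1.0 while R2 drops far below zero, which is exactly why the two must not be conflated.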

prediction_of_change_in_direction(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

Prediction of Change in Direction (PCD): Best possible score is 1.0, bigger value is better. Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

PCD metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

relative_absolute_error(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

Relative Absolute Error (RAE): Best possible score is 0.0, smaller value is better. Range = [0, +inf)


Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

RAE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

residual_standard_error(y_true=None, y_pred=None, n_paras=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)[source]

Residual Standard Error (RSE): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • n_paras (int) – The number of model’s parameters

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

RSE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

root_mean_squared_error(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)[source]

Root Mean Squared Error (RMSE): Best possible score is 0.0, smaller value is better. Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

RMSE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)

single_absolute_error(y_true=None, y_pred=None, **kwargs)[source]

Absolute Error (AE): Best possible score is 0.0, smaller value is better. Range = [0, +inf) Note: Computes the absolute error between two numbers, or element-wise between a pair of lists, tuples, or numpy arrays.

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

Returns

AE metric

Return type

result (np.ndarray)

single_relative_bias(y_true=None, y_pred=None, **kwargs)

Relative Error (RE): Best possible score is 0.0, smaller value is better. Range = (-inf, +inf) Note: Computes the relative error between two numbers, or element-wise between a pair of lists, tuples, or numpy arrays.

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

Returns

RE metric

Return type

result (np.ndarray)

single_relative_error(y_true=None, y_pred=None, **kwargs)[source]

Relative Error (RE): Best possible score is 0.0, smaller value is better. Range = (-inf, +inf) Note: Computes the relative error between two numbers, or element-wise between a pair of lists, tuples, or numpy arrays.

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

Returns

RE metric

Return type

result (np.ndarray)

single_squared_error(y_true=None, y_pred=None, **kwargs)[source]

Squared Error (SE): Best possible score is 0.0, smaller value is better. Range = [0, +inf) Note: Computes the squared error between two numbers, or element-wise between a pair of lists, tuples, or numpy arrays.

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

Returns

SE metric

Return type

result (np.ndarray)

single_squared_log_error(y_true=None, y_pred=None, **kwargs)[source]

Squared Log Error (SLE): Best possible score is 0.0, smaller value is better. Range = [0, +inf) Note: Computes the squared log error between two numbers, or element-wise between a pair of lists, tuples, or numpy arrays.

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

Returns

SLE metric

Return type

result (np.ndarray)
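The single_* helpers above all return element-wise error arrays rather than aggregates. A minimal numpy sketch of that behavior (not the library's implementation; the squared-log variant assumes the common log1p form):

```python
import numpy as np

# Element-wise error helpers; each output has the same shape as its inputs.
def absolute_error(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return np.abs(y_true - y_pred)

def squared_error(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return (y_true - y_pred) ** 2

def squared_log_error(y_true, y_pred):
    # Assumes the log1p form; inputs must be > -1 so the logs are defined.
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return (np.log1p(y_true) - np.log1p(y_pred)) ** 2

absolute_error([1.0, 2.0, 3.0], [1.5, 2.0, 2.0])  # array([0.5, 0. , 1. ])
```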

symmetric_mean_absolute_percentage_error(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=1.0, **kwargs)[source]

Symmetric Mean Absolute Percentage Error (SMAPE): Best possible score is 0.0, smaller value is better. Range = [0, 1]. Multiply by 100 if you want a percentage.

Link: https://en.wikipedia.org/wiki/Symmetric_mean_absolute_percentage_error

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 1.0)

Returns

SMAPE metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
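A minimal numpy sketch of the single-column case, assuming the |y| + |ŷ| denominator variant, which is what yields the documented [0, 1] range:

```python
import numpy as np

def smape(y_true, y_pred):
    # Symmetric MAPE in [0, 1]; multiply by 100 for a percentage.
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred))))

smape([100.0, 200.0], [110.0, 180.0])  # ≈ 0.0501
```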

variance_accounted_for(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

Variance Accounted For between 2 signals (VAF): Best possible score is 100% (identical signal), bigger value is better. Range = (-inf, 100%]. Link: https://www.dcsc.tudelft.nl/~jwvanwingerden/lti/doc/html/vaf.html

Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

VAF metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
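The linked definition reduces to a one-liner; a numpy sketch for a single column, assuming population variance as in the TU Delft formula:

```python
import numpy as np

def vaf(y_true, y_pred):
    # VAF = (1 - var(y - y_hat) / var(y)) * 100; 100 means identical signals.
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return float((1.0 - np.var(y_true - y_pred) / np.var(y_true)) * 100.0)

vaf([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # 100.0
```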

willmott_index(y_true=None, y_pred=None, multi_output='raw_values', force_finite=True, finite_value=0.0, **kwargs)[source]

Willmott Index (WI): Best possible score is 1.0, bigger value is better. Range = [0, 1]


Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • multi_output – Can be “raw_values” or list weights of variables such as [0.5, 0.2, 0.3] for 3 columns, (Optional, default = “raw_values”)

  • force_finite (bool) – When result is not finite, it can be NaN or Inf. Their result will be replaced by finite_value (Optional, default = True)

  • finite_value (float) – The finite value used to replace Inf or NaN result (Optional, default = 0.0)

Returns

WI metric for single column or multiple columns

Return type

result (float, int, np.ndarray)
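A numpy sketch for a single column, assuming the standard Willmott d-index definition (agreement relative to deviations from the mean of y_true):

```python
import numpy as np

def willmott_index(y_true, y_pred):
    # d = 1 - SSE / sum((|y_hat - mean| + |y - mean|)^2), mean taken over y_true.
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    m = y_true.mean()
    denom = np.sum((np.abs(y_pred - m) + np.abs(y_true - m)) ** 2)
    return float(1.0 - np.sum((y_true - y_pred) ** 2) / denom)

willmott_index([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # 1.0
```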

permetrics.classification module

class permetrics.classification.ClassificationMetric(y_true=None, y_pred=None, **kwargs)[source]

Bases: permetrics.evaluator.Evaluator

Defines a ClassificationMetric class that holds all classification metrics (for both binary and multi-class classification problems)

Parameters
  • y_true (tuple, list, np.ndarray, default = None) – The ground truth values.

  • y_pred (tuple, list, np.ndarray, default = None) – The prediction values.

  • labels (tuple, list, np.ndarray, default = None) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {'micro', 'macro', 'weighted'} or None, default="macro" –

    If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data:

    'micro':

    Calculate metrics globally by considering each element of the label indicator matrix as a label.

    'macro':

    Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.

    'weighted':

    Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label).
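The three modes differ only in how per-class scores are pooled. A hypothetical sketch with per-class true-positive/false-positive counts (the names tp, fp, and support are illustrative, not library attributes):

```python
import numpy as np

tp = np.array([8, 2, 5])        # per-class true positives
fp = np.array([2, 1, 0])        # per-class false positives
support = np.array([10, 4, 6])  # number of true instances per class

per_class = tp / (tp + fp)                         # average=None
macro = per_class.mean()                           # unweighted mean over classes
weighted = np.average(per_class, weights=support)  # support-weighted mean
micro = tp.sum() / (tp.sum() + fp.sum())           # pool counts globally first
```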

AS(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)

Generate accuracy score for multi-class classification problems. Higher is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the accuracy score

Return type

accuracy (float, dict)

AUC(y_true=None, y_pred=None, average='macro', **kwargs)

Calculates the ROC-AUC score between y_true and the predicted scores. Higher is better (Best = +1), Range = [0, +1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of predicted scores (NOT labels)

  • average (str, None) – {‘macro’, ‘weighted’} or None, default=”macro”

Returns

The AUC score.

Return type

float, dict
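For binary labels the score has a rank-statistic form: the probability that a random positive is ranked above a random negative. A numpy sketch under that formulation (a hypothetical helper with no tie handling, not the library's implementation):

```python
import numpy as np

def roc_auc(y_true, y_score):
    # Mann-Whitney / rank-sum formulation for 0/1 labels; assumes no tied scores.
    y_true, y_score = np.asarray(y_true), np.asarray(y_score, dtype=float)
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75
```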

BSL(y_true=None, y_pred=None, **kwargs)

Calculates the Brier Score Loss between y_true and y_pred. Smaller is better (Best = 0), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of labels (or predicted scores in case of multi-class)

Returns

The Brier Score Loss

Return type

float, dict
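For binary 0/1 labels and predicted probabilities, the loss is a plain mean squared difference; a minimal sketch:

```python
import numpy as np

def brier_score_loss(y_true, y_prob):
    # Mean squared gap between the predicted probability and the 0/1 outcome.
    y_true, y_prob = np.asarray(y_true, dtype=float), np.asarray(y_prob, dtype=float)
    return float(np.mean((y_prob - y_true) ** 2))

brier_score_loss([1, 0, 1], [0.9, 0.2, 0.6])  # ≈ 0.07
```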

CEL(y_true=None, y_pred=None, **kwargs)

Calculates the Cross-Entropy loss between y_true and y_pred. Smaller is better (Best = 0), Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of predicted scores (NOT labels)

Returns

The Cross-Entropy loss

Return type

float

CKS(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)

Generate Cohen Kappa score for multi-class classification problems. Higher is better (Best = +1), Range = [-1, +1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the Cohen Kappa score

Return type

cks (float, dict)

CM(y_true=None, y_pred=None, labels=None, normalize=None, **kwargs)

Generate confusion matrix and useful information

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • normalize ('true', 'pred', 'all', None) – Normalizes confusion matrix over the true (rows), predicted (columns) conditions or all the population.

Returns

a 2-dimensional array of pairwise counts; imap (dict): a map between each label and its index in the confusion matrix; imap_count (dict): a map between each label and its number of true instances in y_true

Return type

matrix (np.ndarray)
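A minimal sketch of how such a matrix and its label map can be built (a hypothetical helper; the library's own routine additionally returns imap_count):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, labels=None):
    # Rows index the true label, columns the predicted label.
    if labels is None:
        labels = sorted(set(y_true) | set(y_pred))
    imap = {label: i for i, label in enumerate(labels)}
    matrix = np.zeros((len(labels), len(labels)), dtype=int)
    for t, p in zip(y_true, y_pred):
        matrix[imap[t], imap[p]] += 1
    return matrix, imap

m, imap = confusion_matrix(["cat", "dog", "cat"], ["cat", "cat", "dog"])
# m == [[1, 1], [1, 0]], imap == {"cat": 0, "dog": 1}
```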

F1S(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)

Generate f1 score for multi-class classification problems. Higher is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the f1 score

Return type

f1 (float, dict)

F2S(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)

Generate f2 score for multi-class classification problems. Higher is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the f2 score

Return type

f2 (float, dict)

FBS(y_true=None, y_pred=None, beta=1.0, labels=None, average='macro', **kwargs)

The beta parameter determines the weight of recall in the combined score. beta < 1 lends more weight to precision, while beta > 1 favors recall (beta -> 0 considers only precision, beta -> +inf only recall). Higher is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • beta (float) – the weight of recall in the combined score, default = 1.0

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the fbeta score

Return type

fbeta (float, dict)
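The beta trade-off is easiest to see in the closed form; a sketch computing F-beta from a given precision and recall:

```python
def fbeta_score(precision, recall, beta=1.0):
    # Weighted harmonic mean; beta > 1 shifts weight toward recall.
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

fbeta_score(0.5, 0.5)            # 0.5 (plain F1)
fbeta_score(1.0, 0.5, beta=2.0)  # recall-heavy: 2.5 / 4.5 ≈ 0.556
```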

GINI(y_true=None, y_pred=None, **kwargs)

Calculates the Gini index between y_true and y_pred. Smaller is better (Best = 0), Range = [0, +1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

Returns

The Gini index

Return type

float, dict

GMS(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)

Calculates the G-mean (Geometric mean) score between y_true and y_pred. Higher is better (Best = +1), Range = [0, +1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

The G-mean score.

Return type

float, dict

HL(y_true=None, y_pred=None, **kwargs)

Calculates the Hinge loss between y_true and y_pred. Smaller is better (Best = 0), Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of labels (or predicted scores in case of multi-class)

Returns

The Hinge loss

Return type

float

HS(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)

Generate hamming score for multi-class classification problems. Higher is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the hamming score

Return type

hl (float, dict)

JSC(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)

Generate Jaccard similarity index for multi-class classification problems. Higher is better (Best = +1), Range = [0, +1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the Jaccard similarity index

Return type

jsi (float, dict)

JSI(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)

Generate Jaccard similarity index for multi-class classification problems. Higher is better (Best = +1), Range = [0, +1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the Jaccard similarity index

Return type

jsi (float, dict)

KLDL(y_true=None, y_pred=None, **kwargs)

Calculates the Kullback-Leibler divergence loss between y_true and y_pred. Smaller is better (Best = 0), Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of labels (or predicted scores in case of multi-class)

Returns

The Kullback-Leibler divergence loss

Return type

float

LS(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)

Generate lift score for multi-class classification problems. Higher is better, Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the lift score

Return type

ls (float, dict)

MCC(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)

Generate the Matthews Correlation Coefficient. Higher is better (Best = 1), Range = [-1, +1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the Matthews correlation coefficient

Return type

mcc (float, dict)
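For the binary case the coefficient has a direct confusion-count form; a minimal sketch (the multi-class case uses the generalized formula instead):

```python
import math

def mcc(tp, tn, fp, fn):
    # Correlation between predicted and true labels; returns 0 when the
    # denominator vanishes, matching the usual convention.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

mcc(5, 5, 0, 0)  # 1.0 (perfect agreement); mcc(0, 0, 5, 5) gives -1.0
```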

NPV(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)

Generate negative predictive value for multi-class classification problems. Higher is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the negative predictive value

Return type

npv (float, dict)

PS(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)

Generate precision score for multi-class classification problems. Higher is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) –

    {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro” If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data:

    'micro':

    Calculate metrics globally by considering each element of the label indicator matrix as a label.

    'macro':

    Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.

    'weighted':

    Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label).

Returns

the precision score

Return type

precision (float, dict)

RAS(y_true=None, y_pred=None, average='macro', **kwargs)

Calculates the ROC-AUC score between y_true and the predicted scores. Higher is better (Best = +1), Range = [0, +1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of predicted scores (NOT labels)

  • average (str, None) – {‘macro’, ‘weighted’} or None, default=”macro”

Returns

The AUC score.

Return type

float, dict

ROC(y_true=None, y_pred=None, average='macro', **kwargs)

Calculates the ROC-AUC score between y_true and the predicted scores. Higher is better (Best = +1), Range = [0, +1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of predicted scores (NOT labels)

  • average (str, None) – {‘macro’, ‘weighted’} or None, default=”macro”

Returns

The AUC score.

Return type

float, dict

RS(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)

Generate recall score for multi-class classification problems. Higher is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the recall score

Return type

recall (float, dict)

SS(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)

Generate specificity score for multi-class classification problems. Higher is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the specificity score

Return type

ss (float, dict)

SUPPORT = {'AS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'BSL': {'best': '0', 'range': '[0, 1]', 'type': 'min'}, 'CEL': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'CKS': {'best': '1', 'range': '[-1, +1]', 'type': 'max'}, 'F1S': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'F2S': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'FBS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'GINI': {'best': '0', 'range': '[0, 1]', 'type': 'min'}, 'GMS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'HL': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'HS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'JSI': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'KLDL': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'LS': {'best': 'no best', 'range': '[0, +inf)', 'type': 'max'}, 'MCC': {'best': '1', 'range': '[-1, +1]', 'type': 'max'}, 'NPV': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'PS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'ROC-AUC': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'RS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'SS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}}
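The SUPPORT table can drive generic comparisons: look up whether a metric is of type 'max' or 'min' before deciding which score wins. A sketch against a two-entry excerpt of the table above (the helper is_better is hypothetical):

```python
# Excerpt of the SUPPORT table; each entry records the best value, the
# range, and whether the metric is maximized or minimized.
SUPPORT = {
    "AS": {"best": "1", "range": "[0, 1]", "type": "max"},
    "CEL": {"best": "0", "range": "[0, +inf)", "type": "min"},
}

def is_better(metric, a, b):
    # True when score a beats score b under the metric's direction.
    return a > b if SUPPORT[metric]["type"] == "max" else a < b

is_better("AS", 0.9, 0.8)   # True  (accuracy: higher wins)
is_better("CEL", 0.2, 0.4)  # True  (cross-entropy: lower wins)
```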
accuracy_score(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)[source]

Generate accuracy score for multi-class classification problems. Higher is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the accuracy score

Return type

accuracy (float, dict)

brier_score_loss(y_true=None, y_pred=None, **kwargs)[source]

Calculates the Brier Score Loss between y_true and y_pred. Smaller is better (Best = 0), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of labels (or predicted scores in case of multi-class)

Returns

The Brier Score Loss

Return type

float, dict

cohen_kappa_score(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)[source]

Generate Cohen Kappa score for multi-class classification problems. Higher is better (Best = +1), Range = [-1, +1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the Cohen Kappa score

Return type

cks (float, dict)

confusion_matrix(y_true=None, y_pred=None, labels=None, normalize=None, **kwargs)[source]

Generate confusion matrix and useful information

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • normalize ('true', 'pred', 'all', None) – Normalizes confusion matrix over the true (rows), predicted (columns) conditions or all the population.

Returns

a 2-dimensional array of pairwise counts; imap (dict): a map between each label and its index in the confusion matrix; imap_count (dict): a map between each label and its number of true instances in y_true

Return type

matrix (np.ndarray)

crossentropy_loss(y_true=None, y_pred=None, **kwargs)[source]

Calculates the Cross-Entropy loss between y_true and y_pred. Smaller is better (Best = 0), Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of predicted scores (NOT labels)

Returns

The Cross-Entropy loss

Return type

float

f1_score(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)[source]

Generate f1 score for multi-class classification problems. Higher is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the f1 score

Return type

f1 (float, dict)

f2_score(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)[source]

Generate f2 score for multi-class classification problems. Higher is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the f2 score

Return type

f2 (float, dict)

fbeta_score(y_true=None, y_pred=None, beta=1.0, labels=None, average='macro', **kwargs)[source]

The beta parameter determines the weight of recall in the combined score. beta < 1 lends more weight to precision, while beta > 1 favors recall (beta -> 0 considers only precision, beta -> +inf only recall). Higher is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • beta (float) – the weight of recall in the combined score, default = 1.0

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the fbeta score

Return type

fbeta (float, dict)

g_mean_score(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)[source]

Calculates the G-mean (Geometric mean) score between y_true and y_pred. Higher is better (Best = +1), Range = [0, +1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

The G-mean score.

Return type

float, dict

get_processed_data(y_true=None, y_pred=None)[source]
Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

Returns

y_true used in the evaluation process; y_pred_final: y_pred used in the evaluation process; one_dim: whether y_true has one dimension

Return type

y_true_final

get_processed_data2(y_true=None, y_pred=None)[source]
Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction scores

Returns

y_true used in the evaluation process; y_pred_final: y_pred used in the evaluation process; one_dim: whether y_true has one dimension

Return type

y_true_final

static get_support(name=None, verbose=True)[source]
gini_index(y_true=None, y_pred=None, **kwargs)[source]

Calculates the Gini index between y_true and y_pred. Smaller is better (Best = 0), Range = [0, +1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

Returns

The Gini index

Return type

float, dict

hamming_score(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)[source]

Generate hamming score for multi-class classification problems. Higher is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the hamming score

Return type

hl (float, dict)

hinge_loss(y_true=None, y_pred=None, **kwargs)[source]

Calculates the Hinge loss between y_true and y_pred. Smaller is better (Best = 0), Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of labels (or predicted scores in case of multi-class)

Returns

The Hinge loss

Return type

float

jaccard_similarity_coefficient(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)

Generate Jaccard similarity index for multi-class classification problems. Higher is better (Best = +1), Range = [0, +1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the Jaccard similarity index

Return type

jsi (float, dict)

jaccard_similarity_index(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)[source]

Generate Jaccard similarity index for multi-class classification problems. Higher is better (Best = +1), Range = [0, +1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the Jaccard similarity index

Return type

jsi (float, dict)

kullback_leibler_divergence_loss(y_true=None, y_pred=None, **kwargs)[source]

Calculates the Kullback-Leibler divergence loss between y_true and y_pred. Smaller is better (Best = 0), Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of labels (or predicted scores in case of multi-class)

Returns

The Kullback-Leibler divergence loss

Return type

float
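Given two discrete probability vectors, the loss is the standard divergence sum; a numpy sketch (with 0 · log 0 treated as 0 by convention):

```python
import numpy as np

def kl_divergence(p, q):
    # sum_i p_i * log(p_i / q_i); p and q must each be probability vectors.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0                      # skip terms where p_i == 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

kl_divergence([1.0, 0.0], [0.5, 0.5])  # log(2) ≈ 0.693
```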

lift_score(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)[source]

Generate lift score for multi-class classification problems. Higher is better, Range = [0, +inf)

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the lift score

Return type

ls (float, dict)

matthews_correlation_coefficient(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)[source]

Generate the Matthews Correlation Coefficient. Higher is better (Best = 1), Range = [-1, +1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the Matthews correlation coefficient

Return type

mcc (float, dict)

negative_predictive_value(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)[source]

Generate negative predictive value for multi-class classification problems. Higher is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the negative predictive value

Return type

npv (float, dict)

precision_score(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)[source]

Generate the precision score for a multi-class classification problem. Higher is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) –

    {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”. If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data:

    'micro':

    Calculate metrics globally by considering each element of the label indicator matrix as a label.

    'macro':

    Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.

    'weighted':

    Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label).

Returns

the precision score

Return type

precision (float, dict)
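The three averaging modes can be sketched in plain Python (an illustrative one-vs-rest implementation of per-class precision; the helper names are hypothetical, not part of permetrics):

```python
from collections import Counter

def precision_per_class(y_true, y_pred, labels):
    """One-vs-rest precision for each class: TP / (TP + FP)."""
    scores = {}
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        scores[c] = tp / (tp + fp) if (tp + fp) else 0.0
    return scores

def average_precision(y_true, y_pred, average="macro"):
    labels = sorted(set(y_true))
    per_class = precision_per_class(y_true, y_pred, labels)
    if average is None:
        return per_class                      # one score per class
    if average == "micro":
        # globally pooled TP / (TP + FP); for single-label problems
        # every prediction is counted once, so this reduces to accuracy
        correct = sum(t == p for t, p in zip(y_true, y_pred))
        return correct / len(y_true)
    if average == "macro":                    # unweighted mean over classes
        return sum(per_class.values()) / len(labels)
    if average == "weighted":                 # weighted by class support
        support = Counter(y_true)
        return sum(per_class[c] * support[c] for c in labels) / len(y_true)
```

Note that 'macro' treats rare and frequent classes equally, while 'weighted' lets the frequent classes dominate, which matches the descriptions above.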

recall_score(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)[source]

Generate the recall score for a multi-class classification problem. Higher is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the recall score

Return type

recall (float, dict)

roc_auc_score(y_true=None, y_pred=None, average='macro', **kwargs)[source]

Calculates the ROC-AUC score between y_true and the predicted scores y_pred. Higher is better (Best = +1), Range = [0, +1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of predicted scores (not labels)

  • average (str, None) – {‘macro’, ‘weighted’} or None, default=”macro”

Returns

The AUC score.

Return type

float, dict

specificity_score(y_true=None, y_pred=None, labels=None, average='macro', **kwargs)[source]

Generate the specificity score for a multi-class classification problem. Higher is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (tuple, list, np.ndarray) – a list of integers or strings for known classes

  • y_pred (tuple, list, np.ndarray) – a list of integers or strings for y_pred classes

  • labels (tuple, list, np.ndarray) – List of labels to index the matrix. This may be used to reorder or select a subset of labels.

  • average (str, None) – {‘micro’, ‘macro’, ‘weighted’} or None, default=”macro”

Returns

the specificity score

Return type

ss (float, dict)

permetrics.clustering module

class permetrics.clustering.ClusteringMetric(y_true=None, y_pred=None, X=None, force_finite=True, finite_value=None, **kwargs)[source]

Bases: permetrics.evaluator.Evaluator

Defines a ClusteringMetric class that holds all internal and external metrics for clustering problems

Parameters
  • y_true (tuple, list, np.ndarray, default = None) – The ground truth values. This is for calculating external metrics

  • y_pred (tuple, list, np.ndarray, default = None) – The prediction values. This is for both calculating internal and external metrics

  • X (tuple, list, np.ndarray, default = None) – The features of datasets. This is for calculating internal metrics

  • force_finite (bool, default = True) – When the result is not finite (NaN or Inf), it is replaced by finite_value

  • finite_value (float, default = None) – The value used to replace infinite or NaN results.

ARS(y_true=None, y_pred=None, **kwargs)

Computes the Adjusted Rand score between two clusterings. Bigger is better (Best = 1), Range = [-1, 1]

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Adjusted Rand score

Return type

result (float)

BHI(X=None, y_pred=None, **kwargs)

The Ball-Hall Index (1965) is the mean of the mean dispersion across all clusters. The largest difference between successive clustering levels indicates the optimal number of clusters. Smaller is better (Best = 0), Range=[0, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

Returns

The Ball-Hall index

Return type

result (float)

BI(X=None, y_pred=None, force_finite=True, finite_value=10000000000.0, **kwarg)

Computes the Beale Index. Smaller is better (Best = 0), Range = [0, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace infinite or NaN results.

Returns

The Beale Index

Return type

result (float)

BRI(X=None, y_pred=None, force_finite=True, finite_value=10000000000.0, **kwargs)

Computes the Banfield-Raftery Index. Smaller is better (No best value), Range = (-inf, +inf). This index is the weighted sum of the logarithms of the traces of the variance-covariance matrix of each cluster

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace infinite or NaN results.

Returns

The Banfield-Raftery Index

Return type

result (float)

CDS(y_true=None, y_pred=None, **kwargs)

Computes the Czekanowski-Dice score between two clusterings. It is the harmonic mean of the precision and recall coefficients. Bigger is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Czekanowski-Dice score

Return type

result (float)

CHI(X=None, y_pred=None, force_finite=True, finite_value=0.0, **kwargs)

Compute the Calinski and Harabasz (1974) index, also known as the Variance Ratio Criterion. The score is defined as the ratio of the between-cluster dispersion to the within-cluster dispersion. Bigger is better (No best value), Range=[0, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace infinite or NaN results.

Returns

The resulting Calinski-Harabasz index.

Return type

result (float)

CS(y_true=None, y_pred=None, **kwargs)

Computes the completeness score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It measures the extent to which all data points that are members of a given class are assigned to the same cluster.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The completeness score.

Return type

result (float)

DBCVI(X=None, y_pred=None, force_finite=True, finite_value=1.0, **kwarg)

Computes the Density-based Clustering Validation Index. Smaller is better (Best = 0), Range = [0, 1]

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace infinite or NaN results.

Returns

The Density-based Clustering Validation Index

Return type

result (float)

DBI(X=None, y_pred=None, force_finite=True, finite_value=10000000000.0, **kwargs)

Computes the Davies-Bouldin index. Smaller is better (Best = 0), Range=[0, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace infinite or NaN results.

Returns

The Davies-Bouldin index

Return type

result (float)

DHI(X=None, y_pred=None, force_finite=True, finite_value=10000000000.0, **kwargs)

Computes the Duda Index or Duda-Hart index. Smaller is better (Best = 0), Range = [0, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace infinite or NaN results.

Returns

The Duda-Hart index

Return type

result (float)

DI(X=None, y_pred=None, use_modified=True, force_finite=True, finite_value=0.0, **kwargs)

Computes the Dunn Index. Bigger is better (No best value), Range=[0, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • use_modified (bool) – Whether to use the modified version proposed to speed up the computation of this metric, default=True

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace infinite or NaN results.

Returns

The Dunn Index

Return type

result (float)

DRI(X=None, y_pred=None, force_finite=True, finite_value=0.0, **kwargs)

Computes the Det-Ratio index. Bigger is better (No best value), Range=[0, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace infinite or NaN results.

Returns

The Det-Ratio index

Return type

result (float)

ES(y_true=None, y_pred=None, **kwargs)

Computes the Entropy score. Smaller is better (Best = 0), Range = [0, +inf)

Entropy is a metric used to evaluate the quality of clustering results, particularly when the ground truth labels of the data points are known. It measures the amount of uncertainty or disorder within the clusters produced by a clustering algorithm.

Here’s how the Entropy score is calculated:

  1. For each cluster, compute the class distribution by counting the occurrences of each class label within the cluster.

  2. Normalize the class distribution by dividing the count of each class label by the total number of data points in the cluster.

  3. Compute the entropy for each cluster using the normalized class distribution.

  4. Weight the entropy of each cluster by its relative size (proportion of data points in the whole dataset).

  5. Sum up the weighted entropies of all clusters.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Entropy score

Return type

result (float)
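The five steps above translate directly to code (a minimal sketch assuming the natural logarithm; some references use base 2, and the function name is hypothetical, not part of permetrics):

```python
import math
from collections import Counter

def entropy_score(y_true, y_pred):
    """Weighted average of per-cluster entropies of the true-label
    distribution: steps 1-5 from the description above."""
    n = len(y_true)
    clusters = {}
    for label, cluster in zip(y_true, y_pred):      # step 1: group labels by cluster
        clusters.setdefault(cluster, []).append(label)
    total = 0.0
    for members in clusters.values():
        counts = Counter(members)
        # steps 2-3: entropy of the normalized class distribution in this cluster
        h = -sum((c / len(members)) * math.log(c / len(members))
                 for c in counts.values())
        # steps 4-5: weight by relative cluster size and accumulate
        total += (len(members) / n) * h
    return total

# A clustering that exactly matches the labels has zero entropy
print(entropy_score(["a", "a", "b", "b"], [0, 0, 1, 1]))  # 0.0
```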

FMS(y_true=None, y_pred=None, force_finite=True, finite_value=0.0, **kwargs)

Computes the Fowlkes-Mallows score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace infinite or NaN results.

Returns

The Fowlkes-Mallows score

Return type

result (float)

FmS(y_true=None, y_pred=None, **kwargs)

Computes the F-Measure score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It is the harmonic mean of the precision and recall coefficients, given by the formula F = 2PR / (P + R). It provides a single score that summarizes both precision and recall. The Fa-measure is a weighted version of the F-measure that allows for a trade-off between precision and recall. It is defined as Fa = (1 + a)PR / (aP + R), where a is a parameter that determines the relative importance of precision and recall.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The F-Measure score

Return type

result (float)

GAS(y_true=None, y_pred=None, **kwargs)

Computes the Gamma Score between two clustering solutions. Bigger is better (Best = 1), Range = [-1, 1]

Ref: Cluster Validation for Mixed-Type Data (Rabea Aschenbruck and Gero Szepannek)

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Gamma Score

Return type

result (float)

GPS(y_true=None, y_pred=None, **kwargs)

Computes the Gplus Score between two clustering solutions. Smaller is better (Best = 0), Range = [0, 1]

Ref: Cluster Validation for Mixed-Type Data (Rabea Aschenbruck and Gero Szepannek)

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Gplus Score

Return type

result (float)

HGS(y_true=None, y_pred=None, force_finite=True, finite_value=- 1.0, **kwargs)

Computes the Hubert Gamma score between two clusterings. Bigger is better (Best = 1), Range=[-1, +1]

The Hubert Gamma index ranges from -1 to 1, where a value of 1 indicates perfect agreement between the two partitions being compared, a value of 0 indicates no association between the partitions, and a value of -1 indicates complete disagreement between the two partitions.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace infinite or NaN results.

Returns

The Hubert Gamma score

Return type

result (float)

HI(X=None, y_pred=None, force_finite=True, finite_value=10000000000.0, **kwarg)

Computes the Hartigan index for a clustering solution. Smaller is better (Best = 0), Range = [0, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace infinite or NaN results.

Returns

The Hartigan index

Return type

result (float)

HS(y_true=None, y_pred=None, **kwargs)

Computes the Homogeneity Score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It measures the extent to which each cluster contains only data points that belong to a single class or category. In other words, homogeneity assesses whether all the data points in a cluster are members of the same true class or label. A higher homogeneity score indicates better clustering results, where each cluster corresponds well to a single ground truth class.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Homogeneity Score

Return type

result (float)

JS(y_true=None, y_pred=None, **kwargs)

Computes the Jaccard score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It ranges from 0 to 1, where a value of 1 indicates perfect agreement between the two partitions being compared. A value of 0 indicates complete disagreement between the two partitions.

The Jaccard score is similar to the Czekanowski-Dice score, but it is less sensitive to differences in cluster size. However, like the Czekanowski-Dice score, it may not be sensitive to certain types of differences between partitions. Therefore, it is often used in conjunction with other external indices to get a more complete picture of the similarity between partitions.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Jaccard score

Return type

result (float)

KDI(X=None, y_pred=None, use_normalized=True, **kwargs)

Computes the Ksq-DetW Index. Bigger is better (No best value), Range=(-inf, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • use_normalized (bool) – Whether to normalize the scatter matrix before computing the determinant, to keep the value small, default=True

Returns

The Ksq-DetW Index

Return type

result (float)

KS(y_true=None, y_pred=None, **kwargs)

Computes the Kulczynski score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It is the arithmetic mean of the precision and recall coefficients, which means that it takes into account both precision and recall. The Kulczynski index ranges from 0 to 1, where a value of 1 indicates perfect agreement between the two partitions being compared. A value of 0 indicates complete disagreement between the two partitions.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Kulczynski score

Return type

result (float)

LDRI(X=None, y_pred=None, force_finite=True, finite_value=- 10000000000.0, **kwargs)

Computes the Log Det Ratio Index. Bigger is better (No best value), Range=(-inf, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace infinite or NaN results.

Returns

The Log Det Ratio Index

Return type

result (float)

LSRI(X=None, y_pred=None, force_finite=True, finite_value=- 10000000000.0, **kwargs)

Computes the Log SS Ratio Index. Bigger is better (No best value), Range=(-inf, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace infinite or NaN results.

Returns

The Log SS Ratio Index

Return type

result (float)

MIS(y_true=None, y_pred=None, **kwargs)

Computes the Mutual Information score between two clusterings. Bigger is better (No best value), Range = [0, +inf)

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Mutual Information score

Return type

result (float)

MNS(y_true=None, y_pred=None, **kwargs)

Computes the McNemar score between two clusterings. Bigger is better (No best value), Range=(-inf, +inf)

It is an adaptation of the non-parametric McNemar test for the comparison of frequencies between two paired samples. The McNemar index ranges from -inf to +inf, where a larger value indicates stronger agreement between the two partitions being compared.

Under the null hypothesis that the discordances between the partitions P1 and P2 are random, the McNemar index follows approximately a normal distribution. It can be transformed into a chi-squared distance, which follows a chi-squared distribution with 1 degree of freedom.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The McNemar score

Return type

result (float)

MSEI(X=None, y_pred=None, **kwarg)

Computes the Mean Squared Error Index. Smaller is better (Best = 0), Range = [0, +inf)

MSEI measures the mean of squared distances between each data point and its corresponding centroid or cluster center.

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

Returns

The Mean Squared Error Index

Return type

result (float)

NMIS(y_true=None, y_pred=None, force_finite=True, finite_value=0.0, **kwargs)

Computes the normalized mutual information between two clusterings. It is a variation of the mutual information score that normalizes the result to take values between 0 and 1. It is defined as the mutual information divided by the average entropy of the true and predicted clusterings. Bigger is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace infinite or NaN results.

Returns

The normalized mutual information score.

Return type

result (float)
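The normalization described above (mutual information divided by the average entropy of the two labelings) can be sketched from empirical counts (function names are hypothetical; the natural log and the convention for the degenerate zero-entropy case are assumptions of this sketch):

```python
import math
from collections import Counter

def mutual_info(y_true, y_pred):
    """Mutual information between two labelings, from joint counts."""
    n = len(y_true)
    joint = Counter(zip(y_true, y_pred))
    pt, pp = Counter(y_true), Counter(y_pred)
    # cells with zero count contribute nothing, so summing observed cells suffices
    return sum((c / n) * math.log((c / n) / ((pt[a] / n) * (pp[b] / n)))
               for (a, b), c in joint.items())

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def nmi(y_true, y_pred):
    """MI divided by the average entropy of the two labelings."""
    denom = (entropy(y_true) + entropy(y_pred)) / 2
    # both labelings constant: define the score as 1 (perfect trivial agreement)
    return mutual_info(y_true, y_pred) / denom if denom else 1.0
```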

PhS(y_true=None, y_pred=None, force_finite=True, finite_value=- 10000000000.0, **kwargs)

Computes the Phi score between two clusterings. Bigger is better (No best value), Range = (-inf, +inf)

It is a classical measure of the correlation between two dichotomous variables, and it can be used to measure the similarity between two partitions. The Phi index ranges from -inf to +inf, where a larger value indicates stronger agreement between the two partitions being compared, a value of 0 indicates no association between the partitions, and a smaller value indicates stronger disagreement.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace infinite or NaN results.

Returns

The Phi score

Return type

result (float)

PrS(y_true=None, y_pred=None, **kwargs)

Computes the Precision score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]. Note that this differs from the precision score used in classification metrics

It measures the proportion of pairs of points that are grouped together in P1, given that they are grouped together in P2. It is calculated as the ratio of yy (the number of pairs that are grouped together in both P1 and P2) to the sum of yy and ny (the number of pairs that are grouped together in P2 but not in P1). The formula for P is P = yy / (yy + ny).

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Precision score

Return type

result (float)

PuS(y_true=None, y_pred=None, **kwargs)

Computes the Purity score. Bigger is better (Best = 1), Range = [0, 1]

Purity is a metric used to evaluate the quality of clustering results, particularly in situations where the ground truth labels of the data points are known. It measures the extent to which the clusters produced by a clustering algorithm match the true class labels of the data.

Here’s how Purity is calculated:
  1. For each cluster, find the majority class label among the data points in that cluster.

  2. Sum up the sizes of the clusters that belong to the majority class label.

  3. Divide the sum by the total number of data points.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Purity score

Return type

result (float)
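The three steps above translate directly to code (a minimal sketch; the function name is hypothetical, not part of permetrics):

```python
from collections import Counter

def purity_score(y_true, y_pred):
    """Purity: majority-class sizes summed over clusters, divided by n."""
    clusters = {}
    for label, cluster in zip(y_true, y_pred):
        clusters.setdefault(cluster, []).append(label)
    # steps 1-2: majority-class count in each cluster, summed over clusters
    majority_total = sum(max(Counter(members).values())
                         for members in clusters.values())
    # step 3: divide by the total number of data points
    return majority_total / len(y_true)
```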

RRS(y_true=None, y_pred=None, **kwargs)

Computes the Russel-Rao score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It measures the proportion of concordances between the two partitions by computing the proportion of pairs of samples that are in the same cluster in both partitions. The Russel-Rao index ranges from 0 to 1, where a value of 1 indicates perfect agreement between the two partitions being compared. A value of 0 indicates complete disagreement between the two partitions.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Russel-Rao score

Return type

result (float)

RSI(X=None, y_pred=None, **kwarg)

Computes the R-squared index. Bigger is better (Best = 1), Range = (-inf, 1]

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

Returns

The R-squared index

Return type

result (float)

RTS(y_true=None, y_pred=None, **kwargs)

Computes the Rogers-Tanimoto score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It measures the similarity between two partitions by computing the proportion of pairs of samples that are either in the same cluster in both partitions or in different clusters in both partitions, with an adjustment for the number of pairs of samples that are in different clusters in one partition but in the same cluster in the other partition. The Rogers-Tanimoto index ranges from 0 to 1, where a value of 1 indicates perfect agreement between the two partitions being compared. A value of 0 indicates complete disagreement between the two partitions.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Rogers-Tanimoto score

Return type

result (float)

RaS(y_true=None, y_pred=None, **kwargs)

Computes the Rand score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The rand score.

Return type

result (float)

ReS(y_true=None, y_pred=None, **kwargs)

Computes the Recall score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It measures the proportion of points that are correctly grouped together in P2, given that they are grouped together in P1. It is calculated as the ratio of yy to the sum of yy and yn (the number of points that are grouped together in P1 but not in P2). The formula for R is R = yy / (yy + yn).

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Recall score

Return type

result (float)
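The pair counts yy, yn, and ny that the Precision, Recall, and F-Measure scores above are built on can be sketched as follows (an O(n²) illustration over all sample pairs; function names are hypothetical, not part of permetrics):

```python
from itertools import combinations

def pair_counts(y_true, y_pred):
    """Count sample pairs: yy together in both partitions, yn together
    only in P1 (y_true), ny together only in P2 (y_pred)."""
    yy = yn = ny = 0
    for i, j in combinations(range(len(y_true)), 2):
        same_true = y_true[i] == y_true[j]
        same_pred = y_pred[i] == y_pred[j]
        if same_true and same_pred:
            yy += 1
        elif same_true:
            yn += 1
        elif same_pred:
            ny += 1
    return yy, yn, ny

def pair_precision_recall_f(y_true, y_pred):
    yy, yn, ny = pair_counts(y_true, y_pred)
    p = yy / (yy + ny) if yy + ny else 0.0     # P = yy / (yy + ny)
    r = yy / (yy + yn) if yy + yn else 0.0     # R = yy / (yy + yn)
    f = 2 * p * r / (p + r) if p + r else 0.0  # F = 2PR / (P + R)
    return p, r, f
```

Identical partitions give P = R = F = 1, matching the stated best values.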

SI(X=None, y_pred=None, multi_output=False, force_finite=True, finite_value=- 1.0, **kwargs)

Computes the Silhouette Index. Bigger is better (Best = 1), Range = [-1, +1]

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • multi_output (bool) – Return scores for each cluster, default=False

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace infinite or NaN results.

Returns

The Silhouette Index

Return type

result (float)

SS1S(y_true=None, y_pred=None, **kwargs)

Computes the Sokal-Sneath 1 score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It measures the similarity between two partitions by computing the proportion of pairs of samples that are in the same cluster in both partitions, with an adjustment for the number of pairs of samples that are in different clusters in one partition but in the same cluster in the other partition. The Sokal-Sneath indices range from 0 to 1, where a value of 1 indicates perfect agreement between the two partitions being compared. A value of 0 indicates complete disagreement between the two partitions.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Sokal-Sneath 1 score

Return type

result (float)

SS2S(y_true=None, y_pred=None, **kwargs)

Computes the Sokal-Sneath 2 score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It measures the similarity between two partitions by computing the proportion of pairs of samples that are in the same cluster in both partitions, with an adjustment for the number of pairs of samples that are in different clusters in one partition but in the same cluster in the other partition. The Sokal-Sneath indices range from 0 to 1, where a value of 1 indicates perfect agreement between the two partitions being compared. A value of 0 indicates complete disagreement between the two partitions.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Sokal-Sneath 2 score

Return type

result (float)

SSEI(X=None, y_pred=None, **kwarg)

Computes the Sum of Squared Error Index. Smaller is better (Best = 0), Range = [0, +inf)

SSEI measures the sum of squared distances between each data point and its corresponding centroid or cluster center. It quantifies the compactness of the clusters. Here’s how you can calculate the SSE in a clustering problem:

  1. Assign each data point to its nearest centroid or cluster center based on some distance metric (e.g., Euclidean distance).

  2. For each data point, calculate the squared Euclidean distance between the data point and its assigned centroid.

  3. Sum up the squared distances for all data points to obtain the SSE.

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

Returns

The Sum of Squared Error Index

Return type

result (float)
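The three steps above can be sketched without any dependencies (a minimal illustration assuming Euclidean distance; the function name is hypothetical, not part of permetrics):

```python
def sse_index(X, y_pred):
    """Sum of squared Euclidean distances from each point to the
    centroid of its assigned cluster."""
    # step 1: group points by their assigned cluster label
    clusters = {}
    for point, label in zip(X, y_pred):
        clusters.setdefault(label, []).append(point)
    sse = 0.0
    for members in clusters.values():
        dim = len(members[0])
        # centroid = coordinate-wise mean of the cluster's points
        centroid = [sum(p[d] for p in members) / len(members) for d in range(dim)]
        # steps 2-3: squared distance of each point to its centroid, summed
        sse += sum(sum((p[d] - centroid[d]) ** 2 for d in range(dim))
                   for p in members)
    return sse
```

The MSEI described earlier is simply this quantity divided by the number of points.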

SUPPORT = {'ARS': {'best': '1', 'range': '[-1, 1]', 'type': 'max'}, 'BHI': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'BI': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'BRI': {'best': 'no best', 'range': '(-inf, +inf)', 'type': 'min'}, 'CDS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'CHI': {'best': 'no best', 'range': '[0, +inf)', 'type': 'max'}, 'CS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'DBCVI': {'best': '0', 'range': '[0, 1]', 'type': 'min'}, 'DBI': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'DHI': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'DI': {'best': 'no best', 'range': '[0, +inf)', 'type': 'max'}, 'DRI': {'best': 'no best', 'range': '[0, +inf)', 'type': 'max'}, 'ES': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'FMS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'FmS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'GAS': {'best': '1', 'range': '[-1, 1]', 'type': 'max'}, 'GPS': {'best': '0', 'range': '[0, 1]', 'type': 'min'}, 'HGS': {'best': '1', 'range': '[-1, 1]', 'type': 'max'}, 'HI': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'HS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'JS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'KDI': {'best': 'no best', 'range': '(-inf, +inf)', 'type': 'max'}, 'KS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'LDRI': {'best': 'no best', 'range': '(-inf, +inf)', 'type': 'max'}, 'LSRI': {'best': 'no best', 'range': '(-inf, +inf)', 'type': 'max'}, 'MIS': {'best': 'no best', 'range': '[0, +inf)', 'type': 'max'}, 'MNS': {'best': 'no best', 'range': '(-inf, +inf)', 'type': 'max'}, 'MSEI': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'NMIS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'PhS': {'best': 'no best', 'range': '(-inf, +inf)', 'type': 'max'}, 'PrS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'PuS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'RRS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'RSI': 
{'best': '1', 'range': '(-inf, +1]', 'type': 'max'}, 'RTS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'RaS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'ReS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'SI': {'best': '1', 'range': '[-1, +1]', 'type': 'max'}, 'SS1S': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'SS2S': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'SSEI': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}, 'TS': {'best': 'no best', 'range': '(-inf, +inf)', 'type': 'max'}, 'VMS': {'best': '1', 'range': '[0, 1]', 'type': 'max'}, 'XBI': {'best': '0', 'range': '[0, +inf)', 'type': 'min'}}
TS(y_true=None, y_pred=None, **kwargs)

Computes the Tau Score between two clustering solutions. Bigger is better (No best value), Range = (-inf, +inf)

Ref: Cluster Validation for Mixed-Type Data (Rabea Aschenbruck and Gero Szepannek)

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Tau Score

Return type

result (float)

VMS(y_true=None, y_pred=None, **kwargs)

Computes the V measure score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It is a combination of two other metrics: homogeneity and completeness. Homogeneity measures whether all the data points in a given cluster belong to the same class. Completeness measures whether all the data points of a certain class are assigned to the same cluster. The V-measure combines these two metrics into a single score.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The V measure score

Return type

result (float)

XBI(X=None, y_pred=None, force_finite=True, finite_value=10000000000.0, **kwargs)

Computes the Xie-Beni index. Smaller is better (Best = 0), Range=[0, +inf)

The Xie-Beni index is an index of fuzzy clustering, but it is also applicable to crisp clustering. The numerator is the mean of the squared distances of all points to the barycenter of the cluster they belong to. The denominator is the minimal squared distance between points in different clusters. The minimum value indicates the best number of clusters.
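The formulation above can be sketched with NumPy. This is an illustrative crisp implementation that follows the prose (minimal squared distance between points in different clusters in the denominator), not the library's internal code; other formulations use the distance between cluster centers instead.

```python
import numpy as np
from itertools import combinations

def xie_beni(X, labels):
    """Illustrative crisp Xie-Beni index: compactness over separation."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    # Numerator: mean squared distance of each point to its cluster barycenter
    sq = []
    for c in np.unique(labels):
        pts = X[labels == c]
        center = pts.mean(axis=0)
        sq.append(np.sum((pts - center) ** 2, axis=1))
    numerator = np.concatenate(sq).mean()
    # Denominator: minimal squared distance between points in different clusters
    sep = min(
        np.sum((X[i] - X[j]) ** 2)
        for i, j in combinations(range(len(X)), 2)
        if labels[i] != labels[j]
    )
    return numerator / sep
```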

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns

The Xie-Beni index

Return type

result (float)

adjusted_rand_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the Adjusted rand score between two clusterings. Bigger is better (Best = 1), Range = [-1, 1]

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Adjusted rand score

Return type

result (float)

ball_hall_index(X=None, y_pred=None, **kwargs)[source]

The Ball-Hall Index (Ball & Hall, 1965) is the mean of the mean dispersion across all clusters. The largest difference between successive clustering levels indicates the optimal number of clusters. Smaller is better (Best = 0), Range = [0, +inf)
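The index described above, the mean over clusters of the mean squared dispersion, can be sketched with NumPy. This is an illustrative implementation, not the library's internal code:

```python
import numpy as np

def ball_hall(X, labels):
    """Mean over clusters of the mean squared distance to the cluster centroid."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    dispersions = []
    for c in np.unique(labels):
        pts = X[labels == c]
        centroid = pts.mean(axis=0)
        # Mean squared distance of this cluster's points to its centroid
        dispersions.append(np.mean(np.sum((pts - centroid) ** 2, axis=1)))
    return float(np.mean(dispersions))
```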

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

Returns

The Ball-Hall index

Return type

result (float)

banfeld_raftery_index(X=None, y_pred=None, force_finite=True, finite_value=10000000000.0, **kwargs)[source]

Computes the Banfeld-Raftery Index. Smaller is better (No best value), Range = (-inf, +inf). This index is the weighted sum of the logarithms of the traces of the variance-covariance matrix of each cluster.

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns

The Banfeld-Raftery Index

Return type

result (float)

beale_index(X=None, y_pred=None, force_finite=True, finite_value=10000000000.0, **kwarg)[source]

Computes the Beale Index. Smaller is better (Best = 0), Range = [0, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns

The Beale Index

Return type

result (float)

calinski_harabasz_index(X=None, y_pred=None, force_finite=True, finite_value=0.0, **kwargs)[source]

Computes the Calinski-Harabasz (1974) index, also known as the Variance Ratio Criterion. The score is defined as the ratio of the between-cluster dispersion to the within-cluster dispersion. Bigger is better (No best value), Range = [0, +inf)
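A minimal NumPy sketch of the variance-ratio idea, assuming the standard normalization by k-1 and n-k degrees of freedom; illustrative only, not permetrics' internal code:

```python
import numpy as np

def calinski_harabasz(X, labels):
    """Between-cluster dispersion over within-cluster dispersion,
    normalized by their degrees of freedom."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    n, k = len(X), len(np.unique(labels))
    overall_mean = X.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        pts = X[labels == c]
        center = pts.mean(axis=0)
        # Between: cluster size times squared distance of center to global mean
        between += len(pts) * np.sum((center - overall_mean) ** 2)
        # Within: squared distances of points to their own cluster center
        within += np.sum((pts - center) ** 2)
    return (between / (k - 1)) / (within / (n - k))
```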

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns

The resulting Calinski-Harabasz index.

Return type

result (float)

check_X(X)[source]
completeness_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the completeness score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It measures the extent to which all data points of a given class are assigned to the same cluster.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The completeness score.

Return type

result (float)

czekanowski_dice_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the Czekanowski-Dice score between two clusterings. It is the harmonic mean of the precision and recall coefficients. Bigger is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Czekanowski-Dice score

Return type

result (float)

davies_bouldin_index(X=None, y_pred=None, force_finite=True, finite_value=10000000000.0, **kwargs)[source]

Computes the Davies-Bouldin index. Smaller is better (Best = 0), Range = [0, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns

The Davies-Bouldin index

Return type

result (float)

density_based_clustering_validation_index(X=None, y_pred=None, force_finite=True, finite_value=1.0, **kwarg)[source]

Computes the Density-based Clustering Validation Index. Smaller is better (Best = 0), Range = [0, 1]

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns

The Density-based Clustering Validation Index

Return type

result (float)

det_ratio_index(X=None, y_pred=None, force_finite=True, finite_value=0.0, **kwargs)[source]

Computes the Det-Ratio index. Bigger is better (No best value), Range = [0, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns

The Det-Ratio index

Return type

result (float)

duda_hart_index(X=None, y_pred=None, force_finite=True, finite_value=10000000000.0, **kwargs)[source]

Computes the Duda Index (also known as the Duda-Hart index). Smaller is better (Best = 0), Range = [0, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns

The Duda-Hart index

Return type

result (float)

dunn_index(X=None, y_pred=None, use_modified=True, force_finite=True, finite_value=0.0, **kwargs)[source]

Computes the Dunn Index. Bigger is better (No best value), Range = [0, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • use_modified (bool) – Whether to use the modified version proposed to speed up the computation of this metric, default=True

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns

The Dunn Index

Return type

result (float)

entropy_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the Entropy score. Smaller is better (Best = 0), Range = [0, +inf)

Entropy is a metric used to evaluate the quality of clustering results, particularly when the ground truth labels of the data points are known. It measures the amount of uncertainty or disorder within the clusters produced by a clustering algorithm.

Here’s how the Entropy score is calculated:

  1. For each cluster, compute the class distribution by counting the occurrences of each class label within the cluster.

  2. Normalize the class distribution by dividing the count of each class label by the total number of data points in the cluster.

  3. Compute the entropy for each cluster using the normalized class distribution.

  4. Weight the entropy of each cluster by its relative size (proportion of data points in the whole dataset).

  5. Sum up the weighted entropies of all clusters.
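The five steps above can be sketched as follows. This is illustrative only; it assumes non-negative integer class labels and uses the natural logarithm, while the library may use a different base:

```python
import numpy as np

def entropy_score(y_true, y_pred):
    """Weighted sum of per-cluster entropies of the true-class distribution."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    total = 0.0
    for c in np.unique(y_pred):
        members = y_true[y_pred == c]                  # step 1: class counts per cluster
        probs = np.bincount(members) / len(members)    # step 2: normalize
        probs = probs[probs > 0]
        h = -np.sum(probs * np.log(probs))             # step 3: cluster entropy
        total += (len(members) / n) * h                # steps 4-5: weight and sum
    return total
```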

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Entropy score

Return type

result (float)

f_measure_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the F-Measure score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It is the harmonic mean of the precision and recall coefficients, given by the formula F = 2PR / (P + R). It provides a single score that summarizes both precision and recall. The Fa-measure is a weighted version of the F-measure that allows for a trade-off between precision and recall. It is defined as Fa = (1 + a)PR / (aP + R), where a is a parameter that determines the relative importance of precision and recall.
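A sketch of the pair-counting computation behind F = 2PR / (P + R), using the yy/yn/ny counts defined in the precision and recall entries; illustrative, not the library's internal code:

```python
from itertools import combinations

def pair_counts(y_true, y_pred):
    """Count sample pairs: yy = together in both partitions,
    yn = together only in y_true, ny = together only in y_pred."""
    yy = yn = ny = 0
    for i, j in combinations(range(len(y_true)), 2):
        same_true = y_true[i] == y_true[j]
        same_pred = y_pred[i] == y_pred[j]
        if same_true and same_pred:
            yy += 1
        elif same_true:
            yn += 1
        elif same_pred:
            ny += 1
    return yy, yn, ny

def f_measure(y_true, y_pred):
    yy, yn, ny = pair_counts(y_true, y_pred)
    precision = yy / (yy + ny)
    recall = yy / (yy + yn)
    return 2 * precision * recall / (precision + recall)
```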

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The F-Measure score

Return type

result (float)

fowlkes_mallows_score(y_true=None, y_pred=None, force_finite=True, finite_value=0.0, **kwargs)[source]

Computes the Fowlkes-Mallows score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns

The Fowlkes-Mallows score

Return type

result (float)

gamma_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the Gamma Score between two clustering solutions. Bigger is better (Best = 1), Range = [-1, 1]

Ref: Cluster Validation for Mixed-Type Data (Rabea Aschenbruck and Gero Szepannek)

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Gamma Score

Return type

result (float)

get_processed_external_data(y_true=None, y_pred=None, force_finite=None, finite_value=None)[source]
Parameters
  • y_true (tuple, list, np.ndarray) – The ground truth values

  • y_pred (tuple, list, np.ndarray) – The prediction values

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns
  • y_true_final – y_true used in the evaluation process.

  • y_pred_final – y_pred used in the evaluation process.

  • le – The label encoder object.

  • force_finite – Whether to force the result to be a finite number.

  • finite_value – The value used to replace an infinite or NaN result.

Return type

tuple

get_processed_internal_data(y_pred=None, force_finite=None, finite_value=None)[source]
Parameters
  • y_pred (tuple, list, np.ndarray) – The prediction values

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns
  • y_pred_final – y_pred used in the evaluation process.

  • le – The label encoder object.

  • force_finite – Whether to force the result to be a finite number.

  • finite_value – The value used to replace an infinite or NaN result.

Return type

tuple

static get_support(name=None, verbose=True)[source]
gplus_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the Gplus Score between two clustering solutions. Smaller is better (Best = 0), Range = [0, 1]

Ref: Cluster Validation for Mixed-Type Data (Rabea Aschenbruck and Gero Szepannek)

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Gplus Score

Return type

result (float)

hartigan_index(X=None, y_pred=None, force_finite=True, finite_value=10000000000.0, **kwarg)[source]

Computes the Hartigan index for a clustering solution. Smaller is better (Best = 0), Range = [0, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns

The Hartigan index

Return type

result (float)

homogeneity_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the Homogeneity Score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It measures the extent to which each cluster contains only data points that belong to a single class or category. In other words, homogeneity assesses whether all the data points in a cluster are members of the same true class or label. A higher homogeneity score indicates better clustering results, where each cluster corresponds well to a single ground truth class.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Homogeneity Score

Return type

result (float)

hubert_gamma_score(y_true=None, y_pred=None, force_finite=True, finite_value=-1.0, **kwargs)[source]

Computes the Hubert Gamma score between two clusterings. Bigger is better (Best = 1), Range=[-1, +1]

The Hubert Gamma index ranges from -1 to 1, where a value of 1 indicates perfect agreement between the two partitions being compared, a value of 0 indicates no association between the partitions, and a value of -1 indicates complete disagreement between the two partitions.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns

The Hubert Gamma score

Return type

result (float)

jaccard_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the Jaccard score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It ranges from 0 to 1, where a value of 1 indicates perfect agreement between the two partitions being compared. A value of 0 indicates complete disagreement between the two partitions.

The Jaccard score is similar to the Czekanowski-Dice score, but it is less sensitive to differences in cluster size. However, like the Czekanowski-Dice score, it may not be sensitive to certain types of differences between partitions. Therefore, it is often used in conjunction with other external indices to get a more complete picture of the similarity between partitions.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Jaccard score

Return type

result (float)

ksq_detw_index(X=None, y_pred=None, use_normalized=True, **kwargs)[source]

Computes the Ksq-DetW Index. Bigger is better (No best value), Range = (-inf, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • use_normalized (bool) – Whether to normalize the scatter matrix before computing the determinant, to reduce the magnitude of the value, default=True

Returns

The Ksq-DetW Index

Return type

result (float)

kulczynski_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the Kulczynski score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It is the arithmetic mean of the precision and recall coefficients, which means that it takes into account both precision and recall. The Kulczynski index ranges from 0 to 1, where a value of 1 indicates perfect agreement between the two partitions being compared. A value of 0 indicates complete disagreement between the two partitions.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Kulczynski score

Return type

result (float)

log_det_ratio_index(X=None, y_pred=None, force_finite=True, finite_value=-10000000000.0, **kwargs)[source]

Computes the Log Det Ratio Index. Bigger is better (No best value), Range = (-inf, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns

The Log Det Ratio Index

Return type

result (float)

log_ss_ratio_index(X=None, y_pred=None, force_finite=True, finite_value=-10000000000.0, **kwargs)[source]

Computes the Log SS Ratio Index. Bigger is better (No best value), Range = (-inf, +inf)

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns

The Log SS Ratio Index

Return type

result (float)

mc_nemar_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the McNemar score between two clusterings. Bigger is better (No best value), Range = (-inf, +inf)

It is an adaptation of the non-parametric McNemar test for the comparison of frequencies between two paired samples. The McNemar index ranges from -inf to +inf, where a larger value indicates stronger agreement between the two partitions being compared.

Under the null hypothesis that the discordances between the partitions P1 and P2 are random, the McNemar index approximately follows a normal distribution. It can be transformed into a chi-squared distance, which follows a chi-squared distribution with 1 degree of freedom.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The McNemar score

Return type

result (float)

mean_squared_error_index(X=None, y_pred=None, **kwarg)[source]

Computes the Mean Squared Error Index. Smaller is better (Best = 0), Range = [0, +inf)

MSEI measures the mean of squared distances between each data point and its corresponding centroid or cluster center.

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

Returns

The Mean Squared Error Index

Return type

result (float)

mutual_info_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the Mutual Information score between two clusterings. Bigger is better (No best value), Range = [0, +inf)

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Mutual Information score

Return type

result (float)

normalized_mutual_info_score(y_true=None, y_pred=None, force_finite=True, finite_value=0.0, **kwargs)[source]

Computes the normalized mutual information between two clusterings. It is a variation of the mutual information score that normalizes the result to take values between 0 and 1. It is defined as the mutual information divided by the average entropy of the true and predicted clusterings. Bigger is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns

The normalized mutual information score.

Return type

result (float)

phi_score(y_true=None, y_pred=None, force_finite=True, finite_value=-10000000000.0, **kwargs)[source]

Computes the Phi score between two clusterings. Bigger is better (No best value), Range = (-inf, +inf)

It is a classical measure of the correlation between two dichotomous variables, and it can be used to measure the similarity between two partitions. The Phi index ranges from -inf to +inf, where a larger value indicates stronger agreement between the two partitions being compared, a value of 0 indicates no association between the partitions, and a smaller value indicates stronger disagreement.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns

The Phi score

Return type

result (float)

precision_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the Precision score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]. It differs from the precision score used in classification metrics.

It measures the proportion of pairs of points grouped together in P2 that are also grouped together in P1. It is calculated as the ratio of yy (the number of pairs grouped together in both P1 and P2) to the sum of yy and ny (the number of pairs grouped together in P2 but not in P1). The formula for P is P = yy / (yy + ny).

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Precision score

Return type

result (float)

purity_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the Purity score. Bigger is better (Best = 1), Range = [0, 1]

Purity is a metric used to evaluate the quality of clustering results, particularly in situations where the ground truth labels of the data points are known. It measures the extent to which the clusters produced by a clustering algorithm match the true class labels of the data.

Here’s how Purity is calculated:
  1. For each cluster, find the majority class label among the data points in that cluster.

  2. Sum up the sizes of the clusters that belong to the majority class label.

  3. Divide the sum by the total number of data points.
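The three steps above can be sketched with NumPy; an illustrative implementation, not the library's internal code:

```python
import numpy as np

def purity_score(y_true, y_pred):
    """Fraction of samples belonging to the majority true class of their cluster."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    majority_total = 0
    for c in np.unique(y_pred):
        members = y_true[y_pred == c]                    # true labels in this cluster
        _, counts = np.unique(members, return_counts=True)
        majority_total += counts.max()                   # size of the majority class
    return majority_total / len(y_true)                  # divide by total samples
```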

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Purity score

Return type

result (float)

r_squared_index(X=None, y_pred=None, **kwarg)[source]

Computes the R-squared index. Bigger is better (Best = 1), Range = (-inf, 1]

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

Returns

The R-squared index

Return type

result (float)

rand_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the Rand score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The rand score.

Return type

result (float)

recall_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the Recall score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It measures the proportion of pairs of points grouped together in P1 that are also grouped together in P2. It is calculated as the ratio of yy to the sum of yy and yn (the number of pairs grouped together in P1 but not in P2). The formula for R is R = yy / (yy + yn).

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Recall score

Return type

result (float)

rogers_tanimoto_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the Rogers-Tanimoto score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It measures the similarity between two partitions by computing the proportion of pairs of samples that are either in the same cluster in both partitions or in different clusters in both partitions, with an adjustment for the number of pairs of samples that are in different clusters in one partition but in the same cluster in the other partition. The Rogers-Tanimoto index ranges from 0 to 1, where a value of 1 indicates perfect agreement between the two partitions being compared. A value of 0 indicates complete disagreement between the two partitions.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Rogers-Tanimoto score

Return type

result (float)

russel_rao_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the Russel-Rao score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It measures the proportion of concordances between the two partitions by computing the proportion of pairs of samples that are in the same cluster in both partitions. The Russel-Rao index ranges from 0 to 1, where a value of 1 indicates perfect agreement between the two partitions being compared. A value of 0 indicates complete disagreement between the two partitions.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Russel-Rao score

Return type

result (float)

silhouette_index(X=None, y_pred=None, multi_output=False, force_finite=True, finite_value=-1.0, **kwargs)[source]

Computes the Silhouette Index. Bigger is better (Best = 1), Range = [-1, +1]

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • multi_output (bool) – Whether to return a score for each cluster, default=False

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns

The Silhouette Index

Return type

result (float)

sokal_sneath1_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the Sokal-Sneath 1 score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It measures the similarity between two partitions by computing the proportion of pairs of samples that are in the same cluster in both partitions, with an adjustment for the number of pairs of samples that are in different clusters in one partition but in the same cluster in the other partition. The Sokal-Sneath indices range from 0 to 1, where a value of 1 indicates perfect agreement between the two partitions being compared. A value of 0 indicates complete disagreement between the two partitions.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Sokal-Sneath 1 score

Return type

result (float)

sokal_sneath2_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the Sokal-Sneath 2 score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It measures the similarity between two partitions by computing the proportion of pairs of samples that are in the same cluster in both partitions, with an adjustment for the number of pairs of samples that are in different clusters in one partition but in the same cluster in the other partition. The Sokal-Sneath indices range from 0 to 1, where a value of 1 indicates perfect agreement between the two partitions being compared. A value of 0 indicates complete disagreement between the two partitions.

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Sokal-Sneath 2 score

Return type

result (float)

sum_squared_error_index(X=None, y_pred=None, **kwarg)[source]

Computes the Sum of Squared Error Index. Smaller is better (Best = 0), Range = [0, +inf)

SSEI measures the sum of squared distances between each data point and its corresponding centroid or cluster center. It quantifies the compactness of the clusters. Here’s how you can calculate the SSE in a clustering problem:

  1. Assign each data point to its nearest centroid or cluster center based on some distance metric (e.g., Euclidean distance).

  2. For each data point, calculate the squared Euclidean distance between the data point and its assigned centroid.

  3. Sum up the squared distances for all data points to obtain the SSE.
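Steps 2-3 above can be sketched as follows. The cluster centers are taken to be the label-group means, an assumption for this illustration; it is not the library's internal code:

```python
import numpy as np

def sse_index(X, y_pred):
    """Sum of squared Euclidean distances of points to their cluster centroid."""
    X, y_pred = np.asarray(X, dtype=float), np.asarray(y_pred)
    sse = 0.0
    for c in np.unique(y_pred):
        pts = X[y_pred == c]
        centroid = pts.mean(axis=0)             # cluster center as the group mean
        sse += np.sum((pts - centroid) ** 2)    # squared distances, summed
    return sse
```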

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

Returns

The Sum of Squared Error Index

Return type

result (float)

tau_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the Tau Score between two clustering solutions. Bigger is better (No best value), Range = (-inf, +inf)

Ref: Cluster Validation for Mixed-Type Data (Rabea Aschenbruck and Gero Szepannek)

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The Tau Score

Return type

result (float)

v_measure_score(y_true=None, y_pred=None, **kwargs)[source]

Computes the V measure score between two clusterings. Bigger is better (Best = 1), Range = [0, 1]

It is a combination of two other metrics: homogeneity and completeness. Homogeneity measures whether all the data points in a given cluster belong to the same class. Completeness measures whether all the data points of a certain class are assigned to the same cluster. The V-measure combines these two metrics into a single score.
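A minimal sketch of this combination, using the standard entropy-based definitions (homogeneity = 1 - H(C|K)/H(C), completeness = 1 - H(K|C)/H(K), combined by their harmonic mean). This is illustrative and not the library's implementation.

```python
import math
from collections import Counter

def _entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def _conditional_entropy(labels, given):
    # H(labels | given): entropy of `labels` within each group of `given`,
    # weighted by group size.
    n = len(labels)
    h = 0.0
    for g in set(given):
        sub = [lab for lab, key in zip(labels, given) if key == g]
        h += (len(sub) / n) * _entropy(sub)
    return h

def v_measure(y_true, y_pred):
    h_c, h_k = _entropy(y_true), _entropy(y_pred)
    homogeneity = 1.0 if h_c == 0 else 1.0 - _conditional_entropy(y_true, y_pred) / h_c
    completeness = 1.0 if h_k == 0 else 1.0 - _conditional_entropy(y_pred, y_true) / h_k
    if homogeneity + completeness == 0:
        return 0.0
    return 2 * homogeneity * completeness / (homogeneity + completeness)

print(v_measure([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0 (perfect up to relabeling)
```

Like the pair-counting scores, the v-measure is invariant to permutations of cluster labels, since it only depends on how classes and clusters overlap.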

Parameters
  • y_true (array-like) – The true labels for each sample.

  • y_pred (array-like) – The predicted cluster labels for each sample.

Returns

The V measure score

Return type

result (float)

xie_beni_index(X=None, y_pred=None, force_finite=True, finite_value=10000000000.0, **kwargs)[source]

Computes the Xie-Beni index. Smaller is better (Best = 0), Range = [0, +inf)

The Xie-Beni index is an index of fuzzy clustering, but it is also applicable to crisp clustering. The numerator is the mean of the squared distances of all points to the barycenter of the cluster they belong to. The denominator is the minimum squared distance between the cluster centers. The minimum value indicates the best number of clusters.
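For crisp labels, this ratio can be sketched directly. The sketch below is an assumption about the exact crisp formulation, not the library's code: it takes the per-cluster means as centers and writes the index as (total within-cluster squared scatter) / (n * minimum squared distance between centers), which equals the mean scatter divided by the minimum center separation described above.

```python
import numpy as np
from itertools import combinations

def xie_beni(X, y_pred):
    """Crisp Xie-Beni: within-cluster scatter over n * min squared center gap."""
    X = np.asarray(X, dtype=float)
    y_pred = np.asarray(y_pred)
    labels = np.unique(y_pred)
    centers = np.array([X[y_pred == lab].mean(axis=0) for lab in labels])
    # Numerator: total squared distance of each point to its own center.
    compactness = sum(((X[y_pred == lab] - c) ** 2).sum()
                      for lab, c in zip(labels, centers))
    # Denominator: n times the minimum squared distance between two centers.
    min_sep = min(((a - b) ** 2).sum() for a, b in combinations(centers, 2))
    return compactness / (len(X) * min_sep)

X = [[0.0, 0.0], [0.0, 2.0], [10.0, 0.0], [10.0, 2.0]]
print(xie_beni(X, [0, 0, 1, 1]))  # 4 / (4 * 100) = 0.01
```

Small values arise when clusters are tight (small numerator) and well separated (large denominator); if two centers coincide, the denominator is zero and the raw index is infinite, which is what the force_finite/finite_value parameters below guard against.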

Parameters
  • X (array-like of shape (n_samples, n_features)) – A list of n_features-dimensional data points. Each row corresponds to a single data point.

  • y_pred (array-like of shape (n_samples,)) – Predicted labels for each sample.

  • force_finite (bool) – Force the result to be a finite number

  • finite_value (float) – The value used to replace an infinite or NaN result.

Returns

The Xie-Beni index

Return type

result (float)