core.evaluation package

Submodules

core.evaluation.f_score module

class core.evaluation.f_score.Accuracy(predict, truth)[source]

Bases: core.evaluation.superclass.BinaryEvaluation

Calculate the Accuracy between prediction and ground truth

The Accuracy is identified as the percentage of correct predictions

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

Return type

float

Returns

Accuracy score

run()[source]
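
The run() implementation is not reproduced here; a minimal NumPy sketch of how such a binary accuracy could be computed (illustrative only, not necessarily this package's implementation):

    import numpy as np

    def accuracy(predict: np.ndarray, truth: np.ndarray) -> float:
        """Fraction of entries where the binary prediction matches the ground truth."""
        predict = np.asarray(predict)
        truth = np.asarray(truth)
        return float(np.mean(predict == truth))

    # Example: 3 of 4 predictions are correct -> 0.75
    print(accuracy(np.array([1, 0, 1, 1]), np.array([1, 0, 0, 1])))
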
class core.evaluation.f_score.AccuracyTolerance(predict, truth)[source]

Bases: core.evaluation.superclass.OccupancyEvaluation

Calculate the AccuracyTolerance between prediction and ground truth

The AccuracyTolerance is identified in the same way as Accuracy, except that differences smaller than the given tolerance are also counted as correct predictions

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

  • tolerance (int) – the maximum difference between prediction and truth for the prediction to be counted as correct

Return type

float

Returns

AccuracyTolerance score

run()[source]
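
A hedged sketch of the tolerance-based accuracy described above (the tolerance handling is an assumption based on the parameter description, not the package's actual code):

    import numpy as np

    def accuracy_tolerance(predict: np.ndarray, truth: np.ndarray, tolerance: int = 1) -> float:
        """Count a prediction as correct when it differs from the ground truth by at most `tolerance`."""
        diff = np.abs(np.asarray(predict) - np.asarray(truth))
        return float(np.mean(diff <= tolerance))

    # Example: occupancy counts that are off by at most 1 are accepted -> 0.75
    print(accuracy_tolerance(np.array([3, 5, 0, 2]), np.array([3, 4, 2, 2]), tolerance=1))
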
class core.evaluation.f_score.F1Score(predict, truth)[source]

Bases: core.evaluation.superclass.BinaryEvaluation

Calculate the F1 Score between prediction and ground truth

The F1 Score is computed as 2 * TP / (2 * TP + FP + FN)

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

Return type

float

Returns

F1 Score

run()[source]
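
The formula above can be evaluated directly from binary label arrays; a minimal sketch (illustrative, not the package's implementation):

    import numpy as np

    def f1_score(predict: np.ndarray, truth: np.ndarray) -> float:
        """F1 = 2 * TP / (2 * TP + FP + FN) for binary (0/1) occupancy labels."""
        predict = np.asarray(predict).astype(bool)
        truth = np.asarray(truth).astype(bool)
        tp = np.sum(predict & truth)
        fp = np.sum(predict & ~truth)
        fn = np.sum(~predict & truth)
        return float(2 * tp / (2 * tp + fp + fn))

    print(f1_score(np.array([1, 1, 0, 1]), np.array([1, 0, 0, 1])))  # 2*2 / (2*2 + 1 + 0) = 0.8
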
class core.evaluation.f_score.Fallout(predict, truth)[source]

Bases: core.evaluation.superclass.BinaryEvaluation

Calculate the Fallout between prediction and ground truth

The Fallout is computed as FP / (FP + TN)

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

Return type

float

Returns

Fallout score

run()[source]
class core.evaluation.f_score.FalseNegative(predict, truth)[source]

Bases: core.evaluation.superclass.BinaryEvaluation

Calculate the False-Negative between prediction and ground truth

The False-Negative counts the actual occupied states that are incorrectly identified as unoccupied

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

Return type

int

Returns

number of entries that are FN

run()[source]
class core.evaluation.f_score.FalsePositive(predict, truth)[source]

Bases: core.evaluation.superclass.BinaryEvaluation

Calculate the False-Positive between prediction and ground truth

The False-Positive counts the actual unoccupied states that are incorrectly identified as occupied

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

Return type

int

Returns

number of entries that are FP

run()[source]
class core.evaluation.f_score.Missrate(predict, truth)[source]

Bases: core.evaluation.superclass.BinaryEvaluation

Calculate the Missrate between prediction and ground truth

The Missrate is computed as 1 - Recall

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

Return type

float

Returns

Missrate score

run()[source]
class core.evaluation.f_score.Precision(predict, truth)[source]

Bases: core.evaluation.superclass.BinaryEvaluation

Calculate the Precision between prediction and ground truth

The Precision indicates the percentage of occupancy predictions that are correct, computed as TP / (TP + FP)

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

Return type

float

Returns

Precision score

run()[source]
class core.evaluation.f_score.Recall(predict, truth)[source]

Bases: core.evaluation.superclass.BinaryEvaluation

Calculate the Recall between prediction and ground truth

Recall is the percentage of the truly occupied states that are identified, computed as TP / (TP + FN)

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

Return type

float

Returns

Recall score

run()[source]
class core.evaluation.f_score.Selectivity(predict, truth)[source]

Bases: core.evaluation.superclass.BinaryEvaluation

Calculate the Selectivity between prediction and ground truth

The Selectivity is computed as 1 - Fallout

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

Return type

float

Returns

Selectivity score

run()[source]
class core.evaluation.f_score.TrueNegative(predict, truth)[source]

Bases: core.evaluation.superclass.BinaryEvaluation

Calculate the True-Negative between prediction and ground truth

The True-Negative counts the actual unoccupied states that are correctly identified

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

Return type

int

Returns

number of entries that are TN

run()[source]
class core.evaluation.f_score.TruePositive(predict, truth)[source]

Bases: core.evaluation.superclass.BinaryEvaluation

Calculate the True-Positive between prediction and ground truth

The True-Positive counts the actual occupied states that are correctly identified

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

Return type

int

Returns

number of entries that are TP

run()[source]
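
All of the binary metrics above derive from the four confusion-matrix counts. The sketch below shows those relationships with plain NumPy; it is illustrative only, since the package computes them through the classes documented in this module:

    import numpy as np

    def confusion_counts(predict: np.ndarray, truth: np.ndarray):
        """Return (TP, FP, TN, FN) for binary (0/1) occupancy labels."""
        p = np.asarray(predict).astype(bool)
        t = np.asarray(truth).astype(bool)
        tp = int(np.sum(p & t))
        fp = int(np.sum(p & ~t))
        tn = int(np.sum(~p & ~t))
        fn = int(np.sum(~p & t))
        return tp, fp, tn, fn

    predict = np.array([1, 1, 0, 0, 1])
    truth = np.array([1, 0, 0, 1, 1])
    tp, fp, tn, fn = confusion_counts(predict, truth)

    precision = tp / (tp + fp)      # Precision
    recall = tp / (tp + fn)         # Recall
    fallout = fp / (fp + tn)        # Fallout
    missrate = 1 - recall           # Missrate
    selectivity = 1 - fallout       # Selectivity
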

core.evaluation.mae module

class core.evaluation.mae.MAE(predict, truth)[source]

Bases: core.evaluation.superclass.OccupancyEvaluation

Calculate the Mean Absolute Error between prediction and ground truth

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

Return type

float

Returns

MAE score

run()[source]
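
A minimal NumPy sketch of the metric (illustrative, not necessarily the package's code):

    import numpy as np

    def mae(predict: np.ndarray, truth: np.ndarray) -> float:
        """Mean Absolute Error: average of |predict - truth|."""
        return float(np.mean(np.abs(np.asarray(predict) - np.asarray(truth))))

    print(mae(np.array([2, 4, 1]), np.array([3, 4, 0])))  # (1 + 0 + 1) / 3 ≈ 0.667
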

core.evaluation.mape module

class core.evaluation.mape.MAPE(predict, truth)[source]

Bases: core.evaluation.superclass.OccupancyEvaluation

Calculate the Mean Absolute Percentage Error between prediction and ground truth

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

Return type

float

Returns

MAPE score

run()[source]
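
A hedged sketch of MAPE; note that zero ground-truth values (e.g. an empty room) would need special handling, which is not shown here:

    import numpy as np

    def mape(predict: np.ndarray, truth: np.ndarray) -> float:
        """Mean Absolute Percentage Error: mean of |truth - predict| / |truth|, in percent."""
        predict = np.asarray(predict, dtype=float)
        truth = np.asarray(truth, dtype=float)
        return float(np.mean(np.abs((truth - predict) / truth)) * 100)

    print(mape(np.array([4.0, 9.0]), np.array([5.0, 10.0])))  # (0.2 + 0.1) / 2 * 100 = 15.0
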

core.evaluation.mase module

class core.evaluation.mase.MASE(predict, truth)[source]

Bases: core.evaluation.superclass.OccupancyEvaluation

Calculate the Mean Absolute Scaled Error between prediction and ground truth

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

Return type

float

Returns

MASE score

run()[source]
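
A sketch of MASE under the common convention of scaling by the naive one-step forecast of the ground truth; whether the package uses exactly this scaling term is an assumption:

    import numpy as np

    def mase(predict: np.ndarray, truth: np.ndarray) -> float:
        """Mean Absolute Scaled Error: MAE scaled by the MAE of a naive one-step forecast."""
        predict = np.asarray(predict, dtype=float)
        truth = np.asarray(truth, dtype=float)
        mae_model = np.mean(np.abs(predict - truth))
        mae_naive = np.mean(np.abs(truth[1:] - truth[:-1]))  # naive forecast: repeat previous value
        return float(mae_model / mae_naive)

    print(mase(np.array([1.0, 2.0, 2.0, 4.0]), np.array([1.0, 2.0, 3.0, 4.0])))  # 0.25 / 1.0 = 0.25
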

core.evaluation.nrmse module

class core.evaluation.nrmse.NRMSE(predict, truth)[source]

Bases: core.evaluation.superclass.OccupancyEvaluation

Calculate the Normalized Root Mean Square Error between prediction and ground truth

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

  • mode (str) – the mode of nRMSE; either 'minmax' or 'mean'

Return type

float

Returns

nRMSE score

run()[source]
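
A sketch of the two normalization modes named by the mode parameter (RMSE divided by the range or by the mean of the ground truth); illustrative only:

    import numpy as np

    def nrmse(predict: np.ndarray, truth: np.ndarray, mode: str = 'minmax') -> float:
        """RMSE normalised by the range ('minmax') or the mean ('mean') of the ground truth."""
        predict = np.asarray(predict, dtype=float)
        truth = np.asarray(truth, dtype=float)
        rmse = np.sqrt(np.mean((predict - truth) ** 2))
        if mode == 'minmax':
            return float(rmse / (truth.max() - truth.min()))
        if mode == 'mean':
            return float(rmse / truth.mean())
        raise ValueError("mode must be 'minmax' or 'mean'")
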

core.evaluation.rmse module

class core.evaluation.rmse.RMSE(predict, truth)[source]

Bases: core.evaluation.superclass.OccupancyEvaluation

Calculate the Root Mean Square Error between prediction and ground truth

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

Return type

float

Returns

RMSE score

run()[source]
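
A minimal sketch of RMSE (illustrative, not necessarily the package's code):

    import numpy as np

    def rmse(predict: np.ndarray, truth: np.ndarray) -> float:
        """Root Mean Square Error: square root of the mean squared difference."""
        predict = np.asarray(predict, dtype=float)
        truth = np.asarray(truth, dtype=float)
        return float(np.sqrt(np.mean((predict - truth) ** 2)))

    print(rmse(np.array([2.0, 4.0]), np.array([3.0, 4.0])))  # sqrt((1 + 0) / 2) ≈ 0.707
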

core.evaluation.superclass module

class core.evaluation.superclass.BinaryEvaluation(predict, truth)[source]

Bases: object

Use all binary occupancy evaluation metrics to evaluate the differences between prediction and ground truth

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

Return type

core.evaluation.superclass.BinaryEvaluation

add_metrics(list_of_metrics)[source]

Add one or multiple metrics into the evaluation queue

Parameters

list_of_metrics (str or list(str)) – one or multiple metrics to add to the evaluation queue

Returns

None

get_all_metrics()[source]

Get all subclasses

Parameters

None

Returns

None

remove_metrics(list_of_metrics)[source]

Remove one or multiple metrics from the evaluation queue

Parameters

list_of_metrics (str or list(str)) – one or multiple metrics to remove from the evaluation queue

Returns

None

run_all_metrics()[source]

Run all metrics that are currently in the queue

Parameters

None

Return type

dict(str, float or int)

Returns

a dictionary mapping each metric to its corresponding result
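
A hedged usage sketch of the queue workflow documented above; the metric name strings ('Accuracy', 'F1Score') are assumed identifiers and may differ from what the package actually expects:

    import numpy as np
    from core.evaluation.superclass import BinaryEvaluation

    predict = np.array([1, 0, 1, 1])
    truth = np.array([1, 0, 0, 1])

    evaluation = BinaryEvaluation(predict, truth)
    evaluation.add_metrics(['Accuracy', 'F1Score'])  # add several metrics at once (names assumed)
    evaluation.remove_metrics('F1Score')             # or remove a single metric by name
    results = evaluation.run_all_metrics()           # dict mapping metric name -> score
    print(results)
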

class core.evaluation.superclass.OccupancyEvaluation(predict, truth)[source]

Bases: object

Use all occupancy level estimation metrics to evaluate the differences between prediction and ground truth

Parameters
  • predict (numpy.ndarray) – the predicted values from occupancy estimation models

  • truth (numpy.ndarray) – the ground truth value from the Dataset

Return type

core.evaluation.superclass.OccupancyEvaluation

add_metrics(list_of_metrics)[source]

Add one or multiple metrics into the evaluation queue

Parameters

list_of_metrics (str or list(str)) – one or multiple metrics to add to the evaluation queue

Returns

None

get_all_metrics()[source]

Get all subclasses

Parameters

None

Returns

None

remove_metrics(list_of_metrics)[source]

Remove one or multiple metrics from the evaluation queue

Parameters

list_of_metrics (str or list(str)) – one or multiple metrics to remove from the evaluation queue

Returns

None

run_all_metrics()[source]

Run all metrics that are currently in the queue

Parameters

None

Return type

dict(str, float or int)

Returns

a dictionary mapping each metric to its corresponding result

class core.evaluation.superclass.Result[source]

Bases: object

Create a 3D array for fast selection and reshaping of results

Parameters

None

Returns

core.evaluation.superclass.Result

get_result(dataset=None, model=None, metric=None, fixed='auto')[source]

Shrink, select, and reshape the result according to the given query

Parameters
  • dataset (str or None or list(str)) – one or multiple datasets that the user wants in the result. If None, all datasets will be selected

  • model (str or None or list(str)) – one or multiple models that the user wants in the result. If None, all models will be selected

  • metric (str or None or list(str)) – one or multiple metrics that the user wants in the result. If None, all metrics will be selected

  • fixed (str) – the axis that has only one value, used to create the 2D result. If 'auto', the dimension with only one value is found automatically. Value must be 'auto', 'dataset', 'model', or 'metric'

Return type

numpy.ndarray

Returns

a 2D array containing the data for plotting

set_result(result)[source]

Initialize the data in self

Parameters

result (dict(str, dict(str, dict(str, float or int)))) – the whole result from the experiment

Returns

None
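
A hedged usage sketch of Result; the nesting order (dataset → model → metric) follows the parameter names of get_result, and the dataset, model, and metric names below are made up:

    from core.evaluation.superclass import Result

    # Nested result dict: dataset -> model -> metric -> score (all names hypothetical)
    experiment_result = {
        'dataset_a': {
            'model_x': {'MAE': 0.4, 'RMSE': 0.7},
            'model_y': {'MAE': 0.3, 'RMSE': 0.6},
        },
    }

    result = Result()
    result.set_result(experiment_result)

    # Only one dataset is present, so fixing the 'dataset' axis yields a 2D model-by-metric array
    table = result.get_result(dataset='dataset_a', fixed='dataset')
    print(table)
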

Module contents