core.evaluation package¶
Submodules¶
core.evaluation.f_score module¶
- 
class core.evaluation.f_score.Accuracy(predict, truth)[source]¶
- Bases: core.evaluation.superclass.BinaryEvaluation
- Calculate the Accuracy between prediction and ground truth
- Accuracy is the percentage of predictions that are correct
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
 
- Return type
- Returns
- Accuracy score 
 
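As a rough sketch of the computation Accuracy performs (the function name `accuracy` is illustrative, not part of the package):

```python
import numpy as np

def accuracy(predict: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of entries where prediction exactly matches ground truth."""
    return float(np.mean(predict == truth))
```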
- 
class core.evaluation.f_score.AccuracyTolerance(predict, truth)[source]¶
- Bases: core.evaluation.superclass.OccupancyEvaluation
- Calculate the AccuracyTolerance between prediction and ground truth
- AccuracyTolerance is computed the same way as Accuracy, except that a prediction whose difference from the truth is no larger than the given tolerance counts as correct
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
- tolerance (int) – the maximum differences between prediction and truth to mark as correct 
 
- Return type
- Returns
- AccuracyTolerance score 
 
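The tolerance rule can be sketched as follows (the function name `accuracy_tolerance` is illustrative, not part of the package):

```python
import numpy as np

def accuracy_tolerance(predict: np.ndarray, truth: np.ndarray, tolerance: int = 1) -> float:
    """Predictions within `tolerance` of the truth count as correct."""
    return float(np.mean(np.abs(predict - truth) <= tolerance))
```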
- 
class core.evaluation.f_score.F1Score(predict, truth)[source]¶
- Bases: core.evaluation.superclass.BinaryEvaluation
- Calculate the F1 Score between prediction and ground truth
- The F1 Score is defined as 2 * TP / (2 * TP + FP + FN)
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
 
- Return type
- Returns
- F1 Score 
 
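A minimal numpy sketch of the formula above, assuming binary 0/1 arrays (the function name `f1_score` is illustrative, not part of the package):

```python
import numpy as np

def f1_score(predict: np.ndarray, truth: np.ndarray) -> float:
    """F1 = 2 * TP / (2 * TP + FP + FN) for binary 0/1 arrays."""
    tp = np.sum((predict == 1) & (truth == 1))
    fp = np.sum((predict == 1) & (truth == 0))
    fn = np.sum((predict == 0) & (truth == 1))
    return float(2 * tp / (2 * tp + fp + fn))
```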
- 
class core.evaluation.f_score.Fallout(predict, truth)[source]¶
- Bases: core.evaluation.superclass.BinaryEvaluation
- Calculate the Fallout between prediction and ground truth
- Fallout is defined as FP / (FP + TN)
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
 
- Return type
- Returns
- Fallout score 
 
- 
class core.evaluation.f_score.FalseNegative(predict, truth)[source]¶
- Bases: core.evaluation.superclass.BinaryEvaluation
- Calculate the False-Negative count between prediction and ground truth
- False-Negative counts the actually occupied states that are identified as unoccupied
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
 
- Return type
- Returns
- the number of FN entries 
 
- 
class core.evaluation.f_score.FalsePositive(predict, truth)[source]¶
- Bases: core.evaluation.superclass.BinaryEvaluation
- Calculate the False-Positive count between prediction and ground truth
- False-Positive counts the actually unoccupied states that are identified as occupied
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
 
- Return type
- Returns
- the number of FP entries 
 
- 
class core.evaluation.f_score.Missrate(predict, truth)[source]¶
- Bases: core.evaluation.superclass.BinaryEvaluation
- Calculate the Missrate between prediction and ground truth
- Missrate is defined as 1 - Recall
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
 
- Return type
- Returns
- Missrate score 
 
- 
class core.evaluation.f_score.Precision(predict, truth)[source]¶
- Bases: core.evaluation.superclass.BinaryEvaluation
- Calculate the Precision between prediction and ground truth
- Precision is the fraction of occupancy predictions that are correct: TP / (TP + FP)
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
 
- Return type
- Returns
- Precision score 
 
- 
class core.evaluation.f_score.Recall(predict, truth)[source]¶
- Bases: core.evaluation.superclass.BinaryEvaluation
- Calculate the Recall between prediction and ground truth
- Recall is the fraction of truly occupied states that are identified: TP / (TP + FN)
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
 
- Return type
- Returns
- Recall score 
 
- 
class core.evaluation.f_score.Selectivity(predict, truth)[source]¶
- Bases: core.evaluation.superclass.BinaryEvaluation
- Calculate the Selectivity between prediction and ground truth
- Selectivity is defined as 1 - Fallout
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
 
- Return type
- Returns
- Selectivity score 
 
- 
class core.evaluation.f_score.TrueNegative(predict, truth)[source]¶
- Bases: core.evaluation.superclass.BinaryEvaluation
- Calculate the True-Negative count between prediction and ground truth
- True-Negative counts the actually unoccupied states that are correctly identified
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
 
- Return type
- Returns
- the number of TN entries 
 
- 
class core.evaluation.f_score.TruePositive(predict, truth)[source]¶
- Bases: core.evaluation.superclass.BinaryEvaluation
- Calculate the True-Positive count between prediction and ground truth
- True-Positive counts the actually occupied states that are correctly identified
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
 
- Return type
- Returns
- the number of TP entries 
 
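The four count classes above feed the ratio metrics in this module. A compact numpy sketch of how the counts and the derived Precision, Recall, and Fallout relate (all function names here are illustrative, not part of the package):

```python
import numpy as np

def confusion_counts(predict: np.ndarray, truth: np.ndarray):
    """TP, FP, TN, FN entry counts for binary 0/1 occupancy arrays."""
    tp = int(np.sum((predict == 1) & (truth == 1)))
    fp = int(np.sum((predict == 1) & (truth == 0)))
    tn = int(np.sum((predict == 0) & (truth == 0)))
    fn = int(np.sum((predict == 0) & (truth == 1)))
    return tp, fp, tn, fn

def precision(predict, truth):
    tp, fp, tn, fn = confusion_counts(predict, truth)
    return tp / (tp + fp)          # TP / (TP + FP)

def recall(predict, truth):
    tp, fp, tn, fn = confusion_counts(predict, truth)
    return tp / (tp + fn)          # TP / (TP + FN)

def fallout(predict, truth):
    tp, fp, tn, fn = confusion_counts(predict, truth)
    return fp / (fp + tn)          # FP / (FP + TN)
```

Missrate and Selectivity then follow directly as `1 - recall(...)` and `1 - fallout(...)`.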
core.evaluation.mae module¶
- 
class core.evaluation.mae.MAE(predict, truth)[source]¶
- Bases: core.evaluation.superclass.OccupancyEvaluation
- Calculate the Mean Absolute Error between prediction and ground truth
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
 
- Return type
- Returns
- MAE score 
 
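A one-line numpy sketch of the MAE computation (the function name `mae` is illustrative, not part of the package):

```python
import numpy as np

def mae(predict: np.ndarray, truth: np.ndarray) -> float:
    """Mean Absolute Error: mean(|predict - truth|)."""
    return float(np.mean(np.abs(predict - truth)))
```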
core.evaluation.mape module¶
- 
class core.evaluation.mape.MAPE(predict, truth)[source]¶
- Bases: core.evaluation.superclass.OccupancyEvaluation
- Calculate the Mean Absolute Percentage Error between prediction and ground truth
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
 
- Return type
- Returns
- MAPE score 
 
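A numpy sketch of the standard MAPE formula; whether the package returns a fraction or a percentage is an assumption here (the function name `mape` is illustrative, not part of the package):

```python
import numpy as np

def mape(predict: np.ndarray, truth: np.ndarray) -> float:
    """Mean Absolute Percentage Error, in percent; truth must be nonzero."""
    return float(np.mean(np.abs((truth - predict) / truth)) * 100)
```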
core.evaluation.mase module¶
- 
class core.evaluation.mase.MASE(predict, truth)[source]¶
- Bases: core.evaluation.superclass.OccupancyEvaluation
- Calculate the Mean Absolute Scaled Error between prediction and ground truth
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
 
- Return type
- Returns
- MASE score 
 
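MASE is conventionally the MAE scaled by the MAE of a one-step naive forecast of the truth series; a sketch under that assumption (the function name `mase` is illustrative, not part of the package):

```python
import numpy as np

def mase(predict: np.ndarray, truth: np.ndarray) -> float:
    """MAE of the prediction, scaled by the MAE of a one-step naive forecast."""
    model_mae = np.mean(np.abs(predict - truth))
    naive_mae = np.mean(np.abs(truth[1:] - truth[:-1]))  # naive: predict previous value
    return float(model_mae / naive_mae)
```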
core.evaluation.nrmse module¶
- 
class core.evaluation.nrmse.NRMSE(predict, truth)[source]¶
- Bases: core.evaluation.superclass.OccupancyEvaluation
- Calculate the Normalized Root Mean Square Error between prediction and ground truth
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
- mode (str) – the mode of nRMSE; either 'minmax' or 'mean' 
 
- Return type
- Returns
- nRMSE score 
 
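A sketch of both normalization modes, assuming 'minmax' divides the RMSE by the range of the truth and 'mean' divides by its mean (the function name `nrmse` is illustrative, not part of the package):

```python
import numpy as np

def nrmse(predict: np.ndarray, truth: np.ndarray, mode: str = 'minmax') -> float:
    """RMSE normalized by the range ('minmax') or mean ('mean') of the truth."""
    rmse = np.sqrt(np.mean((predict - truth) ** 2))
    if mode == 'minmax':
        denom = truth.max() - truth.min()
    elif mode == 'mean':
        denom = truth.mean()
    else:
        raise ValueError("mode must be 'minmax' or 'mean'")
    return float(rmse / denom)
```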
core.evaluation.rmse module¶
- 
class core.evaluation.rmse.RMSE(predict, truth)[source]¶
- Bases: core.evaluation.superclass.OccupancyEvaluation
- Calculate the Root Mean Square Error between prediction and ground truth
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
 
- Return type
- Returns
- RMSE score 
 
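A one-line numpy sketch of the RMSE computation (the function name `rmse` is illustrative, not part of the package):

```python
import numpy as np

def rmse(predict: np.ndarray, truth: np.ndarray) -> float:
    """Root Mean Square Error: sqrt(mean((predict - truth)^2))."""
    return float(np.sqrt(np.mean((predict - truth) ** 2)))
```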
core.evaluation.superclass module¶
- 
class core.evaluation.superclass.BinaryEvaluation(predict, truth)[source]¶
- Bases: object
- Use all binary occupancy evaluation metrics to evaluate the differences between prediction and ground truth
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
 
- Return type
 
- 
class core.evaluation.superclass.OccupancyEvaluation(predict, truth)[source]¶
- Bases: object
- Use all occupancy level estimation metrics to evaluate the differences between prediction and ground truth
- Parameters
- predict (numpy.ndarray) – the predicted values from occupancy estimation models 
- truth (numpy.ndarray) – the ground truth value from the Dataset 
 
- Return type
 
- 
class core.evaluation.superclass.Result[source]¶
- Bases: object
- Create a 3D array for fast selection and reshaping of results
- Parameters
- None 
- Returns
- core.evaluation.superclass.Result 
 - 
get_result(dataset=None, model=None, metric=None, fixed='auto')[source]¶
- Select, shrink, and reshape the stored results according to the given query - Parameters
- dataset (str or None or list(str)) – one or more datasets that the user wants in the result. If None, all datasets will be selected
- model (str or None or list(str)) – one or more models that the user wants in the result. If None, all models will be selected
- metric (str or None or list(str)) – one or more metrics that the user wants in the result. If None, all metrics will be selected
- fixed (str) – the axis with only one value, used to collapse the result to 2D. If 'auto', the dimension with only one value is found automatically. Value must be 'auto', 'dataset', 'model', or 'metric'
 
- Return type
- numpy.ndarray 
- Returns
- a 2D array containing the data for plotting
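The selection logic can be sketched against a plain 3D array indexed as [dataset, model, metric]. Everything below is hypothetical: the dataset/model/metric names, the `scores` array, and the `get_result` function are illustrative stand-ins for the package's internal storage, and this sketch only mimics the `fixed='auto'` behavior by dropping size-1 axes:

```python
import numpy as np

# Hypothetical result store: axes are [dataset, model, metric].
datasets = ['office', 'home']          # illustrative labels
models = ['rf', 'svm', 'lstm']
metrics = ['rmse', 'mae']
scores = np.arange(12, dtype=float).reshape(2, 3, 2)

def get_result(dataset=None, model=None, metric=None):
    """Select along each axis (None keeps all), then drop size-1 axes
    so that fixing one dimension yields a 2D array for plotting."""
    d = range(len(datasets)) if dataset is None else [datasets.index(dataset)]
    m = range(len(models)) if model is None else [models.index(model)]
    k = range(len(metrics)) if metric is None else [metrics.index(metric)]
    return scores[np.ix_(d, m, k)].squeeze()
```

Fixing exactly one of the three query arguments leaves one size-1 axis, so the squeezed result is the 2D model-by-metric, dataset-by-metric, or dataset-by-model slice.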