Metrics

The Solar Forecast Arbiter evaluation framework provides a suite of metrics for evaluating deterministic and probabilistic solar forecasts. These metrics serve different purposes, e.g., comparing the forecast and the measurement, comparing the performance of multiple forecasts, and evaluating an event forecast.

Metrics for Deterministic Forecasts

The following metrics provide measures of the performance of deterministic forecasts. Each metric is computed from a set of forecasts and the corresponding observations.

In the metrics below, we adopt the following nomenclature:

  • $n$ = number of samples
  • $F_i$ = forecasted value
  • $O_i$ = observed (actual) value
  • $\mathrm{norm}$ = normalizing factor (with the same units as the forecasted and observed values)
  • $\bar{F}$, $\bar{O}$ = the means of the forecasted and observed values, respectively

Mean Absolute Error (MAE)

The absolute error is the absolute value of the difference between the forecasted and observed values. The MAE is defined as:

$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| F_i - O_i \right|$$

Mean Bias Error (MBE)

The bias is the difference between the forecasted and observed values. The MBE is defined as:

$$\mathrm{MBE} = \frac{1}{n} \sum_{i=1}^{n} \left( F_i - O_i \right)$$

Root Mean Square Error (RMSE)

The RMSE is the square root of the average of the squared differences between the forecasted and observed values, and is defined as:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( F_i - O_i \right)^2}$$

RMSE is a frequently used measure for evaluating forecast accuracy. Since the errors are squared before being averaged, the RMSE gives higher weight to large errors.
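
As an illustration, the three error metrics above can be computed directly with NumPy. This is a minimal sketch under the assumption that `forecast` and `observed` are equal-length arrays of paired samples; it is not the Solar Forecast Arbiter's implementation.

```python
import numpy as np

def mae(forecast, observed):
    """Mean absolute error of paired forecast/observation arrays."""
    return np.mean(np.abs(forecast - observed))

def mbe(forecast, observed):
    """Mean bias error; positive values indicate over-forecasting on average."""
    return np.mean(forecast - observed)

def rmse(forecast, observed):
    """Root mean square error; squaring gives higher weight to large errors."""
    return np.sqrt(np.mean((forecast - observed) ** 2))
```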

Forecast Skill ($s$)

The forecast skill measures the performance of a forecast relative to a reference forecast. The Solar Forecast Arbiter uses the definition of forecast skill based on RMSE:

$$s = 1 - \frac{\mathrm{RMSE}_f}{\mathrm{RMSE}_{\mathrm{ref}}}$$

where $\mathrm{RMSE}_f$ is the RMSE of the forecast of interest, and $\mathrm{RMSE}_{\mathrm{ref}}$ is the RMSE of the reference forecast, e.g., persistence.
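
A corresponding sketch of the RMSE-based skill, assuming an `rmse` helper like the one above and a reference forecast (e.g., persistence) aligned with the same observations:

```python
def forecast_skill(forecast, reference, observed):
    """s = 1 - RMSE_f / RMSE_ref: 1 is perfect, 0 matches the reference, negative is worse."""
    return 1.0 - rmse(forecast, observed) / rmse(reference, observed)
```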

Mean Absolute Percentage Error (MAPE)

The absolute percentage error is the absolute value of the difference between the forecasted and observed values, expressed as a fraction of the observed value. The MAPE is defined as:

$$\mathrm{MAPE} = \frac{100\%}{n} \sum_{i=1}^{n} \left| \frac{F_i - O_i}{O_i} \right|$$

Normalized Root Mean Square Error (NRMSE)

The NRMSE [%] is the normalized form of the RMSE and is defined as:

$$\mathrm{NRMSE} = \frac{\mathrm{RMSE}}{\mathrm{norm}} \times 100\%$$

Centered (unbiased) Root Mean Square Error (CRMSE)

The CRMSE describes the variation in errors around the mean and is defined as:

$$\mathrm{CRMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left[ \left( F_i - \bar{F} \right) - \left( O_i - \bar{O} \right) \right]^2}$$

The CRMSE is related to the RMSE and MBE through $\mathrm{RMSE}^2 = \mathrm{CRMSE}^2 + \mathrm{MBE}^2$, and can be decomposed into components related to the standard deviations and correlation coefficient:

$$\mathrm{CRMSE}^2 = \sigma_F^2 + \sigma_O^2 - 2\,\sigma_F\,\sigma_O\,r$$

where $\sigma_F$ and $\sigma_O$ are the standard deviations of the forecasted and observed values, respectively, and $r$ is the correlation coefficient.
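
These relationships can be verified numerically. The sketch below (reusing the hypothetical `rmse` and `mbe` helpers from the earlier example, with synthetic data) computes the CRMSE directly and checks both identities:

```python
def crmse(forecast, observed):
    """Centered RMSE: the RMSE of the bias-removed (anomaly) errors."""
    f_anom = forecast - forecast.mean()
    o_anom = observed - observed.mean()
    return np.sqrt(np.mean((f_anom - o_anom) ** 2))

# Synthetic example values for illustration only
f = np.array([110.0, 240.0, 310.0, 420.0, 480.0])
o = np.array([100.0, 250.0, 300.0, 400.0, 500.0])

sigma_f, sigma_o = f.std(), o.std()
r = np.corrcoef(f, o)[0, 1]

# RMSE^2 = CRMSE^2 + MBE^2
assert np.isclose(rmse(f, o) ** 2, crmse(f, o) ** 2 + mbe(f, o) ** 2)
# CRMSE^2 = sigma_F^2 + sigma_O^2 - 2 * sigma_F * sigma_O * r
assert np.isclose(crmse(f, o) ** 2, sigma_f**2 + sigma_o**2 - 2 * sigma_f * sigma_o * r)
```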

Pearson Correlation Coefficient ($r$)

Correlation indicates the strength and direction of a linear relationship between two variables. The Pearson correlation coefficient, also known as the sample correlation coefficient, measures the linear dependence between the forecasted and observed values, and is defined as the ratio of the covariance of the two variables to the product of their standard deviations:

$$r = \frac{\sum_{i=1}^{n} \left( F_i - \bar{F} \right) \left( O_i - \bar{O} \right)}{\sqrt{\sum_{i=1}^{n} \left( F_i - \bar{F} \right)^2} \, \sqrt{\sum_{i=1}^{n} \left( O_i - \bar{O} \right)^2}}$$

Coefficient of Determination ($R^2$)

The coefficient of determination measures the extent to which the variability in the forecast errors is explained by variability in the observed values, and is defined as:

$$R^2 = 1 - \frac{\sum_{i=1}^{n} \left( O_i - F_i \right)^2}{\sum_{i=1}^{n} \left( O_i - \bar{O} \right)^2}$$

By this definition, a perfect forecast has a value of 1.
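
Both quantities follow directly from the definitions above; a minimal NumPy sketch, assuming the same paired arrays as before:

```python
def pearson_r(forecast, observed):
    """Sample (Pearson) correlation coefficient between forecast and observation."""
    return np.corrcoef(forecast, observed)[0, 1]

def coeff_determination(forecast, observed):
    """R^2 = 1 - (sum of squared errors) / (total variance of the observations)."""
    sse = np.sum((observed - forecast) ** 2)
    sst = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - sse / sst
```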

Kolmogorov-Smirnov Test Integral (KSI)

The KSI quantifies the level of agreement between the cumulative distribution functions (CDFs) of the forecasted and observed values, and is defined as:

$$\mathrm{KSI} = \int_{p_{\min}}^{p_{\max}} D(p)\, dp$$

where $p_{\min}$ and $p_{\max}$ are the minimum and maximum values of the observations, and $D(p)$ is the absolute difference between the two empirical CDFs:

$$D(p) = \left| \mathrm{CDF}_f(p) - \mathrm{CDF}_o(p) \right|$$

where the empirical CDF is defined as $\mathrm{CDF}(p) = \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}(x_i \le p)$ for the forecasted values and the observed values, respectively.

In practice, the CDFs are evaluated at a discrete set of points and the integral is computed numerically. A KSI value of zero implies that the CDFs of the forecast and observed values are equal.

KSI can be normalized as:

$$\mathrm{KSI}\,[\%] = \frac{\mathrm{KSI}}{a_{\mathrm{critical}}} \times 100\%$$

where $a_{\mathrm{critical}} = V_c \, \left( p_{\max} - p_{\min} \right)$ and $V_c = 1.63 / \sqrt{n}$. When $n \ge 35$, the normalized KSI can be interpreted as a statistical test of the hypothesis that the two empirical CDFs represent samples drawn from the same population.

OVER

Conceptually, the OVER metric modifies the KSI to quantify the difference between the two CDFs, but only where the CDFs differ by more than a critical limit $V_c$. The OVER is calculated as:

$$\mathrm{OVER} = \int_{p_{\min}}^{p_{\max}} D^{*}(p)\, dp$$

where

$$D^{*}(p) = \begin{cases} D(p) - V_c & \text{if } D(p) > V_c \\ 0 & \text{otherwise} \end{cases}$$

The OVER metric can be normalized using the same approach as for KSI.

Combined Performance Index (CPI)

The CPI can be thought of as a combination of KSI, OVER, and RMSE:

$$\mathrm{CPI} = \frac{1}{4} \left( \mathrm{KSI} + \mathrm{OVER} + 2\,\mathrm{RMSE} \right)$$
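
The distribution-based metrics above can be approximated by evaluating both empirical CDFs on a common grid and integrating numerically. The sketch below is an illustration under stated assumptions (grid resolution, $V_c = 1.63/\sqrt{n}$, and the CPI weighting given above; it also reuses the hypothetical `rmse` helper), not the project's reference implementation:

```python
def empirical_cdf(values, grid):
    """Fraction of samples less than or equal to each grid point."""
    return np.searchsorted(np.sort(values), grid, side="right") / len(values)

def ksi_over_cpi(forecast, observed, points=100):
    """Approximate KSI, OVER, and CPI on a discrete grid over the observed range."""
    p_min, p_max = observed.min(), observed.max()
    grid = np.linspace(p_min, p_max, points)
    d = np.abs(empirical_cdf(forecast, grid) - empirical_cdf(observed, grid))
    vc = 1.63 / np.sqrt(len(observed))                    # critical value (n >= 35)
    ksi = np.trapz(d, grid)                               # integral of |CDF_f - CDF_o|
    over = np.trapz(np.where(d > vc, d - vc, 0.0), grid)  # only differences above Vc
    cpi = (ksi + over + 2.0 * rmse(forecast, observed)) / 4.0
    return ksi, over, cpi
```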

Metrics for Deterministic Event Forecasts

An event is defined by values that exceed or fall below a threshold. A typical event is a ramp in solar generation power, which is determined by:

$$\mathrm{ramp}(t) = x(t + \Delta t) - x(t)$$

where $x(t)$ is the solar power output at time $t$ and $\Delta t$ is the duration of the ramp event.

Based on the predefined threshold, all observations or forecasts can be evaluated by placing them in either the “event occurred” (Positive) or “event did not occur” (Negative) categories. Then individual pairs of forecasts and observations can be placed into one of four groups based on whether the event forecast agrees (or disagrees) with the event observed value:

  • True Positive (TP): Forecast = Event, Observed = Event
  • False Positive (FP): Forecast = Event, Observed = No Event
  • True Negative (TN): Forecast = No Event, Observed = No Event
  • False Negative (FN): Forecast = No Event, Observed = Event
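
Given boolean event series for the forecasts and observations, the four counts can be tallied directly; a minimal sketch assuming NumPy boolean arrays:

```python
def event_counts(forecast_event, observed_event):
    """Count TP, FP, TN, and FN from boolean event arrays."""
    tp = np.sum(forecast_event & observed_event)
    fp = np.sum(forecast_event & ~observed_event)
    tn = np.sum(~forecast_event & ~observed_event)
    fn = np.sum(~forecast_event & observed_event)
    return tp, fp, tn, fn
```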

By counting the number of TP, FP, TN, and FN values, the following metrics can be computed:

Probability of Detection (POD)

The POD is the fraction of observed events correctly forecasted as events:

$$\mathrm{POD} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}$$

False Alarm Ratio (FAR)

The FAR is the fraction of forecasted events that did not occur:

$$\mathrm{FAR} = \frac{\mathrm{FP}}{\mathrm{TP} + \mathrm{FP}}$$

Probability of False Detection (POFD)

The POFD is the fraction of observed non-events that were forecasted as events:

$$\mathrm{POFD} = \frac{\mathrm{FP}}{\mathrm{FP} + \mathrm{TN}}$$

Critical Success Index (CSI)

The CSI evaluates how well an event forecast predicts observed events, e.g., ramps in irradiance or power. The CSI is the relative frequency of hits, i.e., how well the predicted “yes” events correspond to the observed “yes” events:

$$\mathrm{CSI} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP} + \mathrm{FN}}$$

Event Bias (EBIAS)

The EBIAS is the ratio of counts of forecasted and observed events:

$$\mathrm{EBIAS} = \frac{\mathrm{TP} + \mathrm{FP}}{\mathrm{TP} + \mathrm{FN}}$$

Event Accuracy (EA)

The EA is the fraction of events that were forecasted correctly, i.e., forecast = “yes” and observed = “yes”, or forecast = “no” and observed = “no”:

$$\mathrm{EA} = \frac{\mathrm{TP} + \mathrm{TN}}{n}$$

where $n$ is the number of samples.
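
Using counts such as those from the earlier sketch, the event metrics reduce to simple ratios (division-by-zero handling is omitted for brevity):

```python
def event_metrics(tp, fp, tn, fn):
    """POD, FAR, POFD, CSI, EBIAS, and EA from confusion-matrix counts."""
    n = tp + fp + tn + fn
    return {
        "POD": tp / (tp + fn),       # observed events that were forecast
        "FAR": fp / (tp + fp),       # forecast events that did not occur
        "POFD": fp / (fp + tn),      # non-events forecast as events
        "CSI": tp / (tp + fp + fn),  # hits, ignoring correct negatives
        "EBIAS": (tp + fp) / (tp + fn),
        "EA": (tp + tn) / n,
    }
```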

Metrics for Probabilistic Forecasts

Probabilistic forecasts represent uncertainty in the forecast quantity by providing a probability distribution or a prediction interval, rather than a single value.

In the metrics below, we adopt the following nomenclature:

  • $f_i$ = probability forecast for an event at each time $i$
  • $f_k$, $k = 1, \dots, K$ = the discrete values that appear in the probability forecast
  • $o_i$ = indicator for the event: $o_i = 1$ if an event occurs at time $i$ and $o_i = 0$ otherwise
  • $N_k$ = the number of times each forecast value $f_k$ appears in the forecast
  • $n$ = number of forecast events
  • $N_k / n$ = the relative frequency of each forecast value in the forecast
  • $\bar{o}_k$ = the average of $o_i$ at the times when $f_i = f_k$
  • $\bar{o}$ = the average of $o_i$ over all times

Brier Score (BS)

The BS measures the accuracy of forecast probabilities for one or more events:

$$\mathrm{BS} = \frac{1}{n} \sum_{i=1}^{n} \left( f_i - o_i \right)^2$$

Smaller values of BS indicate better agreement between forecasts and observations.

Brier Skill Score (BSS)

The BSS is based on the BS and measures the performance of a probability forecast relative to a reference forecast:

$$\mathrm{BSS} = 1 - \frac{\mathrm{BS}_f}{\mathrm{BS}_{\mathrm{ref}}}$$

where $\mathrm{BS}_f$ is the BS of the forecast of interest, and $\mathrm{BS}_{\mathrm{ref}}$ is the BS of the reference forecast. A BSS greater than zero indicates that the forecast performed better than the reference, and vice versa for a BSS less than zero, while a BSS equal to zero indicates the forecast is no better (or worse) than the reference.
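
A sketch of both scores, assuming `prob_forecast` and `prob_reference` are arrays of forecast probabilities and `occurred` is an array of 0/1 event indicators:

```python
def brier_score(prob_forecast, occurred):
    """Mean squared difference between forecast probability and event indicator."""
    return np.mean((prob_forecast - occurred) ** 2)

def brier_skill_score(prob_forecast, prob_reference, occurred):
    """BSS = 1 - BS_f / BS_ref; positive values indicate skill over the reference."""
    return 1.0 - brier_score(prob_forecast, occurred) / brier_score(prob_reference, occurred)
```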

When the probability forecast takes on a finite number of values (e.g., 0.0, 0.1, …, 0.9, 1.0), the BS can be decomposed into a sum of three metrics that give additional insight into a probability forecast:

$$\mathrm{BS} = \mathrm{REL} - \mathrm{RES} + \mathrm{UNC}$$

Reliability (REL)

The REL is given by:

$$\mathrm{REL} = \frac{1}{n} \sum_{k=1}^{K} N_k \left( f_k - \bar{o}_k \right)^2$$

Reliability is the weighted average of the squared differences between the forecast probabilities $f_k$ and the relative frequencies of the observed event in the forecast subsamples of times where $f_i = f_k$. A forecast is perfectly reliable if $\mathrm{REL} = 0$. This occurs when the relative event frequency in each subsample is equal to the forecast probability for that subsample.

Resolution (RES)

The RES is given by:

$$\mathrm{RES} = \frac{1}{n} \sum_{k=1}^{K} N_k \left( \bar{o}_k - \bar{o} \right)^2$$

Resolution is the weighted average of the squared differences between the relative event frequency for each forecast subsample and the overall event frequency. Resolution measures the forecast's ability to produce subsample forecast periods where the event frequency differs from the overall event frequency. Higher values of RES are desirable.

Uncertainty (UNC)

The UNC is given by:

$$\mathrm{UNC} = \bar{o} \left( 1 - \bar{o} \right)$$

Uncertainty is the variance of the event indicator $o$. Low values of UNC indicate that the event being forecasted occurs only rarely.
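
The three components can be computed by grouping the samples on the distinct forecast probabilities. The sketch below (reusing the hypothetical `brier_score` helper, with synthetic data) also checks the decomposition BS = REL − RES + UNC:

```python
def brier_decomposition(prob_forecast, occurred):
    """Return (REL, RES, UNC) by grouping samples on distinct forecast values."""
    n = len(prob_forecast)
    o_bar = occurred.mean()
    rel = res = 0.0
    for f_k in np.unique(prob_forecast):
        subsample = occurred[prob_forecast == f_k]
        o_bar_k = subsample.mean()
        rel += len(subsample) * (f_k - o_bar_k) ** 2
        res += len(subsample) * (o_bar_k - o_bar) ** 2
    unc = o_bar * (1.0 - o_bar)
    return rel / n, res / n, unc

# Synthetic example values for illustration only
f = np.array([0.1, 0.1, 0.5, 0.5, 0.9, 0.9])
o = np.array([0, 1, 0, 1, 1, 1])
rel, res, unc = brier_decomposition(f, o)
assert np.isclose(brier_score(f, o), rel - res + unc)
```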

Sharpness (SH)

The SH represents the degree of “concentration” of a forecast comprising a prediction interval of the form $[f_l, f_u]$, within which the forecast quantity is expected to fall with probability $\mathrm{PI}$. A good forecast should have a low sharpness value. The prediction interval endpoints are associated with quantiles $q_l$ and $q_u$ of the forecast distribution, where $\mathrm{PI} = q_u - q_l$. For a single prediction interval, the SH is:

$$\mathrm{SH} = f_u - f_l$$

and for a timeseries of prediction intervals, the SH is given by the average width:

$$\mathrm{SH} = \frac{1}{n} \sum_{i=1}^{n} \left( f_{u,i} - f_{l,i} \right)$$
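
For interval forecasts stored as arrays of lower and upper bounds, the timeseries sharpness is simply the mean interval width; a one-line sketch:

```python
def sharpness(lower, upper):
    """Mean width of the prediction intervals, in the units of the forecast quantity."""
    return np.mean(upper - lower)
```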

Continuous Ranked Probability Score (CRPS)

The CRPS is a score designed to measure both the reliability and the sharpness of a probabilistic forecast. For a timeseries of forecasts comprising a CDF at each time point, the CRPS is:

$$\mathrm{CRPS} = \frac{1}{n} \sum_{i=1}^{n} \int \left( F_i(x) - O_i(x) \right)^2 dx$$

where $F_i(x)$ is the CDF of the forecast quantity at time point $i$, and $O_i(x)$ is the CDF associated with the observed value $o_i$:

$$O_i(x) = \begin{cases} 0 & x < o_i \\ 1 & x \ge o_i \end{cases}$$

The CRPS reduces to the mean absolute error (MAE) if the forecast is deterministic.
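
One way to approximate the CRPS is to evaluate each forecast CDF on a grid and integrate its squared difference from the observation's step-function CDF. This sketch assumes `forecast_cdfs` is an (n, m) array of forecast CDFs evaluated at the m grid points in `x`:

```python
def crps(forecast_cdfs, observations, x):
    """Average integral of (F_i(x) - O_i(x))^2 over the grid x.

    forecast_cdfs: (n, m) array, each row a forecast CDF evaluated at x.
    observations:  (n,) array of observed values.
    x:             (m,) array covering the plausible range of the quantity.
    """
    # Step-function CDF of each observation: 0 below the observed value, 1 at or above it.
    obs_cdfs = (x[np.newaxis, :] >= observations[:, np.newaxis]).astype(float)
    return np.mean(np.trapz((forecast_cdfs - obs_cdfs) ** 2, x, axis=1))
```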