[[Category:Hydrology in R]]
Since I am now at Imperial College London, this page has moved and can now be found on [http://wiki.r-project.org/rwiki/doku.php?id=guides:tutorials:hydrological_data_analysis the wiki page of the R-project].
===Introduction===
 
 
 
This is an example of R code to perform a Generalised Likelihood Uncertainty Estimation (GLUE) on a hydrological model. In the example we use [[topmodel]] (implemented as ''topmodel()'') on a catchment in the Ecuadorian Andes (Huagrahuma). Other models can be used as long as they are implemented as an R function.
 
 
 
More examples and scripts for doing uncertainty analysis in R can be found on [http://www.glue-uncertainty.org/ Keith Beven's uncertainty pages].
 
 
 
===Libraries===
 
 
 
Library Hmisc ([http://cran.r-project.org/web/packages/Hmisc/index.html CRAN]) is needed for the ''wtd.quantile()'' function. See ?wtd.quantile for details.
 
 
 
  library(Hmisc)
 
  library(topmodel)
 
  data(huagrahuma)
 
 
 
===Procedure===
 
Sample parameter sets from a prior parameter distribution. This example uses the uniform distribution, which can be sampled with ''runif()''. The parameters vch and dt are not sampled in this example but need to be initialised. For an explanation of the parameter values see the page on [[Running Topmodel]]. We initially take 1000 parameter sets. Depending on the performance of the model, many of them will be rejected as non-behavioural. If you end up with too few behavioural parameter sets, you will need to increase your sample size.
 
 
 
  qs0  <- runif(1000, min = 0.0001, max = 0.00025)
 
  lnTe  <- runif(1000, min = -2, max = 3)
 
  m    <- runif(1000, min = 0, max = 0.1)
 
  Sr0  <- runif(1000, min = 0, max = 0.2)
 
  Srmax <- runif(1000, min = 0, max = 0.1)
 
  td    <- runif(1000, min = 0, max = 3)
 
  vch  <- 1000
 
  vr    <- runif(1000, min = 100, max = 2500)
 
  k0    <- runif(1000, min = 0, max = 10)
 
  CD    <- runif(1000, min = 0, max = 5)
 
  dt    <- 0.25
 
 
 
  parameters <- cbind(qs0, lnTe, m, Sr0, Srmax, td, vch, vr, k0, CD, dt)
 
 
 
Now each row of the matrix ''parameters'' contains a sampled parameter set. Run the model for the calibration period with each set to obtain the simulated discharge. For example, for the first parameter set:
 
 
 
  Qsim <- topmodel(parameters[1,],topidx,delay,rain,ET0)
 
 
 
Calculate the likelihood of this parameter set using the simulated and observed discharge. The choice of the likelihood function is up to the user, but the Nash-Sutcliffe efficiency is given here as an example:
 
 
 
  eff <- NSeff(Qobs,Qsim)
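For readers who want to see what ''NSeff()'' computes, the Nash-Sutcliffe efficiency can also be written out in base R. The discharge vectors below are made-up toy values, for illustration only:

```r
# Nash-Sutcliffe efficiency written out in base R: 1 minus the ratio of the
# sum of squared model errors to the variance of the observations.
# Qobs and Qsim are made-up toy vectors, for illustration only.
Qobs <- c(1.0, 2.0, 3.0, 2.5)
Qsim <- c(1.1, 1.9, 2.8, 2.6)
eff <- 1 - sum((Qobs - Qsim)^2) / sum((Qobs - mean(Qobs))^2)
# eff is 0.968: a close fit gives an efficiency near 1
```

An efficiency of 1 means a perfect fit; values can become arbitrarily negative for poor fits, which is why a rejection threshold is needed.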
 
 
 
Decide whether the parameter set is behavioural, and retain it if so.
 
 
 
NOTE: in GLUE, this decision is subjective. For a more scientifically sound determination of the behavioural limit, see Beven (2006). Here we use an efficiency of 0.6 as a threshold. The efficiencies, parameter sets and simulated discharges of the behavioural runs are stored in the objects ''total.eff'', ''behavioural.parameters'' and ''total.qsim'' respectively:
 
 
 
  if(eff > 0.6) {
      # i is the index of the current parameter set
      total.eff <- c(total.eff, eff)
      behavioural.parameters <- cbind(behavioural.parameters, parameters[i,])
      total.qsim <- cbind(total.qsim, Qsim)
  }
 
 
 
The above procedure should be repeated for each parameter set until enough behavioural runs are obtained (e.g. using a for()- or while()-loop).
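Putting the pieces together, such a rejection loop looks as follows. To keep the sketch self-contained and runnable it uses a made-up ''toy.model()'' and synthetic data in place of ''topmodel()'' and the huagrahuma dataset; with the real model, the loop body is simply the code from the previous sections.

```r
# Self-contained sketch of the GLUE rejection loop. toy.model() is a made-up
# stand-in: replace it with topmodel(parameters[i,], topidx, delay, rain, ET0)
# and use NSeff() for the efficiency when working with the real model.
set.seed(1)
toy.model <- function(p, rain)
  as.numeric(filter(rain * p[1], p[2], method = "recursive"))

rain <- runif(100)
Qobs <- toy.model(c(0.5, 0.3), rain) + rnorm(100, sd = 0.01)

# sample 1000 parameter sets from uniform priors (one set per row)
parameters <- cbind(runif(1000, 0, 1), runif(1000, 0, 0.9))

total.eff <- NULL
behavioural.parameters <- NULL
total.qsim <- NULL

for(i in 1:nrow(parameters)) {
  Qsim <- toy.model(parameters[i,], rain)
  eff <- 1 - sum((Qobs - Qsim)^2) / sum((Qobs - mean(Qobs))^2)
  if(eff > 0.6) {                      # behavioural threshold
    total.eff <- c(total.eff, eff)
    behavioural.parameters <- cbind(behavioural.parameters, parameters[i,])
    total.qsim <- cbind(total.qsim, Qsim)
  }
}
```

A for()-loop over a fixed sample is the simplest form; a while()-loop that keeps sampling until a target number of behavioural sets is reached works the same way.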
 
 
 
If the above code is used, each column of the matrix ''behavioural.parameters'' contains a behavioural parameter set. The corresponding performance is found at the same position in the vector ''total.eff''.
 
 
 
Rerun the model for the prediction period, using each of the behavioural parameter sets. The simulated discharges are stored in the columns of a matrix called ''predicted.qsim'':
 
 
 
  predicted.qsim <- model(behavioural.parameters[,1], rain, ...)
  
  for(i in 2:dim(behavioural.parameters)[2]) {
      qsim <- model(behavioural.parameters[,i], rain, ...)
      predicted.qsim <- cbind(predicted.qsim, qsim)
  }
 
 
 
Normalise the efficiencies so that they sum up to 1:
 
 
 
  total.eff <- total.eff - 0.6
  total.eff <- total.eff/sum(total.eff)
 
 
 
Calculate the quantiles for each timestep. Here we take the 0.05 and 0.95 quantiles, resulting in 90% prediction limits.
 
 
 
  limits <- apply(predicted.qsim, 1, "wtd.quantile", weights = total.eff, probs = c(0.05, 0.95), normwt = TRUE)
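If Hmisc is not available, a weighted quantile can also be sketched in base R. The helper below is hypothetical and uses a crude step-function rule (not necessarily the exact algorithm Hmisc uses), applied to made-up toy data:

```r
# Crude weighted quantile: sort the values, normalise the weights, and read
# off where the cumulative weight first reaches the requested probability.
# A step-function approximation; a hypothetical helper, not Hmisc's version.
wtd.quantile.sketch <- function(x, w, probs) {
  o <- order(x)
  x <- x[o]
  cw <- cumsum(w[o] / sum(w))
  sapply(probs, function(p) x[which(cw >= p)[1]])
}

# toy ensemble: 3 timesteps (rows) by 5 behavioural runs (columns)
set.seed(42)
qsim <- matrix(runif(15), nrow = 3)
weights <- runif(5)
limits <- apply(qsim, 1, wtd.quantile.sketch, w = weights, probs = c(0.05, 0.95))
# limits is a 2 x 3 matrix: lower and upper prediction limit per timestep
```

Each row of the ensemble matrix is a timestep, so applying the quantile function over rows yields one pair of prediction limits per timestep, exactly as in the ''wtd.quantile()'' call above.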
 
 
 
===Final notes===
 
 
 
* If topmodel is used, some loops can be avoided: ''topmodel()'' can work on entire parameter matrices, which makes things faster. It can also return the Nash-Sutcliffe efficiency directly (see the [[topmodel]] page).
 
* The procedure can be very memory intensive, because the simulated discharges for all behavioural parameter sets are stored in memory (the matrix ''predicted.qsim''). If the model can give output per timestep, the above procedure can be repeated for each timestep separately to reduce memory usage.
 
 
 
===References===
 
 
 
* Beven, K., and Binley, A. The future of distributed models: Model calibration and uncertainty prediction. Hydrological Processes 6 (1992), 279-298.
 
* Beven, K. A manifesto for the equifinality thesis. Journal of Hydrology 320 (2006), 18-36.
 

Latest revision as of 18:13, 12 August 2009

