Downscaling data/indicators to higher resolution for test cases.


    Paolo GECOS

    Hi everybody, following the last training on climate services, and in particular Ronald’s examples and exercises on Italy and Greece, we plan to adopt the strategy below. Any suggestions or corrections are welcome:

    For point data/indicators (like P, P-ET): identify local stations, try the simplest BC (scaling of means) or the best BC (full PDF correction), as in the example of the Martonano station, apply the correction to the simulated climate-change series to adapt them to each local station, and finally interpolate from the local station points directly to a finer grid resolution.
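    The “scaling of means” correction mentioned above can be sketched in a few lines; this is an illustrative Python sketch with made-up numbers, not SWICCA code:

    ```python
    import numpy as np

    def scale_means(obs_hist, sim_hist, sim_future):
        """Simplest bias correction: multiplicative scaling of the mean.

        The simulated future series is rescaled so that the simulated
        historical mean matches the observed historical mean.
        """
        factor = np.mean(obs_hist) / np.mean(sim_hist)
        return sim_future * factor

    # Toy example: the model is 20% too wet on average.
    obs = np.array([2.0, 4.0, 6.0])   # observed precipitation (mm/day)
    sim = obs * 1.2                   # biased historical simulation
    fut = np.array([3.0, 6.0, 9.0])   # biased future simulation
    corrected = scale_means(obs, sim, fut)
    ```

    The full PDF correction replaces the single mean factor with a quantile-by-quantile adjustment of the whole distribution.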

    For indicators like flow or FDCs that represent the average behaviour of the upstream catchment: given the high chance of getting non-correctable time series (as in the Strada Casale station example), directly use the “C factor” of the 0.5° cell the river catchment belongs to, and apply it to the local station data to obtain the corresponding climate-change scenario time series or FDC. For larger catchments, use an area-weighted average C factor, with the portion of each cell belonging to the catchment as the weight.
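    The area-weighted C factor described above amounts to a weighted average over the cells a catchment intersects; a minimal sketch with hypothetical numbers:

    ```python
    def weighted_c_factor(cell_factors, cell_areas_in_catchment):
        """Area-weighted climate-change factor for a catchment spanning
        several 0.5-degree cells; weights are the cell portions (km2)
        lying inside the catchment."""
        total = sum(cell_areas_in_catchment)
        return sum(c * a for c, a in zip(cell_factors, cell_areas_in_catchment)) / total

    # Hypothetical example: the catchment covers 60 km2 of one cell
    # (C factor 1.10) and 40 km2 of another (C factor 0.90).
    c = weighted_c_factor([1.10, 0.90], [60.0, 40.0])  # -> 1.02
    ```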


    In our case study (UPV – Jucar river basin) we need to work with time series of accumulated upstream flows.
    In order to build the time series for C.C. we can use the simplest BC method or the full PDF correction.

    To use the simplest BC method, we would first need to build a statistical model and adapt it to the C.C. scenarios.

    On the other hand, using the full PDF correction would be easier, but we don’t know how to proceed. We can send the historical time series to Ronald or somebody who works with PDF correction, or we can do the PDF correction ourselves if somebody gives us the C.C. series.

    We would appreciate some advice on this.


    I wasn’t at the workshop, so I don’t know the practical examples you mention; perhaps Ronald can address those. However, I’m a bit confused about whether you are discussing bias correction or downscaling. Bias correction removes the bias from a modelled timeseries where similar reference data exist. Statistical downscaling can use similar methods to bias correction, but its purpose is to adjust the statistical properties to mimic a higher resolution.

    In SWICCA, the data provided are primarily indicators, such as the average change in a variable. Thus, there are mostly no modelled timeseries that would require bias correction. Note also that the precipitation and temperature data used to force the hydrological models were already bias corrected within the IMPACT2C project.

    Taking a local timeseries and scaling it with a climate change signal from the indicators can be considered a sort of downscaling. Is that what is discussed here? That could be done using e.g. the change in the mean, or the change in different moments, or percentiles of a distribution. Can you please clarify what you want to do? Perhaps with a practical example?

    Paolo GECOS

    We are interested in downscaling to finer resolution the information you currently provide at 0.5° cell size in the demonstrator; however, the requirements differ between our two case studies.

    For the case study that uses indicators such as the change in flow duration curve percentiles, we think we can use them as they are provided now. We don’t need to downscale a modelled time series of discharge (Ronald showed in his example that it can indeed be tricky); we are fine with percentile variations that represent the average upstream catchment behaviour. The provided resolution, even if a little too coarse, is still usable, as one or more cells may fit inside a catchment, so using the average values from the involved cells should work.

    For the case study that uses indicators such as precipitation and temperature, we need finer-resolution (roughly 2×2 km2) time series of 10-day values under climate change. If we understood your suggestion correctly, we could do this by taking the local time series for every station and modifying them using the modelled indicators.

    In the case of the mentioned 10-day P and T variables, for example, we could take the 10-day modelled time series of “changes in the variable” and simply apply it to the data of every local station belonging to that cell.


    Yes, scaling a local observed timeseries with some climate change factor is what is often called the delta change method. This is a kind of downscaling of climate information, and would be suitable in this case.
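    The delta change method can be sketched as follows; a minimal Python illustration with made-up indicator values (multiplicative changes are usual for precipitation and flow, additive changes for temperature):

    ```python
    import numpy as np

    def delta_change(obs, change, additive=False):
        """Delta change method: perturb a local observed series with a
        modelled climate-change signal taken from an indicator."""
        obs = np.asarray(obs, dtype=float)
        return obs + change if additive else obs * change

    # Hypothetical indicator values: +8% precipitation, +1.5 K temperature
    p_future = delta_change([12.0, 0.0, 5.5], 1.08)
    t_future = delta_change([4.2, 10.1, 15.3], 1.5, additive=True)
    ```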

    Stefano GECOS

    Hi, I’ve found and tested two R libraries for downscaling data. I would like to know if someone has experience with them, and whether the implemented methods can be used for our downscaling purposes (basically temperature and precipitation).


    In particular, we tested the function biascorrection1D available inside this script.

    Here you can find a script and data for testing.

    Many thanks for your support.
    Stefano Bagli


    I have used the qmap library before, and it works well (if you have timeseries and not just indicators). You can choose between fitting a function to your distribution or using an empirical distribution.

    The downscaleR package was new to me, but the methods seem straightforward. The “delta” and “scaling” methods should work well with observational timeseries together with indicators as provided on the SWICCA portal.
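    For readers without R, the empirical quantile mapping idea behind qmap can be illustrated in a few lines; this is a sketch of the general technique in Python, not the qmap API, and it assumes equally long calibration series:

    ```python
    import numpy as np

    def empirical_qmap(sim, obs_hist, sim_hist):
        """Empirical quantile mapping: map each simulated value onto the
        observed value at the same rank in the historical distributions.
        Values outside the calibration range are clamped to the endpoints."""
        return np.interp(sim, np.sort(sim_hist), np.sort(obs_hist))

    # Toy example: the model is twice too wet at every quantile,
    # so a simulated value of 6 maps back to 3.
    corrected = empirical_qmap([6.0], [1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
    ```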


    Paolo GECOS

    We use the ECV (daily river flow, 0.5 deg and catchments) to derive seasonal flow duration curves under C.C. conditions.
    In particular, we apply the delta change method to the locally observed flow duration curves (available for the period 1991-2001): we take the changes in percentiles from the SWICCA modelled FDCs and apply these changes to the observed FDCs in order to correct the percentiles for C.C. conditions.
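    The per-percentile delta change described here could be sketched as follows; an illustrative Python sketch with hypothetical percentile values, not SWICCA code:

    ```python
    def apply_fdc_deltas(obs_fdc, model_ref_fdc, model_cc_fdc):
        """Per-percentile delta change for a flow duration curve: scale each
        observed percentile by the modelled relative change at the same
        exceedance probability."""
        return {p: obs_fdc[p] * (model_cc_fdc[p] / model_ref_fdc[p]) for p in obs_fdc}

    # Hypothetical percentiles (exceedance % -> discharge, m3/s)
    obs = {10: 20.0, 50: 5.0, 90: 1.0}   # observed FDC, 1991-2001
    ref = {10: 25.0, 50: 6.0, 90: 0.8}   # modelled FDC, reference period
    cc  = {10: 22.5, 50: 5.4, 90: 0.6}   # modelled FDC, C.C. scenario
    future = apply_fdc_deltas(obs, ref, cc)
    ```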

    We wonder whether we can exclude one or more hydrological models (E-HYPE, VIC 421, LISFLOOD), in order to reduce the number of possible output combinations, by making some assumptions about how well they describe the most relevant mechanisms of discharge generation for the area of interest.
    We think this is possible by examining the shape of the flow duration curve; we proceeded as follows:

    – We have local studies that give, for this area, two dimensionless flow duration curves (A_FDC, normalized by the mean discharge over the observed discharge time series) for catchment areas above and below 100 km2.
    – We compared these two representative curves with the A_FDCs from the three hydrological models in the reference period 1971-2001, for every hydrological model and every input/forcing, both for the 0.5 deg data and the catchment-scale data (the latter only for E-HYPE21).
    – We found quite different behaviour among the three models, and in particular a better fit between the observed A_FDC and the modelled A_FDC for E-HYPE21 at catchment level.
    A few graphs showing this are available at this link.
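    The normalisation used to build the A_FDCs above can be sketched as follows; an illustrative Python sketch using Weibull plotting positions for the exceedance probabilities (the original analysis may differ in detail):

    ```python
    import numpy as np

    def dimensionless_fdc(discharge):
        """Dimensionless flow duration curve (A_FDC): sort discharge in
        descending order, attach Weibull plotting positions as exceedance
        probabilities, and normalise by the mean observed discharge."""
        q = np.sort(np.asarray(discharge, dtype=float))[::-1]
        exceedance = (np.arange(len(q)) + 1) / (len(q) + 1)
        return exceedance, q / q.mean()

    prob, a_fdc = dimensionless_fdc([1.0, 2.0, 3.0, 4.0])
    ```

    Because of the normalisation, the curve has a mean of 1, which makes catchments of different sizes directly comparable.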

    The question is: can we assume from this comparison that the hydrological model E-HYPE21 at catchment scale is more suitable for the area of interest, exclude the other models, and then apply the delta change method to modify the observed FDCs to obtain climate-change FDCs?

    many thanks for the help


    Hi Paolo,

    Does “adimensional FDC” mean that you applied some sort of scaling to normalise the discharge? Looking at your figures, I guess that is what you did.

    Regarding your question, I would say it is OK to choose the catchment-scale results for your study (by the way, the model name is EHYPE3.1 for catchment results). This is because your catchments are comparatively small and the gridded results show an average over a larger area.


    Paolo GECOS

    Hi René
    Thank you for your reply. We normalized the FDCs by dividing them by the average flow, and our catchments are indeed relatively small (roughly 50 to 600 km2).
    For this case we will therefore go for the catchment-level discharge data from E-HYPE 3.1.

    Kind regards

    Alexandros Ziogas

    Due to differences identified between modelled and observed precipitation in the catchments of Evinos and Mornos, the daily precipitation and temperature data provided by SWICCA, at a spatial resolution of 0.5 degrees, were downscaled based on local data. Corrections of the mean, of the mean & variance, and the full PDF correction were applied. We note that the full PDF correction led to historical time series whose statistical characteristics matched the observed time series better than the BC-corrected data did. However, the full PDF correction led to time series for the future period that depict a significantly different climate change than the change predicted by the original 0.5-degree SWICCA data. This was not observed for the BC-corrected time series.
    In order to cope with this issue, we tried changing the full PDF correction rules applied to close-to-zero daily precipitation, since we noticed that the full PDF correction led to time series with many zero daily precipitation values. This led to slightly better results, but the problem remained. Another attempt was based on removing the trend from the historical 0.5-degree time series, applying the full PDF correction, and then adding the trend back. That approach largely corrected the problem. However, we decided to use the BC-corrected time series.
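    The detrend/correct/retrend workaround described above can be sketched as follows; a minimal Python illustration assuming a linear trend and a simple empirical quantile mapping standing in for the full PDF correction:

    ```python
    import numpy as np

    def detrend_correct_retrend(sim, obs_hist, sim_hist):
        """Remove the linear trend from the simulated series, apply a
        distribution (quantile) correction to the detrended values, then
        add the trend back so the change signal is better preserved."""
        sim = np.asarray(sim, dtype=float)
        t = np.arange(len(sim))
        slope, intercept = np.polyfit(t, sim, 1)
        trend = slope * t
        detrended = sim - trend
        # minimal empirical quantile mapping against the historical series
        corrected = np.interp(detrended, np.sort(sim_hist), np.sort(obs_hist))
        return corrected + trend

    # Usage with hypothetical series (future simulation plus calibration data):
    sim_fut = np.array([1.0, 2.0, 3.0, 5.0, 4.0, 6.0])
    obs_h = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
    sim_h = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    adjusted = detrend_correct_retrend(sim_fut, obs_h, sim_h)
    ```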


    The bias adjustment method can indeed affect the climate change signal, e.g. if there is a bias in the variance. Methods that adjust for this will also affect the climate change signal, which is why the results can differ between methods.
    It sounds like you are approaching this topic well by switching to “wet” PDFs and by trying to remove trends. Perfectly retaining the change signal is probably difficult, but significantly reducing the effect (as you state you have) might be enough, given any other improvements you might get from the full PDF adjustment.

