Caching results of computations for later use#
These examples were provided by Florian Ziemen (ziemen@dkrz.de) for use on the Levante supercomputer of DKRZ. Some of the ideas were contributed by Lukas Kluft, Tobi Kölling, and others. The examples are by no means meant to be perfect; they are just meant to show how things can be done.
Copyright 2022 Florian Ziemen / DKRZ
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Preparations#
[1]:
# basics
import intake
import numpy as np
import xarray as xr
import dask # memory-efficient parallel computation and delayed execution (lazy evaluation).
# fewer red warnings
import warnings, matplotlib
warnings.simplefilter(action="ignore", category=FutureWarning)
warnings.filterwarnings("ignore", category=matplotlib.cbook.mplDeprecation)
Paths for data cache#
We create a data cache in our /scratch/ directory. Data there is deleted automatically after two weeks, so anything we cache is purged automatically after a while.
[2]:
%run gem_helpers.ipynb # see https://easy.gems.dkrz.de/Processing/Intake/gem_helpers.html
[3]:
data_cache_path = make_tempdir("intake_demo_data")
print("Caching results in", data_cache_path)
Caching results in /scratch/k/k202134/intake_demo_data
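`make_tempdir` is defined in `gem_helpers.ipynb` (linked above). As a rough idea of what such a helper does, here is a hypothetical sketch; the `/scratch/<initial>/<user>/<name>` layout and the function body are assumptions, not the actual implementation:

```python
import getpass
import os
import tempfile


def make_tempdir_sketch(name, base="/scratch"):
    """Create (if needed) and return a per-user cache directory.

    Hypothetical re-implementation: the real make_tempdir lives in
    gem_helpers.ipynb; the base/<initial>/<user>/<name> layout is an assumption.
    """
    user = getpass.getuser()
    path = os.path.join(base, user[0], user, name)
    os.makedirs(path, exist_ok=True)  # idempotent: fine if it already exists
    return path


# demo against a throwaway base directory instead of the real /scratch
demo_path = make_tempdir_sketch("intake_demo_data", base=tempfile.mkdtemp())
print(demo_path)
```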
Helper function for caching and loading#
[4]:
import os  # needed for the path handling below


def cache_data(data, filename):
    cache_file = os.path.join(data_cache_path, filename)
    if not os.access(cache_file, os.R_OK):
        data = data.compute()  # trigger the (lazy) computation ...
        data.to_netcdf(cache_file)  # ... and store the result for next time
    else:
        if isinstance(data, xr.DataArray):
            data = xr.open_dataarray(cache_file)
        else:
            data = xr.open_dataset(cache_file)
    return data
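The caching pattern itself is independent of xarray and netCDF. A minimal, self-contained sketch of the same compute-once-then-reload logic, using plain numpy `.npy` files and a counter to show that the expensive step runs only on the first call (all names here are demo-only):

```python
import os
import tempfile

import numpy as np

cache_dir = tempfile.mkdtemp()  # stand-in for the /scratch cache directory
n_computations = 0  # counts how often the expensive step actually runs


def expensive_mean(values):
    global n_computations
    n_computations += 1
    return values.mean(axis=0)


def cached(values, filename):
    """Same pattern as cache_data above, with .npy files instead of netCDF."""
    cache_file = os.path.join(cache_dir, filename)
    if not os.access(cache_file, os.R_OK):
        result = expensive_mean(values)  # first call: compute ...
        np.save(cache_file, result)  # ... and store
    else:
        result = np.load(cache_file)  # later calls: just read it back
    return result


data = np.arange(6.0).reshape(2, 3)
a = cached(data, "mean.npy")  # computes and writes
b = cached(data, "mean.npy")  # reads from the cache file
print(n_computations)  # 1 -- the expensive step ran only once
```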
The catalog containing all the data#
[5]:
catalog_file = "/work/ka1081/Catalogs/dyamond-nextgems.json"
[6]:
cat = intake.open_esm_datastore(catalog_file)
cat
ICON-ESM catalog with 130 dataset(s) from 88823 asset(s):
| | unique |
|---|---|
| variable_id | 546 |
| project | 2 |
| institution_id | 12 |
| source_id | 19 |
| experiment_id | 4 |
| simulation_id | 12 |
| realm | 5 |
| frequency | 12 |
| time_reduction | 4 |
| grid_label | 7 |
| level_type | 9 |
| time_min | 918 |
| time_max | 1094 |
| grid_id | 3 |
| format | 1 |
| uri | 88813 |
[7]:
# a somewhat closer look at the catalog using get_from_cat
get_from_cat(cat, ["project", "source_id", "experiment_id", "simulation_id"])
[7]:
| | project | source_id | experiment_id | simulation_id |
|---|---|---|---|---|
| 0 | DYAMOND_WINTER | ARPEGE-NH-2km | DW-ATM | r1i1p1f1 |
| 1 | DYAMOND_WINTER | GEM | DW-ATM | r1i1p1f1 |
| 2 | DYAMOND_WINTER | GEOS-1km | DW-ATM | r1i1p1f1 |
| 3 | DYAMOND_WINTER | GEOS-3km | DW-ATM | r1i1p1f1 |
| 4 | DYAMOND_WINTER | GEOS-6km | DW-CPL | r1i1p1f1 |
| 5 | DYAMOND_WINTER | ICON-NWP-2km | DW-ATM | r1i1p1f1 |
| 6 | DYAMOND_WINTER | ICON-SAP-5km | DW-ATM | dpp0014 |
| 7 | DYAMOND_WINTER | ICON-SAP-5km | DW-CPL | dpp0029 |
| 8 | DYAMOND_WINTER | IFS-4km | DW-CPL | r1i1p1f1 |
| 9 | DYAMOND_WINTER | IFS-9km | DW-CPL | r1i1p1f1 |
| 10 | DYAMOND_WINTER | NICAM-3km | DW-ATM | r1i1p1f1 |
| 11 | DYAMOND_WINTER | NICAM-3km | DW-CPL | r1i1p1f1 |
| 12 | DYAMOND_WINTER | SAM2-4km | DW-ATM | r1i1p1f1 |
| 13 | DYAMOND_WINTER | SCREAM-3km | DW-ATM | r1i1p1f1 |
| 14 | DYAMOND_WINTER | SHiELD-3km | DW-ATM | r1i1p1f1 |
| 15 | DYAMOND_WINTER | UM-5km | DW-ATM | r1i1p1f1 |
| 16 | NextGEMS | ICON-ESM | Cycle2-alpha | dpp0066 |
| 17 | NextGEMS | ICON-ESM | Cycle2-alpha | dpp0067 |
| 18 | NextGEMS | ICON-SAP-5km | Cycle1 | dpp0052 |
| 19 | NextGEMS | ICON-SAP-5km | Cycle1 | dpp0054 |
| 20 | NextGEMS | ICON-SAP-5km | Cycle1 | dpp0065 |
| 21 | NextGEMS | IFS-FESOM2-4km | Cycle1 | hlq0 |
| 22 | NextGEMS | IFS-NEMO-4km | Cycle1 | hmrt |
| 23 | NextGEMS | IFS-NEMO-9km | Cycle1 | hmt0 |
| 24 | NextGEMS | IFS-NEMO-DEEPon-4km | Cycle1 | hmwz |
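`get_from_cat` also comes from `gem_helpers.ipynb`. Conceptually it reduces the catalog's underlying dataframe (`cat.df`) to the unique, sorted combinations of the requested columns; a rough stand-alone sketch (not the actual implementation) on a tiny made-up dataframe:

```python
import pandas as pd


def get_from_cat_sketch(cat_df, columns):
    """Unique combinations of `columns`, sorted -- an approximation of get_from_cat."""
    return cat_df[columns].drop_duplicates().sort_values(columns).reset_index(drop=True)


# tiny stand-in for cat.df
df = pd.DataFrame(
    {
        "project": ["NextGEMS", "NextGEMS", "DYAMOND_WINTER"],
        "source_id": ["ICON-ESM", "ICON-ESM", "GEM"],
        "variable_id": ["tas", "pr", "tas"],
    }
)
overview = get_from_cat_sketch(df, ["project", "source_id"])
print(overview)  # two unique project/source_id combinations
```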
Selecting ‘tas’ for two simulations from the catalog#
[8]:
var = "tas"
hits = cat.search(simulation_id=["dpp0066", "dpp0067"], variable_id=[var])
hits
ICON-ESM catalog with 2 dataset(s) from 479 asset(s):
| | unique |
|---|---|
| variable_id | 37 |
| project | 1 |
| institution_id | 1 |
| source_id | 1 |
| experiment_id | 1 |
| simulation_id | 2 |
| realm | 1 |
| frequency | 1 |
| time_reduction | 1 |
| grid_label | 1 |
| level_type | 1 |
| time_min | 406 |
| time_max | 406 |
| grid_id | 1 |
| format | 1 |
| uri | 479 |
Loading the data into xarray#
[9]:
# a dictionary containing all datasets in this match
dataset_dict = hits.to_dataset_dict(
    cdf_kwargs={
        "chunks": dict(
            time=4,
            height=1,
        )
    }
)
# The chunks argument is important: it keeps python from loading way too much
# data at a time and thus crashing.
--> The keys in the returned dictionary of datasets are constructed as follows:
'project.institution_id.source_id.experiment_id.simulation_id.realm.frequency.time_reduction.grid_label.level_type'
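The `chunks` specification above translates directly into dask chunk sizes: with four time steps per chunk, only a small slice of the dataset has to be in memory at any time. A stand-alone illustration with made-up dimension sizes:

```python
import dask.array as da

# a made-up (time, height, ncells) array, chunked like in the cdf_kwargs above
arr = da.zeros((16, 1, 1000), dtype="float32", chunks=(4, 1, 1000))
print(arr.chunks)  # ((4, 4, 4, 4), (1,), (1000,))
print(arr.npartitions)  # 4 -- four independent chunks along time
```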
A first look at the contents#
[10]:
for name, data in dataset_dict.items():
    print(name, "\n\n", data, "\n\n\n\n")
NextGEMS.MPI-M.ICON-ESM.Cycle2-alpha.dpp0067.atm.30minute.mean.gn.ml
<xarray.Dataset>
Dimensions: (time: 3458, height_2: 1, ncells: 83886080)
Coordinates:
* height_2 (height_2) float64 2.0
* time (time) float64 2.02e+07 2.02e+07 2.02e+07 ... 2.02e+07 2.02e+07
Dimensions without coordinates: ncells
Data variables:
tas (time, height_2, ncells) float32 dask.array<chunksize=(4, 1, 83886080), meta=np.ndarray>
Attributes: (12/14)
title: ICON simulation
references: see MPIM/DWD publications
number_of_grid_used: 39
uuidOfHGrid: e85b34ae-6577-11eb-81a9-93127e10b90d
history: ./icon at 20220219 165833\n./icon at 20220219 23...
grid_file_uri: http://icon-downloads.mpimet.mpg.de/grids/public...
... ...
source: icon-dkrz\tgit@gitlab.dkrz.de:icon/icon-dkrz.git...
institution: Max Planck Institute for Meteorology/Deutscher W...
comment: Daniel Klocke (m218027) on l30783 (Linux 4.18.0-...
Conventions: CF-1.6
CDO: Climate Data Operators version 1.9.10 (https://m...
intake_esm_dataset_key: NextGEMS.MPI-M.ICON-ESM.Cycle2-alpha.dpp0067.atm...
NextGEMS.MPI-M.ICON-ESM.Cycle2-alpha.dpp0066.atm.30minute.mean.gn.ml
<xarray.Dataset>
Dimensions: (time: 19488, height_2: 1, ncells: 20971520)
Coordinates:
* height_2 (height_2) float64 2.0
* time (time) float64 2.02e+07 2.02e+07 2.02e+07 ... 2.021e+07 2.021e+07
Dimensions without coordinates: ncells
Data variables:
tas (time, height_2, ncells) float32 dask.array<chunksize=(4, 1, 20971520), meta=np.ndarray>
Attributes: (12/13)
title: ICON simulation
references: see MPIM/DWD publications
number_of_grid_used: 15
uuidOfHGrid: 0f1e7d66-637e-11e8-913b-51232bb4d8f9
history: ./icon at 20220218 173036\n./icon at 20220218 20...
grid_file_uri: http://icon-downloads.mpimet.mpg.de/grids/public...
... ...
intake_esm_varname: ['tas']
source: icon-dkrz\tgit@gitlab.dkrz.de:icon/icon-dkrz.git...
institution: Max Planck Institute for Meteorology/Deutscher W...
comment: Rene Redler (m300083) on l40544 (Linux 4.18.0-30...
Conventions: CF-1.6
intake_esm_dataset_key: NextGEMS.MPI-M.ICON-ESM.Cycle2-alpha.dpp0066.atm...
Adding grid information#
[11]:
# to plot, we need to associate a grid file with the data. The easiest way is to get the info from the grid_file_uri attribute in the data.
data_dict = {}
for name, dataset in dataset_dict.items():
    print(name)
    grid_file_path = "/pool/data/ICON" + dataset.grid_file_uri.split(".de")[1]
    grid_data = xr.open_dataset(grid_file_path).rename(
        cell="ncells"
    )  # the dimension has different names in the grid file and in the output.
    data = xr.merge((dataset, grid_data))
    fix_time_axis(data)
    data_dict[name] = data
NextGEMS.MPI-M.ICON-ESM.Cycle2-alpha.dpp0067.atm.30minute.mean.gn.ml
NextGEMS.MPI-M.ICON-ESM.Cycle2-alpha.dpp0066.atm.30minute.mean.gn.ml
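The path arithmetic in the cell above reuses everything after the `.de` host part of the download URL as a path under the local `/pool/data/ICON` mirror. A stand-alone check of that string manipulation (the example URL below is illustrative, not taken from the catalog):

```python
# hypothetical grid_file_uri of the form found in the data's attributes
grid_file_uri = (
    "http://icon-downloads.mpimet.mpg.de/grids/public/mpim/0015/icon_grid_0015_R02B09_G.nc"
)
# everything after ".de" becomes a path below the local pool directory
grid_file_path = "/pool/data/ICON" + grid_file_uri.split(".de")[1]
print(grid_file_path)
# /pool/data/ICON/grids/public/mpim/0015/icon_grid_0015_R02B09_G.nc
```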
[12]:
# A brief overview of our data: the empty loop leaves `data` pointing at the
# last dataset in data_dict, which we then display.
for name, data in data_dict.items():
    pass
data[var]
[12]:
<xarray.DataArray 'tas' (time: 19488, height_2: 1, ncells: 20971520)>
dask.array<concatenate, shape=(19488, 1, 20971520), dtype=float32, chunksize=(4, 1, 20971520), chunktype=numpy.ndarray>
Coordinates:
  * height_2  (height_2) float64 2.0
  * time      (time) datetime64[ns] 2020-01-20 ... 2021-02-28T23:30:00.000115200
    clon      (ncells) float64 ...
    clat      (ncells) float64 ...
Dimensions without coordinates: ncells
Attributes:
    standard_name:                tas
    long_name:                    temperature in 2m
    units:                        K
    param:                        0.0.0
    CDI_grid_type:                unstructured
    number_of_grid_in_reference:  1
Computing time averages#
[13]:
# Lazy evaluation in dask means computations are only performed when the result is used.
for data in data_dict.values():
    data["tas"].mean(
        dim="time"
    )  # no actual work is done yet due to dask lazy evaluation.
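The laziness can be verified on a small stand-alone dask array: calling `.mean()` only builds a task graph, and numbers appear once `.compute()` is called:

```python
import dask.array as da

arr = da.arange(8, chunks=4)  # a tiny dask-backed array in two chunks
lazy_mean = arr.mean()  # builds a task graph; nothing is computed yet
print(type(lazy_mean))  # still a dask array, not a number
result = lazy_mean.compute()  # now the actual work happens
print(result)  # 3.5
```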
Caching time-averaged data for subsequent use#
[14]:
# Computing the average March TAS and storing it to a netcdf file for later use.
# This will be slow on the first run, but fast on any subsequent one.
# See helpers at top for implementation
for name, data in data_dict.items():
    tas_mar = data["tas"].isel(time=(data.time.dt.month == 3)).mean(dim="time")
    tas_mar = cache_data(tas_mar, f"tas_mar_{name}.nc")
    print(f"Max (tas) for {name} is {tas_mar.max().values}")
Max (tas) for NextGEMS.MPI-M.ICON-ESM.Cycle2-alpha.dpp0067.atm.30minute.mean.gn.ml is 309.189208984375
Max (tas) for NextGEMS.MPI-M.ICON-ESM.Cycle2-alpha.dpp0066.atm.30minute.mean.gn.ml is 308.2714538574219
[15]:
for name, data in data_dict.items():
    tas_mar = data["tas"].isel(time=(data.time.dt.month == 3)).mean(dim="time")
    tas_mar = cache_data(tas_mar, f"tas_mar_{name}.nc")
    print(f"Max (tas) for {name} is {tas_mar.max().values}")
Max (tas) for NextGEMS.MPI-M.ICON-ESM.Cycle2-alpha.dpp0067.atm.30minute.mean.gn.ml is 309.189208984375
Max (tas) for NextGEMS.MPI-M.ICON-ESM.Cycle2-alpha.dpp0066.atm.30minute.mean.gn.ml is 308.2714538574219