From 1f40cf52a364f5d02b120974f418dc88afd1b2ac Mon Sep 17 00:00:00 2001
From: Anu-Ra-g
Date: Tue, 27 Aug 2024 21:05:28 +0530
Subject: [PATCH 1/7] Added a new page for faster aggregations

---
 docs/source/index.rst                 |   1 +
 docs/source/reference_aggregation.rst | 189 ++++++++++++++++++++++++++
 2 files changed, 190 insertions(+)
 create mode 100644 docs/source/reference_aggregation.rst

diff --git a/docs/source/index.rst b/docs/source/index.rst
index 4572bd4a..15a2589d 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -68,6 +68,7 @@ so that blocks from one or more files can be arranged into aggregate datasets ac
    beyond
    nonzarr
    reference
+   reference_aggregation
    contributing
    advanced

diff --git a/docs/source/reference_aggregation.rst b/docs/source/reference_aggregation.rst
new file mode 100644
index 00000000..01e5f256
--- /dev/null
+++ b/docs/source/reference_aggregation.rst
@@ -0,0 +1,189 @@
+Other Methods of Aggregations
+=============================
+
+As we have already seen in this `page `_,
+the main purpose of ``kerchunk`` is to generate references, to view whole archives of files like
+GRIB2, NetCDF etc. allowing *in-situ* access to the data. In this part of the documentation,
+we will see some other efficient ways of combining references.
+
+GRIB Aggregations
+-----------------
+
+This new method for reference aggregation, developed by **Camus Energy**, is based on GRIB2 files. Utilizing
+this method can significantly reduce the time required to combine references, cutting it down to
+a fraction of the previous duration. In reality, this approach builds upon consolidating references
+with ``kerchunk.combine.MultiZarrToZarr``, making it faster.
+
+*How is it faster*
+
+Every GRIB file stored on cloud platforms such as **AWS** and **GCP** is accompanied by its
+corresponding ``.idx`` file. This file, otherwise known as an *index* file, contains the key
+metadata of the messages in the GRIB files. These metadata include `index`, `offset`, `datetime`,
+`variable` and `forecast time` for their respective messages stored in the files.
+
+These metadata will be used to build a k_index for every GRIB message that we will be
+indexing. The indexing process primarily involves the `pandas `_ library.
+
+.. list-table:: k_index for a single GRIB file
+   :header-rows: 1
+   :widths: 5 10 15 10 20 15 10 20 20 30 10 10 10
+
+   * -
+     - varname
+     - typeOfLevel
+     - stepType
+     - name
+     - step
+     - level
+     - time
+     - valid_time
+     - uri
+     - offset
+     - length
+     - inline_value
+   * - 0
+     - gh
+     - isobaricInhPa
+     - instant
+     - Geopotential height
+     - 0 days 06:00:00
+     - 0.0
+     - 2017-01-01 06:00:00
+     - 2017-01-01 12:00:00
+     - s3://noaa-gefs-pds/gefs.20170101/06/gec00.t06z...
+     - 0
+     - 47493
+     - None
+   * - 1
+     - t
+     - isobaricInhPa
+     - instant
+     - Temperature
+     - 0 days 06:00:00
+     - 0.0
+     - 2017-01-01 06:00:00
+     - 2017-01-01 12:00:00
+     - s3://noaa-gefs-pds/gefs.20170101/06/gec00.t06z...
+     - 47493
+     - 19438
+     - None
+   * - 2
+     - r
+     - isobaricInhPa
+     - instant
+     - Relative humidity
+     - 0 days 06:00:00
+     - 0.0
+     - 2017-01-01 06:00:00
+     - 2017-01-01 12:00:00
+     - s3://noaa-gefs-pds/gefs.20170101/06/gec00.t06z...
+     - 66931
+     - 10835
+     - None
+   * - 3
+     - u
+     - isobaricInhPa
+     - instant
+     - U component of wind
+     - 0 days 06:00:00
+     - 0.0
+     - 2017-01-01 06:00:00
+     - 2017-01-01 12:00:00
+     - s3://noaa-gefs-pds/gefs.20170101/06/gec00.t06z...
+     - 77766
+     - 22625
+     - None
+   * - 4
+     - v
+     - isobaricInhPa
+     - instant
+     - V component of wind
+     - 0 days 06:00:00
+     - 0.0
+     - 2017-01-01 06:00:00
+     - 2017-01-01 12:00:00
+     - s3://noaa-gefs-pds/gefs.20170101/06/gec00.t06z...
+     - 100391
+     - 20488
+     - None
+
+.. note::
+   The index in the ``idx`` file indexes the GRIB messages, whereas the ``k_index`` (kerchunk index)
+   we build as part of this workflow indexes the variables in those messages.
+
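To make that distinction concrete, below is a purely illustrative sketch of reading one such ``.idx``
sidecar with ``fsspec`` and ``pandas``; the file path is hypothetical and the exact column layout
varies between products, so treat it as an idea of the approach rather than workflow code.

.. code-block:: python

    import fsspec
    import pandas as pd

    # Hypothetical path: GRIB files in this bucket carry a ".idx" sidecar.
    idx_url = "s3://noaa-gefs-pds/gefs.20170101/06/example-gefs-file.idx"

    # Each line looks like "1:0:d=2017010106:HGT:10 mb:6 hour fcst:" --
    # message number, byte offset, reference datetime, variable, level and step.
    with fsspec.open(idx_url, "rt", anon=True) as f:
        rows = [line.strip().rstrip(":").split(":", 5) for line in f if line.strip()]

    idx = pd.DataFrame(
        rows, columns=["message", "offset", "date", "variable", "level", "forecast"]
    )
    idx["offset"] = idx["offset"].astype(int)
    # The byte length of a message is the gap to the next message's offset.
    idx["length"] = idx["offset"].shift(-1) - idx["offset"]
    print(idx.head())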
+*What now*
+
+After creating the k_index as per the desired duration, we will use the ``DataTree`` model
+from the `xarray-datatree `_ to view a
+part of the aggregation or the whole. Below is a tree model made from an aggregation of
+GRIB files produced by the **GEFS** model hosted in an AWS S3 bucket.
+
+.. code-block:: bash
+
+    DataTree('None', parent=None)
+    ├── DataTree('prmsl')
+    │   │   Dimensions: ()
+    │   │   Data variables:
+    │   │       *empty*
+    │   │   Attributes:
+    │   │       name: Pressure reduced to MSL
+    │   └── DataTree('instant')
+    │       │   Dimensions: ()
+    │       │   Data variables:
+    │       │       *empty*
+    │       │   Attributes:
+    │       │       stepType: instant
+    │       └── DataTree('meanSea')
+    │               Dimensions: (latitude: 181, longitude: 360, time: 1, step: 1,
+    │                            model_horizons: 1, valid_times: 237)
+    │               Coordinates:
+    │                 * latitude    (latitude) float64 1kB 90.0 89.0 88.0 87.0 ... -88.0 -89.0 -90.0
+    │                 * longitude   (longitude) float64 3kB 0.0 1.0 2.0 3.0 ... 357.0 358.0 359.0
+    │                   meanSea     float64 8B ...
+    │                   number      (time, step) int64 8B ...
+    │                   step        (model_horizons, valid_times) timedelta64[ns] 2kB ...
+    │                   time        (model_horizons, valid_times) datetime64[ns] 2kB ...
+    │                   valid_time  (model_horizons, valid_times) datetime64[ns] 2kB ...
+    │               Dimensions without coordinates: model_horizons, valid_times
+    │               Data variables:
+    │                   prmsl    (model_horizons, valid_times, latitude, longitude) float64 124MB ...
+    │               Attributes:
+    │                   typeOfLevel: meanSea
+    └── DataTree('ulwrf')
+        │   Dimensions: ()
+        │   Data variables:
+        │       *empty*
+        │   Attributes:
+        │       name: Upward long-wave radiation flux
+        └── DataTree('avg')
+            │   Dimensions: ()
+            │   Data variables:
+            │       *empty*
+            │   Attributes:
+            │       stepType: avg
+            └── DataTree('nominalTop')
+                    Dimensions: (latitude: 181, longitude: 360, time: 1, step: 1,
+                                 model_horizons: 1, valid_times: 237)
+                    Coordinates:
+                      * latitude    (latitude) float64 1kB 90.0 89.0 88.0 87.0 ... -88.0 -89.0 -90.0
+                      * longitude   (longitude) float64 3kB 0.0 1.0 2.0 3.0 ... 357.0 358.0 359.0
+                        nominalTop  float64 8B ...
+                        number      (time, step) int64 8B ...
+                        step        (model_horizons, valid_times) timedelta64[ns] 2kB ...
+                        time        (model_horizons, valid_times) datetime64[ns] 2kB ...
+                        valid_time  (model_horizons, valid_times) datetime64[ns] 2kB ...
+                    Dimensions without coordinates: model_horizons, valid_times
+                    Data variables:
+                        ulwrf    (model_horizons, valid_times, latitude, longitude) float64 124MB ...
+                    Attributes:
+                        typeOfLevel: nominalTop
+
+.. tip::
+   For a full tutorial on this workflow, refer to this `kerchunk cookbook `_
+   in `Project Pythia `_.
+
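A minimal sketch of how such an aggregated reference set can then be opened lazily is shown below.
It assumes ``refs`` already holds the combined kerchunk references produced by this workflow (it could
equally be loaded from a JSON or parquet store); the group path is taken from the tree printed above,
and this is not the cookbook's exact code.

.. code-block:: python

    import fsspec
    import datatree

    # "refs" is assumed to be the combined kerchunk reference dict built above.
    fs = fsspec.filesystem(
        "reference",
        fo=refs,
        remote_protocol="s3",
        remote_options={"anon": True},
    )
    dt = datatree.open_datatree(
        fs.get_mapper(""),
        engine="zarr",
        consolidated=False,
    )
    # Each variable/stepType/level combination is a group in the tree.
    print(dt["prmsl/instant/meanSea"].ds)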
+.. raw:: html
+

From 6eaf758bb800860cb3b144d9636900ffc8c31a1f Mon Sep 17 00:00:00 2001
From: Anu-Ra-g
Date: Tue, 27 Aug 2024 21:09:11 +0530
Subject: [PATCH 2/7] updated the description

---
 docs/source/reference_aggregation.rst | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/docs/source/reference_aggregation.rst b/docs/source/reference_aggregation.rst
index 01e5f256..bbac60f1 100644
--- a/docs/source/reference_aggregation.rst
+++ b/docs/source/reference_aggregation.rst
@@ -12,7 +12,8 @@ GRIB Aggregations
This new method for reference aggregation, developed by **Camus Energy**, is based on GRIB2 files. Utilizing
this method can significantly reduce the time required to combine references, cutting it down to
a fraction of the previous duration. In reality, this approach builds upon consolidating references
-with ``kerchunk.combine.MultiZarrToZarr``, making it faster.
+with ``kerchunk.combine.MultiZarrToZarr``, making it faster. The functions and operations used in this
+will be part of ``kerchunk``'s API.

From 1c7a15b000de0aa4baaea9bb9334d3cb3bcda478 Mon Sep 17 00:00:00 2001
From: Anu-Ra-g
Date: Tue, 27 Aug 2024 22:19:17 +0530
Subject: [PATCH 3/7] added the presentation link

---
 docs/source/reference_aggregation.rst | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/docs/source/reference_aggregation.rst b/docs/source/reference_aggregation.rst
index bbac60f1..3fb51635 100644
--- a/docs/source/reference_aggregation.rst
+++ b/docs/source/reference_aggregation.rst
@@ -13,7 +13,8 @@ This new method for reference aggregation, developed by **Camus Energy**, is bas
this method can significantly reduce the time required to combine references, cutting it down to
a fraction of the previous duration. In reality, this approach builds upon consolidating references
with ``kerchunk.combine.MultiZarrToZarr``, making it faster. The functions and operations used in this
-will be part of ``kerchunk``'s API.
+will be part of ``kerchunk``'s API. You can follow this `video `_
+for the initial discussion.

From 29e3edee0797c79ce18f737181508e08031d1997 Mon Sep 17 00:00:00 2001
From: Anu-Ra-g
Date: Wed, 28 Aug 2024 18:51:51 +0530
Subject: [PATCH 4/7] updated according to suggestions

---
 docs/source/reference_aggregation.rst | 55 ++++++++++++++++++---------
 1 file changed, 38 insertions(+), 17 deletions(-)

diff --git a/docs/source/reference_aggregation.rst b/docs/source/reference_aggregation.rst
index 3fb51635..bc09786d 100644
--- a/docs/source/reference_aggregation.rst
+++ b/docs/source/reference_aggregation.rst
@@ -1,30 +1,54 @@
-Other Methods of Aggregations
+Aggregation special cases
=============================

As we have already seen in this `page `_,
the main purpose of ``kerchunk`` is to generate references, to view whole archives
-of files like GRIB2, NetCDF etc. allowing *in-situ* access to the data. In
-this part of the documentation, we will see some other efficient ways of
-combining references.
+of files like GRIB2, NetCDF etc. allowing direct access to the data. In
+this part of the documentation, we will see some other efficient ways of
+combining references.

GRIB Aggregations
-----------------

-This new method for reference aggregation, developed by **Camus Energy**, is based on GRIB2 files. Utilizing
-this method can significantly reduce the time required to combine references, cutting it down to
-a fraction of the previous duration. In reality, this approach builds upon consolidating references
-with ``kerchunk.combine.MultiZarrToZarr``, making it faster. The functions and operations used in this
-will be part of ``kerchunk``'s API. You can follow this `video `_
-for the initial discussion.
+This reference aggregation method of GRIB files, developed by **Camus Energy**, functions if
+accompanying ``.idx`` files are present.
+
+**But this procedure has certain restrictions:**
+
+    - GRIB files must be paired with their ``.idx`` files.
+    - The ``.idx`` file must be of *text* type.
+    - It is specialised for time-series data, where GRIB files
+      have *identical* structure.
+    - Aggregation only works for files of a specific **forecast horizon**.
+
+Utilizing this method can significantly reduce the time required to combine
+references, cutting it down to a fraction of the previous duration. The original
+idea was showcased in this `talk `_.

*How is it faster*

-Every GRIB file stored on cloud platforms such as **AWS** and **GCP** is accompanied by its
-corresponding ``.idx`` file. This file, otherwise known as an *index* file, contains the key
+The ``.idx`` file, otherwise known as an *index* file, contains the key
metadata of the messages in the GRIB files. These metadata include `index`, `offset`, `datetime`,
`variable` and `forecast time` for their respective messages stored in the files.

+**It follows a three-step approach:**
+
+    1. Extract and persist metadata directly from a few arbitrary GRIB
+       files for a given product such as HRRR SUBH, GEFS, GFS etc.
+    2. Use the metadata mapping to build an index table of every GRIB
+       message from the ``.idx`` files
+    3. Combine the index data with the metadata to build any FMRC
+       slice (Horizon, RunTime, ValidTime, BestAvailable)
+
+.. tip::
+   To confirm the indexing of messages, see this `notebook `_.
+
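As a toy illustration of steps 2 and 3, with made-up inputs rather than the library's own API, the
cheap byte ranges parsed from an ``.idx`` file can be joined onto the metadata extracted once from a
single scanned GRIB file:

.. code-block:: python

    import pandas as pd

    # Made-up stand-ins for the two inputs of steps 2 and 3:
    #  * "mapping": metadata extracted once from a single scanned GRIB file,
    #    keyed by the attribute string that also appears in the ".idx" lines.
    #  * "idx": the parsed ".idx" table of another file with the same horizon.
    mapping = pd.DataFrame(
        {
            "attrs": ["HGT:10 mb:6 hour fcst", "TMP:10 mb:6 hour fcst"],
            "varname": ["gh", "t"],
            "typeOfLevel": ["isobaricInhPa", "isobaricInhPa"],
            "level": [10.0, 10.0],
        }
    )
    idx = pd.DataFrame(
        {
            "attrs": ["HGT:10 mb:6 hour fcst", "TMP:10 mb:6 hour fcst"],
            "uri": ["s3://bucket/another-run.grib2"] * 2,
            "offset": [0, 47493],
            "length": [47493, 19438],
        }
    )

    # Joining the byte ranges onto the richer metadata gives one k_index row
    # per GRIB message, without ever scanning the second file itself.
    k_index = idx.merge(mapping, on="attrs", how="left")
    print(k_index)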
These metadata will be used to build a k_index for every GRIB message that we will be
-indexing. The indexing process primarily involves the `pandas `_ library.
+indexing. Indexing process primarily involves the `pandas `_ library.
+
+.. note::
+   The index in the ``.idx`` file indexes the GRIB messages, whereas the ``k_index`` (kerchunk index)
+   we build as part of this workflow indexes the variables in those messages.

.. list-table:: k_index for a single GRIB file
   :header-rows: 1
   :widths: 5 10 15 10 20 15 10 20 20 30 10 10 10
@@ -110,10 +134,6 @@ indexing. The indexing process primarily involves the `pandas `_ in `Project Pythia `_.

From 808844844d3c437412616e83777d3e65f9fb0733 Mon Sep 17 00:00:00 2001
From: Anu-Ra-g
Date: Fri, 30 Aug 2024 11:00:28 +0530
Subject: [PATCH 5/7] made the suggested changes

---
 docs/source/reference_aggregation.rst | 33 +++++++++++++++++----------
 1 file changed, 21 insertions(+), 12 deletions(-)

diff --git a/docs/source/reference_aggregation.rst b/docs/source/reference_aggregation.rst
index bc09786d..4be4cb33 100644
--- a/docs/source/reference_aggregation.rst
+++ b/docs/source/reference_aggregation.rst
@@ -3,15 +3,16 @@ As we have already seen in this `page `_,
the main purpose of ``kerchunk`` is to generate references, to view whole archives
-of files like GRIB2, NetCDF etc. allowing direct access to the data. In
+of files like GRIB2, NetCDF, etc., allowing direct access to the data. In
this part of the documentation, we will see some other efficient ways of
combining references.

GRIB Aggregations
-----------------

-This reference aggregation method of GRIB files, developed by **Camus Energy**, functions if
-accompanying ``.idx`` files are present.
+This reference aggregation method of GRIB files was developed by `Camus Energy `_,
+and it functions if accompanying ``.idx`` files are present. It involves creating a reference index
+for every GRIB message across the files that we want to aggregate.

**But this procedure has certain restrictions:**

@@ -19,7 +20,8 @@ accompanying ``.idx`` files are present.
    - The ``.idx`` file must be of *text* type.
    - It is specialised for time-series data, where GRIB files
      have *identical* structure.
-    - Aggregation only works for files of a specific **forecast horizon**.
+    - The reference index can be combined across many horizons
+      but *each horizon must be indexed separately.*

Utilizing this method can significantly reduce the time required to combine
references, cutting it down to a fraction of the previous duration. The original
idea was showcased in this `talk `_.

*How is it faster*

@@ -29,9 +31,10 @@ idea was showcased in this `talk `_.
-These metadata will be used to build a k_index for every GRIB message that we will be
-indexing. Indexing process primarily involves the `pandas `_ library.
+The reference index, or *k_index*, that we get as a result indexes every GRIB message.
+The metadata mapping mentioned in the above steps is a one-to-one mapping of the attributes
+from any GRIB file *with the same horizon* to its ``idx`` file. The indexing process primarily
+involves the `pandas `_ library.

.. note::
-   The index in the ``.idx`` file indexes the GRIB messages, whereas the ``k_index``
-   (kerchunk index) we build as part of this workflow indexes the variables
-   in those messages.
+   The index in the ``.idx`` file indexes the GRIB messages, whereas the ``k_index``
+   (kerchunk index) we build as part of this workflow indexes the variables
+   in those messages.
+
+The table shown below is a k_index made from a single GRIB file.

.. list-table:: k_index for a single GRIB file
   :header-rows: 1
@@ -138,8 +146,9 @@ indexing. Indexing process primarily involves the `pandas `_ to view a
-part of the aggregation or the whole. Below is a tree model made from an aggregation of
-GRIB files produced by the **GEFS** model hosted in an AWS S3 bucket.
+part (desired variables) or the whole of the aggregation, using the k_index. Below is a
+tree model made from an aggregation of GRIB files produced by the **GEFS** model hosted
+in an AWS S3 bucket.

.. code-block:: bash

From 274b1d5a8faa78d84e771278d9fa63fbf4ca85aa Mon Sep 17 00:00:00 2001
From: Anu-Ra-g
Date: Fri, 30 Aug 2024 13:47:12 +0530
Subject: [PATCH 6/7] made some refactoring

---
 docs/source/reference_aggregation.rst | 24 +++++++++--------------
 1 file changed, 9 insertions(+), 15 deletions(-)

diff --git a/docs/source/reference_aggregation.rst b/docs/source/reference_aggregation.rst
index 4be4cb33..4fba5084 100644
--- a/docs/source/reference_aggregation.rst
+++ b/docs/source/reference_aggregation.rst
@@ -25,14 +25,7 @@ for every GRIB message across the files that we want to aggregate.
Utilizing this method can significantly reduce the time required to combine
references, cutting it down to a fraction of the previous duration. The original
-idea was showcased in this `talk `_.
-
-*How is it faster*
-
-The ``.idx`` file, otherwise known as an *index* file, contains the key
-metadata of the messages in the GRIB files. These metadata include `index`, `offset`, `datetime`,
-`variable` and `forecast time` for their respective messages stored in the files. This metadata
-will be used to index every GRIB message. This method follows a three-step approach.
+idea was showcased in this `talk `_. It follows a three-step approach.

**Three-step approach:**

@@ -43,17 +36,18 @@ will be used to index every GRIB message. This method follows a three step appro
    3. Combine the index data with the metadata to build any FMRC
       slice (Horizon, RunTime, ValidTime, BestAvailable)

-.. tip::
-   To confirm the indexing of messages, see this `notebook `_.
+*How is it faster*
+
+The ``.idx`` file, otherwise known as an *index* file, contains the key
+metadata of the messages in the GRIB files. These metadata include `index`, `offset`, `datetime`,
+`variable` and `forecast time` for their respective messages. This metadata
+will be used to index every GRIB message.
By following this approach, we only have to ``scan_grib`` a single GRIB file, not the whole archive.

-The reference index, or *k_index*, that we get as a result indexes every GRIB message.
-The metadata mapping mentioned in the above steps is a one-to-one mapping of the attributes
-from any GRIB file *with the same horizon* to its ``idx`` file. The indexing process primarily
-involves the `pandas `_ library.
+Building the index of a time horizon requires a single one-to-one mapping of GRIB/Zarr metadata to the attributes in the ``.idx`` file. The only constraint is that the mapping needs to be made from a single GRIB file belonging to the *same time horizon*. The indexing process primarily involves the `pandas `_ library. To confirm this, see this `notebook `_.

.. note::
   The index in the ``.idx`` file indexes the GRIB messages, whereas the ``k_index``
-   (kerchunk index) we build as part of this workflow indexes the variables
+   (kerchunk index) indexes the variables
   in those messages.

The table shown below is a k_index made from a single GRIB file.

From 73c040b8d10ac6eca4a5726e52a3663c23f65d8d Mon Sep 17 00:00:00 2001
From: Anu-Ra-g
Date: Fri, 30 Aug 2024 17:22:59 +0530
Subject: [PATCH 7/7] added some other details

---
 docs/source/reference_aggregation.rst | 21 +++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/docs/source/reference_aggregation.rst b/docs/source/reference_aggregation.rst
index 4fba5084..ee8e28fa 100644
--- a/docs/source/reference_aggregation.rst
+++ b/docs/source/reference_aggregation.rst
@@ -20,12 +20,13 @@ for every GRIB message across the files that we want to aggregate.
    - The ``.idx`` file must be of *text* type.
    - It is specialised for time-series data, where GRIB files
      have *identical* structure.
-    - The reference index can be combined across many horizons
-      but *each horizon must be indexed separately.*
+    - Each horizon (forecast time) must be indexed separately.
+
Utilizing this method can significantly reduce the time required to combine
references, cutting it down to a fraction of the previous duration. The original
-idea was showcased in this `talk `_. It follows a three-step approach.
+idea was showcased in this `talk `_.
+It follows a three-step approach.

**Three-step approach:**

@@ -36,21 +37,29 @@ idea was showcased in this `talk `_ library. To confirm this,
+see this `notebook `_.
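The GRIB/Zarr side of that one-to-one mapping comes from scanning a single representative file, for
example with ``kerchunk.grib2.scan_grib``. A minimal sketch follows; the file path is hypothetical and
the options shown are only one reasonable choice.

.. code-block:: python

    from kerchunk.grib2 import scan_grib

    # Hypothetical file: one representative GRIB file for the chosen horizon.
    url = "s3://noaa-gefs-pds/gefs.20170101/06/representative-file.grib2"

    # scan_grib returns one reference set per GRIB message; this is the only
    # full scan needed, and its metadata is then mapped onto the cheap ".idx"
    # tables of every other file with the same horizon.
    message_groups = scan_grib(url, storage_options={"anon": True})
    print(len(message_groups), "messages scanned")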
+After indexing a single time horizon, you can combine this index with indexes of
+other time horizons and store it.

.. note::
   The index in the ``.idx`` file indexes the GRIB messages, whereas the ``k_index``
   (kerchunk index) indexes the variables
   in those messages.

-The table shown below is a k_index made from a single GRIB file.
+The table shown below is a *k_index* made from a single GRIB file.

.. list-table:: k_index for a single GRIB file
   :header-rows: 1