
Add reader for GERB high-resolution HDF5 files #2572

Merged: 22 commits from pdebuyl:add_gerb_l2_hr_h5 into pytroll:main, Oct 6, 2023

Conversation

pdebuyl (Contributor) commented Sep 12, 2023

  • Closes #xxxx
  • Tests added
  • Fully documented
  • Add your name to AUTHORS.md if not there already

Hi satpy,

I propose a reader for the L2 HR product of the GERB instrument. The data documentation is available here: https://gerb.oma.be/doku.php?id=documentation

I put three sample files here: https://gerb.oma.be/public/pdebuyl/sample_data/

I would be glad to see this included in satpy.

Quick background: GERB stands for "Geostationary Earth Radiation Budget". The project, which I work for, produces top-of-atmosphere fluxes in the reflected shortwave domain and in the outgoing longwave (thermal) domain.

The images are 1237 by 1237 pixels, which is the 9km (nadir) version of the SEVIRI grid. The instrument flies on MSG satellites.

I am happy to answer any questions, of course, and to improve my code to meet satpy's requirements. Example of use:

import satpy
scene = satpy.Scene(reader="gerb_l2_hr_h5", filenames=["G1_SEV2_L20_HR_SOL_TH_20120621_101500_ED01.hdf"])
scene.load(['Thermal Flux', 'Solar Flux'])
scene.show('Thermal Flux')
scene.show('Solar Flux')

djhoese added the "enhancement" and "component:readers" labels on Sep 12, 2023

djhoese (Member) left a comment:

Nice job putting this together. It is at a really good starting point, but I'm hoping you could make some changes to clean it up a bit.

The CI checks are complaining about a couple of things and I tried to address some of them in my comments/suggestions. You can click "Details" next to the pre-commit CI check and that will give you a better idea of what it is complaining about. You could also install pre-commit locally to have these checks run when you make a commit.

I have two main concerns with the code otherwise:

  1. We'll need tests before we can merge this into Satpy. My second suggestion below might help with that since you could base your tests off of other similar readers.
  2. You may want to base your file handler on the HDF5 utility handler (class HDF5FileHandler(BaseFileHandler) in satpy/readers/hdf5_utils.py); a minimal sketch follows this list. This will handle all the dask loading stuff for you and should handle a lot of common performance gotchas. And if we add S3 support for that utility file handler in the future, your reader will get that support for free. I talk about it in my inline comments, but the biggest non-dask behavior is the handling of the fill/error values. We can brainstorm solutions to that later on if needed and if it isn't clear what I'm talking about.
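
For illustration, a minimal sketch of what basing the reader on the utility handler could look like. The class name, the "path" YAML key, and the metadata handling are assumptions for the sketch, not the merged PR code:

# Hypothetical sketch -- HDF5FileHandler's __getitem__ returns dask-backed
# xarray.DataArrays, so no explicit h5py reads are needed here.
from satpy.readers.hdf5_utils import HDF5FileHandler


class GERBL2HRFileHandler(HDF5FileHandler):
    """File handler for GERB L2 HR HDF5 files (illustrative)."""

    def get_dataset(self, ds_id, ds_info):
        """Load a dataset lazily and attach its metadata."""
        hdf5_path = ds_info["path"]  # hypothetical key defined in the reader YAML
        data = self[hdf5_path]  # lazy, dask-backed xarray.DataArray
        data.attrs.update(ds_info)
        return data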

Comment on lines 99 to 109
def get_area_def(self, dsid):
"""Area definition for the GERB product"""

if abs(self.ssp_lon) < 1e-6:
return get_area_def("msg_seviri_fes_9km")
elif abs(self.ssp_lon - 9.5) < 1e-6:
return get_area_def("msg_seviri_fes_9km")
elif abs(self.ssp_lon - 45.5) < 1e-6:
return get_area_def("msg_seviri_iodc_9km")
else:
raise ValueError

djhoese (Member):

Do the GERB files not have geolocation information internally? It'd be best if we didn't have to depend on the builtin Satpy areas.

pdebuyl (Contributor, Author):

OK, I didn't know that was preferred. I actually started with a proj config line; would this be better? There are several approaches in the existing readers.

In any case, the grid for GERB is fixed for the lifetime of the instrument, so the only variable parameter is the SSP longitude.

djhoese (Member):

> the grid for GERB is fixed for the lifetime of the instrument

If I had a dollar for every time I've heard that something about an instrument isn't supposed to change...

I guess when I say preferred I mean it is my preference. Normally the data files provide their own geolocation information; otherwise I would consider the files at fault. You're putting the data in the files, so describe it. In some readers we've had to hardcode geolocation/area information, but only as a last resort. I guess what you've done here is the best we're going to get.

pdebuyl (Contributor, Author):

In principle, we can provide an HDF5 file, and we do when releasing a full dataset. For users wishing to download a short time series, this would be problematic, however :-/

djhoese (Member):

In the NetCDF CF realm, having data files provide the projection information shouldn't take more than tens or maybe 100 bytes (e.g. grid_mapping variables). There would also need to be x and y variables for the pixel coordinates, so there is some extra weight to adding that information.

What you've done here is fine.
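
For reference, a minimal sketch of what such CF-style projection metadata looks like when written with netCDF4; the parameter values are placeholders, not GERB's actual geolocation:

import netCDF4

nc = netCDF4.Dataset("example.nc", "w")
nc.createDimension("y", 1237)
nc.createDimension("x", 1237)
# Scalar grid_mapping variable: a few bytes of projection metadata.
gm = nc.createVariable("geostationary", "i4")
gm.grid_mapping_name = "geostationary"
gm.longitude_of_projection_origin = 0.0  # placeholder SSP longitude
gm.perspective_point_height = 35785831.0
gm.semi_major_axis = 6378169.0
gm.semi_minor_axis = 6356583.8
# The "extra weight": x/y coordinate variables for the pixel locations.
x = nc.createVariable("x", "f8", ("x",))
y = nc.createVariable("y", "f8", ("y",))
x.standard_name = "projection_x_coordinate"
y.standard_name = "projection_y_coordinate"
flux = nc.createVariable("thermal_flux", "f4", ("y", "x"))
flux.grid_mapping = "geostationary"  # links the data variable to the projection
nc.close()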

@@ -0,0 +1,31 @@
reader:
name: gerb_l2_hr_h5

djhoese (Member):

Is there a non-"HR" version of these files? Are these "official" products? Are there other L2 products that aren't HDF5? I'm wondering if we could just call this reader gerb_l2 or gerb_l2_h5 or something shorter?

pdebuyl (Contributor, Author):

In practice, the L2 HR (9km) product is the one we recommend to users. We have a so-called ARG L1.5 (50 km) product and a BARG L2 (50km). Those are the official products of the GERB instrument (RAL Space, Imperial College and the Royal Meteorological Institute of Belgium produce the data on behalf of EUMETSAT).

There might be an issue if/when we produce "edition 2" data in the future, which could be in NetCDF.

I think that gerb_l2_h5 should be fine though :-)

djhoese (Member):

Yeah, that makes sense. If there is a chance of NetCDF versions in the future then maybe the h5 is the best for now. We could always try to throw support for all the files in a generic gerb_l2 reader, but maybe that's a little too optimistic about the similarities of the files (especially when some of them don't exist yet).

Comment on lines 48 to 51
ds_min = ds[...].min()
if ds_min < 0:
mask = ds == ds_min
ds_real[mask] = np.nan

djhoese (Member):

I'll mention other ideas for dask-ification in my main review comment, but wanted to point out that this is very inefficient and if/when it is done with dask arrays it will be even more inefficient. It was hard for me to understand the product guide regarding fill values, but it looks like the fill or "Error" value is always the minimum value of the in-file data type. Does that sound right?

In that case, can we hardcode this fill value in the YAML or look it up using something like np.iinfo(ds.dtype).min (see the sketch below)? Or are there attributes in the file that tell us this information? That way we don't have to look at every pixel just to determine the minimum value that we use for masking. Also, why does the minimum value have to be < 0 to be used for masking?
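
A sketch of that dtype-based lookup on toy stand-in data:

import numpy as np

# Toy stand-in for an in-file integer dataset; not real GERB data.
ds = np.array([[-32768, 120], [340, -32768]], dtype=np.int16)
fill_value = np.iinfo(ds.dtype).min  # minimum of the in-file dtype: -32768
masked = np.where(ds == fill_value, np.nan, ds.astype(np.float64))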

pdebuyl (Contributor, Author):

The "error value" is not encoded in the file. I added a fill_value entry in the yaml and used that. Note that to do that masking at the integer step I modified the logic a tiny bit.

There is one data field in the GERB file, the cloud cover, stored as "uint8" for which the fill value is 255 and the "min" trick cannot be used. This is not relevant for the fluxes though.

@property
def end_time(self):
"""Get end time."""
return self.start_time + timedelta(minutes=14, seconds=59)

djhoese (Member):

Normally this end_time is the nominal time of the data and is usually pretty "human friendly". I'm guessing the repeat cycle of the instrument is 15 minutes? In that case could we make this 15 minutes instead of 14:59? Or in some cases we just make this equal to start_time if we can't get it from the file data.

simonrp84 (Member):

Would making the end time exactly 15 mins not present problems as it'll overlap with the start time of the next scan, though?

pdebuyl (Contributor, Author):

@simonrp84 that was my motivation (plus the fact that some other readers seem to do that)

djhoese (Member):

@pdebuyl What other readers? I tried to find it when I made my comment, but I was pretty sure @mraspaud and I defined end_times in Satpy as being exclusive. So start/end times being every 15 minutes would be fine because any downstream application should be interpreting the end time as happening the instant before the time specified.

pdebuyl (Contributor, Author):

By chance, I had a look at the hsaf_h5 reader (for inspiration, as it is another HDF5 reader) and that one uses 23 hours 59 minutes, hence my confusion. I'll put 15 minutes.
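
The resulting change to the property quoted above would then be:

from datetime import timedelta

@property
def end_time(self):
    """Get end time (exclusive: the start of the next nominal repeat cycle)."""
    return self.start_time + timedelta(minutes=15)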

djhoese (Member):

It looks like that was a suggestion from @pnuu here:

#2285 (comment)

Perhaps we need to discuss this further. I know the FCI folks and I had trouble in our SIFT application when the nice human-friendly times weren't used because it was awkward to represent the blocks of time in a GUI...but maybe that was more about using real observation/scanning times rather than off-by-one inclusive times versus exclusive times.

pdebuyl (Contributor, Author) commented Sep 13, 2023

Hi @djhoese thanks a lot for taking the time to review the reader. I'll proceed step by step, starting with linter issues.

codecov bot commented Sep 13, 2023

Codecov Report

Merging #2572 (39c7d56) into main (666dcaa) will increase coverage by 0.00%.
Report is 18 commits behind head on main.
The diff coverage is 94.48%.

@@           Coverage Diff           @@
##             main    #2572   +/-   ##
=======================================
  Coverage   94.91%   94.92%           
=======================================
  Files         351      352    +1     
  Lines       51215    51212    -3     
=======================================
  Hits        48611    48611           
+ Misses       2604     2601    -3     
Flag             Coverage Δ
behaviourtests    4.27% <0.00%>  (-0.02%) ⬇️
unittests        95.54% <94.48%> (+0.02%) ⬆️

Flags with carried forward coverage won't be shown.

Files                                            Coverage Δ
satpy/tests/reader_tests/test_gerb_l2_hr_h5.py   100.00% <100.00%> (ø)
satpy/readers/gerb_l2_hr_h5.py                    81.57% <81.57%>  (ø)

... and 9 files with indirect coverage changes

coveralls commented:

Pull Request Test Coverage Report for Build 6171738021

  • 20 of 51 (39.22%) changed or added relevant lines in 1 file are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage decreased (-0.06%) to 95.41%

Changes Missing Coverage         Covered Lines   Changed/Added Lines   %
satpy/readers/gerb_l2_hr_h5.py   20              51                    39.22%
Totals Coverage Status
Change from base Build 6146020463: -0.06%
Covered Lines: 48746
Relevant Lines: 51091

💛 - Coveralls

pdebuyl (Contributor, Author) commented Sep 14, 2023

Hey @djhoese

I addressed several issues but not all. What remains is the following:

  1. The area definition: if the SEVIRI grids remain in place, which I guess will be the case, is it an issue to use those area definitions?
  2. I prefer to keep gerb_l2_hr_h5 for the name if possible.
  3. I removed the call to .min(); I hope the current version is cleaner.
  4. Regarding the choice between 14 minutes 59 seconds and 15 minutes, I don't know what is preferred.

There is timing data in the files as well; I could parse that if it is better for Satpy. I attach an example of the data below.

HDF5 "G4_SEV4_L20_HR_SOL_TH_20190808_201500_V010.hdf" {
DATASET "/Times/Time (per row)" {
   DATATYPE  H5T_STRING {
      STRSIZE 22;
      STRPAD H5T_STR_NULLTERM;
      CSET H5T_CSET_ASCII;
      CTYPE H5T_C_S1;
   }
   DATASPACE  SIMPLE { ( 1237 ) / ( 1237 ) }
   DATA {
   (0): "20190808 20:27:39.400", "20190808 20:27:38.799",
   (2): "20190808 20:27:38.198", "20190808 20:27:37.598",
   (4): "20190808 20:27:36.997", "20190808 20:27:36.396",
   (6): "20190808 20:27:35.796", "20190808 20:27:35.195",
   (8): "20190808 20:27:34.594", "20190808 20:27:33.994",
...
   (1234): "20190808 20:15:18.201", "20190808 20:15:17.600",
   (1236): "20190808 20:15:17.000"
   }
}
}
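
For reference, a sketch of how those per-row times could be parsed, should the file's own timing data ever be preferred over the nominal slot (dataset name taken from the dump above):

from datetime import datetime

import h5py

with h5py.File("G4_SEV4_L20_HR_SOL_TH_20190808_201500_V010.hdf", "r") as h5f:
    raw = h5f["Times/Time (per row)"][...]  # 1237 null-terminated ASCII strings

# h5py returns fixed-length strings as bytes; decode and parse them.
times = [datetime.strptime(t.decode("ascii"), "%Y%m%d %H:%M:%S.%f") for t in raw]
start_time, end_time = min(times), max(times)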

djhoese (Member) commented Sep 14, 2023

  1. SEVIRI area definitions are fine.
  2. Reader name is fine. I'd prefer to drop the _hr_, but it's your reader so you probably know better than I do.
  3. Yes, cleaner. I made a comment about future dask-friendly stuff.
  4. For time, you could parse the file, but for such a simple property maybe just do 15 minutes. If you or someone else has examples of readers that do the equivalent of 14:59 then we can open the discussion to other pytroll developers in Slack. I'm 95% sure that it should be 15 minutes though.

djhoese (Member) commented Sep 14, 2023

I think overall I'd like to see the use of the HDF5 utility file handler base class and tests added.

pdebuyl (Contributor, Author) commented Sep 14, 2023

Hi David, at this point I believe the most important thing is to add tests. Do you have a good example for me to check?

djhoese (Member) commented Sep 15, 2023

> Hi David, at this point I believe the most important thing is to add tests. Do you have a good example for me to check?

@pdebuyl it depends if you're going to use the HDF5 utility class. You basically have two reasonable options when writing tests for readers:

  1. Create a real file on disk that is very similar to the real data files. Write your tests and point to those files. This is a relatively new practice of Satpy developers, but it catches so many more errors since it doesn't require awkward mocking of low-level file reading functionality...but it does require a lot of work for binary file types. HDF5 and NetCDF4 aren't that bad.
  2. The utility class has a helper test class as well and can be seen being used in the viirs_sdr reader tests and many others. It basically mocks the file opening logic to say "here's a dictionary of variables and metadata that you would have read from the file". This is arguably not as "strong" of a way of doing the tests since we aren't actually testing all of the on-disk file reading, but is nice that all the data is in-memory.

Whichever way you'd like to go with it, I can try to point you to more examples; a sketch of option 1 follows.
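
A minimal sketch of option 1, assuming an illustrative group/dataset layout (the exact GERB structure differs):

import h5py
import numpy as np
import pytest


@pytest.fixture()
def fake_gerb_file(tmp_path):
    """Write a small on-disk HDF5 file mimicking the product structure."""
    fname = tmp_path / "G1_SEV2_L20_HR_SOL_TH_20120621_101500_ED01.hdf"
    with h5py.File(fname, "w") as h5f:
        # Dataset path and attribute value are illustrative assumptions.
        flux = h5f.create_dataset("Radiometry/Thermal Flux",
                                  data=np.zeros((1237, 1237), dtype=np.int16))
        flux.attrs["Quantisation Factor"] = np.float64(0.25)
    return fname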

pdebuyl (Contributor, Author) commented Sep 18, 2023

I changed to the HDF5 base reader class; that was quite straightforward (at first I feared not being able to access Dataset attributes, but that was fine). I'll check the tests, preferably with an actual file, as the mock tests confuse me a bit (some extra abstraction there).

pdebuyl (Contributor, Author) commented Oct 4, 2023

Hello @djhoese, I added a test file for the reader. The CI fails, but it is apparently related to hdf/netcdf package installs under Windows; should I care about it?

Regarding the test, I create a real HDF5 file containing fake data. I dumped an actual file into a hierarchy of dicts and used that to generate code that re-writes a file with a similar structure. If this is useful, I can share the code that I wrote for this (it does not take into account HDF5 references and custom dtypes).

djhoese (Member) commented Oct 4, 2023

Merge with upstream main and the CI failure related to building the environment should go away (I fixed it and merged it a couple hours ago).

I'll have to take a look at your code, but what you've said seems reasonable.

Comment on lines +28 to +31
@pytest.fixture(scope="session")
def gerb_l2_hr_h5_dummy_file(tmp_path_factory):
"""Create a dummy HDF5 file for the GERB L2 HR product."""
filename = tmp_path_factory.mktemp("data") / FNAME

djhoese (Member):

The code quality checkers are a little upset with how big this function is. How much of this is used in the actual reading code? Is this a complete copy of the file's structure? If it isn't all used you could remove it from the test. A lot of it also looks like it could go into a function (copy C_S1, set size, create variable, write, etc.). Maybe even a for loop. Thoughts?

pdebuyl (Contributor, Author):

Indeed, this is code generated by a routine that will duplicate some instructions. A lot actually. And very little of the file content is currently used in the code. Basically, the reader loads the most used data and makes sure that the geolocation is correct.

pdebuyl (Contributor, Author) commented Oct 4, 2023

I rebased onto main, refactored the code a bit, removed a lot of the fake file content, and added the datasets {Solar,Thermal} Radiance to the reader.

I am ok with it, but now I can't plot resampled data. I get weird errors like "'DataArray' object has no attribute 'reshape'" or "'DataArray' object has no attribute 'values'". This seemed to me as if the data were no longer available to the scene, so I tested the diff below and it works again.

Note that loading the data as an xarray.DataArray worked fine before the rebase onto main, so something in the management of the scene's underlying data might have changed.

diff --git a/satpy/readers/gerb_l2_hr_h5.py b/satpy/readers/gerb_l2_hr_h5.py
index 4dad36f0e..bf89f3aa3 100644
--- a/satpy/readers/gerb_l2_hr_h5.py
+++ b/satpy/readers/gerb_l2_hr_h5.py
@@ -29,6 +29,7 @@ from datetime import timedelta
 
 import dask.array as da
 import h5py
+import numpy as np
 import xarray as xr
 
 from satpy.readers.hdf5_utils import HDF5FileHandler
@@ -43,7 +44,7 @@ def gerb_get_dataset(hfile, name, ds_info):
 
     The routine takes into account the quantisation factor and fill values.
     """
-    ds = xr.DataArray(hfile[name][...])
+    ds = hfile[name][...]
     ds_attrs = hfile[name].attrs
     ds_fill = ds_info['fill_value']
     fill_mask = ds != ds_fill
@@ -51,7 +52,7 @@ def gerb_get_dataset(hfile, name, ds_info):
         ds = ds*ds_attrs['Quantisation Factor']
     else:
         ds = ds*1.
-    ds = ds.where(fill_mask)
+    ds = np.where(fill_mask, ds, np.nan)
     return ds

djhoese (Member) commented Oct 4, 2023

Can you give me a full traceback for one of those errors you're seeing?

pdebuyl (Contributor, Author) commented Oct 4, 2023

(py3.11) pierre@pc:~/RS_DATA/GERB$ python sample_scene.py  --region maspalomas
/opt/py3.11/lib/python3.11/site-packages/dask/array/core.py:3470: UserWarning: Passing an object to dask.array.from_array which is already a Dask collection. This can lead to unexpected behavior.
  warnings.warn(
/opt/py3.11/lib/python3.11/site-packages/dask/array/core.py:3470: UserWarning: Passing an object to dask.array.from_array which is already a Dask collection. This can lead to unexpected behavior.
  warnings.warn(
Traceback (most recent call last):
  File "/home/pierre/RS_DATA/GERB/sample_scene.py", line 21, in <module>
    plt.imshow(local_scene['Thermal Flux'].to_numpy(), transform=crs, extent=crs.bounds, origin='upper', cmap=plt.cm.hot)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/py3.11/lib/python3.11/site-packages/xarray/core/dataarray.py", line 776, in to_numpy
    return self.variable.to_numpy()
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/py3.11/lib/python3.11/site-packages/xarray/core/variable.py", line 1299, in to_numpy
    data, *_ = chunkmanager.compute(data)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/py3.11/lib/python3.11/site-packages/xarray/core/daskmanager.py", line 70, in compute
    return compute(*data, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/py3.11/lib/python3.11/site-packages/dask/base.py", line 628, in compute
    results = schedule(dsk, keys, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/py3.11/lib/python3.11/site-packages/xarray/core/common.py", line 278, in __getattr__
    raise AttributeError(
AttributeError: 'DataArray' object has no attribute 'reshape'. Did you mean: 'shape'?

Sample code

import satpy
import numpy as np
import matplotlib.pyplot as plt
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--region', default="FD")
parser.add_argument('--save', action="store_true")
args = parser.parse_args()

scene = satpy.Scene(reader="gerb_l2_hr_h5", filenames=["G1_SEV2_L20_HR_SOL_TH_20120621_101500_ED01.hdf"])
scene.load(['Thermal Flux', 'Solar Flux'])

if args.region != "FD":
    local_scene = scene.resample(args.region)
else:
    local_scene = scene

crs = local_scene['Thermal Flux'].attrs['area'].to_cartopy_crs()
ax = plt.axes(projection=crs)
ax.coastlines()
ax.gridlines()
ax.set_global()
plt.imshow(local_scene['Thermal Flux'].to_numpy(), transform=crs, extent=crs.bounds, origin='upper', cmap=plt.cm.hot)
plt.colorbar(orientation='horizontal')
plt.title("GERB Thermal Flux [W/m²]")

if args.save:
    plt.savefig(f'GERB_OLR_201206211015_{args.region}.png')
    plt.clf()
else:
    plt.figure()

ax = plt.axes(projection=crs)
ax.coastlines()
ax.gridlines()
ax.set_global()
plt.imshow(local_scene['Solar Flux'].to_numpy(), transform=crs, extent=crs.bounds, origin='upper', cmap=plt.cm.hot)
plt.colorbar(orientation='horizontal')
plt.title("GERB Solar Flux [W/m²]")

if args.save:
    plt.savefig(f'GERB_RSW_201206211015_{args.region}.png')
else:
    plt.show()

Invoked as python sample_scene.py --region maspalomas (any region will do, of course). Without the --region option there is no error. If you want to check, you can find sample data here: https://gerb.oma.be/public/pdebuyl/sample_data/


The routine takes into account the quantisation factor and fill values.
"""
ds = xr.DataArray(hfile[name][...])

djhoese (Member):

This is one reason you're getting the dask errors and warnings you're seeing. The result of hfile[name] (I think) should be an xarray.DataArray already. And don't do [...] or you'll load it into memory which is not what we want anymore now that the reader is trying to be dask friendly. See my other comment for more info...
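
A hedged sketch of the dask-friendly version being suggested, based on the diff above (the attribute check is an assumption):

def gerb_get_dataset(hfile, name, ds_info):
    """Apply the quantisation factor and fill value without eager reads."""
    ds = hfile[name]  # HDF5FileHandler yields a lazy, dask-backed DataArray
    ds_attrs = ds.attrs  # HDF5 attributes are carried on the DataArray
    fill_mask = ds != ds_info['fill_value']
    if 'Quantisation Factor' in ds_attrs:
        ds = ds * ds_attrs['Quantisation Factor']
    else:
        ds = ds * 1.0  # promote integers to float so NaN filling works
    return ds.where(fill_mask)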

pdebuyl (Contributor, Author):

Ok, got it. I am used to h5py and didn't realise that HDF5FileHandler would already take care of that (I also had a look at the hsaf hdf5 reader, which does something similar).

pdebuyl (Contributor, Author) commented Oct 5, 2023

Hi @djhoese thanks for the hand-holding :-)

The xarray dataset handling seems much cleaner now. Tests and the example program pass on my side; I'll be happy to resolve any further issues if necessary.

djhoese (Member) left a comment:

The test file creation looks a lot better. There's still a lot of duplicate code there, but maybe I'm not looking closely enough at the details. A function that takes the variable name, the array dtype, and the quantisation factor (always f64) could replace 4 or more variable creations, right? Even the ones that need Offset could call this additional helper function and then add Offset afterward; see the sketch after this comment.

If you're out of time and won't be able to get to this let me know.
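
A sketch of such a helper (names and signature illustrative, not the merged test code):

import h5py
import numpy as np


def make_dummy_variable(h5f, name, dtype, quantisation_factor=None, offset=None):
    """Create one dummy dataset with the attributes the reader expects."""
    ds = h5f.create_dataset(name, data=np.zeros((1237, 1237), dtype=dtype))
    if quantisation_factor is not None:
        ds.attrs['Quantisation Factor'] = np.float64(quantisation_factor)
    if offset is not None:
        ds.attrs['Offset'] = np.float64(offset)
    return ds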

pdebuyl (Contributor, Author) commented Oct 5, 2023

I applied your recommended simplification to the reader (getting rid of __init__ and bypassing the use of hfile in gerb_get_dataset).

Regarding the duplicated code in the test, I don't know if the refactoring is worth it.

djhoese (Member) left a comment:

Looks good! Thanks.

djhoese changed the title from "add reader for GERB high-resolution HDF5 files" to "Add reader for GERB high-resolution HDF5 files" on Oct 6, 2023
djhoese merged commit 496d666 into pytroll:main on Oct 6, 2023 (18 of 20 checks passed)

pdebuyl (Contributor, Author) commented Oct 6, 2023

Wiii, thanks @djhoese

pdebuyl deleted the add_gerb_l2_hr_h5 branch on October 6, 2023 at 06:46