Image Functions #704

Open

minhtien-trinh opened this issue Sep 9, 2024 · 2 comments

Comments

minhtien-trinh commented Sep 9, 2024

As discussed with @LucaMarconato @melonora and @josenimo, I think it would be great to extend the functionality and ease of use of spatialdata by adding a few functions. When I first started using spatialdata I ran into a few issues, like napari crashing due to image size, difficult image loading, and overall very laborious image handling. The functions I suggest implementing are as follows:

  1. Image loader for universal file formats such as JPG, PNG, TIFF, OME-TIFF
    -> from my testing, skimage.io.imread is able to handle all 4 of these formats; maybe a simple addition to Image2DModel or another simple function that loads any image into a dask array? (A sketch of such a loader is included after the memory-check example below.)

  2. Image size/available GPU memory check
    -> warn the user if the image size exceeds VRAM since big images may cause napari to stutter/crash
    -> @melonora suggested the vispy.gloo.gl package

  3. A Check/hook to verify that the image has been written to zarr before opening in napari

Example of a CPU/GPU memory check (imports added for completeness):

import dask_image.imread
import psutil

def estimate_memory_requirements(dask_array):
    # Calculate total number of elements in the array
    num_elements = dask_array.size
    # Determine the size of each element in bytes
    element_size = dask_array.dtype.itemsize
    # Total memory requirement in bytes
    total_memory = num_elements * element_size
    
    return total_memory

def check_system_resources(memory_required):
    # Check available RAM
    available_ram = psutil.virtual_memory().available
    
    # Assuming that we will need approximately the same amount of GPU memory
    try:
        import pynvml
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        available_gpu_memory = pynvml.nvmlDeviceGetMemoryInfo(handle).free
        pynvml.nvmlShutdown()
    except ImportError:
        available_gpu_memory = None  # pynvml is not installed or GPU is not available
    
    ram_sufficient = memory_required <= available_ram
    gpu_sufficient = available_gpu_memory is None or memory_required <= available_gpu_memory
    
    return ram_sufficient, gpu_sufficient, available_ram, available_gpu_memory

def load_and_check_image(image_path):
    dask_array = dask_image.imread.imread(image_path)
    memory_required = estimate_memory_requirements(dask_array)
    
    ram_sufficient, gpu_sufficient, available_ram, available_gpu_memory = check_system_resources(memory_required)
    
    if not ram_sufficient:
        print(f"\U00002757 Warning: Not enough RAM. Required: {memory_required / (1024**3):.2f} GB, Available: {available_ram / (1024**3):.2f} GB")
    if not gpu_sufficient:
        print(f"\U00002757 Warning: Not enough GPU memory. Required: {memory_required / (1024**3):.2f} GB, Available: {available_gpu_memory / (1024**3):.2f} GB")

    if ram_sufficient and gpu_sufficient:
        print("\U00002705 System resources are sufficient to handle the image load.")
    else:
        print("\U0000274C System resources are insufficient to handle the image load. Downscaling recommended.")
    
    return dask_array
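
For point 1, a minimal sketch of what such a convenience loader could look like (the function name load_image is hypothetical and not an existing spatialdata API; this assumes dask-image is installed):

import dask_image.imread
from spatialdata.models import Image2DModel

def load_image(image_path, dims=("c", "y", "x"), **parse_kwargs):
    # Lazily read JPG/PNG/TIFF/OME-TIFF into a dask array.
    darr = dask_image.imread.imread(image_path)
    # dask_image returns a leading "page" axis; for a single RGB page the shape
    # is typically (1, y, x, c), so drop it and move channels to the front.
    if darr.ndim == 4 and darr.shape[0] == 1:
        darr = darr[0].transpose(2, 0, 1)
    # Parse into the spatialdata image model, optionally passing scale_factors etc.
    return Image2DModel.parse(darr, dims=dims, **parse_kwargs)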

LucaMarconato commented Oct 3, 2024

Hi, thanks for tracking the discussion in a GitHub issue.

Image Loader for universal file formats such as JPG, PNG, TIFF, Ome-TIFF
-> from my testing skimage.io.imread is able to handle all 4 of these formats, maybe a simple addition to Image2DModel or another simple function that loads any image into a dask array?

Yes, that would be very convenient. But I think it is better to have this in spatialdata-io so we keep the models minimalistic. Currently we make an exception to this rule and allow parsing geojson files with the ShapesModel. I'd also move this to spatialdata-io, so that spatialdata only reads .zarr files and all the other extensions are handled by spatialdata-io. The rationale is to avoid the maintenance burden due to edge cases in different file extensions. On the other hand, the file extensions that you mentioned are very universal, so they could fit the image models. @giovp, comments on this?

Summary:

  • (proposing) adding IO convenience functions for common extensions in spatialdata-io
  • (considering) moving the .geojson parser to spatialdata-io so that spatialdata only deals with .zarr files

A Check/hook to verify that the image has been written to zarr before opening in napari
I wonder where we could put this check, because it doesn't just involve napari-spatialdata: every operation (spatialdata-plot, query operations, etc.) would be slow if large image data is not saved (unless the user really wants that for their specific use case). Maybe we could add this check as a private API in spatialdata and then have napari-spatialdata, spatialdata-plot and some spatialdata APIs operate on that. Alternatively, we could call this function and warn the user when print(sdata) is called; that is maybe better.

Summary:

  • add an internal API that warns the user that the data is not read from disk
  • consider either calling this API when print(sdata) is called, or in spatialdata-plot and napari-spatialdata. Probably the first is better.

The function would do the following:

  • check the image and labels (raster) elements;
  • the warning would be displayed only if an image/labels element:
    • is too big (checking .shape; here .chunks would not be important)
    • AND is not backed by a Zarr store (this information can be obtained by calling get_dask_backing_files and checking whether the backing files are a valid Zarr store). The user can disable the warning via a global flag.
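
A minimal sketch of what this internal check could look like (the helper name, flag, and threshold below are hypothetical; this assumes get_dask_backing_files is importable from the spatialdata top level, as referenced above):

import math
import warnings
from spatialdata import get_dask_backing_files

# hypothetical defaults: a global flag to disable the warning and a pixel-count threshold
WARN_ON_UNBACKED_RASTER = True
LARGE_RASTER_N_PIXELS = 2_000 * 2_000

def _warn_if_large_and_not_backed(element, element_name):
    # only single-scale elements (with a .shape) are considered in this sketch
    if not WARN_ON_UNBACKED_RASTER or not hasattr(element, "shape"):
        return
    if math.prod(element.shape) <= LARGE_RASTER_N_PIXELS:
        return
    backing_files = get_dask_backing_files(element)
    # treat the element as backed only if at least one backing file is a Zarr store
    if not any(".zarr" in str(f) for f in backing_files):
        warnings.warn(
            f"'{element_name}' is large and not backed by a Zarr store; "
            "consider writing it to disk before plotting or opening it in napari."
        )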

Image size/available GPU memory check
-> warn the user if the image size exceeds VRAM since big images may cause napari to stutter/crash
-> @melonora suggested the vispy.gloo.gl package
For the point above we need a number to know when an image/labels element is too big. Here I would recommend keeping things simple and just having a reasonable constant that the user can choose, rather than an automatic way to infer this number. The rationale is as follows:

  • not every machine has an NVIDIA graphics card
  • even if one has an NVIDIA graphics card, one may be using another graphics card, one may not be interested in visualization, or one may actually have less available memory because, for instance, most of the VRAM is taken up by another job
  • even if a machine has low RAM, one may want to compensate for this using swap memory or another strategy.

Also, I imagine such code would not be portable to the new Apple Silicon architecture, where there is no NVIDIA graphics card and the RAM/VRAM is unified memory handled by the OS.

Summary:

  • better to start by keeping things simple and have a single constant to determine what counts as "big"; in the future, consider some code to automatically infer this.
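
For example, the constant could simply be a module-level default that the user overrides at runtime (names below are hypothetical, not an existing spatialdata API):

# hypothetical user-adjustable threshold, in gigabytes
LARGE_RASTER_WARNING_GB = 4.0

def is_large(dask_array):
    # in-memory size of the fully materialized array, in GB
    size_gb = dask_array.size * dask_array.dtype.itemsize / 1024**3
    return size_gb > LARGE_RASTER_WARNING_GB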

@LucaMarconato

Final comment:

  • we should add a warning in the .parse() of raster models when the raster data has a large size but the user didn't specify the scale_factors argument or has large values for .chunks. That is likely a mistake and would lead to slow performance even if the data is written to disk.
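
A minimal sketch of where such a check could live inside .parse() (illustrative only, with a hypothetical threshold; not the actual implementation):

import math
import warnings

# hypothetical pixel-count threshold above which a single-scale image is considered large
LARGE_PARSE_N_PIXELS = 10_000 * 10_000

def _warn_on_large_unscaled_raster(data, scale_factors):
    # warn when a large raster is parsed without scale_factors (i.e. as a single scale)
    if math.prod(data.shape) > LARGE_PARSE_N_PIXELS and not scale_factors:
        warnings.warn(
            "The raster data is large but no scale_factors were provided; "
            "this will likely lead to slow performance even after the data is written to disk."
        )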
