Prevent write-out path modification in s3Mode. Add documentation about installation and direct S3 access to readme.
kgabor committed Apr 23, 2024
1 parent 97618f4 commit c5cb8f6
Showing 3 changed files with 93 additions and 54 deletions.
60 changes: 50 additions & 10 deletions README.md
@@ -2,10 +2,25 @@

# bigdataviewer-omezarr

Building and Installation
-------------------------

`bigdataviewer-omezarr` releases are available via the `AllenNeuralDynamics` (unlisted) Fiji update site.
In Fiji, go to Update... -> Advanced mode -> Manage Update Sites -> Add Unlisted Site. Enter name `AllenNeuralDynamics`
and URL `https://sites.imagej.net/AllenNeuralDynamics/`. After refreshing, `jars/bigdataviewer-omezarr.jar` will be
available for installation.

For more information on Fiji update sites, please visit the [imagej documentation](https://imagej.net/update-sites/setup).

Alternatively, a checkout of the git repository can be built with Maven:
```shell
mvn clean install
```
then copy the compiled snapshot version jar (e.g. `bigdataviewer-omezarr-0.2.2-SNAPSHOT.jar`)
from the local `target/` folder into Fiji's `jars/` folder.
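
For example, on Linux or macOS the copy step might look like the following (the jar version and the Fiji
installation path are placeholders; adjust them to your build and setup):

```shell
# Copy the freshly built jar into the Fiji installation
cp target/bigdataviewer-omezarr-0.2.2-SNAPSHOT.jar /path/to/Fiji.app/jars/
```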

OME-Zarr support
----------------

This package provides OME-Zarr reading support to bigdataviewer. In addition to the OME-NGFF json metadata, a
This package provides OME-Zarr reading support to bigdataviewer and BigStitcher. In addition to the OME-NGFF json metadata, a
bigdataviewer `dataset.xml` dataset definition is required that refers to the image format `bdv.multimg.zarr`. Currently,
two basic loader classes are provided: `XmlIoZarrImageLoader` and `ZarrImageLoader`.
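
A minimal sketch of programmatic use, adapted from the test `main()` method of `XmlIoZarrImageLoader` in this
repository (the wrapper class name and the dataset path are placeholders):

```java
import bdv.ViewerImgLoader;
import bdv.ViewerSetupImgLoader;
import bdv.spimdata.SpimDataMinimal;
import bdv.spimdata.XmlIoSpimDataMinimal;
import mpicbg.spim.data.SpimDataException;

public class OpenZarrDatasetExample {
    public static void main(final String[] args) throws SpimDataException {
        // The registered XmlIoZarrImageLoader is selected via format="bdv.multimg.zarr"
        // in dataset.xml and constructs the ZarrImageLoader behind the scenes.
        final SpimDataMinimal spimData = new XmlIoSpimDataMinimal().load("/path/to/dataset.xml");
        final ViewerImgLoader imgLoader = (ViewerImgLoader) spimData.getSequenceDescription().getImgLoader();
        final ViewerSetupImgLoader<?, ?> setupImgLoader = imgLoader.getSetupImgLoader(0);
        // Query the first timepoint of setup 0, as in the repository's test main()
        System.out.println("numDimensions = " + setupImgLoader.getImage(0).numDimensions());
    }
}
```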

@@ -23,20 +38,25 @@ The OME-NGFF layouts supported by this package are limited. Most notably:
* The `unit` fields of the `axes` definitions in `.zattrs` are ignored. The same units are implicitly assumed across images. Use
  `voxelSize` in `dataset.xml` to define physical units.

* For multiresolution images, the first dataset defined in the `.zattrs` must be the raw (finest) resolution. Anisotropy
* For multi-resolution images, the first dataset defined in the `.zattrs` must be the raw (finest) resolution. Anisotropy
defined for the raw resolution in `.zattrs` is ignored; use `voxelSize` in `dataset.xml` instead. Downsampling factors
are determined from `coordinateTransformations` compared to the raw resolution.
are determined from `coordinateTransformations` compared to the raw resolution (e.g. a level whose scale is twice
the raw scale yields a downsampling factor of 2). Only factor-of-2 downsampling sequences have been tested.

* Multiple images must be located in separate zgroup folders. Only one image per top level `.zattrs` file allowed
* Multiple images must be located in separate zgroup folders. Only one image per top-level `.zattrs` file is allowed
(multiple entries in the `multiscales` section are disregarded).

bdv.multimg.zarr format
-----------------------

The `dataset.xml` file should define the location of the zarr image for each _view_ in the following format:
The `dataset.xml` file should define the location of the zarr image for each _view_ in the following format.

### Local filesystem access


```xml
<ImageLoader format="bdv.multimg.zarr" version="1.0">
<!-- type="relative" invokes path resolution that is relative to the xml file location itself. -->
<zarr type="absolute">/absolute_path_to_zarr_root</zarr>
<zgroups>
<zgroup setup="0" timepoint="0">
@@ -47,13 +67,33 @@ The `dataset.xml` file should define the location of the zarr image for each _vi
</ImageLoader>
```
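
As noted in the comment above, `type="relative"` resolves the zarr root relative to the folder that contains
`dataset.xml`. A minimal variant of the `<zarr>` element for that case (the folder name is a placeholder):

```xml
<!-- Resolved relative to the location of dataset.xml itself -->
<zarr type="relative">my_image.zarr</zarr>
```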

Build instructions
------------------
### Direct S3 access

Build and install:
Direct AWS S3 access is also supported. It is enabled by the presence of the `<s3bucket>` tag in the XML:

```shell
mvn clean install
```xml
<ImageLoader format="bdv.multimg.zarr" version="1.0">
<s3bucket>aind-open-data</s3bucket>
<!-- Use type="absolute"; the path must not start with a leading slash. -->
<zarr type="absolute">prefix_to_zarr_root_within_s3_bucket</zarr>
<zgroups>
<zgroup setup="0" timepoint="0">
<path>relative_path_from_zarr_root</path>
</zgroup>
...
</zgroups>
</ImageLoader>
```

S3 credentials must be accessible to the default AWS authentication chain. In most cases they can be provided via
environment variables. If none are found there is a fallback to anonymous access, but even for open data buckets
authentication is usually faster with explicit credentials.
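
For example, static credentials can be supplied through the standard AWS environment variables (the values are
placeholders):

```shell
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
```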

Also, `AWS_REGION` and `AWS_DEFAULT_REGION` must be set.

```shell
export AWS_REGION=us-west-2
export AWS_DEFAULT_REGION=us-west-2
```

Acknowledgement
7 changes: 7 additions & 0 deletions src/main/java/bdv/img/omezarr/MultiscaleImage.java
@@ -202,6 +202,13 @@ public ZarrKeyValueReader build() throws N5Exception {
return new N5ZarrReader(multiscalePath);
}
}
/**
* Getter for S3 mode.
* @return {@code true} if S3 mode is active.
*/
public boolean isS3Mode() {
return s3Mode;
}

/**
* New reader builder for a sub-image.
80 changes: 36 additions & 44 deletions src/main/java/bdv/img/omezarr/XmlIoZarrImageLoader.java
@@ -6,13 +6,13 @@
* %%
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
*
* 1. Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
@@ -33,15 +33,11 @@
import bdv.ViewerSetupImgLoader;
import bdv.spimdata.SpimDataMinimal;
import bdv.spimdata.XmlIoSpimDataMinimal;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.AnonymousAWSCredentials;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

import java.io.File;
import java.util.Map;
import java.util.TreeMap;

import mpicbg.spim.data.SpimDataException;
import mpicbg.spim.data.XmlHelpers;
import mpicbg.spim.data.generic.sequence.AbstractSequenceDescription;
@@ -53,27 +49,28 @@
import static mpicbg.spim.data.XmlHelpers.loadPath;
import static mpicbg.spim.data.XmlKeys.IMGLOADER_FORMAT_ATTRIBUTE_NAME;

@ImgLoaderIo( format = "bdv.multimg.zarr", type = ZarrImageLoader.class )
public class XmlIoZarrImageLoader implements XmlIoBasicImgLoader<ZarrImageLoader>
{
@ImgLoaderIo(format = "bdv.multimg.zarr", type = ZarrImageLoader.class)
public class XmlIoZarrImageLoader implements XmlIoBasicImgLoader<ZarrImageLoader> {
@Override
public Element toXml(final ZarrImageLoader imgLoader, final File basePath )
{
final Element e_imgloader = new Element( "ImageLoader" );
public Element toXml(final ZarrImageLoader imgLoader, final File basePath) {
final Element e_imgloader = new Element("ImageLoader");
final MultiscaleImage.ZarrKeyValueReaderBuilder readerBuilder = imgLoader.getZarrKeyValueReaderBuilder();
e_imgloader.setAttribute( IMGLOADER_FORMAT_ATTRIBUTE_NAME, "bdv.multimg.zarr" );
e_imgloader.setAttribute( "version", "1.0" );
e_imgloader.addContent(XmlHelpers.pathElement("zarr",
new File(readerBuilder.getMultiscalePath()), null));
if (readerBuilder.getBucketName()!=null)
{
e_imgloader.setAttribute(IMGLOADER_FORMAT_ATTRIBUTE_NAME, "bdv.multimg.zarr");
e_imgloader.setAttribute("version", "1.0");
if (readerBuilder.isS3Mode()) {
final Element e_bucket = new Element("s3bucket");
e_bucket.addContent(readerBuilder.getBucketName());
e_imgloader.addContent(e_bucket);
final Element e_zarr = new Element("zarr");
e_zarr.setAttribute("type", "absolute");
e_zarr.addContent(readerBuilder.getMultiscalePath());
e_imgloader.addContent(e_zarr);
} else {
e_imgloader.addContent(XmlHelpers.pathElement("zarr",
new File(readerBuilder.getMultiscalePath()), null));
}
final Element e_zgroups = new Element("zgroups");
for (final Map.Entry<ViewId, String> ze: imgLoader.getZgroups().entrySet())
{
for (final Map.Entry<ViewId, String> ze : imgLoader.getZgroups().entrySet()) {
final ViewId vId = ze.getKey();
final Element e_zgroup = new Element("zgroup");
e_zgroup.setAttribute("setup", String.valueOf(vId.getViewSetupId()));
@@ -88,48 +85,43 @@ public Element toXml(final ZarrImageLoader imgLoader, final File basePath )
}

@Override
public ZarrImageLoader fromXml(final Element elem, final File basePath, final AbstractSequenceDescription< ?, ?, ? > sequenceDescription )
{
final Element zgroupsElem = elem.getChild( "zgroups" );
final TreeMap<ViewId, String > zgroups = new TreeMap<>();
public ZarrImageLoader fromXml(final Element elem, final File basePath,
final AbstractSequenceDescription<?, ?, ?> sequenceDescription) {
final Element zgroupsElem = elem.getChild("zgroups");
final TreeMap<ViewId, String> zgroups = new TreeMap<>();
// TODO validate that sequenceDescription and zgroups have the same entries
for ( final Element c : zgroupsElem.getChildren( "zgroup" ) )
{
final int timepointId = Integer.parseInt( c.getAttributeValue( "timepoint" ) );
final int setupId = Integer.parseInt( c.getAttributeValue( "setup" ) );
final String path = c.getChild( "path" ).getText();
zgroups.put( new ViewId( timepointId,setupId ), path );
for (final Element c : zgroupsElem.getChildren("zgroup")) {
final int timepointId = Integer.parseInt(c.getAttributeValue("timepoint"));
final int setupId = Integer.parseInt(c.getAttributeValue("setup"));
final String path = c.getChild("path").getText();
zgroups.put(new ViewId(timepointId, setupId), path);
}
final Element s3Bucket = elem.getChild("s3bucket");
final MultiscaleImage.ZarrKeyValueReaderBuilder keyValueReaderBuilder;
if (s3Bucket==null)
{
final File zpath = loadPath( elem, "zarr", basePath );
if (s3Bucket == null) {
final File zpath = loadPath(elem, "zarr", basePath);
keyValueReaderBuilder = new MultiscaleImage.ZarrKeyValueReaderBuilder(zpath.getAbsolutePath());
}
else
{
} else {
// The `File` class should not be used for URI manipulation as it replaces slashes with backslashes on Windows
keyValueReaderBuilder = new MultiscaleImage.ZarrKeyValueReaderBuilder(s3Bucket.getText(),
elem.getChildText("zarr"));
}
return new ZarrImageLoader(keyValueReaderBuilder, zgroups, sequenceDescription);
}

public static void main( String[] args ) throws SpimDataException
{
public static void main(String[] args) throws SpimDataException {
// final String fn = "/home/gkovacs/data/davidf_zarr_dataset.xml";
final String fn = "/home/gabor.kovacs/data/davidf_zarr_dataset.xml";
// final String fn = "/home/gabor.kovacs/data/zarr_reader_test_2022-11-16/bdv_zarr_test3.xml";
// final String fn = "/Users/kgabor/data/davidf_zarr_dataset.xml";
final SpimDataMinimal spimData = new XmlIoSpimDataMinimal().load( fn );
final ViewerImgLoader imgLoader = ( ViewerImgLoader ) spimData.getSequenceDescription().getImgLoader();
final SpimDataMinimal spimData = new XmlIoSpimDataMinimal().load(fn);
final ViewerImgLoader imgLoader = (ViewerImgLoader) spimData.getSequenceDescription().getImgLoader();
final ViewerSetupImgLoader<?, ?> setupImgLoader = imgLoader.getSetupImgLoader(0);
int d = setupImgLoader.getImage(0).numDimensions();
setupImgLoader.getMipmapResolutions();
// BigDataViewer.open(spimData, "BigDataViewer Zarr Example", new ProgressWriterConsole(), ViewerOptions.options());
System.out.println( "imgLoader = " + imgLoader );
System.out.println( "setupimgLoader = " + setupImgLoader );
System.out.println("imgLoader = " + imgLoader);
System.out.println("setupimgLoader = " + setupImgLoader);
}
}

