diff --git a/.nojekyll b/.nojekyll
index 36708be..24ce2a4 100644
--- a/.nojekyll
+++ b/.nojekyll
@@ -1 +1 @@
-6686a5a9
\ No newline at end of file
+747e15ad
\ No newline at end of file
diff --git a/imglib2/stream-api/java/2022/10/30/streams/index.html b/imglib2/stream-api/java/2022/10/30/streams/index.html
new file mode 100644
index 0000000..4449bbf
--- /dev/null
+++ b/imglib2/stream-api/java/2022/10/30/streams/index.html
@@ -0,0 +1,14 @@
+
+
The recently released imglib2-6.3.0 adds support for Java Streams.
+
+
Access Img pixels as a Stream
+
The first addition is that every IterableRealInterval<T> (and sub-classes like IterableInterval, Img, …) can now provide (sequential or parallel) streams over its elements.
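public interface IterableRealInterval<T> extends RealInterval, Iterable<T> {
    ...
    Stream<T> stream();
    Stream<T> parallelStream();
}
Streams can be used, for example, to set all pixels of an Img to some value:
static <T extends Type<T>> void fill(Img<T> img, T value) {
    img.stream().forEach(t -> t.set(value));
}
to compute the sum of all values in an Img:
static double sum(Img<DoubleType> img) {
    return img.stream()
            .mapToDouble(DoubleType::get)
            .sum();
}
or to find the maximum value in an Img:
static double max(Img<DoubleType> img) {
    return img.stream()
            .mapToDouble(DoubleType::get)
            .max().getAsDouble();
}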
In particular, the latter two examples, where the terminal operation is some form of reduction, allow for more convenient parallelization than the alternatives. Computing the maximum value in parallel is as simple as
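static double max(Img<DoubleType> img) {
    return img.parallelStream()
            .mapToDouble(DoubleType::get)
            .max().getAsDouble();
}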
Doing the same with LoopBuilder currently requires parallelizing over chunks, collecting partial results into mutable holder objects, and implementing the reduction of the partial results into the final result.
+
+
+
Access Img values and positions as a Stream
+
A stream of only pixel values, without access to their positions, is rather limiting. For example, we are often interested in the location of the image maximum, not only its value. To achieve this, there is a new utility class net.imglib2.stream.Streams, with methods
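public static <T> Stream<RealLocalizableSampler<T>> localizable(IterableRealInterval<T> interval)
public static <T> Stream<RealLocalizableSampler<T>> localizing(IterableRealInterval<T> interval)
public static <T> Stream<LocalizableSampler<T>> localizable(IterableInterval<T> interval)
public static <T> Stream<LocalizableSampler<T>> localizing(IterableInterval<T> interval)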
that create Streams of LocalizableSampler<T> over the pixels of an IterableInterval (and analogously for IterableRealInterval). You can think of LocalizableSampler<T> as a Cursor<T> that cannot be moved, which is more or less what the default implementation does under the hood.
+
The localizable and localizing variants are analogous to cursor() and localizingCursor(). The Stream returned by localizable computes element locations only when asked to (with potentially higher per-element cost). The Stream returned by localizing always tracks element locations (in general faster, but potentially unnecessary).
+
For example, to fill image pixels with position-dependent values, we would use localizing, because we require the position of each element.
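static void fractal() {
    Img<UnsignedByteType> img = ArrayImgs.unsignedBytes(1000, 1000);
    Streams.localizing(img)
            .parallel()
            .forEach(s -> s.get().set(
                    mandelbrot(
                            (s.getDoublePosition(0) - 800) / 500,
                            (s.getDoublePosition(1) - 500) / 500)
            ));
    BdvFunctions.show(img, "mandelbrot", Bdv.options().is2D());
}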
Conversely, to compute the maximum value and its location in an image, we would use localizable, because we only ask for the position of one element (the maximum).
+
static void printMax(Img<IntType> img) {

    Optional<LocalizableSampler<IntType>> optionalMax =
            Streams.localizable(img)
                    .parallel()
                    .map(LocalizableSampler::copy)
                    .max(Comparator.comparingInt(c -> c.get().get()));
    LocalizableSampler<IntType> max = optionalMax.get();
    System.out.println("max position = " + Util.printCoordinates(max));
    System.out.println("max value = " + max.get().getInteger());
}
+
(In both cases, it is fine to choose the other variant, with no change in behaviour and only limited performance impact.)
+
+
+
Pitfalls
+
The T elements of the stream are proxies that are re-used, as usual in ImgLib2. Explicit copying operations must be added if stream elements are supposed to be retained (by stateful intermediate or terminal operations).
+
For example, to collect all DoubleType values between 0 and 1 into a list:
+
List< DoubleType > values = img.stream()
        .filter( t -> t.get() >= 0.0 && t.get() <= 1.0 )
        .map( DoubleType::copy ) // <-- this is important!
        .collect( Collectors.toList() );
+
The .map(DoubleType::copy) operation is necessary, otherwise the values list will contain many duplicates of the same (re-used proxy) DoubleType instance. The copy could also be done before the .filter(...) operation, but it’s better to do it as late as possible to avoid unnecessary creation of objects.
+
Likewise, the .map(LocalizableSampler::copy) in the printMax() example above is required. There is ongoing work to reduce the necessity of explicit copy operations. For example, in the printMax() example, the .max() operation of the stream could be overridden to only copy when a new maximum candidate is encountered.
+
Note that the current implementation already takes care not to re-use proxies across parallel execution, so the threads of a parallelStream() will not interfere.
+
+
+
Implementation details
+
+
Both pure-value streams and value-and-position streams make use of LocalizableSpliterator<T>. LocalizableSpliterator<T> extends Spliterator and Localizable, similar to how Cursor extends Iterator and Localizable.
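Schematically, that relationship can be sketched like this (an illustrative sketch only, not the complete interface declaration):
// Sketch: a LocalizableSpliterator combines element traversal (Spliterator)
// with localization (Localizable), analogous to Cursor combining Iterator
// and Localizable. The actual interface declares additional methods.
public interface LocalizableSpliterator<T> extends Spliterator<T>, Localizable {
    ...
}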
+
There are default LocalizableSpliterator<T> (and RealLocalizableSpliterator<T>) implementations based on Cursor<T> (and RealCursor<T>). Therefore, the new streams API works for every IterableRealInterval, without the need to touch existing implementations.
+
Additionally, the standard Img classes have custom LocalizableSpliterator<T> implementations that leverage knowledge of the underlying storage for improved performance.
+
+
+
+
Performance
+
It’s complicated…
+
On the one hand, replacing simple loops with stream operations incurs considerable performance overhead. This has nothing to do with ImgLib2; it is just a “feature” of the underlying machinery. This can be observed, for example, by benchmarking looping over an int[] array:
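int[] values = new int[4_000_000];

@Benchmark
public long benchmarkForLoopArray() {
    long count = 0;
    for (int value : values) {
        if (value > 127)
            ++count;
    }
    return count;
}

@Benchmark
public long benchmarkStreamArray() {
    return IntStream.of(values).filter(value -> value > 127).count();
}
The result is
Benchmark                                   Mode  Cnt   Score   Error  Units
ArrayStreamBenchmark.benchmarkForLoopArray  avgt   15   2,563 ± 0,026  ms/op
ArrayStreamBenchmark.benchmarkStreamArray   avgt   15  11,052 ± 0,022  ms/op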
That is, the Stream version is more than 4 times slower. Equivalent performance overhead can often be observed in ImgLib2 when replacing Cursor-based loops with Stream operations.
+
On the other hand, custom Spliterator implementations sometimes benefit more than Cursors from tuning to the underlying storage (because iteration is “internal” to the spliterator, while a cursor must return control to the caller after every visited element). For example, consider the following benchmark method (equivalent code for other variations omitted, see github for full details):
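@Benchmark
public long benchmarkStream() {
    long sum = Streams.localizable(img)
            .mapToLong(s -> s.get().get()
                    + s.getIntPosition(0)
                    + s.getIntPosition(1)
                    + s.getIntPosition(2)
            ).sum();
    return sum;
}
The result looks like
Benchmark                                                            (imgType)  Mode  Cnt   Score   Error  Units
LocalizableSamplerStreamBenchmark.benchmarkCursor                     ArrayImg  avgt   15  10,097 ± 0,046  ms/op
LocalizableSamplerStreamBenchmark.benchmarkLocalizingCursor           ArrayImg  avgt   15   3,846 ± 0,020  ms/op
LocalizableSamplerStreamBenchmark.benchmarkLocalizingStream           ArrayImg  avgt   15   3,337 ± 0,027  ms/op
LocalizableSamplerStreamBenchmark.benchmarkLocalizingParallelStream   ArrayImg  avgt   15   0,962 ± 0,583  ms/op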
That is, the performance difference between localizing and non-localizing Cursors is much more pronounced than the difference between Cursor loop and Stream. In fact, the Stream version is even faster than the localizingCursor version. On top of that, it is trivial to parallelize.
+
Finally, we have not yet investigated polymorphism effects. It is quite possible that these affect performance, and we may have to consider employing LoopBuilder’s class-copying mechanism to counter them.
+
In summary, I think one should not hesitate to use Streams where it makes sense from a readability and ease-of-use perspective. If performance is a critical concern, it is best to benchmark various approaches, because the behaviour is not easy to predict.
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/posts/2022-10-30-streams/mandelbrot.jpg b/posts/2022-10-30-streams/mandelbrot.jpg
new file mode 100644
index 0000000..eeb4387
Binary files /dev/null and b/posts/2022-10-30-streams/mandelbrot.jpg differ
diff --git a/search.json b/search.json
index 64022b6..5fc3c98 100644
--- a/search.json
+++ b/search.json
@@ -56,11 +56,53 @@
"text": "Recommended pattern for defining actions\nAction definitions in BigDataViewer and Mastodon are organized in the following way.\nA set of related actions is collected into a MyActions (for example) class. Action names and default shortcuts are defined as public static final constants, because they are used both for defining the actions, and for creating action Descriptions.\nThe actions contained in MyActions are described in a public static inner class Descriptions extends CommandDescriptionProvider.\nIn the Descriptions constructor, we give a scope for the respective library / tool. Ideally, the scope should be defined public static somewhere so that is can easily used outside the component to discover its actions. For example, BigDataViewer uses this scope. If another tool (BigStitcher, BigWarp, etc.) wants to include BDV shortcuts into its customizable keymaps, they can be easily discovered like that.\n\n\nCode\nimport org.scijava.plugin.Plugin;\nimport org.scijava.ui.behaviour.io.gui.CommandDescriptionProvider;\n\nfinal var DEMO_SCOPE = new CommandDescriptionProvider.Scope( \"tpietzsch.keymap-idiom\" );\nfinal var DEMO_CONTEXT = \"demo\";\n\npublic class MyActions\n{\n // define action name constants\n public static final String ACTION_A = \"Action A\";\n public static final String ACTION_B = \"Action B\";\n public static final String PREFERENCES = \"Preferences\";\n\n // define default shortcut constants\n public static final String[] ACTION_A_KEYS = { \"SPACE\" };\n\n public static final String[] ACTION_B_KEYS = { \"B\", \"shift B\" };\n public static final String[] PREFERENCES_KEYS = { \"meta COMMA\", \"ctrl COMMA\" };\n\n\n /*\n * Command descriptions for all provided commands\n */\n @Plugin( type = CommandDescriptionProvider.class )\n public static class Descriptions extends CommandDescriptionProvider\n {\n public Descriptions()\n {\n super( DEMO_SCOPE, DEMO_CONTEXT );\n }\n\n @Override\n public void getCommandDescriptions( final CommandDescriptions descriptions )\n {\n descriptions.add( ACTION_A, ACTION_A_KEYS, \"trigger Action A\" );\n descriptions.add( ACTION_B, ACTION_B_KEYS, \"trigger Action B\" );\n descriptions.add( PREFERENCES, PREFERENCES_KEYS, \"Show the Preferences dialog.\" );\n }\n }\n\n \n /**\n * Install into the specified {@link Actions}.\n */\n public static void install( final Actions actions, final MainPanel mainPanel, final PreferencesDialog preferencesDialog )\n {\n actions.runnableAction( () -> mainPanel.setText( \"Action A triggered\" ),\n ACTION_A, ACTION_A_KEYS );\n actions.runnableAction( () -> mainPanel.setText( \"Action B triggered\" ),\n ACTION_B, ACTION_B_KEYS );\n actions.runnableAction( () -> preferencesDialog.setVisible( !preferencesDialog.isVisible() ),\n PREFERENCES, PREFERENCES_KEYS );\n }\n}\n\n\nMyActions contains one install method that installs all actions into a provided Actions argument. Ideally, MyActions is stateless, and install method is static.\nThe remaining arguments to install are whatever is needed to create the actions. In the example, the mainPanel is needed to create “Action A” and “Action B”, and the preferencesDialog is needed to create the action to show/hide it.\nSo, MyActions.install(...) is called to install into a provided Actions. Usually every frame/panel in the application should have an Actions instance, which is linked to the KeymapManager so that keymap updates propagate correctly.\nAnd that’s it… This is currently the recommended way to structure and bundle action definitions. 
You can find the full example on github.\nSee BigDataViewer’s NavigationActions as an example “in the wild”. For behaviours (mouse gestures, etc.) the structure is the same. See BigDataViewer’s TransformEventHandler2D for example."
},
{
- "objectID": "posts/2022-09-27-n5-imglib2.html",
- "href": "posts/2022-09-27-n5-imglib2.html",
- "title": "How to work with the N5 API and ImgLib2?",
+ "objectID": "posts/2022-10-30-streams/2022-10-30-streams.html",
+ "href": "posts/2022-10-30-streams/2022-10-30-streams.html",
+ "title": "Adding Stream support to ImgLib2",
"section": "",
- "text": "In this notebook, we will learn how to work with the N5 API and ImgLib2.\nThe N5 API unifies block-wise access to potentially very large n-dimensional data over a variety of storage backends. Those backends currently are the simple N5 format on the local filesystem, Google Cloud and AWS-S3, the HDF5 file format and Zarr. The ImgLib2 bindings use this API to make this data available as memory cached lazy cell images through ImgLib2.\nThis notebook uses code and data examples from the ImgLib2 large data tutorial I2K2020 workshop (GitHub repository).\nFirst let’s add the necessary dependencies. We will load the n5-ij module which will transitively load ImgLib2 and all the N5 API modules that we will be using in this notebook. It will also load ImageJ which we will use to display data.\n\n\nCode\n%%loadFromPOM\n<repository>\n <id>scijava.public</id>\n <url>https://maven.scijava.org/content/groups/public</url>\n</repository>\n<dependency>\n <groupId>org.janelia.saalfeldlab</groupId>\n <artifactId>n5</artifactId>\n <version>2.5.1</version>\n</dependency>\n<dependency>\n <groupId>org.janelia.saalfeldlab</groupId>\n <artifactId>n5-ij</artifactId>\n <version>3.2.2</version>\n</dependency>\n\n\nNow, we register a simple renderer that uses ImgLib2’s ImageJ bridge and Spencer Park’s image renderer to render the first 2D slice of a RandomAccessibleInterval into the notebook. We also add a renderer for arrays and maps, because we want to list directories and attributes maps later.\n\n\nCode\nimport com.google.gson.*;\nimport io.github.spencerpark.jupyter.kernel.display.common.*;\nimport io.github.spencerpark.jupyter.kernel.display.mime.*;\nimport net.imglib2.img.display.imagej.*;\nimport net.imglib2.view.*;\nimport net.imglib2.*;\n\ngetKernelInstance().getRenderer().createRegistration(RandomAccessibleInterval.class)\n .preferring(MIMEType.IMAGE_PNG)\n .supporting(MIMEType.IMAGE_JPEG, MIMEType.IMAGE_GIF)\n .register((rai, context) -> Image.renderImage(\n ImageJFunctions.wrap(rai, rai.toString()).getBufferedImage(),\n context));\n\ngetKernelInstance().getRenderer().createRegistration(String[].class)\n .preferring(MIMEType.TEXT_PLAIN)\n .supporting(MIMEType.TEXT_HTML, MIMEType.TEXT_MARKDOWN)\n .register((array, context) -> Text.renderCharSequence(Arrays.toString(array), context));\n\ngetKernelInstance().getRenderer().createRegistration(long[].class)\n .preferring(MIMEType.TEXT_PLAIN)\n .supporting(MIMEType.TEXT_HTML, MIMEType.TEXT_MARKDOWN)\n .register((array, context) -> Text.renderCharSequence(Arrays.toString(array), context));\n\ngetKernelInstance().getRenderer().createRegistration(Map.class)\n .preferring(MIMEType.TEXT_PLAIN)\n .supporting(MIMEType.TEXT_HTML, MIMEType.TEXT_MARKDOWN)\n .register((map, context) -> Text.renderCharSequence(map.toString(), context));\n\n\nWe will now open N5 datasets from some sources as lazy-loading ImgLib2 cell images. For opening the N5 readers, we will use the helper class N5Factory which parses the URL and/ or some magic byte in file headers to pick the right reader or writer for the various possible N5 backends. 
If you know which backend you are using, you should probably use the appropriate implementation directly, it’s not difficult.\n\n\nCode\nimport ij.*;\nimport net.imglib2.converter.*;\nimport net.imglib2.type.numeric.integer.*;\nimport org.janelia.saalfeldlab.n5.*;\nimport org.janelia.saalfeldlab.n5.ij.*;\nimport org.janelia.saalfeldlab.n5.imglib2.*;\n\n/* make an N5 reader, we start with a public container on AWS S3 */\nfinal var n5Url = \"https://janelia-cosem.s3.amazonaws.com/jrc_hela-2/jrc_hela-2.n5\";\nfinal var n5Group = \"/em/fibsem-uint16\";\nfinal var n5Dataset = n5Group + \"/s4\";\nfinal var n5 = new N5Factory().openReader(n5Url);\n\n/* open a dataset as a lazy loading ImgLib2 cell image */\nfinal RandomAccessibleInterval<UnsignedShortType> rai = N5Utils.open(n5, n5Dataset);\n\n/* This is a 3D volume, so let's show the center slice */\nViews.hyperSlice(rai, 2, rai.dimension(2) / 2);\n\n\nlog4j:WARN No appenders could be found for logger (com.amazonaws.auth.AWSCredentialsProviderChain).\nlog4j:WARN Please initialize the log4j system properly.\nlog4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.\n\n\nCould not load AWS credentials, falling back to anonymous.\n\n\n\n\n\n\n\n\n\nThat’s a bit low on contrast, let’s make it look like TEM, and let’s show a few of those hyperslices through the 3D volume:\n\n\nCode\nvar raiContrast = Converters.convert(\n rai,\n (a, b) -> b.setReal(Math.max(0, Math.min(255, 255 - 255 * (a.getRealDouble() - 26000) / 6000))),\n new UnsignedByteType());\ndisplay(Views.hyperSlice(raiContrast, 2, rai.dimension(2) / 10 * 4), \"image/jpeg\");\ndisplay(Views.hyperSlice(raiContrast, 2, rai.dimension(2) / 2), \"image/jpeg\");\ndisplay(Views.hyperSlice(raiContrast, 2, rai.dimension(2) / 10 * 6), \"image/jpeg\");\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n6e32749d-48d5-4c52-be9b-41c43bae02f4\n\n\nWe can list the attributes and their types of every group or dataset, and read any of them into matching types:\n\n\nCode\nvar groupAttributes = n5.listAttributes(n5Group);\nvar datasetAttributes = n5.listAttributes(n5Dataset);\n\ndisplay(\n \"**\" + n5Group + \"** attributes are ```\" +\n groupAttributes.toString().replace(\", \", \",\\n\").replace(\"{\", \"{\\n\") + \"```\",\n \"text/markdown\");\ndisplay(\n \"**\" + n5Dataset + \"** attributes are ```\" +\n datasetAttributes.toString().replace(\", \", \",\\n\").replace(\"{\", \"{\\n\") + \"```\",\n \"text/markdown\");\n\nvar n5Version = n5.getAttribute(\"/\", \"n5\", String.class);\nvar dimensions = n5.getAttribute(n5Dataset, \"dimensions\", long[].class);\nvar compression = n5.getAttribute(n5Dataset, \"compression\", Compression.class);\nvar dataType = n5.getAttribute(n5Dataset, \"dataType\", DataType.class);\n\ndisplay(n5Version);\ndisplay(dimensions);\ndisplay(compression);\ndisplay(dataType);\n\n\n/em/fibsem-uint16 attributes are { pixelResolution=class java.lang.Object, multiscales=class [Ljava.lang.Object;, n5=class java.lang.String, scales=class [Ljava.lang.Object;, axes=class [Ljava.lang.String;, name=class java.lang.String, units=class [Ljava.lang.String;}\n\n\n/em/fibsem-uint16/s4 attributes are { transform=class java.lang.Object, pixelResolution=class java.lang.Object, dataType=class java.lang.String, name=class java.lang.String, compression=class java.lang.Object, blockSize=class [J, dimensions=class [J}\n\n\n2.0.0\n\n\n[750, 100, 398]\n\n\norg.janelia.saalfeldlab.n5.GzipCompression@673562cc\n\n\nuint16\n\n\n6c5c9bc2-ea28-4685-9658-a8fbf3c65df4\n\n\nLet’s save the contrast 
adjusted uin8 version of the volume into three N5 supported containers (N5, Zarr, and HDF5), parallelize writing for N5 and Zarr:\n\n\nCode\nimport java.nio.file.*;\n\n/* create a temporary directory */\nPath tmpDir = Files.createTempFile(\"\", \"\");\nFiles.delete(tmpDir);\nFiles.createDirectories(tmpDir);\nvar tmpDirStr = tmpDir.toString();\n\ndisplay(tmpDirStr);\n\n/* get the dataset attributes (dataType, compression, blockSize, dimensions) */\nfinal var attributes = n5.getDatasetAttributes(n5Dataset);\n\n/* use 10 threads to parallelize copy */\nfinal var exec = Executors.newFixedThreadPool(10);\n\n/* save this dataset into a filsystem N5 container */\ntry (final var n5Out = new N5Factory().openFSWriter(tmpDirStr + \"/test.n5\")) {\n N5Utils.save(raiContrast, n5Out, n5Dataset, attributes.getBlockSize(), attributes.getCompression(), exec);\n}\n\n/* save this dataset into a filesystem Zarr container */\ntry (final var zarrOut = new N5Factory().openZarrWriter(tmpDirStr + \"/test.zarr\")) {\n N5Utils.save(raiContrast, zarrOut, n5Dataset, attributes.getBlockSize(), attributes.getCompression(), exec);\n}\n\n/* save this dataset into an HDF5 file, parallelization does not help here */\ntry (final var hdf5Out = new N5Factory().openHDF5Writer(tmpDirStr + \"/test.hdf5\")) {\n N5Utils.save(raiContrast, hdf5Out, n5Dataset, attributes.getBlockSize(), attributes.getCompression());\n}\n\n/* shot down the executor service */\nexec.shutdown();\n\ndisplay(Files.list(tmpDir).map(a -> a.toString()).toArray(String[]::new));\n\n\n/tmp/303790804299695858\n\n\n[/tmp/303790804299695858/test.hdf5, /tmp/303790804299695858/test.n5, /tmp/303790804299695858/test.zarr]\n\n\nd55081b3-d9fd-4208-9bae-181c9253712a\n\n\nNow let us look at them and see if they all contain the same data:\n\n\nCode\ntry (final var n5 = new N5Factory().openReader(tmpDirStr + \"/test.n5\")) {\n final RandomAccessibleInterval<UnsignedByteType> rai = N5Utils.open(n5, n5Dataset);\n display(Views.hyperSlice(rai, 2, rai.dimension(2) / 2), \"image/jpeg\");\n}\n\ntry (final var n5 = new N5Factory().openReader(tmpDirStr + \"/test.zarr\")) {\n final RandomAccessibleInterval<UnsignedByteType> rai = N5Utils.open(n5, n5Dataset);\n display(Views.hyperSlice(rai, 2, rai.dimension(2) / 2), \"image/jpeg\"); \n}\n\ntry (final var n5 = new N5Factory().openReader(tmpDirStr + \"/test.hdf5\")) {\n final RandomAccessibleInterval<UnsignedByteType> rai = N5Utils.open(n5, n5Dataset);\n display(Views.hyperSlice(rai, 2, rai.dimension(2) / 2), \"image/jpeg\"); \n}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nLet’s clean up temporary storage before we end this tutorial.\n\n\nCode\ntry (var n5 = new N5Factory().openWriter(tmpDirStr + \"/test.n5\")) {\n n5.remove();\n}\ntry (var n5 = new N5Factory().openWriter(tmpDirStr + \"/test.zarr\")) {\n n5.remove();\n}\ntry (var n5 = new N5Factory().openWriter(tmpDirStr + \"/test.hdf5\")) {\n n5.remove();\n}\nFiles.delete(tmpDir);"
+ "text": "(first posted at image.sc)"
+ },
+ {
+ "objectID": "posts/2022-10-30-streams/2022-10-30-streams.html#access-img-pixels-as-a-stream",
+ "href": "posts/2022-10-30-streams/2022-10-30-streams.html#access-img-pixels-as-a-stream",
+ "title": "Adding Stream support to ImgLib2",
+ "section": "Access Img pixels as a Stream",
+ "text": "Access Img pixels as a Stream\nThe first addition is that every IterableRealInterval<T> (and sub-classes like IterableInterval, Img, …) can now provide (sequential or parallel) streams over its elements.\npublic interface IterableRealInterval<T> extends RealInterval, Iterable<T> {\n ...\n Stream<T> stream();\n Stream<T> parallelStream();\n}\nThis is entirely equivalent to java.util.Collection\npublic interface Collection<T> extends Iterable<T> {\n ...\n Stream<T> stream();\n Stream<T> parallelStream();\n}\nand allows to operate on pixel values.\nEncounter order of the streams is always compatible with cursor(). That is, Views.flatIterable(img).stream() yields elements in flat iteration order.\nStreams can be used, for example, to set all pixels of an Img to some value:\nstatic <T extends Type<T>> void fill(Img<T> img, T value) {\n \n img.stream().forEach(t->t.set(value));\n}\nto compute the sum of all values in an Img:\nstatic double sum(Img<DoubleType> img) {\n\n return img.stream()\n .mapToDouble(DoubleType::get)\n .sum();\n}\nor to find the maximum value in an Img:\nstatic double max(Img<DoubleType> img) {\n\n return img.stream()\n .mapToDouble(DoubleType::get)\n .max().getAsDouble();\n}\nIn particular the latter two examples, where the terminal operation is some form of reduction, allow for more convenient parallelization than the alternatives. Computing the maximum value in parallel is as simple as\nstatic double max(Img<DoubleType> img) {\n\n return img.parallelStream()\n .mapToDouble(DoubleType::get)\n .max().getAsDouble();\n}\nDoing the same with LoopBuilder currently requires to parallelize over chunks, collect partial results into mutable holder objects, and implement the reduction of partial results into the final result."
+ },
+ {
+ "objectID": "posts/2022-10-30-streams/2022-10-30-streams.html#access-img-values-and-positions-as-a-stream",
+ "href": "posts/2022-10-30-streams/2022-10-30-streams.html#access-img-values-and-positions-as-a-stream",
+ "title": "Adding Stream support to ImgLib2",
+ "section": "Access Img values and positions as a Stream",
+ "text": "Access Img values and positions as a Stream\nA stream of only pixel values, without access to their positions is rather limiting. For example, we would often be interested in the location of the image maximum, not only the value. To achieve this, there is a new utility class net.imglib2.stream.Streams, with methods\npublic static <T> Stream<RealLocalizableSampler<T>> localizable(IterableRealInterval<T> interval)\npublic static <T> Stream<RealLocalizableSampler<T>> localizing(IterableRealInterval<T> interval)\npublic static <T> Stream<LocalizableSampler<T>> localizable(IterableInterval<T> interval)\npublic static <T> Stream<LocalizableSampler<T>> localizing(IterableInterval<T> interval)\nthat allow to create Streams of LocalizableSampler<T> of the pixels of an IterableInterval (and analogous for IterableRealInterval). You can think of LocalizableSampler<T> as a Cursor<T> which cannot be moved, which is more or less what the default implementation does under the hood.\nThe localizable and localizing variants are analogous to cursor() and localizingCursor() The Stream returned by localizable computes element locations only when asked to (with potentially higher per-element cost). The Stream returned by localizing tracks element locations always (in general faster, but potentially unnecessary).\nFor example, to fill image pixels with position-dependent values, we would use localizing, because we require the position of each element.\nstatic void fractal() {\n \n Img<UnsignedByteType> img = ArrayImgs.unsignedBytes(1000, 1000);\n Streams.localizing(img)\n .parallel()\n .forEach(s -> s.get().set(\n mandelbrot(\n (s.getDoublePosition(0) - 800) / 500,\n (s.getDoublePosition(1) - 500) / 500)\n ));\n BdvFunctions.show(img, \"mandelbrot\", Bdv.options().is2D());\n}\n\n\n\nimage|616x500\n\n\nConversely, to compute the maximum value and its location in an image, we would use localizable, because we only ask for the position of one element (the maximum).\nstatic void printMax(Img<IntType> img) {\n\n Optional<LocalizableSampler<IntType>> optionalMax =\n Streams.localizable(img)\n .parallel()\n .map(LocalizableSampler::copy)\n .max(Comparator.comparingInt(c -> c.get().get()));\n LocalizableSampler<IntType> max = optionalMax.get();\n System.out.println(\"max position = \" + Util.printCoordinates(max));\n System.out.println(\"max value = \" + max.get().getInteger());\n}\n(In both cases, it is fine to chose the respectively other variant with no change in behaviour, and only limited performance impact.)"
+ },
+ {
+ "objectID": "posts/2022-10-30-streams/2022-10-30-streams.html#pitfalls",
+ "href": "posts/2022-10-30-streams/2022-10-30-streams.html#pitfalls",
+ "title": "Adding Stream support to ImgLib2",
+ "section": "Pitfalls",
+ "text": "Pitfalls\nThe T elements of the stream are proxies that are re-used, as usual in ImgLib2. Explicit copying operations must be added if stream elements are supposed to be retained (by stateful intermediate or terminal operations).\nFor example, to collect all DoubleType values between 0 and 1 into a list:\nList< DoubleType > values = img.stream()\n .filter( t -> t.get() >= 0.0 && t.get() <= 1.0 )\n .map( DoubleType::copy ) // <-- this is important!\n .collect( Collectors.toList() );\nThe .map(DoubleType::copy) operation is necessary, otherwise the values list will contain many duplicates of the same (re-used proxy) DoubleType instance. The copy could also be done before the .filter(...) operation, but it’s better to do it as late as possible to avoid unnecessary creation of objects.\nLikewise, the .map(LocalizableSampler::copy) in the printMax() example above is required. There is ongoing work to reduce the necessity of explicit copy operations. For example, in the printMax() example, the .max() operation of the stream could be overridden to only copy when a new maximum candidate is encountered.\nNote, that already the current implementation takes care not to re-use proxies across parallel execution, so threads of a parallelStream() will not interfere."
+ },
+ {
+ "objectID": "posts/2022-10-30-streams/2022-10-30-streams.html#implementation-details",
+ "href": "posts/2022-10-30-streams/2022-10-30-streams.html#implementation-details",
+ "title": "Adding Stream support to ImgLib2",
+ "section": "Implementation details",
+ "text": "Implementation details\n\nBoth, pure-value streams and value-and-position streams make use of LocalizableSpliterator<T>. LocalizableSpliterator<T> extends Spliterator and Localizable, similiar to Cursor extending Iterator and Localizable.\nThere are default LocalizableSpliterator<T> (and RealLocalizableSpliterator<T>) implementations based on Cursor<T> (and RealCursor<T>). Therefore, the new streams API works for every IterableRealInterval, without the need to touch existing implementations.\nAdditionally, the standard Img classes have custom LocalizableSpliterator<T>, that leverage knowledge of underlying storage for improved performance."
+ },
+ {
+ "objectID": "posts/2022-10-30-streams/2022-10-30-streams.html#performance",
+ "href": "posts/2022-10-30-streams/2022-10-30-streams.html#performance",
+ "title": "Adding Stream support to ImgLib2",
+ "section": "Performance",
+ "text": "Performance\nIt’s complicated…\nOne the one hand, there comes considerable performance overhead in replacing simple loops with stream operations. This has nothing to do with ImgLib2, it is just a “feature” of the underlying machinery. This can be observed for example by benchmarking looping over an int[] array:\nint[] values = new int[4_000_000];\n\n@Benchmark\npublic long benchmarkForLoopArray() {\n long count = 0;\n for (int value : values) {\n if (value > 127)\n ++count;\n }\n return count;\n}\n\n@Benchmark\npublic long benchmarkStreamArray() {\n return IntStream.of(values).filter(value -> value > 127).count();\n}\nThe result is\nBenchmark Mode Cnt Score Error Units\nArrayStreamBenchmark.benchmarkForLoopArray avgt 15 2,563 ± 0,026 ms/op\nArrayStreamBenchmark.benchmarkStreamArray avgt 15 11,052 ± 0,022 ms/op\nThat is, the Stream version is > 4 times slower. Equivalent performance overhead often can be observed in ImgLib2, when replacing Cursor based loops with Stream operations.\nOn the other hand, custom Spliterator implementations sometimes benefit more than cursors from tuning to the underlying storage. (Because iteration is “internal” with the spliterator, while the cursor must return control to the caller after every visited element.) For example, consider the following benchmark method (equivalent code for other variations omitted, see github for full details):\n@Benchmark\npublic long benchmarkStream() {\n long sum = Streams.localizable(img)\n .mapToLong(s -> s.get().get()\n + s.getIntPosition(0)\n + s.getIntPosition(1)\n + s.getIntPosition(2)\n ).sum();\n return sum;\n}\nThe result looks like\nBenchmark (imgType) Mode Cnt Score Error Units\nLocalizableSamplerStreamBenchmark.benchmarkCursor ArrayImg avgt 15 10,097 ± 0,046 ms/op\nLocalizableSamplerStreamBenchmark.benchmarkLocalizingCursor ArrayImg avgt 15 3,846 ± 0,020 ms/op\nLocalizableSamplerStreamBenchmark.benchmarkLocalizingStream ArrayImg avgt 15 3,337 ± 0,027 ms/op\nLocalizableSamplerStreamBenchmark.benchmarkLocalizingParallelStream ArrayImg avgt 15 0,962 ± 0,583 ms/op\nThat is, the performance difference between localizing and non-localizing Cursors is much more pronounced than the difference between Cursor loop and Stream. In fact, the Stream version is even faster than the localizingCursor version. On top of that, it is trivial to parallelize.\nFinally, we did not investigate polymorphism effects so far. It is very much possible that this affects performance and we may have to investigate employing LoopBuilders class-copying mechanism to counter these effects.\nIn summary, I think one should not hesitate to use Streams where it makes sense from a readability and ease-of-use perspective. If performance is a critical concern, it is best to benchmark various approaches, because the behaviour is not easy to predict."
+ },
+ {
+ "objectID": "index.html",
+ "href": "index.html",
+ "title": "ImgLib2 news and tutorials",
+ "section": "",
+ "text": "Adding Stream support to ImgLib2\n\n\n\n\n\n\nimglib2\n\n\nstream-api\n\n\njava\n\n\n\nExamples and performance discussion of Java Streams in ImgLib2\n\n\n\n\n\nOct 30, 2022\n\n\nTobias Pietzsch\n\n\n\n\n\n\n\n\n\n\n\n\nHow to work with the N5 API and ImgLib2?\n\n\n\n\n\n\nimglib2\n\n\nn5\n\n\nhdf5\n\n\nzarr\n\n\njupyter\n\n\nnotebook\n\n\n\nRead and write ImgLib2 data with the N5 API\n\n\n\n\n\nSep 27, 2022\n\n\nStephan Saalfeld\n\n\n\n\n\n\n\n\n\n\n\n\nHow to display ImgLib2 data in a notebook?\n\n\n\n\n\n\nimglib2\n\n\njupyter\n\n\nnotebook\n\n\n\nRender ImgLib2 data into notebook objects\n\n\n\n\n\nSep 14, 2022\n\n\nStephan Saalfeld\n\n\n\n\n\n\n\n\n\n\n\n\nUser-configurable Keymaps\n\n\n\n\n\n\nui-behaviour\n\n\nbigdataviewer\n\n\n\nHow to set up user-configurable keyboard shortcuts using ui-behaviour and BigDataViewer’s Preferences Dialog\n\n\n\n\n\nAug 8, 2022\n\n\nTobias Pietzsch\n\n\n\n\n\n\n\n\n\n\n\n\nSetup the IJava jupyter kernel\n\n\n\n\n\n\njupyter\n\n\nijava\n\n\njshell\n\n\njava\n\n\nkernel\n\n\n\nFollow these instructions to setup the IJava jupyter kernel by Spencer Park.\n\n\n\n\n\nJun 5, 2022\n\n\nStephan Saalfeld\n\n\n\n\n\n\n\n\n\n\n\n\nJuliaset Lambda\n\n\n\n\n\n\nimglib2\n\n\nlambda\n\n\nfractal\n\n\njuliaset\n\n\nbigdataviewer\n\n\n\nInteractively render the Juliaset as a lambda function in BigDataViewer\n\n\n\n\n\nMay 2, 2022\n\n\nStephan Saalfeld\n\n\n\n\n\n\nNo matching items"
},
{
"objectID": "about.html",
@@ -70,11 +112,11 @@
"text": "ImgLib2 is a general-purpose, multidimensional image and data processing library.\nIt provides a unified API to work with discrete and continuous n-dimensional data. This API is interface driven and therefore extensible at will.\nImgLib2 includes implementations of standard numeric and non-numeric data types (8-bit unsigned integer, 32-bit floating point, …) as well as a number of less typical data types (complex 64-bit floating point, 64-bit ARGB, base pairs, …). Data values can be accessed directly or through on-the-fly converters or multi-variate functions.\nFor discrete data (images, n-dimensional arrays), ImgLib2 implements a variety of memory layouts, data generation, loading, and caching strategies, including data linearized into single primitive arrays, series of arrays, n-dimensional arrays of arrays (“cells”), stored in memory, generated or loaded from disk on demand, and cached in memory or on disk. Coordinates and values can be accessed directly or through on-the-fly views that invert or permute axes, generate hyperslices or stack slices top higher dimensional datasets, collapse dimensions into vectors\nFor continuous data (functions, n-dimensional interpolants), ImgLib2 implements a variety of interpolators, geometric transformations, and generator functions. Coordinates and values can be accessed directly or transformed on-the-fly.\nNeed a quick start? Install OpenJDK and maven:\nsudo apt install openjdk-16-jdk maven\nThen check out BigDataViewer vistools:\ngit clone https://github.com/bigdataviewer/bigdataviewer-vistools.git\nThen start JShell in the BigDataViewer vistools project directory:\ncd bigdataviewer-vistools\nmvn compile com.github.johnpoth:jshell-maven-plugin:1.3:run\nThen try out this code snippet:\nimport bdv.util.*;\nimport net.imglib2.position.FunctionRealRandomAccessible;\nimport net.imglib2.type.numeric.integer.IntType;\nimport net.imglib2.util.Intervals;\n\nBdvFunctions.show(\n new FunctionRealRandomAccessible<IntType>(\n 2,\n (x, y) -> {\n int i = 0;\n double v = 0,\n c = x.getDoublePosition(0),\n d = x.getDoublePosition(1);\n for (; i < 64 && v < 4096; ++i) {\n final double e = c * c - d * d;\n d = 2 * c * d;\n c = e + 0.2;\n d += 0.6;\n v = Math.sqrt(c * c + d * d);\n ++i;\n }\n y.set(i);\n },\n IntType::new),\n Intervals.createMinMax(-1, -1, 1, 1),\n \"\",\n BdvOptions.options().is2D()).setDisplayRange(0, 64);"
},
{
- "objectID": "index.html",
- "href": "index.html",
- "title": "ImgLib2 news and tutorials",
+ "objectID": "posts/2022-09-27-n5-imglib2.html",
+ "href": "posts/2022-09-27-n5-imglib2.html",
+ "title": "How to work with the N5 API and ImgLib2?",
"section": "",
- "text": "How to work with the N5 API and ImgLib2?\n\n\n\n\n\n\nimglib2\n\n\nn5\n\n\nhdf5\n\n\nzarr\n\n\njupyter\n\n\nnotebook\n\n\n\nRead and write ImgLib2 data with the N5 API\n\n\n\n\n\nSep 27, 2022\n\n\nStephan Saalfeld\n\n\n\n\n\n\n\n\n\n\n\n\nHow to display ImgLib2 data in a notebook?\n\n\n\n\n\n\nimglib2\n\n\njupyter\n\n\nnotebook\n\n\n\nRender ImgLib2 data into notebook objects\n\n\n\n\n\nSep 14, 2022\n\n\nStephan Saalfeld\n\n\n\n\n\n\n\n\n\n\n\n\nUser-configurable Keymaps\n\n\n\n\n\n\nui-behaviour\n\n\nbigdataviewer\n\n\n\nHow to set up user-configurable keyboard shortcuts using ui-behaviour and BigDataViewer’s Preferences Dialog\n\n\n\n\n\nAug 8, 2022\n\n\nTobias Pietzsch\n\n\n\n\n\n\n\n\n\n\n\n\nSetup the IJava jupyter kernel\n\n\n\n\n\n\njupyter\n\n\nijava\n\n\njshell\n\n\njava\n\n\nkernel\n\n\n\nFollow these instructions to setup the IJava jupyter kernel by Spencer Park.\n\n\n\n\n\nJun 5, 2022\n\n\nStephan Saalfeld\n\n\n\n\n\n\n\n\n\n\n\n\nJuliaset Lambda\n\n\n\n\n\n\nimglib2\n\n\nlambda\n\n\nfractal\n\n\njuliaset\n\n\nbigdataviewer\n\n\n\nInteractively render the Juliaset as a lambda function in BigDataViewer\n\n\n\n\n\nMay 2, 2022\n\n\nStephan Saalfeld\n\n\n\n\n\n\nNo matching items"
+ "text": "In this notebook, we will learn how to work with the N5 API and ImgLib2.\nThe N5 API unifies block-wise access to potentially very large n-dimensional data over a variety of storage backends. Those backends currently are the simple N5 format on the local filesystem, Google Cloud and AWS-S3, the HDF5 file format and Zarr. The ImgLib2 bindings use this API to make this data available as memory cached lazy cell images through ImgLib2.\nThis notebook uses code and data examples from the ImgLib2 large data tutorial I2K2020 workshop (GitHub repository).\nFirst let’s add the necessary dependencies. We will load the n5-ij module which will transitively load ImgLib2 and all the N5 API modules that we will be using in this notebook. It will also load ImageJ which we will use to display data.\n\n\nCode\n%%loadFromPOM\n<repository>\n <id>scijava.public</id>\n <url>https://maven.scijava.org/content/groups/public</url>\n</repository>\n<dependency>\n <groupId>org.janelia.saalfeldlab</groupId>\n <artifactId>n5</artifactId>\n <version>2.5.1</version>\n</dependency>\n<dependency>\n <groupId>org.janelia.saalfeldlab</groupId>\n <artifactId>n5-ij</artifactId>\n <version>3.2.2</version>\n</dependency>\n\n\nNow, we register a simple renderer that uses ImgLib2’s ImageJ bridge and Spencer Park’s image renderer to render the first 2D slice of a RandomAccessibleInterval into the notebook. We also add a renderer for arrays and maps, because we want to list directories and attributes maps later.\n\n\nCode\nimport com.google.gson.*;\nimport io.github.spencerpark.jupyter.kernel.display.common.*;\nimport io.github.spencerpark.jupyter.kernel.display.mime.*;\nimport net.imglib2.img.display.imagej.*;\nimport net.imglib2.view.*;\nimport net.imglib2.*;\n\ngetKernelInstance().getRenderer().createRegistration(RandomAccessibleInterval.class)\n .preferring(MIMEType.IMAGE_PNG)\n .supporting(MIMEType.IMAGE_JPEG, MIMEType.IMAGE_GIF)\n .register((rai, context) -> Image.renderImage(\n ImageJFunctions.wrap(rai, rai.toString()).getBufferedImage(),\n context));\n\ngetKernelInstance().getRenderer().createRegistration(String[].class)\n .preferring(MIMEType.TEXT_PLAIN)\n .supporting(MIMEType.TEXT_HTML, MIMEType.TEXT_MARKDOWN)\n .register((array, context) -> Text.renderCharSequence(Arrays.toString(array), context));\n\ngetKernelInstance().getRenderer().createRegistration(long[].class)\n .preferring(MIMEType.TEXT_PLAIN)\n .supporting(MIMEType.TEXT_HTML, MIMEType.TEXT_MARKDOWN)\n .register((array, context) -> Text.renderCharSequence(Arrays.toString(array), context));\n\ngetKernelInstance().getRenderer().createRegistration(Map.class)\n .preferring(MIMEType.TEXT_PLAIN)\n .supporting(MIMEType.TEXT_HTML, MIMEType.TEXT_MARKDOWN)\n .register((map, context) -> Text.renderCharSequence(map.toString(), context));\n\n\nWe will now open N5 datasets from some sources as lazy-loading ImgLib2 cell images. For opening the N5 readers, we will use the helper class N5Factory which parses the URL and/ or some magic byte in file headers to pick the right reader or writer for the various possible N5 backends. 
If you know which backend you are using, you should probably use the appropriate implementation directly, it’s not difficult.\n\n\nCode\nimport ij.*;\nimport net.imglib2.converter.*;\nimport net.imglib2.type.numeric.integer.*;\nimport org.janelia.saalfeldlab.n5.*;\nimport org.janelia.saalfeldlab.n5.ij.*;\nimport org.janelia.saalfeldlab.n5.imglib2.*;\n\n/* make an N5 reader, we start with a public container on AWS S3 */\nfinal var n5Url = \"https://janelia-cosem.s3.amazonaws.com/jrc_hela-2/jrc_hela-2.n5\";\nfinal var n5Group = \"/em/fibsem-uint16\";\nfinal var n5Dataset = n5Group + \"/s4\";\nfinal var n5 = new N5Factory().openReader(n5Url);\n\n/* open a dataset as a lazy loading ImgLib2 cell image */\nfinal RandomAccessibleInterval<UnsignedShortType> rai = N5Utils.open(n5, n5Dataset);\n\n/* This is a 3D volume, so let's show the center slice */\nViews.hyperSlice(rai, 2, rai.dimension(2) / 2);\n\n\nlog4j:WARN No appenders could be found for logger (com.amazonaws.auth.AWSCredentialsProviderChain).\nlog4j:WARN Please initialize the log4j system properly.\nlog4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.\n\n\nCould not load AWS credentials, falling back to anonymous.\n\n\n\n\n\n\n\n\n\nThat’s a bit low on contrast, let’s make it look like TEM, and let’s show a few of those hyperslices through the 3D volume:\n\n\nCode\nvar raiContrast = Converters.convert(\n rai,\n (a, b) -> b.setReal(Math.max(0, Math.min(255, 255 - 255 * (a.getRealDouble() - 26000) / 6000))),\n new UnsignedByteType());\ndisplay(Views.hyperSlice(raiContrast, 2, rai.dimension(2) / 10 * 4), \"image/jpeg\");\ndisplay(Views.hyperSlice(raiContrast, 2, rai.dimension(2) / 2), \"image/jpeg\");\ndisplay(Views.hyperSlice(raiContrast, 2, rai.dimension(2) / 10 * 6), \"image/jpeg\");\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n6e32749d-48d5-4c52-be9b-41c43bae02f4\n\n\nWe can list the attributes and their types of every group or dataset, and read any of them into matching types:\n\n\nCode\nvar groupAttributes = n5.listAttributes(n5Group);\nvar datasetAttributes = n5.listAttributes(n5Dataset);\n\ndisplay(\n \"**\" + n5Group + \"** attributes are ```\" +\n groupAttributes.toString().replace(\", \", \",\\n\").replace(\"{\", \"{\\n\") + \"```\",\n \"text/markdown\");\ndisplay(\n \"**\" + n5Dataset + \"** attributes are ```\" +\n datasetAttributes.toString().replace(\", \", \",\\n\").replace(\"{\", \"{\\n\") + \"```\",\n \"text/markdown\");\n\nvar n5Version = n5.getAttribute(\"/\", \"n5\", String.class);\nvar dimensions = n5.getAttribute(n5Dataset, \"dimensions\", long[].class);\nvar compression = n5.getAttribute(n5Dataset, \"compression\", Compression.class);\nvar dataType = n5.getAttribute(n5Dataset, \"dataType\", DataType.class);\n\ndisplay(n5Version);\ndisplay(dimensions);\ndisplay(compression);\ndisplay(dataType);\n\n\n/em/fibsem-uint16 attributes are { pixelResolution=class java.lang.Object, multiscales=class [Ljava.lang.Object;, n5=class java.lang.String, scales=class [Ljava.lang.Object;, axes=class [Ljava.lang.String;, name=class java.lang.String, units=class [Ljava.lang.String;}\n\n\n/em/fibsem-uint16/s4 attributes are { transform=class java.lang.Object, pixelResolution=class java.lang.Object, dataType=class java.lang.String, name=class java.lang.String, compression=class java.lang.Object, blockSize=class [J, dimensions=class [J}\n\n\n2.0.0\n\n\n[750, 100, 398]\n\n\norg.janelia.saalfeldlab.n5.GzipCompression@673562cc\n\n\nuint16\n\n\n6c5c9bc2-ea28-4685-9658-a8fbf3c65df4\n\n\nLet’s save the contrast 
adjusted uin8 version of the volume into three N5 supported containers (N5, Zarr, and HDF5), parallelize writing for N5 and Zarr:\n\n\nCode\nimport java.nio.file.*;\n\n/* create a temporary directory */\nPath tmpDir = Files.createTempFile(\"\", \"\");\nFiles.delete(tmpDir);\nFiles.createDirectories(tmpDir);\nvar tmpDirStr = tmpDir.toString();\n\ndisplay(tmpDirStr);\n\n/* get the dataset attributes (dataType, compression, blockSize, dimensions) */\nfinal var attributes = n5.getDatasetAttributes(n5Dataset);\n\n/* use 10 threads to parallelize copy */\nfinal var exec = Executors.newFixedThreadPool(10);\n\n/* save this dataset into a filsystem N5 container */\ntry (final var n5Out = new N5Factory().openFSWriter(tmpDirStr + \"/test.n5\")) {\n N5Utils.save(raiContrast, n5Out, n5Dataset, attributes.getBlockSize(), attributes.getCompression(), exec);\n}\n\n/* save this dataset into a filesystem Zarr container */\ntry (final var zarrOut = new N5Factory().openZarrWriter(tmpDirStr + \"/test.zarr\")) {\n N5Utils.save(raiContrast, zarrOut, n5Dataset, attributes.getBlockSize(), attributes.getCompression(), exec);\n}\n\n/* save this dataset into an HDF5 file, parallelization does not help here */\ntry (final var hdf5Out = new N5Factory().openHDF5Writer(tmpDirStr + \"/test.hdf5\")) {\n N5Utils.save(raiContrast, hdf5Out, n5Dataset, attributes.getBlockSize(), attributes.getCompression());\n}\n\n/* shot down the executor service */\nexec.shutdown();\n\ndisplay(Files.list(tmpDir).map(a -> a.toString()).toArray(String[]::new));\n\n\n/tmp/303790804299695858\n\n\n[/tmp/303790804299695858/test.hdf5, /tmp/303790804299695858/test.n5, /tmp/303790804299695858/test.zarr]\n\n\nd55081b3-d9fd-4208-9bae-181c9253712a\n\n\nNow let us look at them and see if they all contain the same data:\n\n\nCode\ntry (final var n5 = new N5Factory().openReader(tmpDirStr + \"/test.n5\")) {\n final RandomAccessibleInterval<UnsignedByteType> rai = N5Utils.open(n5, n5Dataset);\n display(Views.hyperSlice(rai, 2, rai.dimension(2) / 2), \"image/jpeg\");\n}\n\ntry (final var n5 = new N5Factory().openReader(tmpDirStr + \"/test.zarr\")) {\n final RandomAccessibleInterval<UnsignedByteType> rai = N5Utils.open(n5, n5Dataset);\n display(Views.hyperSlice(rai, 2, rai.dimension(2) / 2), \"image/jpeg\"); \n}\n\ntry (final var n5 = new N5Factory().openReader(tmpDirStr + \"/test.hdf5\")) {\n final RandomAccessibleInterval<UnsignedByteType> rai = N5Utils.open(n5, n5Dataset);\n display(Views.hyperSlice(rai, 2, rai.dimension(2) / 2), \"image/jpeg\"); \n}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nLet’s clean up temporary storage before we end this tutorial.\n\n\nCode\ntry (var n5 = new N5Factory().openWriter(tmpDirStr + \"/test.n5\")) {\n n5.remove();\n}\ntry (var n5 = new N5Factory().openWriter(tmpDirStr + \"/test.zarr\")) {\n n5.remove();\n}\ntry (var n5 = new N5Factory().openWriter(tmpDirStr + \"/test.hdf5\")) {\n n5.remove();\n}\nFiles.delete(tmpDir);"
},
{
"objectID": "posts/2022-05-02-juliaset-lambda.html",
diff --git a/sitemap.xml b/sitemap.xml
index 8bc69b6..4178a1a 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -6,19 +6,23 @@
https://imglib.github.io/imglib2-blog/posts/2022-08-08-keymaps/2022-08-08-keymaps.html
- 2024-02-13T02:49:35.396Z
+ 2024-02-13T02:57:42.247Z
- https://imglib.github.io/imglib2-blog/posts/2022-09-27-n5-imglib2.html
- 2024-02-13T01:46:46.818Z
+ https://imglib.github.io/imglib2-blog/posts/2022-10-30-streams/2022-10-30-streams.html
+ 2024-02-13T03:22:37.322Z
+
+
+ https://imglib.github.io/imglib2-blog/index.html
+ 2024-02-13T01:52:47.681Z
https://imglib.github.io/imglib2-blog/about.html
2024-02-13T02:05:03.097Z
- https://imglib.github.io/imglib2-blog/index.html
- 2024-02-13T01:52:47.681Z
+ https://imglib.github.io/imglib2-blog/posts/2022-09-27-n5-imglib2.html
+ 2024-02-13T01:46:46.818Z
https://imglib.github.io/imglib2-blog/posts/2022-05-02-juliaset-lambda.html