From 588dcf63a7eb9d703f1d5181bacbd7f86396cbc9 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Wed, 7 Aug 2024 12:37:43 -0700 Subject: [PATCH 01/36] Update trigger.md Adding kind variable to the trigger setup --- docs/tutorials/setup_acquisition/trigger.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tutorials/setup_acquisition/trigger.md b/docs/tutorials/setup_acquisition/trigger.md index 75a10e5d..e615c001 100644 --- a/docs/tutorials/setup_acquisition/trigger.md +++ b/docs/tutorials/setup_acquisition/trigger.md @@ -69,7 +69,7 @@ Output triggers can be set to begin exposure, start a new frame, or wait before ```python config.video[0].camera.settings.output_triggers.exposure = acquire.Trigger( - enable=True, line=1, edge="Rising" + edge="Rising", enable=True, line=1, kind="Output" ) ``` From 4e070a4d13b4c4f623ded1df9073b4192327626d Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Wed, 7 Aug 2024 12:46:06 -0700 Subject: [PATCH 02/36] Update get_started.md Updating to remove chunking --- docs/get_started.md | 8 +------- 1 file changed, 1 insertion(+), 7 deletions(-) diff --git a/docs/get_started.md b/docs/get_started.md index f700c7fe..3898653d 100644 --- a/docs/get_started.md +++ b/docs/get_started.md @@ -138,16 +138,13 @@ config.video[1].camera.settings.shape = (1280, 720) Now we'll configure each output, or sink device. For both simulated cameras, we'll be writing to Zarr, a format which supports chunked arrays. - +For now, we'll simply specify the output file name. For more information about setting up chunking, check out the tutorial [Chunking Data for Zarr Storage](./tutorials/chunked.md) ```python config.video[0].storage.identifier = dm.select(acquire.DeviceKind.Storage, "Zarr") # what file or directory to write the data to config.video[0].storage.settings.filename = "output1.zarr" - -# where applicable, how large should a chunk file get before opening the next chunk file -config.video[0].storage.settings.chunking.max_bytes_per_chunk = 32 * 2**20 # 32 MiB chunk sizes ``` @@ -156,9 +153,6 @@ config.video[1].storage.identifier = dm.select(acquire.DeviceKind.Storage, "Zarr # what file or directory to write the data to config.video[1].storage.settings.filename = "output2.zarr" - -# where applicable, how large should a chunk file get before opening the next chunk file -config.video[1].storage.settings.chunking.max_bytes_per_chunk = 64 * 2**20 # 64 MiB chunk sizes ``` Finally, let's specify how many frames to generate for each camera before stopping our simulated acquisition. From 7fae03822334cbe36691e824bc1f11fa59e1be21 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Wed, 7 Aug 2024 12:53:06 -0700 Subject: [PATCH 03/36] Update configure.md Removing references to chunking attribute --- docs/tutorials/setup_acquisition/configure.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tutorials/setup_acquisition/configure.md b/docs/tutorials/setup_acquisition/configure.md index dd084bd6..2d2a87a0 100644 --- a/docs/tutorials/setup_acquisition/configure.md +++ b/docs/tutorials/setup_acquisition/configure.md @@ -65,7 +65,7 @@ config.video[0].camera.settings.pixel_type = acquire.SampleType.U32 ## Configure `Storage` `Storage` objects have 2 attributes, `settings`, a `StorageProperties` object, and an optional attribute `identifier`, which is an instance of the `DeviceIdentifier` class described above. 
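As a minimal sketch of how those two attributes work together (this mirrors the pattern used throughout these tutorials; the `Tiff` device and the output name below are illustrative choices, not requirements):

```python
import acquire

runtime = acquire.Runtime()
dm = runtime.device_manager()
config = runtime.get_configuration()

# `identifier` picks which storage device the stream writes to...
config.video[0].storage.identifier = dm.select(acquire.DeviceKind.Storage, "Tiff")

# ...and `settings` is the StorageProperties object holding its configuration,
# for example the output location.
config.video[0].storage.settings.filename = "example_output.tif"

config = runtime.set_configuration(config)
```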
-`StorageProperties` has 2 attributes `external_metadata_json` and `filename` which are strings of the filename or filetree of the output metadata in JSON format and image data in whatever format corresponds to the selected storage device, respectively. `first_frame_id` is an integer ID that corresponds to the first frame of the current acquisition and is typically 0. `pixel_scale_um` is the camera pixel size in microns. `enable_multiscale` is a boolean used to specify if the data should be saved as an image pyramid. See [Multiscale Data Acqusition](../zarr/multiscale.md) for more information. The `chunking` attribute is an instance of the `ChunkingProperties` class, used for Zarr storage. See [Chunking Data for Zarr Storage](../zarr/chunked.md) for more information. +`StorageProperties` has 2 attributes `external_metadata_json` and `filename` which are strings of the filename or filetree of the output metadata in JSON format and image data in whatever format corresponds to the selected storage device, respectively. `first_frame_id` is an integer ID that corresponds to the first frame of the current acquisition and is typically 0. `pixel_scale_um` is the camera pixel size in microns. `acquisition_dimensions` is a list of instances of the `StorageDimension` class, one for each acquisition dimension with the fastest changing dimension listed first and the append dimension listed last. For more information on using the `StorageDimension` class, check out [Chunking Data for Zarr Storage](../zarr/chunked.md). `enable_multiscale` is a boolean used to specify if the data should be saved as an image pyramid. See the [Multiscale tutorial](../zarr/multiscale.md) for more information. We'll specify the name of the output image file below. From d92d994cbfdd7c4fc2a5158b690a7bc78b0bc533 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Wed, 7 Aug 2024 12:55:09 -0700 Subject: [PATCH 04/36] Update start_stop.md fixing broken ordered list --- docs/tutorials/setup_acquisition/start_stop.md | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-) diff --git a/docs/tutorials/setup_acquisition/start_stop.md b/docs/tutorials/setup_acquisition/start_stop.md index c62b5919..88200556 100644 --- a/docs/tutorials/setup_acquisition/start_stop.md +++ b/docs/tutorials/setup_acquisition/start_stop.md @@ -82,11 +82,8 @@ DeviceState.Armed ``` 1. The first time we print is immediately after starting acquisition, so no time has elapsed for data collection as compared to the camera exposure time, so while the camera is running, `Running`, there is no data available. - -3. The next print happens after waiting 0.5 seconds, so acquisition is still runnning and now there is acquired data available. - -5. The subsequent print is following calling `runtime.stop()` which waits until the specified max number of frames is collected and then terminates acquisition. Thus, the device is no longer running and there is no available data, since all objects were deleted by calling the `stop` method. The device is in an `Armed` state ready for the next acquisition. - -7. The final print occurs after waiting 5 seconds following the start of acquisition. This waiting period is longer than the 1 second acqusition time (0.1 seconds/frame and 10 frames), so the device is no longer collecting data. However, `runtime.stop()` hasn't been called, so the `AvailableData` object has not yet been deleted. +2. 
The next print happens after waiting 0.5 seconds, so acquisition is still running and now there is acquired data available.
+3. The subsequent print follows the call to `runtime.stop()` which waits until the specified max number of frames is collected and then terminates acquisition. Thus, the device is no longer running and there is no available data, since all objects were deleted by calling the `stop` method. The device is in an `Armed` state ready for the next acquisition.
+4. The final print occurs after waiting 5 seconds following the start of acquisition. This waiting period is longer than the 1 second acquisition time (0.1 seconds/frame and 10 frames), so the device is no longer collecting data. However, `runtime.stop()` hasn't been called, so the `AvailableData` object has not yet been deleted.
 
 [Download this tutorial as a Python script](start_stop.py){ .md-button .md-button-center }

From 94725cc23495a9ea8aee36d2881fae36dcdc7ffa Mon Sep 17 00:00:00 2001
From: dgmccart <92180364+dgmccart@users.noreply.github.com>
Date: Wed, 7 Aug 2024 12:59:59 -0700
Subject: [PATCH 05/36] Update framedata.md

---
 docs/tutorials/access_data/framedata.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/tutorials/access_data/framedata.md b/docs/tutorials/access_data/framedata.md
index 23040dfa..b12743c5 100644
--- a/docs/tutorials/access_data/framedata.md
+++ b/docs/tutorials/access_data/framedata.md
@@ -37,7 +37,7 @@ config = runtime.set_configuration(config)
 ```
 
 ## Working with `AvailableData` objects
-During Acquisition, the `AvailableData` object is the streaming interface, and this class has a `frames` method which iterates over the `VideoFrame` objects in `AvailableData`. Once we start acquisition, we'll utilize this iterator method to list the frames. To increase the likelihood of `AvailableData` containing data, we'll utilize the time python package to introduce a delay before we create our `AvailableData` object
+During Acquisition, the `AvailableData` object is the streaming interface, and this class has a `frames` method which iterates over the `VideoFrame` objects in `AvailableData`. Once we start acquisition, we'll utilize this iterator method to list the frames. To increase the likelihood of `AvailableData` containing data, we'll utilize the `time` python package to introduce a delay before we create our `AvailableData` object
 
 
 ```python
@@ -76,7 +76,7 @@ else:
     del available_data
 ```
 
-`video_frames` is a list with each element being an instance of the `VideoFrame` class. `VideoFrame` has a `data` method which provides the frame as an `NDArray`. The shape of this NDArray corresponds to the image dimensions used internally by Acquire with (planes, height, width, channels). Since we have a single channel, both the first and the last dimensions will be 1. The interior dimensions are height and width, respectively.
+`video_frames` is a list with each element being an instance of the `VideoFrame` class. `VideoFrame` has a `data` method which provides the frame as an `NDArray`. The shape of this NDArray corresponds to the image dimensions used internally by Acquire namely [planes, height, width, channels]. Since we have a single channel, both the first and the last dimensions will be 1. The interior dimensions are height and width, respectively.
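As a small sketch of the shape handling described above (it assumes the `video_frames` list built earlier in this tutorial and the (1024, 768) single-channel camera configuration used here):

```python
# Take one frame and look at the raw array layout.
first_frame = video_frames[0].data()   # NDArray shaped (planes, height, width, channels)
print(first_frame.shape)               # e.g. (1, 768, 1024, 1)

# Drop the singleton planes/channels axes to get a 2D image.
image = first_frame.squeeze()          # equivalent to first_frame[0][:, :, 0]
print(image.shape)                     # (768, 1024)
```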
```python From 2dfc1cf4512f129686380e7a05c588c74133bc0b Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Wed, 7 Aug 2024 13:06:28 -0700 Subject: [PATCH 06/36] Update storage.md Adding Zarr V3 --- docs/tutorials/setup_acquisition/storage.md | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/docs/tutorials/setup_acquisition/storage.md b/docs/tutorials/setup_acquisition/storage.md index 35994ae1..deaa8530 100644 --- a/docs/tutorials/setup_acquisition/storage.md +++ b/docs/tutorials/setup_acquisition/storage.md @@ -30,9 +30,12 @@ The output of that script will be: + + + ``` -`Acquire` supports streaming data to [bigtiff](http://bigtiff.org/) and [Zarr V2](https://zarr.readthedocs.io/en/stable/spec/v2.html). +`Acquire` supports streaming data to [bigtiff](http://bigtiff.org/), [Zarr V2](https://zarr-specs.readthedocs.io/en/latest/v2/v2.0.html), and [Zarr V3](https://zarr-specs.readthedocs.io/en/latest/specs.html). Zarr has additional capabilities relative to the basic storage devices, namely _chunking_, _compression_, and _multiscale storage_. You can learn more about the Zarr capabilities in `Acquire` in [the Acquire Zarr documentation](https://github.com/acquire-project/acquire-driver-zarr/blob/main/README.md). @@ -50,6 +53,12 @@ Zarr has additional capabilities relative to the basic storage devices, namely _ - **ZarrBlosc1Lz4ByteShuffle** - Streams compressed data (_lz4_ codec) to a [Zarr V2](https://zarr.readthedocs.io/en/stable/spec/v2.html) file with associated metadata. +- - **ZarrV3** - Streams data to a [Zarr V3](https://zarr-specs.readthedocs.io/en/latest/specs.html) file with associated metadata. + +- **ZarrV3Blosc1ZstdByteShuffle** - Streams compressed data (_zstd_ codec) to a [Zarr V3](https://zarr-specs.readthedocs.io/en/latest/specs.html) file with associated metadata. + +- **ZarrV3Blosc1Lz4ByteShuffle** - Streams compressed data (_lz4_ codec) to a [Zarr V3](https://zarr-specs.readthedocs.io/en/latest/specs.html) file with associated metadata. + ## Configure the Storage Device In the example below, the the `tiff` storage device is selected, and the data from one video source will be streamed to a file `out.tif`. From 35719b1adcb1a52e35879318247f74165c6f4d01 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Wed, 7 Aug 2024 13:07:35 -0700 Subject: [PATCH 07/36] Update drivers.md --- docs/tutorials/setup_acquisition/drivers.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/docs/tutorials/setup_acquisition/drivers.md b/docs/tutorials/setup_acquisition/drivers.md index 2109abed..8122e00a 100644 --- a/docs/tutorials/setup_acquisition/drivers.md +++ b/docs/tutorials/setup_acquisition/drivers.md @@ -47,6 +47,9 @@ The output of this code is below. All discovered devices, both cameras and stora + + + ``` For cameras that weren't discovered you will see an error like the one below. These errors will not affect performance and can be ignored. 
From 9b71e060a0bbb295d0df7f5f0e46c91e9ea1e58c Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Wed, 7 Aug 2024 13:09:42 -0700 Subject: [PATCH 08/36] Update select.md --- docs/tutorials/setup_acquisition/select.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/docs/tutorials/setup_acquisition/select.md b/docs/tutorials/setup_acquisition/select.md index b1e35da4..386d8282 100644 --- a/docs/tutorials/setup_acquisition/select.md +++ b/docs/tutorials/setup_acquisition/select.md @@ -31,9 +31,12 @@ Output of the above code is below: + + + ``` -All identified devices will be listed, and in the case of this tutorial, no cameras were connected to the machine, so only simulated cameras were found. Note that discovered storage devices will also print. +All identified devices will be listed, and in the case of this tutorial, no camera drivers were installed on the machine, so only simulated cameras were found. Note that discovered storage devices will also print. The order of those printed devices matters. Below are two examples of how the `select` method works. In the first, without a specific device name provided, `select` will choose the first device of the specified kind in the list of discovered devices. In the second example, a specific device name is provided, so `select` will grab that device if it was discovered by `Runtime`. From d882ba7f27a1617762ec78ee2844f90c2f0dfc6c Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Wed, 7 Aug 2024 13:12:52 -0700 Subject: [PATCH 09/36] Update sample_props.json updating --- docs/examples/sample_props.json | 370 +++++++++++++++----------------- 1 file changed, 178 insertions(+), 192 deletions(-) diff --git a/docs/examples/sample_props.json b/docs/examples/sample_props.json index 69ae862f..abef590f 100644 --- a/docs/examples/sample_props.json +++ b/docs/examples/sample_props.json @@ -1,200 +1,186 @@ { - "video": [ - { - "camera": { - "identifier": { - "id": [ - 0, - 1 - ], - "kind": "Camera", - "name": "simulated: radial sin" - }, - "settings": { - "binning": 1, - "exposure_time_us": 0.0, - "input_triggers": { - "acquisition_start": { - "edge": "Rising", - "enable": false, - "kind": "Input", - "line": 0 - }, - "exposure": { - "edge": "Rising", - "enable": false, - "kind": "Input", - "line": 0 - }, - "frame_start": { - "edge": "Rising", - "enable": false, - "kind": "Input", - "line": 0 - } - }, - "line_interval_us": 0.0, - "offset": [ - 0, - 0 - ], - "output_triggers": { - "exposure": { - "edge": "Rising", - "enable": false, - "kind": "Input", - "line": 0 + "video": [ + { + "camera": { + "identifier": { + "id": [ + 0, + 1 + ], + "kind": "Camera", + "name": "simulated: radial sin" + }, + "settings": { + "binning": 1, + "exposure_time_us": 0.0, + "input_triggers": { + "acquisition_start": { + "edge": "Rising", + "enable": false, + "kind": "Input", + "line": 0 + }, + "exposure": { + "edge": "Rising", + "enable": false, + "kind": "Input", + "line": 0 + }, + "frame_start": { + "edge": "Rising", + "enable": false, + "kind": "Input", + "line": 0 + } + }, + "line_interval_us": 0.0, + "offset": [ + 0, + 0 + ], + "output_triggers": { + "exposure": { + "edge": "Rising", + "enable": false, + "kind": "Input", + "line": 0 + }, + "frame_start": { + "edge": "Rising", + "enable": false, + "kind": "Input", + "line": 0 + }, + "trigger_wait": { + "edge": "Rising", + "enable": false, + "kind": "Input", + "line": 0 + } + }, + "pixel_type": "U16", + "readout_direction": 
"Forward", + "shape": [ + 1, + 1 + ] + } }, - "frame_start": { - "edge": "Rising", - "enable": false, - "kind": "Input", - "line": 0 - }, - "trigger_wait": { - "edge": "Rising", - "enable": false, - "kind": "Input", - "line": 0 - } - }, - "pixel_type": "U16", - "readout_direction": "Forward", - "shape": [ - 1, - 1 - ] - } - }, - "frame_average_count": 0, - "max_frame_count": 18446744073709551615, - "storage": { - "identifier": { - "id": [ - 0, - 0 - ], - "kind": "NONE", - "name": "" - }, - "settings": { - "chunking": { - "max_bytes_per_chunk": 16777216, - "tile": { - "height": 0, - "planes": 0, - "width": 0 + "frame_average_count": 0, + "max_frame_count": 18446744073709551615, + "storage": { + "identifier": { + "id": [ + 0, + 5 + ], + "kind": "Storage", + "name": "trash" + }, + "settings": { + "acquisition_dimensions": [], + "enable_multiscale": false, + "external_metadata_json": "", + "filename": "", + "first_frame_id": 0, + "pixel_scale_um": [ + 0.0, + 0.0 + ] + }, + "write_delay_ms": 0.0 } - }, - "enable_multiscale": false, - "external_metadata_json": "", - "filename": "", - "first_frame_id": 0, - "pixel_scale_um": [ - 0.0, - 0.0 - ] }, - "write_delay_ms": 0.0 - } - }, - { - "camera": { - "identifier": { - "id": [ - 0, - 0 - ], - "kind": "NONE", - "name": "" - }, - "settings": { - "binning": 1, - "exposure_time_us": 0.0, - "input_triggers": { - "acquisition_start": { - "edge": "Rising", - "enable": false, - "kind": "Input", - "line": 0 - }, - "exposure": { - "edge": "Rising", - "enable": false, - "kind": "Input", - "line": 0 - }, - "frame_start": { - "edge": "Rising", - "enable": false, - "kind": "Input", - "line": 0 - } - }, - "line_interval_us": 0.0, - "offset": [ - 0, - 0 - ], - "output_triggers": { - "exposure": { - "edge": "Rising", - "enable": false, - "kind": "Input", - "line": 0 + { + "camera": { + "identifier": { + "id": [ + 0, + 0 + ], + "kind": "NONE", + "name": "" + }, + "settings": { + "binning": 1, + "exposure_time_us": 0.0, + "input_triggers": { + "acquisition_start": { + "edge": "Rising", + "enable": false, + "kind": "Input", + "line": 0 + }, + "exposure": { + "edge": "Rising", + "enable": false, + "kind": "Input", + "line": 0 + }, + "frame_start": { + "edge": "Rising", + "enable": false, + "kind": "Input", + "line": 0 + } + }, + "line_interval_us": 0.0, + "offset": [ + 0, + 0 + ], + "output_triggers": { + "exposure": { + "edge": "Rising", + "enable": false, + "kind": "Input", + "line": 0 + }, + "frame_start": { + "edge": "Rising", + "enable": false, + "kind": "Input", + "line": 0 + }, + "trigger_wait": { + "edge": "Rising", + "enable": false, + "kind": "Input", + "line": 0 + } + }, + "pixel_type": "U16", + "readout_direction": "Forward", + "shape": [ + 0, + 0 + ] + } }, - "frame_start": { - "edge": "Rising", - "enable": false, - "kind": "Input", - "line": 0 - }, - "trigger_wait": { - "edge": "Rising", - "enable": false, - "kind": "Input", - "line": 0 + "frame_average_count": 0, + "max_frame_count": 18446744073709551615, + "storage": { + "identifier": { + "id": [ + 0, + 0 + ], + "kind": "NONE", + "name": "" + }, + "settings": { + "acquisition_dimensions": [], + "enable_multiscale": false, + "external_metadata_json": "", + "filename": "", + "first_frame_id": 0, + "pixel_scale_um": [ + 0.0, + 0.0 + ] + }, + "write_delay_ms": 0.0 } - }, - "pixel_type": "U16", - "readout_direction": "Forward", - "shape": [ - 0, - 0 - ] } - }, - "frame_average_count": 0, - "max_frame_count": 18446744073709551615, - "storage": { - "identifier": { - "id": [ - 0, - 0 - ], - "kind": "NONE", - 
"name": "" - }, - "settings": { - "chunking": { - "max_bytes_per_chunk": 16777216, - "tile": { - "height": 0, - "planes": 0, - "width": 0 - } - }, - "enable_multiscale": false, - "external_metadata_json": "", - "filename": "", - "first_frame_id": 0, - "pixel_scale_um": [ - 0.0, - 0.0 - ] - }, - "write_delay_ms": 0.0 - } - } - ] + ] } From 482106a7940d05427b6d9ad03eb5b76a2cf9fe74 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Mon, 26 Aug 2024 14:53:03 -0700 Subject: [PATCH 10/36] Update get_started.md Updating with Alan's changes --- docs/get_started.md | 73 +++++++++++++++++++++------------------------ 1 file changed, 34 insertions(+), 39 deletions(-) diff --git a/docs/get_started.md b/docs/get_started.md index 3898653d..18350ff3 100644 --- a/docs/get_started.md +++ b/docs/get_started.md @@ -49,31 +49,27 @@ Acquire also supports the following output file formats: - [Tiff](https://en.wikipedia.org/wiki/TIFF) - [Zarr](https://zarr.dev/) -For testing and demonstration purposes, Acquire provides a few simulated cameras, as well as raw and trash output devices. -To see all the devices that Acquire supports, you can run the following script: - -```python -import acquire - -for device in acquire.Runtime().device_manager().devices(): - print(device) -``` +Acquire also provides a few simulated cameras, as well as raw byte storage and "trash," which discards all data written to it. ## Tutorial Prerequisites -We will be writing to and reading from the [Zarr format](https://zarr.readthedocs.io/en/stable/), using the [Dask library](https://www.dask.org/) to load and inspect the data, and visualizing the data using [napari](https://napari.org/stable/). +We will be streaming to [TIFF](http://bigtiff.org/), using [scikit-image](https://scikit-image.org/) to load and inspect the data, and visualizing the data using [napari](https://napari.org/stable/). -You can install these prerequisites with: +You can install the prerequisites with: ``` -python -m pip install dask "napari[all]" zarr +python -m pip install "napari[all]" scikit-image ``` ## Setup for Acquisition -We will use one of Acquire's simulated cameras to generate data and use Zarr for our output file format, which is called "storage device" in `Acquire`. +In Acquire parlance, the combination of a source (camera), filter, and sink (output) is called a **video stream**. +We will generate data using simulated cameras (our source) and output to TIFF on the filesystem (our sink). +(For this tutorial, we will not use a filter.) +Acquire supports up to two such video streams. -To begin, instantiate `Runtime` and `DeviceManager` and list the currently supported devices. +Sources are implemented as **Camera** devices, and sinks are implemented as **Storage** devices. +We'll start by seeing all the devices that Acquire supports: ```python import acquire @@ -84,22 +80,23 @@ dm = runtime.device_manager() for device in dm.devices(): print(device) ``` + The **runtime** is the main entry point in Acquire. Through the runtime, you configure your devices, start acquisition, check acquisition status, inspect data as it streams from your cameras, and terminate acquisition. Let's configure our devices now. To do this, we'll get a copy of the current runtime configuration. -We can update the configuration with identifiers from the the runtime's **device manager**, but these devices won't instantiate until we start acquisition. 
+We can update the configuration with identifiers from the runtime's **device manager**, but these devices won't be created until we start the acquisition. -Acquire supports up to two video streams. -These streams consist of a **source** (i.e., a camera), optionally a **filter**, and a **sink** (an output, like a Zarr dataset or a Tiff file). Before configuring the streams, grab the current configuration of the `Runtime` object with: ```python config = runtime.get_configuration() ``` -Video streams are configured independently. Configure the first video stream by setting properties on `config.video[0]` and the second video stream with `config.video[1]`. We'll be using simulated cameras, one generating a radial sine pattern and one generating a random pattern. +Video streams are configured independently. +Configure the first video stream by setting properties on `config.video[0]` and the second video stream with `config.video[1]`. +We'll be using simulated cameras, one generating a radial sine pattern and one generating a random pattern. ```python config.video[0].camera.identifier = dm.select(acquire.DeviceKind.Camera, "simulated: radial sin") @@ -137,28 +134,28 @@ config.video[1].camera.settings.shape = (1280, 720) ``` Now we'll configure each output, or sink device. -For both simulated cameras, we'll be writing to Zarr, a format which supports chunked arrays. +For both simulated cameras, we'll be writing to [TIFF](http://bigtiff.org/), a well-known format for storing image data. For now, we'll simply specify the output file name. For more information about setting up chunking, check out the tutorial [Chunking Data for Zarr Storage](./tutorials/chunked.md) ```python -config.video[0].storage.identifier = dm.select(acquire.DeviceKind.Storage, "Zarr") +config.video[0].storage.identifier = dm.select(acquire.DeviceKind.Storage, "Tiff") # what file or directory to write the data to -config.video[0].storage.settings.filename = "output1.zarr" +config.video[0].storage.settings.filename = "output1.tif" ``` ```python -config.video[1].storage.identifier = dm.select(acquire.DeviceKind.Storage, "Zarr") +config.video[1].storage.identifier = dm.select(acquire.DeviceKind.Storage, "Tiff") # what file or directory to write the data to -config.video[1].storage.settings.filename = "output2.zarr" +config.video[1].storage.settings.filename = "output2.tif" ``` Finally, let's specify how many frames to generate for each camera before stopping our simulated acquisition. We also need to register our configuration with the runtime using the `set_configuration` method. -If you want to let the runtime just keep acquiring effectively forever, you can set `max_frame_count` to `2**64 - 1`. +If you want to let the runtime acquire effectively forever, you can set `max_frame_count` to `2**64 - 1`. 
```python config.video[0].max_frame_count = 100 # collect 100 frames @@ -172,19 +169,15 @@ config = runtime.set_configuration(config) If you run this tutorial multiple times, you can clear output from previous runs with: ```python - import os - import shutil - - if config.video[0].storage.settings.filename in os.listdir("."): - shutil.rmtree(config.video[0].storage.settings.filename) + from pathlib import Path - if config.video[1].storage.settings.filename in os.listdir("."): - shutil.rmtree(config.video[1].storage.settings.filename) + Path(config.video[0].storage.settings.uri).unlink(missing_ok=True) + Path(config.video[1].storage.settings.uri).unlink(missing_ok=True) ``` ## Acquire Data -To start aquiring data: +To start acquiring data: ```python runtime.start() @@ -192,7 +185,6 @@ runtime.start() Acquisition happens in a separate thread, so at any point we can check on the status by calling the `get_state` method. - ```python runtime.get_state() ``` @@ -204,19 +196,17 @@ This method will wait until you've reached the number of frames to collect speci runtime.stop() ``` -## Visualizing the Data with napari +## Visualizing the data with napari Let's take a look at what we've written. We'll load each Zarr dataset as a Dask array and inspect its dimensions, then we'll use napari to view it. ```python -import dask.array as da +from skimage.io import imread import napari -data1 = da.from_zarr(config.video[0].storage.settings.filename, component="0") -data1 - -data2 = da.from_zarr(config.video[1].storage.settings.filename, component="0") +data1 = imread(config.video[0].storage.settings.filename) +data2 = imread(config.video[1].storage.settings.filename) viewer1 = napari.view_image(data1) @@ -226,3 +216,8 @@ viewer2 = napari.view_image(data2) ## Conclusion For more examples of using Acquire, check out our [tutorials page](tutorials/index.md). + +References: +[Tiff]: https://en.wikipedia.org/wiki/TIFF +[scikit-image]: https://scikit-image.org/ +[napari]: https://napari.org/stable/ From 37f4bbedb8570f241fe6707ca9c7ae3d55deedde Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Mon, 26 Aug 2024 15:03:10 -0700 Subject: [PATCH 11/36] Update chunked.md Adding Alan's edits --- docs/tutorials/zarr/chunked.md | 128 ++++++++++++++++++++++++++------- 1 file changed, 104 insertions(+), 24 deletions(-) diff --git a/docs/tutorials/zarr/chunked.md b/docs/tutorials/zarr/chunked.md index 69f1cfc6..c89c056d 100644 --- a/docs/tutorials/zarr/chunked.md +++ b/docs/tutorials/zarr/chunked.md @@ -1,10 +1,12 @@ -# Chunking Data for Zarr Storage +# Configuring Zarr storage with chunking This tutorial will provide an example of writing chunked data to a Zarr storage device. -Zarr has additional capabilities relative to the basic storage devices, namely _chunking_, _compression_, and _multiscale storage_. To enable _chunking_, set the attributes in an instance of the `ChunkingProperties` class. You can learn more about the Zarr capabilities in `Acquire` in [the Acquire Zarr documentation](https://github.com/acquire-project/acquire-driver-zarr/blob/main/README.md). +Zarr has additional capabilities relative to the basic storage devices, namely _chunking_, _sharding_ (in the case of Zarr V3)_, _compression_, and _multiscale storage_. +To enable _chunking_, set the attributes in an instance of the `ChunkingProperties` class. +You can learn more about the Zarr capabilities in Acquire in [the Acquire Zarr documentation](https://github.com/acquire-project/acquire-driver-zarr). 
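If it helps to see which of these Zarr flavors are registered with the runtime, here is a minimal sketch (it assumes only the storage devices that ship with Acquire are present; device names may vary between versions):

```python
import acquire

runtime = acquire.Runtime()
dm = runtime.device_manager()

# Compressed variants (Blosc zstd/lz4) and Zarr V3 show up as separate
# storage devices, so filtering on the name gives a quick overview.
for device in dm.devices():
    if device.kind == acquire.DeviceKind.Storage and "Zarr" in device.name:
        print(device.name)
```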
-## Configure `Runtime` +## Configure the acquisition To start, we'll create a `Runtime` object and configure the streaming process, selecting `Zarr` as the storage device to enable chunking the data. ```python @@ -22,38 +24,113 @@ config = runtime.get_configuration() # Select the radial sine simulated camera as the video source config.video[0].camera.identifier = dm.select(acquire.DeviceKind.Camera, "simulated: radial sin") -# Set the storage to Zarr to take advantage of chunking +# Use a storage device that supports chunking, in this case, Zarr config.video[0].storage.identifier = dm.select(acquire.DeviceKind.Storage, "Zarr") -# Set the time for collecting data for a each frame -config.video[0].camera.settings.exposure_time_us = 5e4 # 50 ms +# Delay between each frame +config.video[0].camera.settings.exposure_time_us = 7e4 # 70 ms -# size of image region of interest on the camera (x, y) +# Size of image region of interest on the camera (x, y) config.video[0].camera.settings.shape = (1920, 1080) -# specify the pixel datatype as a uint8 +# Specify the pixel datatype as uint8 config.video[0].camera.settings.pixel_type = acquire.SampleType.U8 -# Set the max frame count -config.video[0].max_frame_count = 10 # collect 10 frames - # Set the output file to out.zarr config.video[0].storage.settings.filename = "out.zarr" ``` -Below we'll configure the chunking specific settings and update all settings with the `set_configuration` method. + +### Storage dimensions + +Because Zarr supports n-dimensional arrays, we need to describe how the data we stream should be interpreted. +We do this by specifying *storage dimensions*, which correspond to the dimensionality of the output array. + +Acquire requires at least 3 dimensions: frame width, frame height, and an append dimension. +The first 2 dimensions are required and may be followed by optional internal dimensions. +The final "append" dimension is also required. + +Each dimension must have a type, for example, space, channel, time, or other. +This tutorial will use 5 dimensions following the [OME-NGFF specification](https://ngff.openmicroscopy.org/latest/#multiscale-md). +That is, we will use TCZYX order, with T corresponding to time, C to channel, Z to depth, Y to height, and X to width. ```python -# Chunk size may need to be optimized for each acquisition. -# See Zarr documentation for further guidance: -# https://zarr.readthedocs.io/en/stable/tutorial.html#chunk-optimizations -config.video[0].storage.settings.chunking.max_bytes_per_chunk = 32 * 2**20 # 32 MB +dimension_x = acquire.StorageDimension( + name="x", + kind="Space", + array_size_px=1920, + chunk_size_px=960 +) + +dimension_y = acquire.StorageDimension( + name="y", + kind="Space", + array_size_px=1080, + chunk_size_px=540 +) + +dimension_z = acquire.StorageDimension( + name="z", + kind="Space", + array_size_px=10, + chunk_size_px=5 +) + +dimension_c = acquire.StorageDimension( + name="c", + kind="Channel", + array_size_px=3, + chunk_size_px=1 +) + +dimension_t = acquire.StorageDimension( + name="t", + kind="Time", + array_size_px=0, + chunk_size_px=10 +) + +config.video[0].storage.settings.acquisition_dimensions = [ + dimension_x, + dimension_y, + dimension_z, + dimension_c, + dimension_t +] +``` + +Notice that each `StorageDimension` object has several attributes. +- `name` is the name of the dimension. It is used to identify the dimension in the output array and must be unique. +- `kind` is the type of dimension. It can be `Space`, `Channel`, `Time`, or `Other`. 
+- `array_size_px` is the size of the dimension in pixels. It is the total size of the dimension. +- `chunk_size_px` is the size of the chunks in pixels. It is the size of the chunks in which the data will be stored. -# x, y dimensions of each chunk -# 1/2 of the width and height of the image, generating 4 chunks -config.video[0].storage.settings.chunking.tile.width = 1920 // 2 -config.video[0].storage.settings.chunking.tile.height = 1080 // 2 +There is an additional field, `shard_size_chunks`, which is used to specify the number of chunks per shard, but it is +only used in Zarr V3, which we will discuss in a future tutorial. + +The order in which we specify the dimensions is important. +The order of the dimensions in the `acquisition_dimensions` list determines the order of the dimensions in the output array. +In this case, the order is `x`, `y`, `z`, `c`, `t`, which corresponds to the order `TCZYX`. + +Notice that the first two dimensions' `array_size_px` is the same as the camera's shape. +This is because the first two dimensions correspond to the spatial dimensions of the camera. + +Another thing to notice is that the final dimension, `t`, has an `array_size_px` of 0. +This is because the size of the append dimension is not known in advance. +At most, we can say that the size of the append dimension is no larger than the number of frames we want to collect, but +because acquisition may terminate at any point before reaching the maximum frame count, we set the `array_size_px` to 0. + +The number of frames to collect will now depend on the sizes of the internal dimensions. +For our example, to fill just one chunk of the `c` dimension, we will need to collect +`dimension_z.array_size_px * dimension_t.chunk_size_px` frames, or in other words, 10 frames. +To fill a single chunk of the `t` dimension, we will need to collect +`dimension_z.array_size_px * dimension_c.array_size_px * dimension_t.chunk_size_px` frames, or in other words, 300 +frames. + +Below we'll configure the max frame count and update all settings with the `set_configuration` method. + +```python +config.video[0].max_frame_count = dimension_z.array_size_px * dimension_c.array_size_px * dimension_t.chunk_size_px # 300 -# Update the configuration with the chosen parameters config = runtime.set_configuration(config) ``` @@ -78,13 +155,16 @@ group = zarr.open(config.video[0].storage.settings.filename) assert len(group) == 1 # inspect the characteristics of the data -group["0"] +print(group["0"]) ``` The output will be: ``` - + ``` -As expected, we have only 1 top level directory, corresponding to the single array in the group. We would expect more than 1 array only if we were writing [multiscale data](multiscale.md). The overall array shape is (10, 1, 1080, 1920), corresponding to 10 frames, 1 channel, and a height and width of 1080 and 1920, respectively, per frame. +As expected, we have only 1 top level directory, corresponding to the single array in the group. +We would expect more than 1 array only if we were writing [multiscale data](multiscale.md). +The overall array shape is (10, 1, 1080, 1920), corresponding to 10 frames, 1 channel, and a height and width of 1080 +and 1920, respectively, per frame. 
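To double-check that these settings took effect on disk, a small sketch like the following can be run after acquisition (it assumes the `zarr` Python package and the `out.zarr` output written above; attribute names follow zarr-python's API):

```python
import zarr

# Open the dataset written above and inspect how it was chunked.
group = zarr.open("out.zarr")
array = group["0"]

print(array.shape)   # overall array shape
print(array.chunks)  # chunk shape, which should reflect the chunk_size_px values
```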
[Download this tutorial as a Python script](chunked.py){ .md-button .md-button-center } From 10d960a39730d8b0220586a1b704f0d1d90d8b9e Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Mon, 26 Aug 2024 15:07:48 -0700 Subject: [PATCH 12/36] Update compressed.md --- docs/tutorials/zarr/compressed.md | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/docs/tutorials/zarr/compressed.md b/docs/tutorials/zarr/compressed.md index f79b4cbc..1504f5b5 100644 --- a/docs/tutorials/zarr/compressed.md +++ b/docs/tutorials/zarr/compressed.md @@ -28,14 +28,15 @@ config.video[0].camera.identifier = dm.select(acquire.DeviceKind.Camera, "simula config.video[0].storage.identifier = dm.select(acquire.DeviceKind.Storage, "ZarrBlosc1ZstdByteShuffle") # Set the time for collecting data for a each frame -config.video[0].camera.settings.exposure_time_us = 5e4 # 50 ms +config.video[0].camera.settings.exposure_time_us = 7e4 # 70 ms +# Set the size in pixels of the image region of interest on the camera config.video[0].camera.settings.shape = (1024, 768) # Set the max frame count config.video[0].max_frame_count = 100 # collect 100 frames -# Set the output file to out.zarr +# Set the output location to out.zarr config.video[0].storage.settings.filename = "out.zarr" # Update the configuration with the chosen parameters @@ -52,10 +53,10 @@ runtime.start() runtime.stop() ``` -We'll use the [Zarr Python package](https://zarr.readthedocs.io/en/stable/) to read the data in `out.zarr` file. +We'll use the [zarr-python package](https://zarr.readthedocs.io/en/stable/) to read the data in `out.zarr` directory. ```python -# We'll utilize the Zarr python package to read the data +# We'll utilize the zarr-python package to read the data import zarr # load from Zarr From 853d0e52979a0782471544a2e921f20e3d8a625c Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Mon, 26 Aug 2024 15:08:22 -0700 Subject: [PATCH 13/36] Update configure.md --- docs/tutorials/setup_acquisition/configure.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/tutorials/setup_acquisition/configure.md b/docs/tutorials/setup_acquisition/configure.md index 2d2a87a0..239eeca6 100644 --- a/docs/tutorials/setup_acquisition/configure.md +++ b/docs/tutorials/setup_acquisition/configure.md @@ -58,8 +58,8 @@ config.video[0].camera.settings.exposure_time_us = 5e4 # 50 ms # (x, y) size of the image in pixels config.video[0].camera.settings.shape = (1024, 768) -# Specify the pixel type as Uint32 -config.video[0].camera.settings.pixel_type = acquire.SampleType.U32 +# Specify the pixel type as uint16 +config.video[0].camera.settings.pixel_type = acquire.SampleType.U16 ``` ## Configure `Storage` From df5282fd3887401f038876a748b69db93bd14ea4 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Mon, 26 Aug 2024 15:24:13 -0700 Subject: [PATCH 14/36] Update multiscale.md Updating multiscale --- docs/tutorials/zarr/multiscale.md | 71 ++++++++++++++++++++++++------- 1 file changed, 55 insertions(+), 16 deletions(-) diff --git a/docs/tutorials/zarr/multiscale.md b/docs/tutorials/zarr/multiscale.md index ef189889..f87378e5 100644 --- a/docs/tutorials/zarr/multiscale.md +++ b/docs/tutorials/zarr/multiscale.md @@ -1,8 +1,8 @@ -# Multiscale Data Acqusition +# Configuring Zarr multiscale storage This tutorial will provide an example of writing multiscale data to a Zarr file. 
-Zarr has additional capabilities relative to Acquire's basic storage devices, namely _chunking_, _compression_, and _multiscale storage_. To enable _chunking_ and _multiscale storage_, set those attributes in instances of the `ChunkingProperties` and `StorageProperties` classes, respectively. You can learn more about the Zarr capabilities in `Acquire` in [the Acquire Zarr documentation](https://github.com/acquire-project/acquire-driver-zarr/blob/main/README.md). +Zarr has additional capabilities relative to the basic storage devices, namely _chunking_, _sharding_ (in the case of Zarr V3)_, _compression_, and _multiscale storage_. To enable _multiscale storage_, set the `enable_multiscale` attribute of the `StorageProperties` class to true. You can learn more about the Zarr capabilities in `Acquire` in [the Acquire Zarr documentation](https://github.com/acquire-project/acquire-driver-zarr). ## Configure `Runtime` To start, we'll create a `Runtime` object and begin to configure the streaming process, selecting `Zarr` as the storage device so that writing multiscale data is possible. @@ -26,14 +26,11 @@ config.video[0].camera.identifier = dm.select(acquire.DeviceKind.Camera, "simula config.video[0].storage.identifier = dm.select(acquire.DeviceKind.Storage, "Zarr") # Set the time for collecting data for a each frame -config.video[0].camera.settings.exposure_time_us = 5e4 # 50 ms +config.video[0].camera.settings.exposure_time_us = 7e4 # 70 ms # Set the size of image region of interest on the camera (x, y) config.video[0].camera.settings.shape = (1920, 1080) -# Set the max frame count -config.video[0].max_frame_count = 5 # collect 5 frames - # Set the image data type as a Uint8 config.video[0].camera.settings.pixel_type = acquire.SampleType.U8 @@ -44,19 +41,61 @@ config.video[0].storage.settings.pixel_scale_um = (1, 1) # 1 micron by 1 micron config.video[0].storage.settings.filename = "out.zarr" ``` -To complete configuration, we'll configure the multiscale specific settings and update all settings with the `set_configuration` method. +To complete configuration, we'll configure the chunking and multiscale specific settings and update all settings with the `set_configuration` method. For a more detailed explanation of configuring Zarr storage with chunking, check out [this tutorial](./chunked.md). + +To start, we'll configure the `acquisition_dimensions` attribute of the `StorageProperties` class. `acquisition_dimensions` is a list of `StorageDimension` objects, one for each acquisition dimension. ```python -# Chunk size may need to be optimized for each acquisition. 
-# See Zarr documentation for further guidance: -# https://zarr.readthedocs.io/en/stable/tutorial.html#chunk-optimizations -config.video[0].storage.settings.chunking.max_bytes_per_chunk = 16 * 2**20 # 16 MB +dimension_x = acquire.StorageDimension( + name="x", + kind="Space", + array_size_px=1920, + chunk_size_px=960 +) + +dimension_y = acquire.StorageDimension( + name="y", + kind="Space", + array_size_px=1080, + chunk_size_px=540 +) + +dimension_z = acquire.StorageDimension( + name="z", + kind="Space", + array_size_px=10, + chunk_size_px=5 +) + +dimension_c = acquire.StorageDimension( + name="c", + kind="Channel", + array_size_px=3, + chunk_size_px=1 +) + +dimension_t = acquire.StorageDimension( + name="t", + kind="Time", + array_size_px=0, + chunk_size_px=10 +) + +config.video[0].storage.settings.acquisition_dimensions = [ + dimension_x, + dimension_y, + dimension_z, + dimension_c, + dimension_t +] + +# Set the max frame count based on the storage dimensions +config.video[0].max_frame_count = dimension_z.array_size_px * dimension_c.array_size_px * dimension_t.chunk_size_px # 300 +``` -# x, y dimensions of each chunk -# 1/3 of the width and height of the image, generating 9 chunks -config.video[0].storage.settings.chunking.tile.width = (config.video[0].camera.settings.shape[0] // 3) -config.video[0].storage.settings.chunking.tile.height = (config.video[0].camera.settings.shape[1] // 3) +Finally, turn on multiscale and update all the settings. +```python # turn on multiscale mode config.video[0].storage.settings.enable_multiscale = True @@ -84,7 +123,7 @@ group = zarr.open("out.zarr") With multiscale mode enabled, an image pyramid will be formed by rescaling the data by a factor of 2 progressively until the rescaled image is smaller than the specified zarr chunk size in both dimensions. In this example, the original image dimensions are (1920, 1080), and we chunked the data using tiles 1/3 of the size of the image, namely (640, 360). To illustrate this point, we'll inspect the sizes of the various levels in the multiscale data and compare it to our specified chunk size. ```python -group["0"], group["1"], group["2"] +print(group["0"], group["1"], group["2"]) ``` The output will be: From 5bccad4fd5357783726fad87604c685a3042c302 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Mon, 26 Aug 2024 15:27:32 -0700 Subject: [PATCH 15/36] Update chunked.md --- docs/tutorials/zarr/chunked.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tutorials/zarr/chunked.md b/docs/tutorials/zarr/chunked.md index c89c056d..35a202db 100644 --- a/docs/tutorials/zarr/chunked.md +++ b/docs/tutorials/zarr/chunked.md @@ -3,7 +3,7 @@ This tutorial will provide an example of writing chunked data to a Zarr storage device. Zarr has additional capabilities relative to the basic storage devices, namely _chunking_, _sharding_ (in the case of Zarr V3)_, _compression_, and _multiscale storage_. -To enable _chunking_, set the attributes in an instance of the `ChunkingProperties` class. +To enable _chunking_, in the `StorageDimension` class, set the `chunk_size_px` attribute, which is size of a chunk along this dimension in pixels, to a number greater than 1 for each acquisition dimension. You can learn more about the Zarr capabilities in Acquire in [the Acquire Zarr documentation](https://github.com/acquire-project/acquire-driver-zarr). 
## Configure the acquisition From b0e8f7936c24a36aedd7e131873dc5fa72dcd046 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Mon, 26 Aug 2024 15:42:41 -0700 Subject: [PATCH 16/36] Update chunked.md --- docs/tutorials/zarr/chunked.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tutorials/zarr/chunked.md b/docs/tutorials/zarr/chunked.md index 35a202db..fb531e6f 100644 --- a/docs/tutorials/zarr/chunked.md +++ b/docs/tutorials/zarr/chunked.md @@ -160,7 +160,7 @@ print(group["0"]) The output will be: ``` - + ``` As expected, we have only 1 top level directory, corresponding to the single array in the group. We would expect more than 1 array only if we were writing [multiscale data](multiscale.md). From e4e1912e099ffc2b5189473f89cf2c62af184971 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Mon, 26 Aug 2024 16:04:38 -0700 Subject: [PATCH 17/36] Update framedata.md --- docs/tutorials/access_data/framedata.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/tutorials/access_data/framedata.md b/docs/tutorials/access_data/framedata.md index b12743c5..b88ab509 100644 --- a/docs/tutorials/access_data/framedata.md +++ b/docs/tutorials/access_data/framedata.md @@ -51,8 +51,8 @@ runtime.start() time.sleep(0.5) # grab the packet of data available on disk for video stream 0. -# This is an AvailableData object. -available_data = runtime.get_available_data(0) +# This is an AvailableData object. Note that the get_available_data returns AvailableDataContext, so we use the __enter__ method to return an AvailableData object. +available_data = runtime.get_available_data(0).__enter__() ``` There may not be data available, in which case our variable `available_data` would be `None`. To avoid errors associated with this circumstance, we'll only grab data if `available_data` is not `None`. 
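A sketch of that same guard written with a `with` block, which releases the packet automatically (this assumes the runtime and configuration set up earlier in the tutorial):

```python
import time

# Give the stream a moment to produce frames before asking for data.
time.sleep(0.5)

# The context manager hands back the AvailableData packet (or None if nothing
# is ready yet) and releases it when the block exits.
with runtime.get_available_data(0) as available_data:
    if available_data is not None:
        video_frames = list(available_data.frames())
        print(f"got {len(video_frames)} frames")
```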
From 9fcf0637bfe1c5a79005f9357dea50cd4939d764 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Mon, 26 Aug 2024 16:07:41 -0700 Subject: [PATCH 18/36] Update livestream_napari.py Updating for AvailableDataContext --- docs/examples/livestream_napari.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/examples/livestream_napari.py b/docs/examples/livestream_napari.py index ef4b1f9e..589cf698 100644 --- a/docs/examples/livestream_napari.py +++ b/docs/examples/livestream_napari.py @@ -64,7 +64,7 @@ def is_not_done() -> bool: def next_frame(): #-> Optional[npt.NDArray[Any]]: """Get the next frame from the current stream.""" if nframes[stream_id] < config.video[stream_id].max_frame_count: - if packet := runtime.get_available_data(stream_id): + if packet := runtime.get_available_data(stream_id).__enter__(): n = packet.get_frame_count() nframes[stream_id] += n f = next(packet.frames()) From 3b3043bb71f543d0649b07368c1c3be323d4c6de Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Mon, 26 Aug 2024 16:30:11 -0700 Subject: [PATCH 19/36] Update multiscale.md --- docs/tutorials/zarr/multiscale.md | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/docs/tutorials/zarr/multiscale.md b/docs/tutorials/zarr/multiscale.md index f87378e5..0d2156d7 100644 --- a/docs/tutorials/zarr/multiscale.md +++ b/docs/tutorials/zarr/multiscale.md @@ -50,14 +50,14 @@ dimension_x = acquire.StorageDimension( name="x", kind="Space", array_size_px=1920, - chunk_size_px=960 + chunk_size_px=640 ) dimension_y = acquire.StorageDimension( name="y", kind="Space", array_size_px=1080, - chunk_size_px=540 + chunk_size_px=360 ) dimension_z = acquire.StorageDimension( @@ -123,12 +123,16 @@ group = zarr.open("out.zarr") With multiscale mode enabled, an image pyramid will be formed by rescaling the data by a factor of 2 progressively until the rescaled image is smaller than the specified zarr chunk size in both dimensions. In this example, the original image dimensions are (1920, 1080), and we chunked the data using tiles 1/3 of the size of the image, namely (640, 360). To illustrate this point, we'll inspect the sizes of the various levels in the multiscale data and compare it to our specified chunk size. 
```python -print(group["0"], group["1"], group["2"]) +print(group["0"]) ``` The output will be: ``` + +``` + +TO BE UPDATED: (, , ) From 78f5b5cf364bdf2ab9c7eb611cdfb43564741b90 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Mon, 26 Aug 2024 16:30:36 -0700 Subject: [PATCH 20/36] Update multiscale.md --- docs/tutorials/zarr/multiscale.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/tutorials/zarr/multiscale.md b/docs/tutorials/zarr/multiscale.md index 0d2156d7..9155ecff 100644 --- a/docs/tutorials/zarr/multiscale.md +++ b/docs/tutorials/zarr/multiscale.md @@ -133,6 +133,7 @@ The output will be: ``` TO BE UPDATED: +``` (, , ) From eab0bc36fa37db84688dc24120c6d356ebf92144 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Wed, 28 Aug 2024 11:17:04 -0700 Subject: [PATCH 21/36] Update framedata.md --- docs/tutorials/access_data/framedata.md | 87 +++++++++---------------- 1 file changed, 32 insertions(+), 55 deletions(-) diff --git a/docs/tutorials/access_data/framedata.md b/docs/tutorials/access_data/framedata.md index b88ab509..eb3519df 100644 --- a/docs/tutorials/access_data/framedata.md +++ b/docs/tutorials/access_data/framedata.md @@ -27,6 +27,7 @@ config.video[0].storage.identifier = dm.select(acquire.DeviceKind.Storage, "Tras # Set the time for collecting data for a each frame config.video[0].camera.settings.exposure_time_us = 5e4 # 50 ms +# Set the shape of the region of interest on the camera chip config.video[0].camera.settings.shape = (1024, 768) # Set the max frame count to 2**(64-1) the largest number supported by Uint64 for essentially infinite acquisition @@ -37,8 +38,13 @@ config = runtime.set_configuration(config) ``` ## Working with `AvailableData` objects -During Acquisition, the `AvailableData` object is the streaming interface, and this class has a `frames` method which iterates over the `VideoFrame` objects in `AvailableData`. Once we start acquisition, we'll utilize this iterator method to list the frames. To increase the likelihood of `AvailableData` containing data, we'll utilize the `time` python package to introduce a delay before we create our `AvailableData` object +During Acquisition, the `AvailableData` object is the streaming interface. By calling `get_available_data`, you create an `AvailableDataContext` object which manages memory by deleting the `AvailableData` object when it is no longer in use to free up memory. We can create an `AvailableData` object by using a `with` statement, and work with the `AvailableData` object while it exists inside of the `with` loop. +There may not be data available, in which case our `AvailableData` object would be `None`. To increase the likelihood of `AvailableData` containing data, we'll utilize the `time` python package to introduce a delay before we create our `AvailableData` object. + +If it is not `None`, we'll use the `AvailableData` `frames` method, which iterates over the `VideoFrame` objects in `AvailableData`, and the python `list` method to create a variable `video_frames`, a list of the `VideoFrame` objects one for each stream. + +`VideoFrame` has a `data` method which provides the frame as an `NDArray`. The shape of this NDArray corresponds to the image dimensions used internally by Acquire namely [planes, height, width, channels]. Since we have a single channel, both the first and the last dimensions will be 1. The interior dimensions are height and width, respectively. 
We can use the `numpy.squeeze` method to grab the desired NDArray image data since the other dimensions are 1. This is equivalent to `image = first_frame[0][:, :, 0]`. ```python # package for introducing time delays @@ -51,66 +57,37 @@ runtime.start() time.sleep(0.5) # grab the packet of data available on disk for video stream 0. -# This is an AvailableData object. Note that the get_available_data returns AvailableDataContext, so we use the __enter__ method to return an AvailableData object. -available_data = runtime.get_available_data(0).__enter__() -``` - -There may not be data available, in which case our variable `available_data` would be `None`. To avoid errors associated with this circumstance, we'll only grab data if `available_data` is not `None`. - -Once `get_available_data()` is called the `AvailableData` object will be locked into memory, so the circular buffer that stores the available data will overflow if `AvailableData` isn’t released, so we'll delete the object with `del available_data` if there is no data available. - - -```python -# NoneType if there is no available data. -# We can only grab frames if data is available. -if available_data is not None: - - - # frames is an iterator over available_data - # we'll use this iterator to make a list of the frames - video_frames = list(available_data.frames()) - -else: - # delete the available_data variable - # if there is no data in the packet to free up RAM - del available_data +# This is an AvailableData object. +with runtime.get_available_data(0) as available_data: + + # NoneType if there is no available data. + # We can only grab frames if data is available. + if available_data is not None: + + # frames is an iterator over available_data + # we'll use this iterator to make a list of the frames + video_frames = list(available_data.frames()) + + # grab the first VideoStream object in frames and convert it to an NDArray + first_frame = video_frames[0].data() + + #inspect the dimensions of the first_frame + print(first_frame.shape) + + # Selecting the image data. Equivalent to image = first_frame[0][:, :, 0] + image = first_frame.squeeze() + + # inspect the dimensions of the squeezed first_frame + print(image.shape) +# stop runtime +runtime.stop() ``` -`video_frames` is a list with each element being an instance of the `VideoFrame` class. `VideoFrame` has a `data` method which provides the frame as an `NDArray`. The shape of this NDArray corresponds to the image dimensions used internally by Acquire namely [planes, height, width, channels]. Since we have a single channel, both the first and the last dimensions will be 1. The interior dimensions are height and width, respectively. - - -```python -# grab the first VideoStream object in frames and convert it to an NDArray -first_frame = video_frames[0].data() -print(first_frame.shape) -``` -Output: +The output will be: ``` (1, 768, 1024, 1) -``` - -We can use the `numpy.squeeze` method to grab the desired NDArray image data from `first_frame` since the other dimensions are 1. This is equivalent to `image = first_frame[0][:, :, 0]`. - -```python -image = first_frame.squeeze() - - -print(image.shape) -``` -Output: -``` (768, 1024) ``` -Finally, delete the `available_data` to unlock the region in the circular buffer. 
- - -```python -# delete the available_data to free up disk space -del available_data - -# stop runtime -runtime.stop() -``` [Download this tutorial as a Python script](framedata.py){ .md-button .md-button-center } From 67f132608efe0b5a3b0dbaf608d2c5ceb438e656 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Wed, 28 Aug 2024 11:26:08 -0700 Subject: [PATCH 22/36] Update livestream_napari.py --- docs/examples/livestream_napari.py | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/docs/examples/livestream_napari.py b/docs/examples/livestream_napari.py index 589cf698..0b8d461f 100644 --- a/docs/examples/livestream_napari.py +++ b/docs/examples/livestream_napari.py @@ -64,11 +64,12 @@ def is_not_done() -> bool: def next_frame(): #-> Optional[npt.NDArray[Any]]: """Get the next frame from the current stream.""" if nframes[stream_id] < config.video[stream_id].max_frame_count: - if packet := runtime.get_available_data(stream_id).__enter__(): - n = packet.get_frame_count() - nframes[stream_id] += n - f = next(packet.frames()) - return f.data().squeeze().copy() + with runtime.get_available_data(stream_id) as data: + if packet := data: + n = packet.get_frame_count() + nframes[stream_id] += n + f = next(packet.frames()) + return f.data().squeeze().copy() return None stream = 1 From f96a12a41e26247bb5449966dd0f82d7997f189b Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Thu, 5 Sep 2024 11:23:34 -0700 Subject: [PATCH 23/36] Update get_started.md --- docs/get_started.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/get_started.md b/docs/get_started.md index 18350ff3..eba7d64b 100644 --- a/docs/get_started.md +++ b/docs/get_started.md @@ -135,7 +135,7 @@ config.video[1].camera.settings.shape = (1280, 720) Now we'll configure each output, or sink device. For both simulated cameras, we'll be writing to [TIFF](http://bigtiff.org/), a well-known format for storing image data. -For now, we'll simply specify the output file name. For more information about setting up chunking, check out the tutorial [Chunking Data for Zarr Storage](./tutorials/chunked.md) +For now, we'll simply specify the output file name. ```python config.video[0].storage.identifier = dm.select(acquire.DeviceKind.Storage, "Tiff") From ea562cb8d90c04d22ba5e55f0b35f5ead053b6c0 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Thu, 5 Sep 2024 11:30:20 -0700 Subject: [PATCH 24/36] Update framedata.md --- docs/tutorials/access_data/framedata.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/tutorials/access_data/framedata.md b/docs/tutorials/access_data/framedata.md index eb3519df..b9f2ee0b 100644 --- a/docs/tutorials/access_data/framedata.md +++ b/docs/tutorials/access_data/framedata.md @@ -38,11 +38,11 @@ config = runtime.set_configuration(config) ``` ## Working with `AvailableData` objects -During Acquisition, the `AvailableData` object is the streaming interface. By calling `get_available_data`, you create an `AvailableDataContext` object which manages memory by deleting the `AvailableData` object when it is no longer in use to free up memory. We can create an `AvailableData` object by using a `with` statement, and work with the `AvailableData` object while it exists inside of the `with` loop. +During Acquisition, the `AvailableData` object is the streaming interface. 
We can create an `AvailableData` object by calling `get_available_data` in a `with` statement, and work with the `AvailableData` object while it exists inside of the `with` block. The data is invalidated after exiting the `with` block, so make a copy of the `AvailableData` object to work with the data outside of the `with` block. In this example, we'll simply use the `AvailableData` object inside of the `with` block. -There may not be data available, in which case our `AvailableData` object would be `None`. To increase the likelihood of `AvailableData` containing data, we'll utilize the `time` python package to introduce a delay before we create our `AvailableData` object. +There may not be data available, in which case our `AvailableData` object would return 0. To increase the likelihood of `AvailableData` containing data, we'll utilize the `time` python package to introduce a delay before we create our `AvailableData` object. -If it is not `None`, we'll use the `AvailableData` `frames` method, which iterates over the `VideoFrame` objects in `AvailableData`, and the python `list` method to create a variable `video_frames`, a list of the `VideoFrame` objects one for each stream. +If there is data, we'll use the `AvailableData` `frames` method, which iterates over the `VideoFrame` objects in `AvailableData`, and the python `list` method to create a variable `video_frames`, a list of the `VideoFrame` objects one for each stream. `VideoFrame` has a `data` method which provides the frame as an `NDArray`. The shape of this NDArray corresponds to the image dimensions used internally by Acquire namely [planes, height, width, channels]. Since we have a single channel, both the first and the last dimensions will be 1. The interior dimensions are height and width, respectively. We can use the `numpy.squeeze` method to grab the desired NDArray image data since the other dimensions are 1. This is equivalent to `image = first_frame[0][:, :, 0]`. From c49a7a557084806703f9990e49c7bdc541a89483 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Thu, 5 Sep 2024 11:30:36 -0700 Subject: [PATCH 25/36] Update docs/tutorials/access_data/framedata.md Co-authored-by: Alan Liddell --- docs/tutorials/access_data/framedata.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tutorials/access_data/framedata.md b/docs/tutorials/access_data/framedata.md index b9f2ee0b..fb9cdd5f 100644 --- a/docs/tutorials/access_data/framedata.md +++ b/docs/tutorials/access_data/framedata.md @@ -62,7 +62,7 @@ with runtime.get_available_data(0) as available_data: # NoneType if there is no available data. # We can only grab frames if data is available.
- if available_data is not None: + if available_data.get_frame_count() > 0: # frames is an iterator over available_data # we'll use this iterator to make a list of the frames From fe7a917d6c98fcce970f58e30faf194e34fcec18 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Thu, 5 Sep 2024 12:48:17 -0700 Subject: [PATCH 26/36] Update docs/tutorials/setup_acquisition/configure.md Co-authored-by: Alan Liddell --- docs/tutorials/setup_acquisition/configure.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tutorials/setup_acquisition/configure.md b/docs/tutorials/setup_acquisition/configure.md index 239eeca6..88e9c136 100644 --- a/docs/tutorials/setup_acquisition/configure.md +++ b/docs/tutorials/setup_acquisition/configure.md @@ -65,7 +65,7 @@ config.video[0].camera.settings.pixel_type = acquire.SampleType.U16 ## Configure `Storage` `Storage` objects have 2 attributes, `settings`, a `StorageProperties` object, and an optional attribute `identifier`, which is an instance of the `DeviceIdentifier` class described above. -`StorageProperties` has 2 attributes `external_metadata_json` and `filename` which are strings of the filename or filetree of the output metadata in JSON format and image data in whatever format corresponds to the selected storage device, respectively. `first_frame_id` is an integer ID that corresponds to the first frame of the current acquisition and is typically 0. `pixel_scale_um` is the camera pixel size in microns. `acquisition_dimensions` is a list of instances of the `StorageDimension` class, one for each acquisition dimension with the fastest changing dimension listed first and the append dimension listed last. For more information on using the `StorageDimension` class, check out [Chunking Data for Zarr Storage](../zarr/chunked.md). `enable_multiscale` is a boolean used to specify if the data should be saved as an image pyramid. See the [Multiscale tutorial](../zarr/multiscale.md) for more information. +`StorageProperties` has 2 attributes `external_metadata_json` and `filename` which are strings of the filename or filetree of the output metadata in JSON format and image data in whatever format corresponds to the selected storage device, respectively. `first_frame_id` is an integer ID that corresponds to the first frame of the current acquisition and is typically 0. `pixel_scale_um` is the camera pixel size in microns. `acquisition_dimensions` is a list of `StorageDimension`, one for each acquisition dimension, ordered from fastest changing to slowest changing. For more information on using the `StorageDimension` class, check out [Chunking Data for Zarr Storage](../zarr/chunked.md). `enable_multiscale` is a boolean used to specify if the data should be saved as an image pyramid. See the [Multiscale tutorial](../zarr/multiscale.md) for more information. We'll specify the name of the output image file below. 
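To make the `acquisition_dimensions` ordering described above concrete, here is a minimal sketch of configuring storage dimensions for a single Zarr stream. The camera, frame shape, chunk sizes, and output filename below are illustrative assumptions rather than required values; only the `StorageDimension` fields and the fastest-changing-first, append-dimension-last ordering follow from the description above.

```python
import acquire

runtime = acquire.Runtime()
dm = runtime.device_manager()
config = runtime.get_configuration()

# Illustrative devices, frame shape, and file name (assumptions, not required values).
config.video[0].camera.identifier = dm.select(acquire.DeviceKind.Camera, "simulated: radial sin")
config.video[0].camera.settings.shape = (1920, 1080)
config.video[0].storage.identifier = dm.select(acquire.DeviceKind.Storage, "Zarr")
config.video[0].storage.settings.filename = "example.zarr"  # hypothetical output name

# At least three dimensions are required: frame width, frame height, and an
# append dimension, listed from fastest changing to slowest changing.
dimension_x = acquire.StorageDimension(
    name="x", kind="Space", array_size_px=1920, chunk_size_px=960
)
dimension_y = acquire.StorageDimension(
    name="y", kind="Space", array_size_px=1080, chunk_size_px=540
)
# The append dimension's extent is unknown up front, so its array size is 0.
dimension_t = acquire.StorageDimension(
    name="t", kind="Time", array_size_px=0, chunk_size_px=10
)

config.video[0].storage.settings.acquisition_dimensions = [
    dimension_x,
    dimension_y,
    dimension_t,
]

config = runtime.set_configuration(config)
```

Leaving the append dimension's `array_size_px` at 0 reflects that its final extent is only known once acquisition stops; the chunk sizes chosen here are purely for illustration.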
From 510bbe26b348ee6f4836d56e0deea117a92a911c Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Thu, 5 Sep 2024 12:50:10 -0700 Subject: [PATCH 27/36] Update select.md --- docs/tutorials/setup_acquisition/select.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tutorials/setup_acquisition/select.md b/docs/tutorials/setup_acquisition/select.md index 386d8282..11d6ee71 100644 --- a/docs/tutorials/setup_acquisition/select.md +++ b/docs/tutorials/setup_acquisition/select.md @@ -36,7 +36,7 @@ Output of the above code is below: ``` -All identified devices will be listed, and in the case of this tutorial, no camera drivers were installed on the machine, so only simulated cameras were found. Note that discovered storage devices will also print. +All identified devices will be listed, and in the case of this tutorial, none of the vendor provided camera drivers were installed on the machine, so only simulated cameras were found. Note that discovered storage devices will also print. The order of those printed devices matters. Below are two examples of how the `select` method works. In the first, without a specific device name provided, `select` will choose the first device of the specified kind in the list of discovered devices. In the second example, a specific device name is provided, so `select` will grab that device if it was discovered by `Runtime`. From ee07d43d41bea632830d3bf15e68b95bc3e10eb8 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Thu, 5 Sep 2024 12:51:54 -0700 Subject: [PATCH 28/36] Update storage.md --- docs/tutorials/setup_acquisition/storage.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tutorials/setup_acquisition/storage.md b/docs/tutorials/setup_acquisition/storage.md index deaa8530..b0cd225e 100644 --- a/docs/tutorials/setup_acquisition/storage.md +++ b/docs/tutorials/setup_acquisition/storage.md @@ -35,7 +35,7 @@ The output of that script will be: ``` -`Acquire` supports streaming data to [bigtiff](http://bigtiff.org/), [Zarr V2](https://zarr-specs.readthedocs.io/en/latest/v2/v2.0.html), and [Zarr V3](https://zarr-specs.readthedocs.io/en/latest/specs.html). +`Acquire` supports streaming data to [bigtiff](http://bigtiff.org/), [Zarr V2](https://zarr-specs.readthedocs.io/en/latest/v2/v2.0.html), [Zarr V3](https://zarr-specs.readthedocs.io/en/latest/specs.html), and [OME-Zarr](https://ngff.openmicroscopy.org/latest/) for Zarr V2 and V3. Zarr has additional capabilities relative to the basic storage devices, namely _chunking_, _compression_, and _multiscale storage_. You can learn more about the Zarr capabilities in `Acquire` in [the Acquire Zarr documentation](https://github.com/acquire-project/acquire-driver-zarr/blob/main/README.md). From 4f5014750aebc7e3fbcec8ad99c98fa55b9c70e6 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Thu, 5 Sep 2024 12:53:58 -0700 Subject: [PATCH 29/36] Delete docs/tutorials/zarr/chunked.md --- docs/tutorials/zarr/chunked.md | 170 --------------------------------- 1 file changed, 170 deletions(-) delete mode 100644 docs/tutorials/zarr/chunked.md diff --git a/docs/tutorials/zarr/chunked.md b/docs/tutorials/zarr/chunked.md deleted file mode 100644 index fb531e6f..00000000 --- a/docs/tutorials/zarr/chunked.md +++ /dev/null @@ -1,170 +0,0 @@ -# Configuring Zarr storage with chunking - -This tutorial will provide an example of writing chunked data to a Zarr storage device. 
- -Zarr has additional capabilities relative to the basic storage devices, namely _chunking_, _sharding_ (in the case of Zarr V3)_, _compression_, and _multiscale storage_. -To enable _chunking_, in the `StorageDimension` class, set the `chunk_size_px` attribute, which is size of a chunk along this dimension in pixels, to a number greater than 1 for each acquisition dimension. -You can learn more about the Zarr capabilities in Acquire in [the Acquire Zarr documentation](https://github.com/acquire-project/acquire-driver-zarr). - -## Configure the acquisition -To start, we'll create a `Runtime` object and configure the streaming process, selecting `Zarr` as the storage device to enable chunking the data. - -```python -import acquire - -# Initialize a Runtime object -runtime = acquire.Runtime() - -# Initialize the device manager -dm = runtime.device_manager() - -# Grab the current configuration -config = runtime.get_configuration() - -# Select the radial sine simulated camera as the video source -config.video[0].camera.identifier = dm.select(acquire.DeviceKind.Camera, "simulated: radial sin") - -# Use a storage device that supports chunking, in this case, Zarr -config.video[0].storage.identifier = dm.select(acquire.DeviceKind.Storage, "Zarr") - -# Delay between each frame -config.video[0].camera.settings.exposure_time_us = 7e4 # 70 ms - -# Size of image region of interest on the camera (x, y) -config.video[0].camera.settings.shape = (1920, 1080) - -# Specify the pixel datatype as uint8 -config.video[0].camera.settings.pixel_type = acquire.SampleType.U8 - -# Set the output file to out.zarr -config.video[0].storage.settings.filename = "out.zarr" -``` - -### Storage dimensions - -Because Zarr supports n-dimensional arrays, we need to describe how the data we stream should be interpreted. -We do this by specifying *storage dimensions*, which correspond to the dimensionality of the output array. - -Acquire requires at least 3 dimensions: frame width, frame height, and an append dimension. -The first 2 dimensions are required and may be followed by optional internal dimensions. -The final "append" dimension is also required. - -Each dimension must have a type, for example, space, channel, time, or other. -This tutorial will use 5 dimensions following the [OME-NGFF specification](https://ngff.openmicroscopy.org/latest/#multiscale-md). -That is, we will use TCZYX order, with T corresponding to time, C to channel, Z to depth, Y to height, and X to width. - -```python -dimension_x = acquire.StorageDimension( - name="x", - kind="Space", - array_size_px=1920, - chunk_size_px=960 -) - -dimension_y = acquire.StorageDimension( - name="y", - kind="Space", - array_size_px=1080, - chunk_size_px=540 -) - -dimension_z = acquire.StorageDimension( - name="z", - kind="Space", - array_size_px=10, - chunk_size_px=5 -) - -dimension_c = acquire.StorageDimension( - name="c", - kind="Channel", - array_size_px=3, - chunk_size_px=1 -) - -dimension_t = acquire.StorageDimension( - name="t", - kind="Time", - array_size_px=0, - chunk_size_px=10 -) - -config.video[0].storage.settings.acquisition_dimensions = [ - dimension_x, - dimension_y, - dimension_z, - dimension_c, - dimension_t -] -``` - -Notice that each `StorageDimension` object has several attributes. -- `name` is the name of the dimension. It is used to identify the dimension in the output array and must be unique. -- `kind` is the type of dimension. It can be `Space`, `Channel`, `Time`, or `Other`. -- `array_size_px` is the size of the dimension in pixels. 
It is the total size of the dimension. -- `chunk_size_px` is the size of the chunks in pixels. It is the size of the chunks in which the data will be stored. - -There is an additional field, `shard_size_chunks`, which is used to specify the number of chunks per shard, but it is -only used in Zarr V3, which we will discuss in a future tutorial. - -The order in which we specify the dimensions is important. -The order of the dimensions in the `acquisition_dimensions` list determines the order of the dimensions in the output array. -In this case, the order is `x`, `y`, `z`, `c`, `t`, which corresponds to the order `TCZYX`. - -Notice that the first two dimensions' `array_size_px` is the same as the camera's shape. -This is because the first two dimensions correspond to the spatial dimensions of the camera. - -Another thing to notice is that the final dimension, `t`, has an `array_size_px` of 0. -This is because the size of the append dimension is not known in advance. -At most, we can say that the size of the append dimension is no larger than the number of frames we want to collect, but -because acquisition may terminate at any point before reaching the maximum frame count, we set the `array_size_px` to 0. - -The number of frames to collect will now depend on the sizes of the internal dimensions. -For our example, to fill just one chunk of the `c` dimension, we will need to collect -`dimension_z.array_size_px * dimension_t.chunk_size_px` frames, or in other words, 10 frames. -To fill a single chunk of the `t` dimension, we will need to collect -`dimension_z.array_size_px * dimension_c.array_size_px * dimension_t.chunk_size_px` frames, or in other words, 300 -frames. - -Below we'll configure the max frame count and update all settings with the `set_configuration` method. - -```python -config.video[0].max_frame_count = dimension_z.array_size_px * dimension_c.array_size_px * dimension_t.chunk_size_px # 300 - -config = runtime.set_configuration(config) -``` - -## Collect and Inspect the Data - -```python -# collect data -runtime.start() -runtime.stop() -``` - -You can inspect the Zarr file directory to check that the data saved as expected. Alternatively, you can inspect the data programmatically with: - -```python -# Utilize the zarr library to open the data -import zarr - -# create a zarr Group object -group = zarr.open(config.video[0].storage.settings.filename) - -# check for the expected # of directories in the zarr container -assert len(group) == 1 - -# inspect the characteristics of the data -print(group["0"]) -``` - -The output will be: -``` - -``` -As expected, we have only 1 top level directory, corresponding to the single array in the group. -We would expect more than 1 array only if we were writing [multiscale data](multiscale.md). -The overall array shape is (10, 1, 1080, 1920), corresponding to 10 frames, 1 channel, and a height and width of 1080 -and 1920, respectively, per frame. 
- -[Download this tutorial as a Python script](chunked.py){ .md-button .md-button-center } From 8e7bc78eec5ed63c6cd4fb2eb5f46de6d4fbda77 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Thu, 5 Sep 2024 12:54:09 -0700 Subject: [PATCH 30/36] Delete docs/tutorials/zarr/multiscale.md --- docs/tutorials/zarr/multiscale.md | 146 ------------------------------ 1 file changed, 146 deletions(-) delete mode 100644 docs/tutorials/zarr/multiscale.md diff --git a/docs/tutorials/zarr/multiscale.md b/docs/tutorials/zarr/multiscale.md deleted file mode 100644 index 9155ecff..00000000 --- a/docs/tutorials/zarr/multiscale.md +++ /dev/null @@ -1,146 +0,0 @@ -# Configuring Zarr multiscale storage - -This tutorial will provide an example of writing multiscale data to a Zarr file. - -Zarr has additional capabilities relative to the basic storage devices, namely _chunking_, _sharding_ (in the case of Zarr V3)_, _compression_, and _multiscale storage_. To enable _multiscale storage_, set the `enable_multiscale` attribute of the `StorageProperties` class to true. You can learn more about the Zarr capabilities in `Acquire` in [the Acquire Zarr documentation](https://github.com/acquire-project/acquire-driver-zarr). - -## Configure `Runtime` -To start, we'll create a `Runtime` object and begin to configure the streaming process, selecting `Zarr` as the storage device so that writing multiscale data is possible. - -```python -import acquire - -# Initialize a Runtime object -runtime = acquire.Runtime() - -# Initialize the device manager -dm = runtime.device_manager() - -# Grab the current configuration -config = runtime.get_configuration() - -# Select the radial sine simulated camera as the video source -config.video[0].camera.identifier = dm.select(acquire.DeviceKind.Camera, "simulated: radial sin") - -# Set the storage to Zarr to have the option to save multiscale data -config.video[0].storage.identifier = dm.select(acquire.DeviceKind.Storage, "Zarr") - -# Set the time for collecting data for a each frame -config.video[0].camera.settings.exposure_time_us = 7e4 # 70 ms - -# Set the size of image region of interest on the camera (x, y) -config.video[0].camera.settings.shape = (1920, 1080) - -# Set the image data type as a Uint8 -config.video[0].camera.settings.pixel_type = acquire.SampleType.U8 - -# Set the scale of the pixels -config.video[0].storage.settings.pixel_scale_um = (1, 1) # 1 micron by 1 micron - -# Set the output file to out.zarr -config.video[0].storage.settings.filename = "out.zarr" -``` - -To complete configuration, we'll configure the chunking and multiscale specific settings and update all settings with the `set_configuration` method. For a more detailed explanation of configuring Zarr storage with chunking, check out [this tutorial](./chunked.md). - -To start, we'll configure the `acquisition_dimensions` attribute of the `StorageProperties` class. `acquisition_dimensions` is a list of `StorageDimension` objects, one for each acquisition dimension. 
- -```python -dimension_x = acquire.StorageDimension( - name="x", - kind="Space", - array_size_px=1920, - chunk_size_px=640 -) - -dimension_y = acquire.StorageDimension( - name="y", - kind="Space", - array_size_px=1080, - chunk_size_px=360 -) - -dimension_z = acquire.StorageDimension( - name="z", - kind="Space", - array_size_px=10, - chunk_size_px=5 -) - -dimension_c = acquire.StorageDimension( - name="c", - kind="Channel", - array_size_px=3, - chunk_size_px=1 -) - -dimension_t = acquire.StorageDimension( - name="t", - kind="Time", - array_size_px=0, - chunk_size_px=10 -) - -config.video[0].storage.settings.acquisition_dimensions = [ - dimension_x, - dimension_y, - dimension_z, - dimension_c, - dimension_t -] - -# Set the max frame count based on the storage dimensions -config.video[0].max_frame_count = dimension_z.array_size_px * dimension_c.array_size_px * dimension_t.chunk_size_px # 300 -``` - -Finally, turn on multiscale and update all the settings. - -```python -# turn on multiscale mode -config.video[0].storage.settings.enable_multiscale = True - -# Update the configuration with the chosen parameters -config = runtime.set_configuration(config) -``` -## Collect and Inspect the Data - -```python - -# collect data -runtime.start() -runtime.stop() -``` - -You can inspect the Zarr file directory to check that the data saved as expected. This zarr file should have multiple subdirectories, one for each resolution in the multiscale data. Alternatively, you can inspect the data programmatically with: - -```python -# Utilize the zarr python library to read the data -import zarr - -# Open the data to create a zarr Group -group = zarr.open("out.zarr") -``` -With multiscale mode enabled, an image pyramid will be formed by rescaling the data by a factor of 2 progressively until the rescaled image is smaller than the specified zarr chunk size in both dimensions. In this example, the original image dimensions are (1920, 1080), and we chunked the data using tiles 1/3 of the size of the image, namely (640, 360). To illustrate this point, we'll inspect the sizes of the various levels in the multiscale data and compare it to our specified chunk size. - -```python -print(group["0"]) -``` - -The output will be: - -``` - -``` - -TO BE UPDATED: -``` -(, - , - ) -``` - -Here, the `"0"` directory contains the full-resolution array of frames of size 1920 x 1080, with a single channel, saving all 10 frames. -The `"1"` directory contains the first rescaled array of frames of size 960 x 540, averaging every two frames, taking the frame count from 10 to 5. -The `"2"` directory contains a further rescaled array of frames of size 480 x 270, averaging every four frames, taking the frame count from 10 to 2. Notice that both the frame width and frame height are now smaller than the chunk width and chunk height of 640 and 360, respectively, so this should be the last array in the group. 
- -[Download this tutorial as a Python script](multiscale.py){ .md-button .md-button-center } From 16c1381c968d6e663e6a04470832529b09ee7354 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Thu, 5 Sep 2024 12:55:05 -0700 Subject: [PATCH 31/36] Update index.md --- docs/tutorials/zarr/index.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/docs/tutorials/zarr/index.md b/docs/tutorials/zarr/index.md index 613abcfa..dedec464 100644 --- a/docs/tutorials/zarr/index.md +++ b/docs/tutorials/zarr/index.md @@ -1,7 +1,9 @@ # Zarr -These tutorials will help you learn about using OME-Zarr with Acquire. Please +These tutorials will help you learn about using Zarr with Acquire. Please [submit an issue on GitHub](https://github.com/acquire-project/acquire-docs/issues/new) if you'd like to request a tutorial. If you are also interested in contributing to this documentation, please visit our [contribution guide](https://acquire-project.github.io/acquire-docs/dev/for_contributors/). + +- [Writing to Compressed Zarr Files](./compressed.md) From 7eb3556f3d913f41c35ff7289e50276f3731edef Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Thu, 5 Sep 2024 12:56:18 -0700 Subject: [PATCH 32/36] Update index.md --- docs/tutorials/using_json/index.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/docs/tutorials/using_json/index.md b/docs/tutorials/using_json/index.md index 736c8772..4e1336e9 100644 --- a/docs/tutorials/using_json/index.md +++ b/docs/tutorials/using_json/index.md @@ -5,3 +5,6 @@ settings. Please [submit an issue on GitHub](https://github.com/acquire-project/ if you'd like to request a tutorial. If you are also interested in contributing to this documentation, please visit our [contribution guide](https://acquire-project.github.io/acquire-docs/dev/for_contributors/). + +- [Loading Properties from a JSON file](./props_json.md) +- [Loading Triggers from a JSON file](./trig_json.md) From 6aafabaf6ea8c2b07acf9e8f5452f8c4851391ac Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Thu, 5 Sep 2024 12:58:26 -0700 Subject: [PATCH 33/36] Update index.md --- docs/tutorials/setup_acquisition/index.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/docs/tutorials/setup_acquisition/index.md b/docs/tutorials/setup_acquisition/index.md index 8d231115..4817b223 100644 --- a/docs/tutorials/setup_acquisition/index.md +++ b/docs/tutorials/setup_acquisition/index.md @@ -5,3 +5,11 @@ Please [submit an issue on GitHub](https://github.com/acquire-project/acquire-do if you'd like to request a tutorial. If you are also interested in contributing to this documentation, please visit our [contribution guide](https://acquire-project.github.io/acquire-docs/dev/for_contributors/). 
+ +- [Configure an Acquisition](./configure.md) +- [Test Camera Drivers](./drivers.md) +- [Device Selection](./select.md) +- [Utilizing the Setup Method](./setup.md) +- [Multiple Acquisitions](./start_stop.md) +- [Storage Device Selection](./storage.md) +- [Finite Triggered Acquisition](./trigger.md) From d75b236325fbb3a3368031bc359b3c6fe131c16c Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Thu, 5 Sep 2024 12:59:15 -0700 Subject: [PATCH 34/36] Update index.md --- docs/tutorials/access_data/index.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/docs/tutorials/access_data/index.md b/docs/tutorials/access_data/index.md index 5fad37b4..e4cd7020 100644 --- a/docs/tutorials/access_data/index.md +++ b/docs/tutorials/access_data/index.md @@ -5,3 +5,6 @@ Please [submit an issue on GitHub](https://github.com/acquire-project/acquire-do if you'd like to request a tutorial. If you are also interested in contributing to this documentation, please visit our [contribution guide](https://acquire-project.github.io/acquire-docs/dev/for_contributors/). + +- [Accessing Data during Acquisition](./framedata.md) +- [Livestream to napari](./livestream.md) From 343176f167a1defef8ecc0e4d2b4f75c1eb8781d Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Thu, 5 Sep 2024 13:08:51 -0700 Subject: [PATCH 35/36] Update docs/tutorials/access_data/framedata.md Co-authored-by: Alan Liddell --- docs/tutorials/access_data/framedata.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tutorials/access_data/framedata.md b/docs/tutorials/access_data/framedata.md index fb9cdd5f..129bce7c 100644 --- a/docs/tutorials/access_data/framedata.md +++ b/docs/tutorials/access_data/framedata.md @@ -40,7 +40,7 @@ config = runtime.set_configuration(config) During Acquisition, the `AvailableData` object is the streaming interface. We can create an `AvailableData` object by calling `get_available_data` in a `with` statement, and work with the `AvailableData` object while it exists inside of the `with` block. The data is invalidated after exiting the `with` block, so make a copy of the `AvailableData` object to work with the data outside of the `with` block. In this example, we'll simply use the `AvailableData` object inside of the `with` block. -There may not be data available, in which case our `AvailableData` object would return 0. To increase the likelihood of `AvailableData` containing data, we'll utilize the `time` python package to introduce a delay before we create our `AvailableData` object. +There may not be data available. To increase the likelihood of `AvailableData` containing data, we'll utilize the `time` python package to introduce a delay before we create our `AvailableData` object. If there is data, we'll use the `AvailableData` `frames` method, which iterates over the `VideoFrame` objects in `AvailableData`, and the python `list` method to create a variable `video_frames`, a list of the `VideoFrame` objects one for each stream.
From 9744b73c45ce2a8f091ce30cc8203df959229755 Mon Sep 17 00:00:00 2001 From: dgmccart <92180364+dgmccart@users.noreply.github.com> Date: Thu, 5 Sep 2024 13:09:03 -0700 Subject: [PATCH 36/36] Update docs/tutorials/setup_acquisition/storage.md Co-authored-by: Alan Liddell --- docs/tutorials/setup_acquisition/storage.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tutorials/setup_acquisition/storage.md b/docs/tutorials/setup_acquisition/storage.md index b0cd225e..52aab9ee 100644 --- a/docs/tutorials/setup_acquisition/storage.md +++ b/docs/tutorials/setup_acquisition/storage.md @@ -35,7 +35,7 @@ The output of that script will be: ``` -`Acquire` supports streaming data to [bigtiff](http://bigtiff.org/), [Zarr V2](https://zarr-specs.readthedocs.io/en/latest/v2/v2.0.html), [Zarr V3](https://zarr-specs.readthedocs.io/en/latest/specs.html), and [OME-Zarr](https://ngff.openmicroscopy.org/latest/) for Zarr V2 and V3. +`Acquire` supports streaming data to [bigtiff](http://bigtiff.org/), [Zarr V2](https://zarr-specs.readthedocs.io/en/latest/v2/v2.0.html), [Zarr V3](https://zarr-specs.readthedocs.io/en/latest/specs.html). For both Zarr V2 and Zarr V3, Acquire provides OME metadata. Zarr has additional capabilities relative to the basic storage devices, namely _chunking_, _compression_, and _multiscale storage_. You can learn more about the Zarr capabilities in `Acquire` in [the Acquire Zarr documentation](https://github.com/acquire-project/acquire-driver-zarr/blob/main/README.md).