Commit
Merge remote-tracking branch 'origin/main'
bendichter committed Oct 21, 2024
2 parents c26e451 + 52cd6aa commit 782d5ac
Showing 6 changed files with 48 additions and 12 deletions.
@@ -31,7 +31,7 @@ jobs:
uses: docker/build-push-action@v5
with:
push: true # Push is a shorthand for --output=type=registry
- tags: ghcr.io/catalystneuro/neuroconv:yaml_variable
+ tags: ghcr.io/catalystneuro/neuroconv_yaml_variable:latest
context: .
file: dockerfiles/neuroconv_latest_yaml_variable
provenance: false
6 changes: 3 additions & 3 deletions .pre-commit-config.yaml
@@ -1,19 +1,19 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
- rev: v4.6.0
+ rev: v5.0.0
hooks:
- id: check-yaml
- id: end-of-file-fixer
- id: trailing-whitespace

- repo: https://github.com/psf/black
- rev: 24.8.0
+ rev: 24.10.0
hooks:
- id: black
exclude: ^docs/

- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.6.5
+ rev: v0.6.9
hooks:
- id: ruff
args: [ --fix ]
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -7,6 +7,7 @@
## Features
* Using in-house `GenericDataChunkIterator` [PR #1068](https://github.com/catalystneuro/neuroconv/pull/1068)
* Data interfaces now perform source (argument inputs) validation with the json schema [PR #1020](https://github.com/catalystneuro/neuroconv/pull/1020)
+ * Added `channels_to_skip` to `EDFRecordingInterface` so the user can skip non-neural channels [PR #1110](https://github.com/catalystneuro/neuroconv/pull/1110)

## Improvements
* Remove dev test from PR [PR #1092](https://github.com/catalystneuro/neuroconv/pull/1092)
5 changes: 3 additions & 2 deletions dockerfiles/neuroconv_latest_yaml_variable
@@ -1,4 +1,5 @@
- FROM ghcr.io/catalystneuro/neuroconv:latest
+ # TODO: make this neuroconv:latest once optional installations are working again
+ FROM ghcr.io/catalystneuro/neuroconv:dev
LABEL org.opencontainers.image.source=https://github.com/catalystneuro/neuroconv
LABEL org.opencontainers.image.description="A docker image for the most recent official release of the NeuroConv package. Modified to take in environment variables for the YAML conversion specification and other command line arguments."
- CMD echo "$NEUROCONV_YAML" > run.yml && python -m neuroconv run.yml --data-folder-path "$NEUROCONV_DATA_PATH" --output-folder-path "$NEUROCONV_OUTPUT_PATH" --overwrite
+ CMD printf "$NEUROCONV_YAML" > ./run.yml && neuroconv run.yml --data-folder-path "$NEUROCONV_DATA_PATH" --output-folder-path "$NEUROCONV_OUTPUT_PATH" --overwrite
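The updated `CMD` materializes the YAML passed through the `NEUROCONV_YAML` environment variable into `run.yml` before invoking `neuroconv`. A minimal sketch of that first step, plus a hypothetical invocation of the image (the variable names come from the `CMD` line above; the host paths and YAML content are illustrative):

```shell
# Mirror of the container's first step: write the env-var YAML to run.yml
NEUROCONV_YAML='metadata:
  NWBFile:
    session_description: Docker demo conversion'
printf '%s\n' "$NEUROCONV_YAML" > run.yml
cat run.yml

# A full run would pass the variables into the image built by the workflow above, e.g.:
# docker run -t \
#   -e NEUROCONV_YAML="$NEUROCONV_YAML" \
#   -e NEUROCONV_DATA_PATH=/data \
#   -e NEUROCONV_OUTPUT_PATH=/output \
#   --volume /home/user/data:/data \
#   --volume /home/user/output:/output \
#   ghcr.io/catalystneuro/neuroconv_yaml_variable:latest
```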
14 changes: 11 additions & 3 deletions docs/user_guide/docker_demo.rst
@@ -30,7 +30,7 @@ It relies on some of the GIN data from the main testing suite, see :ref:`example

3. Create a file in this folder named ``demo_neuroconv_docker_yaml.yml`` with the following content...

- .. code::
+ .. code-block:: yaml
metadata:
NWBFile:
@@ -102,7 +102,11 @@ It relies on some of the GIN data from the main testing suite, see :ref:`example

.. code::
- docker run -t --volume /home/user/demo_neuroconv_docker:/demo_neuroconv_docker ghcr.io/catalystneuro/neuroconv:latest neuroconv /demo_neuroconv_docker/demo_neuroconv_docker_yaml.yml --output-folder-path /demo_neuroconv_docker/demo_output
+ docker run -t \
+   --volume /home/user/demo_neuroconv_docker:/demo_neuroconv_docker \
+   ghcr.io/catalystneuro/neuroconv:latest \
+   neuroconv /demo_neuroconv_docker/demo_neuroconv_docker_yaml.yml \
+   --output-folder-path /demo_neuroconv_docker/demo_output
Voilà! If everything occurred successfully, you should see...

@@ -142,6 +146,10 @@ Then, you can use the following command to run the Rclone Docker image:

.. code::
- docker run -t --volume destination_folder:destination_folder -e RCLONE_CONFIG="$RCLONE_CONFIG" -e RCLONE_COMMAND="$RCLONE_COMMAND" ghcr.io/catalystneuro/rclone_with_config:latest
+ docker run -t \
+   --volume destination_folder:destination_folder \
+   -e RCLONE_CONFIG="$RCLONE_CONFIG" \
+   -e RCLONE_COMMAND="$RCLONE_COMMAND" \
+   ghcr.io/catalystneuro/rclone_with_config:latest
This image is particularly designed for convenience with AWS Batch (EC2) tools that rely heavily on atomic Docker operations. Alternative AWS approaches would have relied on transferring the Rclone configuration file to the EC2 instances using separate transfer protocols or dependent steps, both of which add complexity to the workflow.
32 changes: 29 additions & 3 deletions src/neuroconv/datainterfaces/ecephys/edf/edfdatainterface.py
@@ -1,3 +1,5 @@
+ from typing import Optional

from pydantic import FilePath

from ..baserecordingextractorinterface import BaseRecordingExtractorInterface
@@ -23,7 +25,22 @@ def get_source_schema(cls) -> dict:
source_schema["properties"]["file_path"]["description"] = "Path to the .edf file."
return source_schema

- def __init__(self, file_path: FilePath, verbose: bool = True, es_key: str = "ElectricalSeries"):
+ def _source_data_to_extractor_kwargs(self, source_data: dict) -> dict:
+
+     extractor_kwargs = source_data.copy()
+     extractor_kwargs.pop("channels_to_skip")
+     extractor_kwargs["all_annotations"] = True
+     extractor_kwargs["use_names_as_ids"] = True
+
+     return extractor_kwargs
+
+ def __init__(
+     self,
+     file_path: FilePath,
+     verbose: bool = True,
+     es_key: str = "ElectricalSeries",
+     channels_to_skip: Optional[list] = None,
+ ):
"""
Load and prepare data for EDF.
Currently, only continuous EDF+ files (EDF+C) and original EDF files (EDF) are supported
@@ -36,15 +53,24 @@ def __init__(self, file_path: FilePath, verbose: bool = True, es_key: str = "Ele
verbose : bool, default: True
Allows verbose.
es_key : str, default: "ElectricalSeries"
Key for the ElectricalSeries metadata
+ channels_to_skip : list, default: None
+     Channels to skip when adding the data to the nwbfile. This parameter can be used to skip
+     non-neural channels that are present in the EDF file.
"""
get_package(
package_name="pyedflib",
- excluded_platforms_and_python_versions=dict(darwin=dict(arm=["3.8", "3.9"])),
+ excluded_platforms_and_python_versions=dict(darwin=dict(arm=["3.9"])),
)

- super().__init__(file_path=file_path, verbose=verbose, es_key=es_key)
+ super().__init__(file_path=file_path, verbose=verbose, es_key=es_key, channels_to_skip=channels_to_skip)
self.edf_header = self.recording_extractor.neo_reader.edf_header

+ # We remove the channels that are not neural
+ if channels_to_skip:
+     self.recording_extractor = self.recording_extractor.remove_channels(remove_channel_ids=channels_to_skip)

def extract_nwb_file_metadata(self) -> dict:
nwbfile_metadata = dict(
session_start_time=self.edf_header["startdate"],
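The two additions to the interface work together: the `_source_data_to_extractor_kwargs` hook strips the interface-only `channels_to_skip` argument before the extractor is constructed, and `__init__` then drops the skipped channels from the extractor. A standalone sketch of the argument mapping (independent of neuroconv; the sample dict and channel names are illustrative):

```python
def source_data_to_extractor_kwargs(source_data: dict) -> dict:
    # Mirrors the hook added in this commit: remove the interface-only key
    # and pin the extractor options the interface relies on.
    extractor_kwargs = source_data.copy()
    extractor_kwargs.pop("channels_to_skip", None)  # default added for this sketch
    extractor_kwargs["all_annotations"] = True
    extractor_kwargs["use_names_as_ids"] = True
    return extractor_kwargs


kwargs = source_data_to_extractor_kwargs(
    {"file_path": "session.edf", "channels_to_skip": ["TRIG", "ECG"]}
)
print(kwargs)  # channels_to_skip removed; extractor options injected
```

With the real interface, the same channel names would be passed directly, e.g. `EDFRecordingInterface(file_path="session.edf", channels_to_skip=["TRIG", "ECG"])`, where `"TRIG"` and `"ECG"` stand in for whatever non-neural channels the file contains.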
