Improve documentation #32

Merged 3 commits · Jun 10, 2024
README.md (120 changes: 72 additions & 48 deletions)

NWB conversion scripts for Dombeck lab data to the Neurodata Without Borders data format.


## Basic installation

You can install the latest release of the package with pip:

```bash
pip install dombeck-lab-to-nwb
```

We recommend that you install the package inside a [virtual environment](https://docs.python.org/3/tutorial/venv.html). A simple way of doing this is to use a [conda environment](https://docs.conda.io/projects/conda/en/latest/user-guide/concepts/environments.html) from the `conda` package manager ([installation instructions](https://docs.conda.io/en/latest/miniconda.html)). Detailed instructions on how to use conda environments can be found in their [documentation](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html).

### Running a specific conversion
Once you have installed the package with pip, you can run any of the conversion scripts in a notebook or a python file:

https://github.com/catalystneuro/dombeck-lab-to-nwb/blob/main/src/dombeck_lab_to_nwb/azcorra2023/azcorra2023_convert_session.py




## Installation from GitHub
The package can be installed directly from GitHub, which has the advantage that the source code can be modified if you need to amend some of the code we originally provided to adapt to future experimental differences.
To install the conversion from GitHub you will need `git` ([installation instructions](https://github.com/git-guides/install-git)). We also recommend installing `conda` ([installation instructions](https://docs.conda.io/en/latest/miniconda.html)), as it contains all the required machinery in a single and simple install.

From a terminal (the conda installation provides one if your system does not) you can do the following:

```bash
git clone https://github.com/catalystneuro/dombeck-lab-to-nwb
cd dombeck-lab-to-nwb
conda env create --file make_env.yml
conda activate dombeck_lab_to_nwb_env
```

This creates a [conda environment](https://docs.conda.io/projects/conda/en/latest/user-guide/concepts/environments.html) which isolates the conversion code from your system libraries.
We recommend that you run all your conversion-related tasks and analysis from the created environment in order to minimize issues related to package dependencies.

Alternatively, if you want to avoid conda altogether (for example if you use another virtual environment tool)
you can install the repository with the following commands using only pip:

```bash
git clone https://github.com/catalystneuro/dombeck-lab-to-nwb
cd dombeck-lab-to-nwb
pip install -e .
```
Note: both of the methods above install the repository in [editable mode](https://pip.pypa.io/en/stable/cli/pip_install/#editable-installs).


## Repository structure
Each conversion is organized in a directory of its own in the `src` directory:

```
├── setup.py
└── src
    ├── dombeck_lab_to_nwb
    │   └── azcorra2023
    │       ├── azcorra2023behaviorinterface.py
    │       ├── extractors
    │       │   ├── __init__.py
    │       │   └── picoscope_recordingextractor.py
    │       ├── interfaces
    │       │   ├── __init__.py
    │       │   ├── azcorra2023_fiberphotometryinterface.py
    │       │   ├── azcorra2023_processedfiberphotometryinterface.py
    │       │   ├── picoscope_eventinterface.py
    │       │   └── picoscope_timeseriesinterface.py
    │       ├── matlab_utils
    │       │   ├── convert_data6.m
    │       │   └── resave_mat_files.m
    │       ├── metadata
    │       │   ├── azcorra2023_fiber_photometry_metadata.yaml
    │       │   ├── azcorra2023_nwbfile_metadata.yaml
    │       │   └── azcorra2023_subjects_metadata.yaml
    │       ├── photometry_utils
    │       │   ├── __init__.py
    │       │   ├── add_fiber_photometry.py
    │       │   └── process_extra_metadata.py
    │       ├── tutorials
    │       │   └── azcorra2023_demo.ipynb
    │       ├── azcorra2023_convert_all_sessions.py
    │       ├── azcorra2023_convert_session.py
    │       ├── azcorra2023_metadata.yml
    │       ├── azcorra2023_notes.md
    │       ├── azcorra2023_requirements.txt
    │       ├── azcorra2023nwbconverter.py
    │       └── __init__.py
    └── __init__.py
```

For the conversion `azcorra2023` you can find a directory located in `src/dombeck_lab_to_nwb/azcorra2023`.
Inside the conversion directory you can find the following files:

* `azcorra2023_convert_all_sessions.py`: this script defines the function to convert all sessions of the conversion.
* `azcorra2023_convert_session.py`: this script defines the function to convert one full session of the conversion.
* `azcorra2023_metadata.yml`: metadata in YAML format for this specific conversion.
* `azcorra2023behaviorinterface.py`: the behavior interface. Usually ad-hoc for each conversion.
* `azcorra2023_requirements.txt`: the dependencies specific to this conversion.
* `azcorra2023nwbconverter.py`: the place where the `NWBConverter` class is defined.
* `azcorra2023_notes.md`: notes and comments concerning this specific conversion.
* `extractors/`: a directory containing the extractor for the PicoScope format.
* `interfaces/`: a directory containing the interfaces for the raw and processed fiber photometry data.
* `metadata/`: a directory containing the editable metadata for the conversion.
* `tutorials/`: a directory containing a Jupyter notebook that demonstrates how to access each data stream in an NWB file.
* `matlab_utils/`: a directory containing the MATLAB scripts used to resave the .mat files to be readable in Python.
* `photometry_utils/`: a directory containing the utility functions to add the fiber photometry data to the NWB file.

### Notes on the conversion

The conversion [notes](https://github.com/catalystneuro/dombeck-lab-to-nwb/blob/main/src/dombeck_lab_to_nwb/azcorra2023/azcorra2023_notes.md)
contain information about the expected folder structure and the conversion process.

### Running a specific conversion
To run a specific conversion, you might first need to install some conversion-specific dependencies that are located in each conversion directory:

```bash
conda activate dombeck_lab_to_nwb_env
pip install -r src/dombeck_lab_to_nwb/azcorra2023/azcorra2023_requirements.txt
```

The conversion directory may contain other files necessary for the conversion, but those listed above are the central ones.

You can run a specific conversion with the following command:
```bash
python src/dombeck_lab_to_nwb/azcorra2023/azcorra2023_convert_session.py
```

## NWB tutorials

The `tutorials` directory contains Jupyter notebooks that demonstrate how to access the data in the NWB files.
You might need to install `jupyter` before running the notebooks:

```bash
pip install jupyter
cd src/dombeck_lab_to_nwb/azcorra2023/tutorials
jupyter lab
```
In `session_to_nwb`, the editable metadata is now loaded from the `metadata` directory:

```python
# Update default metadata with the editable metadata in the corresponding yaml file
editable_metadata_path = Path(__file__).parent / "metadata" / "azcorra2023_nwbfile_metadata.yaml"
editable_metadata = load_dict_from_file(editable_metadata_path)
metadata = dict_deep_update(metadata, editable_metadata)
```
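The `dict_deep_update` call merges the editable YAML metadata into the defaults recursively. A simplified stand-in (not the actual library implementation) illustrates the merge behavior:

```python
# Simplified stand-in for a recursive dictionary merge like dict_deep_update:
# values from `update` override `base`, but nested dicts are merged key-by-key
# instead of being replaced wholesale.
def deep_update(base, update):
    result = dict(base)
    for key, value in update.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = deep_update(result[key], value)
        else:
            result[key] = value
    return result

default_metadata = {"NWBFile": {"lab": "Dombeck", "session_description": "auto"}}
editable = {"NWBFile": {"session_description": "Fiber photometry session"}}
merged = deep_update(default_metadata, editable)
```

This is why editing only one key in the YAML file leaves the rest of the default metadata intact.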
src/dombeck_lab_to_nwb/azcorra2023/azcorra2023_notes.md (12 changes: 8 additions & 4 deletions)

The variables in Picoscope (e.g. `20200129-0001_01.mat`) are as follows:
- `B` = fiber 2 fluorescence (from here on called channel red - chR)
- `C` = fiber 1 fluorescence (channel green - chG)
- `D` = light stimulus trigger
- `E` = waveform generator output indicating illumination wavelength (1 = 470nm, 0 = 405nm)
- `F` = reward delivery trigger
- `G` = licking sensor output
- `H` = air puff delivery trigger

Variables `D`, `F`, `G`, `H` are binary signals (threshold 0.05), `E` is used to separate the fluorescence due to 405 vs 470 nm illumination.
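A hedged sketch of how such trigger channels can be binarized at the 0.05 threshold and turned into event onsets; the variable names and sample values are illustrative, not the lab's actual code:

```python
# Threshold analog trigger channels into binary event trains (threshold 0.05
# per the notes above), then find rising edges as event onsets.
def binarize(trace, threshold=0.05):
    return [1 if v > threshold else 0 for v in trace]

def rising_edges(binary):
    # Indices where the signal goes 0 -> 1 (event onsets).
    return [i for i in range(1, len(binary)) if binary[i] == 1 and binary[i - 1] == 0]

reward_trigger = [0.0, 0.01, 0.2, 0.8, 0.03, 0.0, 0.6, 0.7]  # hypothetical samples
binary = binarize(reward_trigger)
onsets = rising_edges(binary)  # [2, 6]
```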

### Concatenated recordings

Based on this preprocessing script, they are concatenating the raw recordings (fluorescence and behavior) and

```
dict(
    ...
    H="AirPuff",
)
```
Note:
The data from the binned files `"ChRed"`, `"ChGreen"` are added as raw fluorescence traces, and the data from
`"ChRed405"`, `"ChGreen405"` are added as 405 nm control traces.

### Processed recordings

```
fibers = {'chGreen' 'chRed'};
```

### NWB mapping

The following table describes the mapping between the source data and the NWB file:

![NWB mapping](azcorra2023_uml.png)
