
Computational-Plant-Science/ml_smart_release


SMART: Speedy Measurement of Arabidopsis Rosette Traits

Speedy Measurement of Arabidopsis Rosette Traits for time-course monitoring of morphological and physiological traits

Plant phenotyping using computer vision and machine learning

Computes both morphological and physiological traits, with a unique plant-surface color analysis capability

Author: Suxing Liu

Note: This is a release version for paper publication purposes only; other repositories contain updated development and bug fixes.


Robust and parameter-free plant image segmentation and trait extraction.

  1. Input: Image data of plant top views in the same folder (jpg or png format).

  2. Output: Image results and an Excel file containing all trait computations in pixel units.

  3. A pre-trained machine learning model is applied to aid plant object segmentation from the background, along with a color-clustering-based segmentation method.

  4. An unsupervised dominant-color clustering method is applied to compute the color distribution of the plant surface area.
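The dominant-color clustering in step 4 can be illustrated with a minimal k-means sketch in pure Python. This is not the pipeline's actual implementation (which may use a library and a different color space); the function and parameter names here are illustrative only.

```python
def dominant_colors(pixels, k=2, iters=20):
    """Tiny k-means over (R, G, B) tuples.

    Returns (center, fraction) pairs sorted by cluster size, i.e. the
    dominant colors and how much of the plant surface each one covers.
    Illustrative sketch only, not the SMART implementation.
    """
    # deterministic init: use the first k distinct pixel values as centers
    seen = []
    for p in pixels:
        if p not in seen:
            seen.append(p)
        if len(seen) == k:
            break
    k = min(k, len(seen))
    centers = list(seen)
    assign = [0] * len(pixels)

    for _ in range(iters):
        # assignment step: each pixel joins its nearest center in RGB space
        for n, p in enumerate(pixels):
            assign[n] = min(
                range(k),
                key=lambda j: sum((p[d] - centers[j][d]) ** 2 for d in range(3)),
            )
        # update step: each center moves to the mean of its cluster
        for j in range(k):
            members = [pixels[n] for n in range(len(pixels)) if assign[n] == j]
            if members:
                centers[j] = tuple(
                    sum(p[d] for p in members) / len(members) for d in range(3)
                )

    counts = [assign.count(j) for j in range(k)]
    order = sorted(range(k), key=lambda j: -counts[j])
    return [(centers[j], counts[j] / len(pixels)) for j in order]
```

For example, a synthetic image that is 70% green foliage and 30% reddish soil yields the green center first with fraction 0.7.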

Sample workflow

Pipeline

Color analysis

Monitor plant growth

Contents

Input

Individual plant tray image from top view, captured by any modern digital camera.

Output

trait.xlsx: Excel file containing the trait computation values.

Traits summary

Pipeline


Usage in a local environment (by cloning the GitHub repository)

Sample test

Input: Plant top view images, in jpg or png format

Output: Image results and trait.xlsx (a summary of trait computation values in pixel units).

Example input can be found in the "/sample_test/" folder, which contains top-view images of the same Arabidopsis plant at different time points.

  1. Download the repo to the local host PC at $host_path:

    git clone https://github.com/Computational-Plant-Science/ml_smart_release.git

Now you should have a clone of the SMART pipeline source code on your local PC; the relative folder path is:

   $host_path/ml_smart_release/
   
  2. Prepare your input image folder path and output path.

    Here we use the sample images inside the repository as input image data; the path is:

   /$host_path/ml_smart_release/sample_test/
   
  3. Compute traits.

    Define the input path that contains the image data (png or jpg format) and create an output folder to save all the image results and the Excel result file:

   mkdir /$host_path/ml_smart_release/result/

   python3 /$host_path/ml_smart_release/smart_release.py -i /$host_path/ml_smart_release/sample_test/ -o /$host_path/ml_smart_release/result/

If no output folder path is specified, the default output path is the same as the input path.
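The input/output setup above (gather jpg/png images, create the result folder, fall back to the input path when no output is given) can be sketched in Python. This helper is illustrative only and not part of the pipeline; the function name is an assumption.

```python
from pathlib import Path


def collect_inputs(input_dir, output_dir=None):
    """Gather jpg/png images and ensure the result folder exists.

    Mirrors the manual steps above: list the input images, create the
    output folder, and default the output path to the input path when
    none is specified. Illustrative helper, not part of SMART itself.
    """
    in_path = Path(input_dir)
    # accept jpg/jpeg/png regardless of case, as the pipeline expects
    images = sorted(
        p for p in in_path.iterdir()
        if p.suffix.lower() in {".jpg", ".jpeg", ".png"}
    )
    # default output path is the same as the input path
    out_path = Path(output_dir) if output_dir else in_path
    out_path.mkdir(parents=True, exist_ok=True)
    return images, out_path
```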

Usage for Docker container (Suggested)

Docker is suggested to run this project in a Unix environment.

  1. Download the prebuilt docker image from DockerHub:

    docker pull computationalplantscience/smart

  2. Or build your own container locally:

    docker build -t smart_container -f Dockerfile .

  3. Run the docker container with your test images.

    Run the prebuilt docker image from DockerHub:

    docker run -v /$path_to_test_image:/images -it computationalplantscience/smart

    or run the locally built container:

    docker run -v /home/suxing/SMART/sample_test:/images -it smart_container

Note: The "/" at the end of the path is NOT needed when mounting a host directory into a Docker container. The command above mounts the local directory "/$path_to_test_image" at the container path "/images".
Reference: https://docs.docker.com/storage/bind-mounts/
For example, to run the sample test inside this repo (under the folder "sample_test"), first start the container with the local path mounted:

    docker run -v /$path_to_ml_smart_release_repo/ml_smart_release/sample_test:/images -it computationalplantscience/smart

then run the pipeline inside the container with the mounted input images:

    python3 /opt/code/smart_release.py -p /images/ -o /images/result/ -ai 1

or do both in a single command:

    docker run -v /$path_to_ml_smart_release_repo/ml_smart_release/sample_test:/images -it computationalplantscience/smart python3 /opt/code/smart_release.py -p /images/ -o /images/result/ -ai 1

Results are generated in the input folder: trait.xlsx and trait.csv contain the trait computation results.

A folder with the same name as each input image contains all the related image results for visualization purposes.

These are processed copies of the original images; their image content has been processed and extracted as trait information.
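Once trait.csv is generated, the per-time-point values can be post-processed with a few lines of standard-library Python. The column name used below (`leaf_area`) is an assumption for illustration; check the header row of your own trait.csv for the actual trait names.

```python
import csv


def average_trait(csv_path, column):
    """Average one numeric trait column across all rows of trait.csv.

    The column name is an assumption; inspect the header row of the
    trait.csv produced by the pipeline for the real trait names.
    """
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    # skip rows where the column is missing or empty
    values = [float(r[column]) for r in rows if r.get(column)]
    return sum(values) / len(values) if values else None
```

This is useful for time-course monitoring, e.g. averaging a trait over all imaging dates or feeding the per-row values into a growth curve.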

Collaboration

The SMART pipeline has been integrated into PlantIT (https://plantit.cyverse.org/), a CyVerse cloud-computing website.

CyVerse users can upload data and run the SMART pipeline for free.

The SMART pipeline has also been applied in collaboration with the following research institutes and companies:

  1. Dr. David G. Mendoza-Cozatl at the University of Missouri

  2. Dr. Kranthi Varala at Purdue University

  3. Dr. Filipe Matias at Syngenta

  4. Dr. Tara Enders at Hofstra University

  5. Briony Parker at Rothamsted Research

  6. Dr. Fiona L. Goggin at the University of Arkansas



Imaging protocol for SMART

Input image requirement:

Plant top-view image captured by any RGB camera; a black background and a stable illumination environment are preferred.


Setting up plants

1. Place one plant in one tray.
2. Use black mesh to cover the soil.
3. Place the marker object on the left corner of the tray.
4. Keep the plant within the boundaries of the tray where possible.

Setting up the camera

1. The camera lens should be parallel to the plant surface to capture an undistorted top view. 
2. The plant object should be in the center of the image and within the focus of the camera lens.
3. The depth of field should cover the different layers of the plant leaves. 
4. A higher resolution (e.g., an 8-megapixel camera produces images of about 3264 x 2448 pixels) is suggested to acquire clear and sharp image data.
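The resolution figure in step 4 can be sanity-checked with a quick calculation: megapixels are simply width times height divided by one million. A small illustrative helper (not part of the pipeline):

```python
def megapixels(width, height):
    """Total pixel count expressed in millions of pixels."""
    return width * height / 1_000_000


# an "8-megapixel" sensor, e.g. 3264 x 2448, gives 7,990,272 pixels
print(round(megapixels(3264, 2448), 1))  # 8.0
```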

Setting up the lighting environment

1. Diffuse lighting is suggested. 
2. Reduce shadow as much as possible.
3. Keep the illumination environment constant between imaging different plants. 
4. Avoid overexposure and reflection effects.