Lightly Insights analyzes an image folder and object detection labels, and it generates a static HTML webpage with metrics and plots.

#### Features

- Supports all object detection label formats that can be read with the [Labelformat](https://github.com/lightly-ai/labelformat) package, including YOLO, COCO, KITTI, PascalVOC, Lightly, and Labelbox.
- Shows image, object, and class counts.
- Analyzes how many images have no labels and provides their filenames.
- Shows image samples.

See a [live example report](https://lightly-ai.github.io/lightly-insights-preview/) for a small dataset.

#### Screenshots

<p float="left">
<img src="screenshot0.png" width="32%" alt="Lightly Insights Screenshot"/>
<img src="screenshot1.png" width="32%" alt="Lightly Insights Screenshot"/>
<img src="screenshot2.png" width="32%" alt="Lightly Insights Screenshot"/>
</p>

## Installation

```
pip install lightly-insights
```

## Usage

A Lightly Insights report is generated by a Python script; this is necessary to support the different label formats.

The example below uses the [PascalVOC 2007](http://host.robots.ox.ac.uk/pascal/VOC/voc2007/index.html) dataset.
You can follow along by downloading and extracting it (~450 MB):

```
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
tar -xvf VOCtrainval_06-Nov-2007.tar
```

To run Lightly Insights, we need to provide:

* An image folder. In our case that is `./VOCdevkit/VOC2007/JPEGImages`.
* Object detection labels. They are ingested as a [Labelformat](https://github.com/lightly-ai/labelformat)
  `ObjectDetectionInput` class. For PascalVOC, the constructor needs the annotations folder
  `./VOCdevkit/VOC2007/Annotations` and the list of class names. For the input classes of other formats, please
  refer to [Labelformat formats](https://github.com/lightly-ai/labelformat/blob/main/src/labelformat/formats/__init__.py).

```py
from pathlib import Path
from labelformat.formats import PascalVOCObjectDetectionInput
from lightly_insights import analyze, present

# Analyze an image folder.
image_analysis = analyze.analyze_images(
    image_folder=Path("./VOCdevkit/VOC2007/JPEGImages")
)

# Analyze object detections.
label_input = PascalVOCObjectDetectionInput(
    input_folder=Path("./VOCdevkit/VOC2007/Annotations"),
    category_names=(
        "person,bird,cat,cow,dog,horse,sheep,aeroplane,bicycle,boat,bus,car,"
        + "motorbike,train,bottle,chair,diningtable,pottedplant,sofa,tvmonitor"
    ),
)
od_analysis = analyze.analyze_object_detections(label_input=label_input)

# Create HTML report.
present.create_html_report(
    output_folder=Path("./html_report"),
    image_analysis=image_analysis,
    od_analysis=od_analysis,
)
```
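
Only the label input is format-specific. For labels in another format, swap in the corresponding Labelformat input class. A rough sketch for COCO labels, assuming an annotation file at the placeholder path `./annotations/instances_train.json` and the `COCOObjectDetectionInput` class from Labelformat:

```py
from pathlib import Path
from labelformat.formats import COCOObjectDetectionInput
from lightly_insights import analyze

# Placeholder path: point this at your own COCO annotation JSON file.
label_input = COCOObjectDetectionInput(
    input_file=Path("./annotations/instances_train.json")
)
od_analysis = analyze.analyze_object_detections(label_input=label_input)
```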

To view the report, open `./html_report/index.html`.
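
If you prefer to open it programmatically, here is a minimal sketch using only the Python standard library, assuming the report was written to `./html_report` as above:

```py
import webbrowser
from pathlib import Path

# Open the generated report in the default browser.
# The path must be absolute for as_uri(), hence resolve().
webbrowser.open(Path("./html_report/index.html").resolve().as_uri())
```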

## Development

The library targets Python 3.7 and higher. We use Poetry to manage the development environment.
