Commit

Merge branch 'main' into dependabot/npm_and_yarn/docs/babel/traverse-7.23.2
mjvogelsong committed Nov 26, 2023
2 parents 4255618 + a1eb551 commit 2f9f31f
Showing 31 changed files with 6,236 additions and 3,592 deletions.
1 change: 1 addition & 0 deletions Makefile
@@ -94,6 +94,7 @@ docs-comprehensive: apidocs
cd docs && npm run build

apidocs:
cd docs && npm install
poetry run make html

html:
29 changes: 29 additions & 0 deletions docs/docs/building-applications/1-sample-applications.md
@@ -0,0 +1,29 @@
# Sample Applications

Explore these GitHub repositories to see examples of Groundlight-powered applications:

## Groundlight Stream Processor

Repository: [https://github.com/groundlight/stream](https://github.com/groundlight/stream)

The Groundlight Stream Processor is an easy-to-use Docker container for analyzing RTSP streams or common USB-based cameras. You can run it with a single Docker command, such as:

```bash
docker run stream:local --help
```

## Arduino ESP32 Camera Sample App

Repository: [https://github.com/groundlight/esp32cam](https://github.com/groundlight/esp32cam)

This sample application allows you to build a working AI vision detector using an inexpensive WiFi camera. With a cost of under $10, you can create a powerful and affordable AI vision system.

## Raspberry Pi

Repository: [https://github.com/groundlight/raspberry-pi-door-lock](https://github.com/groundlight/raspberry-pi-door-lock)

This sample application demonstrates how to set up a Raspberry Pi-based door lock system. The application monitors a door and sends a notification if the door is observed to be unlocked during non-standard business hours.
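The scheduling check at the heart of such an application might be sketched as follows. This is an illustrative helper, not the repository's actual code; the 9am-5pm business hours and the function name are assumptions:

```python
from datetime import time as dtime

# Hypothetical helper: the actual repo's logic and hours may differ.
def outside_business_hours(now: dtime, open_at: dtime = dtime(9, 0), close_at: dtime = dtime(17, 0)) -> bool:
    """Return True when `now` falls outside standard business hours,
    i.e. when an unlocked door should trigger a notification."""
    return not (open_at <= now < close_at)
```

A monitoring loop would then only alert on "unlocked" answers when this check returns True.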

## Industrial and Manufacturing Applications

Groundlight can be used to [apply modern natural-language-based computer vision to industrial and manufacturing applications](/docs/building-applications/industrial).
Original file line number Diff line number Diff line change
@@ -1,3 +1,7 @@
---
sidebar_position: 1
---

# Grabbing Images

Groundlight's SDK accepts images in many popular formats, including PIL, OpenCV, and numpy arrays.
Original file line number Diff line number Diff line change
@@ -1,5 +1,5 @@
---
sidebar_position: 3
sidebar_position: 2
---

# Working with Detectors
Original file line number Diff line number Diff line change
@@ -1,3 +1,6 @@
---
sidebar_position: 3
---
# Confidence Levels

Groundlight gives you a simple way to control the trade-off of latency against accuracy. The longer you can wait for an answer to your image query, the better accuracy you can get. In particular, if the ML models are unsure of the best response, they will escalate the image query to more intensive analysis with more complex models and real-time human monitors as needed. Your code can easily wait for this delayed response. Either way, these new results are automatically trained into your models so your next queries will get better results faster.
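As a sketch of how this trade-off is expressed in code (the detector name and query are placeholders, and the `confidence_threshold` and `wait` values are illustrative examples, not recommendations):

```python notest
from groundlight import Groundlight

gl = Groundlight()
# A higher confidence_threshold means more escalation: better accuracy, higher latency
detector = gl.get_or_create_detector(
    name="your_detector_name",
    query="your_query",
    confidence_threshold=0.9,
)
# wait bounds how long submit_image_query will block waiting for a confident answer
image_query = gl.submit_image_query(detector=detector, image="path/to/image.jpg", wait=60)
print(image_query.result.label, image_query.result.confidence)
```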
Original file line number Diff line number Diff line change
@@ -1,3 +1,7 @@
---
sidebar_position: 4
---

# Handling Server Errors

When building applications with the Groundlight SDK, you may encounter server errors during API calls. This page covers how to handle such errors and build robust code that can gracefully handle exceptions.
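A generic retry wrapper is one way to add this robustness. The sketch below is an illustrative pattern, not part of the SDK: it retries any submission callable with exponential backoff:

```python
import time

def submit_with_retries(submit_fn, max_attempts: int = 3, base_delay: float = 1.0):
    """Call submit_fn(), retrying on any exception with exponential backoff.

    submit_fn is a zero-argument callable, e.g.
    lambda: gl.submit_image_query(detector=detector, image=image).
    Re-raises the last exception if every attempt fails.
    """
    for attempt in range(max_attempts):
        try:
            return submit_fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

In production you would likely narrow the `except` clause to the specific exceptions the SDK raises rather than catching everything.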
73 changes: 73 additions & 0 deletions docs/docs/building-applications/6-async-queries.md
@@ -0,0 +1,73 @@
---
sidebar_position: 5
---

# Asynchronous Queries

Groundlight provides a simple interface for submitting asynchronous queries. This is useful when the thread, process, or machine that submits image queries is not the same one that retrieves and uses the results. For example, you might have a forward-deployed robot or camera that submits image queries to Groundlight, and a separate server that retrieves the results and takes action based on them. We will refer to these two machines as the **submitting machine** and the **retrieving machine**.

## Setup Submitting Machine
On the **submitting machine**, you will need to install the Groundlight Python SDK. Then you can submit image queries asynchronously using the `ask_async` interface (read the full documentation [here](pathname:///python-sdk/api-reference-docs/#groundlight.client.Groundlight.ask_async)). `ask_async` returns as soon as the query is submitted; it does not wait for an answer to be available, which minimizes the time your program spends interacting with Groundlight. As a result, the `ImageQuery` object returned by `ask_async` lacks a `result` (the `result` field will be `None`). That is fine for this use case, since the **submitting machine** is not interested in the result. Instead, it just needs to communicate the `ImageQuery.id`s to the **retrieving machine** - this might be done via a database, a message queue, or some other mechanism. For this example, we assume you are using a database and save each `ImageQuery.id` via `db.save(image_query.id)`.

```python notest
from groundlight import Groundlight
import cv2
from time import sleep

gl = Groundlight()
detector = gl.get_or_create_detector(name="your_detector_name", query="your_query")

cam = cv2.VideoCapture(0)  # Initialize camera (0 is the default index)

while True:
    _, image = cam.read()  # Capture one frame from the camera
    image_query = gl.ask_async(detector=detector, image=image)  # Submit the frame to Groundlight
    db.save(image_query.id)  # Save the image_query.id to a database for the retrieving machine to use
    sleep(10)  # Sleep for 10 seconds before submitting the next query

cam.release()  # Release the camera
```

## Setup Retrieving Machine
On the **retrieving machine** you will need to install the Groundlight Python SDK. Then you can retrieve the results of the image queries submitted by another machine using `get_image_query`. The **retrieving machine** can then use the `ImageQuery.result` to take action based on the result for whatever application you are building. For this example, we assume your application looks up the next image query to process from a database via `db.get_next_image_query_id()` and that this function returns `None` once all `ImageQuery`s are processed.
```python notest
from groundlight import Groundlight

gl = Groundlight()
detector = gl.get_or_create_detector(name="your_detector_name", query="your_query")

image_query_id = db.get_next_image_query_id()

while image_query_id is not None:
    image_query = gl.get_image_query(id=image_query_id)  # retrieve the image query from Groundlight
    result = image_query.result

    # take action based on the result of the image query
    if result.label == 'YES':
        pass  # TODO: do something based on your application
    elif result.label == 'NO':
        pass  # TODO: do something based on your application
    elif result.label == 'UNCLEAR':
        pass  # TODO: do something based on your application

    # update image_query_id for the next iteration of the loop
    image_query_id = db.get_next_image_query_id()
```

## Important Considerations
When you submit an image query asynchronously, ML prediction on your query is **not** instant. So attempting to retrieve the result immediately after submitting an async query will likely result in an `UNCLEAR` result as Groundlight is still processing your query. Instead, if your code needs a `result` synchronously we recommend using one of our methods with a polling mechanism to retrieve the result. You can see all of the interfaces available in the documentation [here](pathname:///python-sdk/api-reference-docs/#groundlight.client.Groundlight).

```python notest
from groundlight import Groundlight
from PIL import Image

gl = Groundlight()
detector = gl.get_or_create_detector(name="your_detector_name", query="your_query")
image = Image.open("/path/to/your/image.jpg")
image_query = gl.ask_async(detector=detector, image=image)  # Submit async query to Groundlight
result = image_query.result  # This will always be 'None' as you asked asynchronously

image_query = gl.get_image_query(id=image_query.id)  # Immediately retrieve the image query from Groundlight
result = image_query.result  # This will likely be 'UNCLEAR' as Groundlight is still processing your query

image_query = gl.wait_for_confident_result(id=image_query.id)  # Poll for a confident result from Groundlight
result = image_query.result
```
Original file line number Diff line number Diff line change
@@ -1,4 +1,8 @@
# Using Groundlight on the edge
---
sidebar_position: 6
---

# Using Groundlight on the Edge

If your account has access to edge models, you can download and install them to your edge devices.
This allows you to run your model evaluations on the edge, reducing latency, cost, network bandwidth, and energy.
Original file line number Diff line number Diff line change
@@ -1,3 +1,7 @@
---
sidebar_position: 7
---

# Industrial and Manufacturing Applications

Modern natural language-based computer vision is transforming industrial and manufacturing applications by enabling more intuitive interaction with automation systems. Groundlight offers cutting-edge computer vision technology that can be seamlessly integrated into various industrial processes, enhancing efficiency, productivity, and quality control.
42 changes: 9 additions & 33 deletions docs/docs/building-applications/building-applications.md
@@ -5,43 +5,19 @@ Groundlight provides a powerful "computer vision powered by natural language" sy
In this page, we'll introduce you to some sample applications built using Groundlight and provide links to more detailed guides on various topics.

## Sample Applications

Explore these GitHub repositories to see examples of Groundlight-powered applications:

### Groundlight Stream Processor

Repository: [https://github.com/groundlight/stream](https://github.com/groundlight/stream)

The Groundlight Stream Processor is an easy-to-use Docker container for analyzing RTSP streams or common USB-based cameras. You can run it with a single Docker command, such as:

```bash
docker run stream:local --help
```

### Arduino ESP32 Camera Sample App

Repository: [https://github.com/groundlight/esp32cam](https://github.com/groundlight/esp32cam)

This sample application allows you to build a working AI vision detector using an inexpensive WiFi camera. With a cost of under $10, you can create a powerful and affordable AI vision system.

### Raspberry Pi

Repository: [https://github.com/groundlight/raspberry-pi-door-lock](https://github.com/groundlight/raspberry-pi-door-lock)

This sample application demonstrates how to set up a Raspberry Pi-based door lock system. The application monitors a door and sends a notification if the door is observed to be unlocked during non-standard business hours.

### Industrial and Manufacturing Applications

Groundlight can be used to [apply modern natural-language-based computer vision to industrial and manufacturing applications](/docs/building-applications/industrial).
- **[Sample Applications](1-sample-applications.md)**: Find repositories with examples of applications built with Groundlight

## Further Reading

For more in-depth guides on various aspects of building applications with Groundlight, check out the following pages:

- [Working with Detectors](working-with-detectors.md): Learn how to create, configure, and use detectors in your Groundlight-powered applications.
- [Using Groundlight on the edge](edge.md): Discover how to deploy Groundlight in edge computing environments for improved performance and reduced latency.
- [Handling HTTP errors](handling-errors.md): Understand how to handle and troubleshoot HTTP errors that may occur while using Groundlight.

- **[Grabbing images](2-grabbing-images.md)**: Understand the intricacies of how to submit images from various input sources to Groundlight.
- **[Working with detectors](3-working-with-detectors.md)**: Learn how to create, configure, and use detectors in your Groundlight-powered applications.
- **[Confidence levels](4-managing-confidence.md)**: Master how to control the trade-off of latency against accuracy by configuring the desired confidence level for your detectors.
- **[Handling server errors](5-handling-errors.md)**: Understand how to handle and troubleshoot HTTP errors that may occur while using Groundlight.
- **[Asynchronous queries](6-async-queries.md)**: Groundlight makes it easy to submit asynchronous queries. Learn how to submit queries asynchronously and retrieve the results later.
- **[Using Groundlight on the edge](7-edge.md)**: Discover how to deploy Groundlight in edge computing environments for improved performance and reduced latency.
- **[Industrial applications](8-industrial.md)**: Learn how to apply modern natural-language-based computer vision to your industrial and manufacturing applications.

By exploring these resources and sample applications, you'll be well on your way to building powerful visual applications using Groundlight's computer vision and natural language capabilities.


74 changes: 74 additions & 0 deletions docs/docs/getting-started/5-streaming.md
@@ -0,0 +1,74 @@
# A Quick Example: Live Stream Alert

A quick example to get used to setting up detectors and asking good questions: set up a monitor on a live stream.

## Requirements

- [Groundlight SDK](/docs/installation/) with Python 3.7 or higher
- The video ID of a YouTube live stream you'd like to monitor

## Installation

Ensure you have Python 3.7 or higher installed, and then install the Groundlight SDK and the other required packages (you will also need the `ffmpeg` command-line tool available on your system):

```bash
pip install groundlight pillow ffmpeg yt-dlp typer
```

## Creating the Application

1. Save this command as a shell script `get_latest_frame.sh`:

```bash
#!/bin/bash
ffmpeg -i "$(yt-dlp -g $1 | head -n 1)" -vframes 1 last.jpg -y
```

This will download the most recent frame from a YouTube live stream and save it to a local file `last.jpg`.

2. Log in to the [Groundlight application](https://app.groundlight.ai) and get an [API Token](api-tokens).

3. Next, we'll write the Python script for the application.

```python notest
import os
import subprocess
import typer
from groundlight import Groundlight
from PIL import Image


def main(*, video_id: str = None, detector_name: str = None, query: str = None, confidence: float = 0.75, wait: int = 60):
    """
    Run the script to get the stream's last frame as a subprocess, and submit it as an image query to a Groundlight detector
    :param video_id: Video ID of the YouTube live stream (the URLs have the form https://www.youtube.com/watch?v=<VIDEO_ID>)
    :param detector_name: Name for your Groundlight detector
    :param query: Question you want to ask of the stream (we will alert on an answer of NO)
    :param confidence: Confidence threshold for the detector
    :param wait: Maximum number of seconds to wait for a confident answer
    """
    gl = Groundlight()
    detector = gl.create_detector(name=detector_name, query=query, confidence_threshold=confidence)

    while True:
        p = subprocess.run(["./get_latest_frame.sh", video_id])
        if p.returncode != 0:
            raise RuntimeError(f"Could not get image from video ID: {video_id}. Process exited with return code {p.returncode}.")

        image = Image.open("last.jpg").convert("RGB")
        response = gl.submit_image_query(detector=detector, image=image, wait=wait)

        if response.result.label == "NO":
            os.system("say 'Alert!'")  # this may not work on all operating systems


if __name__ == "__main__":
    typer.run(main)
```

4. Save the script as `streaming_alert.py` in the same directory as `get_latest_frame.sh` above and run it:

```bash
python streaming_alert.py --video-id <VIDEO_ID> --detector-name <DETECTOR_NAME> --query "<YOUR QUERY>"
```

2 changes: 1 addition & 1 deletion docs/docs/getting-started/getting-started.mdx
@@ -18,7 +18,7 @@ _Note: The SDK is currently in "beta" phase. Interfaces are subject to change in

### How does it work?

Your images are first analyzed by machine learning (ML) models which are automatically trained on your data. If those models have high enough [confidence](docs/building-applications/managing-confidence), that's your answer. But if the models are unsure, then the images are progressively escalated to more resource-intensive analysis methods up to real-time human review. So what you get is a computer vision system that starts working right away without even needing to first gather and label a dataset. At first it will operate with high latency, because people need to review the image queries. But over time, the ML systems will learn and improve so queries come back faster with higher confidence.
Your images are first analyzed by machine learning (ML) models which are automatically trained on your data. If those models have high enough [confidence](docs/building-applications/4-managing-confidence.md), that's your answer. But if the models are unsure, then the images are progressively escalated to more resource-intensive analysis methods up to real-time human review. So what you get is a computer vision system that starts working right away without even needing to first gather and label a dataset. At first it will operate with high latency, because people need to review the image queries. But over time, the ML systems will learn and improve so queries come back faster with higher confidence.

### Escalation Technology

6 changes: 3 additions & 3 deletions docs/docusaurus.config.js
@@ -2,8 +2,8 @@
// Note: type annotations allow type checking and IDEs autocompletion

// Options: https://github.com/FormidableLabs/prism-react-renderer/tree/master/src/themes
const lightCodeTheme = require("prism-react-renderer/themes/github");
const darkCodeTheme = require("prism-react-renderer/themes/vsDark");
const lightCodeTheme = require("prism-react-renderer").themes.github;
const darkCodeTheme = require("prism-react-renderer").themes.vsDark;

/** @type {import('@docusaurus/types').Config} */
const config = {
@@ -100,7 +100,7 @@ const config = {
},
{
href: "pathname:///python-sdk/api-reference-docs/",
label: 'API Reference',
label: "API Reference",
position: "left",
},
{