feat(Instructor): introduce Instructor Hub with tutorials, examples, and new CLI #439

Merged 31 commits on Feb 18, 2024
92 changes: 92 additions & 0 deletions docs/hub/index.md
# Instructor Hub

Welcome to instructor hub. The goal of this project is to provide a set of tutorials and examples to help you get started, and to let you pull in the code you need to get going with `instructor`.

Make sure you're using the latest version of `instructor` by running:

```bash
pip install -U instructor
```

## Contributing

We welcome contributions to the instructor hub. If you have a tutorial or example you'd like to add, please open a pull request in `docs/hub` and we'll review it.

1. The code must be in a single file.
2. Make sure that it's referenced in the `mkdocs.yml`.
3. Make sure that the code is unit tested.

### Using pytest_examples

Running the following command runs the tests and updates the examples, ensuring that they stay up to date, are linted correctly, and actually work. Make sure to include an `if __name__ == "__main__":` block in your code and add some asserts to verify that the code works.

```bash
poetry run pytest tests/openai/docs/test_hub.py --update-examples
```
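For illustration, a hub example file following these conventions might look like the sketch below. The `add_labels` helper is hypothetical, standing in for a real `instructor`-powered call:

```python
# Hypothetical sketch of a hub example file: a single runnable script
# with a main guard and asserts, as the contribution rules require.


def add_labels(text: str) -> list[str]:
    """Toy stand-in for a real instructor-powered classifier."""
    labels = []
    if "account" in text.lower():
        labels.append("ACCOUNT")
    if "billing" in text.lower():
        labels.append("BILLING")
    return labels


if __name__ == "__main__":
    result = add_labels("My account is locked and I can't access my billing info.")
    # Asserts let the test harness verify the example actually works.
    assert result == ["ACCOUNT", "BILLING"]
    print(result)
    #> ['ACCOUNT', 'BILLING']
```

Keeping the executable logic behind the main guard means the file can be both imported by a test runner and run directly from the terminal.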

## CLI Usage

Instructor hub comes with a command line interface (CLI) that lets you view and interact with the tutorials and examples, and pull in the code you need to get started with the API.

### List Cookbooks

By running `instructor hub list` you can see all the available tutorials and examples. By clicking on (doc) you can open the full tutorial on this website.

```bash
$ instructor hub list --sort
```

| hub_id | slug | title | n_downloads |
| ------ | ----------------------------- | ----------------------------- | ----------- |
| 2 | multiple_classification (doc) | Multiple Classification Model | 24 |
| 1 | single_classification (doc) | Single Classification Model | 2 |

### Searching for Cookbooks

You can search for a tutorial by running `instructor hub list -q <QUERY>`. This will return a list of tutorials that match the query.

```bash
$ instructor hub list -q multi
```

| hub_id | slug | title | n_downloads |
| ------ | ----------------------------- | ----------------------------- | ----------- |
| 2 | multiple_classification (doc) | Multiple Classification Model | 24 |

### Reading a Cookbook

To read a tutorial, you can run `instructor hub pull --id <hub_id> --page` to see the full tutorial in the terminal. You can use `j,k` to scroll up and down, and `q` to quit. You can also run it without `--page` to print the tutorial to the terminal.

```bash
$ instructor hub pull --id 2 --page
```

### Pulling in Code

You can pull in the code with `--py --output=<filename>` to save the code to a file, or you can run it without `--output` to print the code to the terminal.

```bash
$ instructor hub pull --id 2 --py --output=run.py
$ instructor hub pull --id 2 --py > run.py
```

You can run the code immediately by piping it to `python`:

```bash
$ instructor hub pull --id 2 --py | python
```

## Call for Contributions

We're looking for many more hub examples. If you have a tutorial or example you'd like to add, please open a pull request in `docs/hub` and we'll review it.

- [ ] Converting the cookbooks to the new format
- [ ] Validator examples
- [ ] Data extraction examples
- [ ] Streaming examples (Iterable and Partial)
- [ ] Batch Parsing examples
- [ ] Open Examples, together, anyscale, ollama, llama-cpp, etc
- [ ] Query Expansion examples
- [ ] Batch Data Processing examples
- [ ] Batch Data Processing examples with Cache

51 changes: 51 additions & 0 deletions docs/hub/multiple_classification.md
# Multi-Label Classification

For multi-label classification, we introduce a new label type and a different Pydantic model to handle multiple labels.

```python
import openai
import instructor

from typing import List, Literal
from pydantic import BaseModel, Field

# Apply the patch to the OpenAI client
# enables response_model keyword
client = instructor.patch(openai.OpenAI())

LABELS = Literal["ACCOUNT", "BILLING", "GENERAL_QUERY"]


class MultiClassPrediction(BaseModel):
    labels: List[LABELS] = Field(
        ...,
        description="Only select the labels that apply to the support ticket.",
    )


def multi_classify(data: str) -> MultiClassPrediction:
    return client.chat.completions.create(
        model="gpt-4-turbo-preview",  # gpt-3.5-turbo fails
        response_model=MultiClassPrediction,
        messages=[
            {
                "role": "system",
                "content": "You are a support agent at a tech company. Only select the labels that apply to the support ticket.",
            },
            {
                "role": "user",
                "content": f"Classify the following support ticket: {data}",
            },
        ],
    )  # type: ignore


if __name__ == "__main__":
    ticket = "My account is locked and I can't access my billing info."
    prediction = multi_classify(ticket)
    assert {"ACCOUNT", "BILLING"} == {label for label in prediction.labels}
    print("input:", ticket)
    #> input: My account is locked and I can't access my billing info.
    print("labels:", LABELS)
    #> labels: typing.Literal['ACCOUNT', 'BILLING', 'GENERAL_QUERY']
    print("prediction:", prediction)
    #> prediction: labels=['ACCOUNT', 'BILLING']
```
47 changes: 47 additions & 0 deletions docs/hub/single_classification.md
# Single-Label Classification

This example demonstrates how to perform single-label classification using the OpenAI API. The example uses the `gpt-3.5-turbo` model to classify text as either `SPAM` or `NOT_SPAM`.

```python
from pydantic import BaseModel, Field
from typing import Literal
from openai import OpenAI
import instructor

# Apply the patch to the OpenAI client
# enables response_model keyword
client = instructor.patch(OpenAI())


class ClassificationResponse(BaseModel):
    label: Literal["SPAM", "NOT_SPAM"] = Field(
        ...,
        description="The predicted class label.",
    )


def classify(data: str) -> ClassificationResponse:
    """Perform single-label classification on the input text."""
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        response_model=ClassificationResponse,
        messages=[
            {
                "role": "user",
                "content": f"Classify the following text: {data}",
            },
        ],
    )


if __name__ == "__main__":
    for text, label in [
        ("Hey Jason! You're awesome", "NOT_SPAM"),
        ("I am a nigerian prince and I need your help.", "SPAM"),
    ]:
        prediction = classify(text)
        assert prediction.label == label
        print(f"Text: {text}, Predicted Label: {prediction.label}")
        #> Text: Hey Jason! You're awesome, Predicted Label: NOT_SPAM
        #> Text: I am a nigerian prince and I need your help., Predicted Label: SPAM
```
13 changes: 13 additions & 0 deletions instructor-hub-proxy/.editorconfig
# http://editorconfig.org
root = true

[*]
indent_style = tab
tab_width = 2
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true

[*.yml]
indent_style = space
172 changes: 172 additions & 0 deletions instructor-hub-proxy/.gitignore
# Logs

logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
lerna-debug.log*
.pnpm-debug.log*

# Diagnostic reports (https://nodejs.org/api/report.html)

report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json

# Runtime data

pids
*.pid
*.seed
*.pid.lock

# Directory for instrumented libs generated by jscoverage/JSCover

lib-cov

# Coverage directory used by tools like istanbul

coverage
*.lcov

# nyc test coverage

.nyc_output

# Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)

.grunt

# Bower dependency directory (https://bower.io/)

bower_components

# node-waf configuration

.lock-wscript

# Compiled binary addons (https://nodejs.org/api/addons.html)

build/Release

# Dependency directories

node_modules/
jspm_packages/

# Snowpack dependency directory (https://snowpack.dev/)

web_modules/

# TypeScript cache

*.tsbuildinfo

# Optional npm cache directory

.npm

# Optional eslint cache

.eslintcache

# Optional stylelint cache

.stylelintcache

# Microbundle cache

.rpt2_cache/
.rts2_cache_cjs/
.rts2_cache_es/
.rts2_cache_umd/

# Optional REPL history

.node_repl_history

# Output of 'npm pack'

*.tgz

# Yarn Integrity file

.yarn-integrity

# dotenv environment variable files

.env
.env.development.local
.env.test.local
.env.production.local
.env.local

# parcel-bundler cache (https://parceljs.org/)

.cache
.parcel-cache

# Next.js build output

.next
out

# Nuxt.js build / generate output

.nuxt
dist

# Gatsby files

.cache/

# Comment in the public line in if your project uses Gatsby and not Next.js

# https://nextjs.org/blog/next-9-1#public-directory-support

# public

# vuepress build output

.vuepress/dist

# vuepress v2.x temp and cache directory

.temp
.cache

# Docusaurus cache and generated files

.docusaurus

# Serverless directories

.serverless/

# FuseBox cache

.fusebox/

# DynamoDB Local files

.dynamodb/

# TernJS port file

.tern-port

# Stores VSCode versions used for testing VSCode extensions

.vscode-test

# yarn v2

.yarn/cache
.yarn/unplugged
.yarn/build-state.yml
.yarn/install-state.gz
.pnp.*

# wrangler project

.dev.vars
.wrangler/
6 changes: 6 additions & 0 deletions instructor-hub-proxy/.prettierrc
{
"printWidth": 140,
"singleQuote": true,
"semi": true,
"useTabs": true
}
10 changes: 10 additions & 0 deletions instructor-hub-proxy/create.sql
CREATE TABLE hub_analytics (
id SERIAL PRIMARY KEY,
event_type VARCHAR(255) NOT NULL,
user_agent VARCHAR(255) NOT NULL,
request_ip VARCHAR(100) NOT NULL,
request_time TIMESTAMP WITH TIME ZONE NOT NULL,
branch VARCHAR(255) NOT NULL,
slug VARCHAR(255) NOT NULL
);
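As a rough, hypothetical sketch of how this table gets used, the snippet below exercises the schema with Python's stdlib `sqlite3` (SQLite has no `SERIAL` or `TIMESTAMP WITH TIME ZONE`, so those column types are approximated; the production table targets Postgres-style SQL):

```python
import sqlite3

# Hypothetical sketch: record a pull event and derive a download count,
# mirroring the hub_analytics schema with SQLite-compatible types.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE hub_analytics (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        event_type TEXT NOT NULL,
        user_agent TEXT NOT NULL,
        request_ip TEXT NOT NULL,
        request_time TEXT NOT NULL,
        branch TEXT NOT NULL,
        slug TEXT NOT NULL
    )"""
)
conn.execute(
    "INSERT INTO hub_analytics "
    "(event_type, user_agent, request_ip, request_time, branch, slug) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("pull", "instructor-cli", "127.0.0.1", "2024-02-18T00:00:00Z",
     "main", "multiple_classification"),
)
# A per-slug count like this could back the n_downloads column
# shown in the CLI tables.
(count,) = conn.execute(
    "SELECT COUNT(*) FROM hub_analytics "
    "WHERE event_type = 'pull' AND slug = ?",
    ("multiple_classification",),
).fetchone()
print(count)
#> 1
```

The event names, user-agent string, and the `n_downloads` derivation here are illustrative assumptions, not the proxy's actual implementation.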
