
Add load tests #12

Merged · 14 commits · Oct 8, 2023
3 changes: 3 additions & 0 deletions .github/workflows/build.yml
@@ -24,6 +24,9 @@ jobs:
- name: Lint and typecheck
run: |
hatch run lint-check
- name: Install Playwright browser(s)
run: |
hatch run playwright install chromium
- name: Test
run: |
hatch run test-cov-xml
8 changes: 8 additions & 0 deletions .pre-commit-config.yaml
@@ -4,6 +4,14 @@
exclude: (\.min\.js$|\.svg$|\.html$)
default_stages: [commit]
repos:
- repo: https://github.com/pycqa/isort
rev: 5.12.0
hooks:
- id: isort
- repo: https://github.com/psf/black-pre-commit-mirror
rev: 23.9.1
hooks:
- id: black
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
141 changes: 141 additions & 0 deletions DEVELOPER_GUIDE.md
@@ -0,0 +1,141 @@
# Development

**Welcome. Thanks for your interest in Panel-Chat-Examples ❤️**

You can contribute in many ways, for example, by

- Giving our project a ⭐ on [Github](https://github.com/holoviz-topics/panel-chat-examples).
- Sharing knowledge about Panel-Chat-Examples on social media.
- Contributing very clear and easily reproducible [Bug Reports or Feature Requests](https://github.com/holoviz-topics/panel-chat-examples/issues).
- Improving our README, docs and developer infrastructure.
- Improving our collection of [examples](docs/examples).

Before you start contributing to our code base or documentation, please make sure your contribution is well described and discussed in a [Github Issue](https://github.com/holoviz-topics/panel-chat-examples/issues).

If you need help to get started, please reach out via [Discord](https://discord.gg/rb6gPXbdAr).

## Getting Started

Start by cloning the repository:

```bash
git clone https://github.com/holoviz-topics/panel-chat-examples
cd panel-chat-examples
```

If you are not a core contributor, you will have to work from your own fork. See the GitHub [Fork a Repo](https://docs.github.com/en/get-started/quickstart/fork-a-repo) guide for more details.

We use [Hatch](https://hatch.pypa.io/latest/install/) to manage the development environment and production build.

Please ensure it's installed on your system with

```bash
pip install hatch
```

Please ensure [Playwright](https://playwright.dev/python/) browsers are installed

```bash
hatch run playwright install chromium
```

The first time `hatch run ...` is run, it will install the required dependencies.

Please ensure `pre-commit` is installed and the hooks pass by running

```bash
hatch run pre-commit run --all
```

You will also need to set the following environment variable

```bash
export OPENAI_API_KEY=...
```

Please note that you will incur costs from OpenAI when you run the tests or serve the apps!
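
The key is picked up from the environment by the example apps. As a quick, illustrative sanity check (not part of the repository), a snippet like the following fails fast when the key is missing:

```python
import os

# Fail early with a clear message instead of hitting an OpenAI auth error later.
if "OPENAI_API_KEY" not in os.environ:
    raise RuntimeError("Set OPENAI_API_KEY (e.g. `export OPENAI_API_KEY=...`) first.")
```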

## Format, lint and type check the code

Execute the following command to apply autoformatting, linting and check typing:

```bash
hatch run lint
```

## Run all tests

You can run all the tests with:

```bash
hatch run test
```

## Run UI tests

To run the Playwright tests in *headed* mode (i.e. showing the browser), you can run

```bash
hatch run pytest -m ui --headed
```

You can take screenshots via

```bash
SCREENSHOT=true hatch run pytest -m ui
```

The screenshots can be found in [tests/ui/screenshots](tests/ui/screenshots).
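
For orientation, a minimal UI test in this style might look like the sketch below. It assumes the `page` fixture provided by `pytest-playwright` and an app served on Panel's default port; the URL, marker and screenshot path are illustrative, not the repository's actual test code.

```python
import os

import pytest


@pytest.mark.ui
def test_example_loads(page):
    # Hypothetical app URL; the real tests discover and parametrize the examples.
    page.goto("http://localhost:5006/echo_stream")
    # Wait until the app has finished loading its resources.
    page.wait_for_load_state("networkidle")
    if os.environ.get("SCREENSHOT") == "true":
        page.screenshot(path="tests/ui/screenshots/echo_stream.png")
```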

## Run load tests

To ensure the apps can be deployed, for example to Hugging Face Spaces, we need them to load fast.
We can test the loading time with [Locust](https://docs.locust.io/en/stable/index.html).
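
For reference, a minimal locustfile for this kind of check could look like the sketch below. It is illustrative only; the repository's `loadtest` setup may differ, and the host and app path are assumptions based on Panel's default port.

```python
from locust import HttpUser, between, task


class ExampleAppUser(HttpUser):
    # Assumed address of the locally served examples (Panel's default port).
    host = "http://localhost:5006"
    wait_time = between(1, 2)

    @task
    def load_echo_stream(self):
        # Mark slow page loads as failures so they stand out in the Locust UI.
        with self.client.get("/echo_stream", catch_response=True) as response:
            if response.elapsed.total_seconds() > 1:
                response.failure("Took longer than 1 second to load")
            else:
                response.success()
```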

First, you need to serve the examples

```bash
hatch run panel-serve
```

Then you should run

```bash
hatch run loadtest
```

Finally, open [http://localhost:8089/](http://localhost:8089/) and click "Start swarming".

You should make sure the RPS (requests per second) stays above 1. In the image below it is 2.3.

![Locust](assets/images/panel-chat-examples-locust.png)

## Serve the documentation

You can serve the MkDocs documentation with livereload via:

```bash
hatch run docs-serve
```

It'll automatically watch for changes in your code.

## Publish a new version

You can bump the version, create a commit and associated tag with one command:

```bash
hatch version patch
```

```bash
hatch version minor
```

```bash
hatch version major
```

Your default Git text editor will open so you can add information about the release.

When you push the tag to GitHub, the workflow will automatically publish the package on PyPI and create a draft GitHub release.
68 changes: 11 additions & 57 deletions README.md
@@ -12,82 +12,36 @@ THIS PROJECT IS IN EARLY STAGE AND WILL CHANGE!

The examples are based on the next generation of chat features being developed in [PR #5333](https://github.com/holoviz/panel/pull/5333)

To run the examples:
To install and serve all examples:

```bash
hatch run panel serve docs/examples/**/*.py --static-dirs thumbnails=docs/assets/thumbnails --autoreload
git clone https://github.com/holoviz-topics/panel-chat-examples
cd panel-chat-examples
pip install hatch
# Set the OPENAI_API_KEY environment variable
hatch run panel-serve # or equivalently panel serve docs/examples/**/*.py --static-dirs thumbnails=docs/assets/thumbnails --autoreload
```

Note the default installation is not optimized for GPU usage. To enable GPU support for local
models (i.e. not OpenAI), install `ctransformers` with the [proper backend](https://github.com/marella/ctransformers#gpu) and modify the scripts' configs accordingly, e.g. `n_gpu_layers=1` for a single GPU.

CUDA:

```bash
pip install ctransformers[cuda]
```

Mac M1/2:
```bash
CT_METAL=1 pip install ctransformers --no-binary ctransformers # for m1
```
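
As a hedged illustration, offloading layers to the GPU via the LangChain `CTransformers` wrapper could look roughly like the sketch below. Note that `ctransformers` itself names the option `gpu_layers`, while llama-cpp style backends use `n_gpu_layers`, so check the backend you installed. The model identifiers are placeholders, not the examples' actual configuration.

```python
from langchain.llms import CTransformers

# Placeholder model identifiers; the examples define their own models and configs.
llm = CTransformers(
    model="TheBloke/Llama-2-7B-Chat-GGML",
    model_file="llama-2-7b-chat.ggmlv3.q4_0.bin",
    config={"max_new_tokens": 256, "temperature": 0.5, "gpu_layers": 1},
)
print(llm("Briefly introduce yourself."))
```
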
---

## Development

### Clone repository

`git clone https://github.com/holoviz-topics/panel-chat-examples.git`

### Setup environment

We use [Hatch](https://hatch.pypa.io/latest/install/) to manage the development environment and production build. Ensure it's installed on your system with `pip install hatch`

### Run unit tests

You can run all the tests with:

```bash
hatch run test
```

### Format the code

Execute the following command to apply linting and check typing:

```bash
hatch run lint
```

### Publish a new version

You can bump the version, create a commit and associated tag with one command:

```bash
hatch version patch
```

```bash
hatch version minor
```

```bash
hatch version major
CT_METAL=1 pip install ctransformers --no-binary ctransformers # for m1
```

Your default Git text editor will open so you can add information about the release.

When you push the tag on GitHub, the workflow will automatically publish it on PyPi and a GitHub release will be created as draft.

## Serve the documentation

You can serve the Mkdocs documentation with:
---

```bash
python scripts/generate_gallery.py
hatch run docs-serve
```
## Contributing

It'll automatically watch for changes in your code.
We would ❤️ to collaborate with you. Check out the [DEVELOPER GUIDE](DEVELOPER_GUIDE.md) to get started.

## License

6 changes: 4 additions & 2 deletions docs/examples/basics/echo_stream.py
@@ -1,7 +1,9 @@
"""
Demonstrates how to use the `ChatInterface` and a `callback` function to stream back responses.
Demonstrates how to use the `ChatInterface` and a `callback` function to stream back
responses.

The chatbot Assistant echoes back the message entered by the User in a *streaming* fashion.
The chatbot Assistant echoes back the message entered by the User in a *streaming*
fashion.
"""


1 change: 0 additions & 1 deletion docs/examples/features/delayed_placeholder.py
@@ -3,7 +3,6 @@
"""

from asyncio import sleep
from random import choice

import panel as pn

4 changes: 2 additions & 2 deletions docs/examples/langchain/llama_and_mistral.py
@@ -4,7 +4,6 @@
"""

import panel as pn

from langchain.chains import LLMChain
from langchain.llms import CTransformers
from langchain.prompts import PromptTemplate
@@ -23,7 +22,8 @@
}
llm_chains = {}

TEMPLATE = """<s>[INST] You are a friendly chat bot who's willing to help answer the user:
TEMPLATE = """<s>[INST] You are a friendly chat bot who's willing to help answer the
user:
{user_input} [/INST] </s>
"""

2 changes: 1 addition & 1 deletion docs/examples/openai/authentication.py
@@ -22,7 +22,7 @@ def add_key_to_env(key):
os.environ["OPENAI_API_KEY"] = key
chat_interface.send(
"Your OpenAI key has been set. Feel free to minimize the sidebar.",
**SYSTEM_KWARGS
**SYSTEM_KWARGS,
)
chat_interface.disabled = False

7 changes: 3 additions & 4 deletions docs/index.md
@@ -6,7 +6,7 @@ To run all of these examples locally:
git clone https://github.com/holoviz-topics/panel-chat-examples
cd panel-chat-examples
pip install hatch
hatch run panel serve docs/examples/**/*.py --static-dirs thumbnails=docs/assets/thumbnails --autoreload
hatch run panel-serve
```

Note the default installation is not optimized for GPU usage. To enable GPU support for local
@@ -157,7 +157,6 @@ Demonstrates how to delay the display of the placeholder.
"""

from asyncio import sleep
from random import choice

import panel as pn

@@ -448,7 +447,6 @@ Llama2.
"""

import panel as pn

from langchain.chains import LLMChain
from langchain.llms import CTransformers
from langchain.prompts import PromptTemplate
@@ -574,7 +572,7 @@ def add_key_to_env(key):
os.environ["OPENAI_API_KEY"] = key
chat_interface.send(
"Your OpenAI key has been set. Feel free to minimize the sidebar.",
**SYSTEM_KWARGS
**SYSTEM_KWARGS,
)
chat_interface.disabled = False

@@ -882,3 +880,4 @@ chat_interface.send(
chat_interface.servable()
```
</details>
