add new model tinyllama.
add download model feature.
smalltong02 committed Jan 3, 2024
1 parent c861729 commit 1e3fc49
Showing 6 changed files with 316 additions and 88 deletions.
10 changes: 10 additions & 0 deletions WebUI/configs/webuiconfig.json
@@ -411,6 +411,16 @@
     "LocalModel": {
         "LLM Model": {
             "3B Model": {
+                "TinyLlama-1.1B-Chat-v1.0": {
+                    "path": "models/llm/TinyLlama-1.1B-Chat-v1.0",
+                    "device": "auto",
+                    "maxmemory": 20,
+                    "cputhreads": 4,
+                    "loadbits": 16,
+                    "preset": "default",
+                    "load_type": "fastchat",
+                    "Huggingface": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
+                },
                 "fastchat-t5-3b-v1.0": {
                     "path": "models/llm/fastchat-t5-3b-v1.0",
                     "device": "auto",
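
For context, here is a minimal sketch of how an entry like this can be looked up from the config file. The helper below is hypothetical, not part of this commit, and assumes the "LocalModel" / "LLM Model" / "3B Model" nesting shown in the hunk:

```python
# Hypothetical helper (not part of this commit): look up the new TinyLlama
# entry in webuiconfig.json, assuming the nesting shown in the hunk above.
import json

def get_llm_model_config(name: str, size: str = "3B Model") -> dict:
    with open("WebUI/configs/webuiconfig.json", encoding="utf-8") as f:
        config = json.load(f)
    return config["LocalModel"]["LLM Model"][size][name]

cfg = get_llm_model_config("TinyLlama-1.1B-Chat-v1.0")
print(cfg["path"])         # models/llm/TinyLlama-1.1B-Chat-v1.0
print(cfg["Huggingface"])  # TinyLlama/TinyLlama-1.1B-Chat-v1.0
```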
2 changes: 1 addition & 1 deletion WebUI/webui_pages/model_configuration/configuration.py
@@ -144,7 +144,7 @@ def configuration_page(api: ApiRequest, is_lite: bool = False):
         download_path = st.button(
             "Download",
             use_container_width=True,
-            disabled=True
+            disabled=disabled
         )
         if download_path:
             with st.spinner("Model downloading... Please do not perform any actions or refresh the page."):
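
The one-line change swaps the hard-coded `disabled=True` for a `disabled` flag computed earlier in `configuration_page`, outside this hunk. A hypothetical reconstruction of that pattern, purely for illustration:

```python
# Hypothetical reconstruction (the real flag is computed outside this hunk):
# enable the Download button only when the selected model declares a
# Hugging Face repo to download from.
import streamlit as st

model_config = {"Huggingface": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"}  # toy example
disabled = not model_config.get("Huggingface")

if st.button("Download", use_container_width=True, disabled=disabled):
    with st.spinner("Model downloading... Please do not perform any actions or refresh the page."):
        pass  # call the server's download endpoint here
```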
18 changes: 5 additions & 13 deletions __webgui_server__.py
@@ -430,19 +430,11 @@ def download_llm_model(
     hugg_path: str = Body("", description="huggingface path"),
     local_path: str = Body("", description="local path"),
 ) -> Dict:
-    # import gc
-    # from transformers import AutoModel, AutoTokenizer, AutoConfig
-    # try:
-    #     tokenizer = AutoTokenizer.from_pretrained(hugg_path)
-    #     tokenizer.save_pretrained(local_path)
-    #     config = AutoConfig.from_pretrained(hugg_path)
-    #     config.save_pretrained(local_path)
-    #     model = AutoModel.from_pretrained(hugg_path)
-    #     model.save_pretrained(local_path)
-    #     del model
-    #     gc.collect()
-    #     return {"code": 200, "msg": f'Success download LLM model {model_name} to local path {local_path}.'}
-    # except Exception as e:
+    from huggingface_hub import snapshot_download
+    try:
+        path = snapshot_download(repo_id=hugg_path, local_dir=local_path, local_dir_use_symlinks=False)
+        return {"code": 200, "msg": f'Successfully downloaded LLM model {model_name} to local path {local_path}.'}
+    except Exception as e:
         return {"code": 500, "msg": f'Failed to download LLM model {model_name} to local path {local_path}.'}

     host = FSCHAT_CONTROLLER["host"]
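
The new implementation delegates the entire download to huggingface_hub's `snapshot_download`. A minimal standalone sketch of the same call, applied to the TinyLlama entry added in webuiconfig.json above (useful for testing outside the server):

```python
# Standalone sketch of the new download path: mirror the server's
# snapshot_download call for the TinyLlama entry from webuiconfig.json.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="TinyLlama/TinyLlama-1.1B-Chat-v1.0",     # the "Huggingface" field
    local_dir="models/llm/TinyLlama-1.1B-Chat-v1.0",  # the "path" field
    local_dir_use_symlinks=False,                     # copy files instead of symlinking the cache
)
print(f"Model files saved to {local_path}")
```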
61 changes: 61 additions & 0 deletions models/llm/TinyLlama-1.1B-Chat-v1.0/README.md
@@ -0,0 +1,61 @@
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
widget:
- text: "<|system|>\nYou are a chatbot who can help code!</s>\n<|user|>\nWrite me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.</s>\n<|assistant|>\n"
---
<div align="center">

# TinyLlama-1.1B
</div>

https://github.com/jzhang38/TinyLlama

The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With proper optimization, this can be achieved in a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.


We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged into many open-source projects built upon Llama. Moreover, with only 1.1B parameters, TinyLlama is compact enough to serve applications that demand a restricted computation and memory footprint.
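
For example, the checkpoint loads with the stock Llama classes in `transformers` (a minimal sketch of the drop-in compatibility):

```python
# Minimal compatibility check: TinyLlama loads with the standard Llama 2
# classes because it shares the same architecture and tokenizer.
from transformers import LlamaForCausalLM, LlamaTokenizerFast

model = LlamaForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
tokenizer = LlamaTokenizerFast.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
print(model.config.model_type)                            # llama
print(f"{model.num_parameters() / 1e9:.1f}B parameters")  # ~1.1B
```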

#### This Model
This is the chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T). **We follow [HF's Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)'s training recipe.** The model was initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions ranked by GPT-4.
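
A schematic sketch of that DPO stage with TRL's `DPOTrainer`; the toy preference triples and hyperparameters below are illustrative assumptions, not the actual Zephyr recipe values:

```python
# Schematic DPO sketch (illustrative, not the actual recipe): DPOTrainer
# consumes prompt/chosen/rejected triples and nudges the policy toward the
# chosen responses while a frozen reference model constrains the update.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Toy preference data; real training uses openbmb/UltraFeedback reshaped
# into this prompt/chosen/rejected format.
train_dataset = Dataset.from_dict({
    "prompt": ["What is the capital of France?"],
    "chosen": ["The capital of France is Paris."],
    "rejected": ["France does not have a capital city."],
})

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # None: TRL makes a frozen copy of the policy as reference
    args=TrainingArguments(
        output_dir="tinyllama-dpo",
        per_device_train_batch_size=1,
        remove_unused_columns=False,  # DPOTrainer needs the raw text columns
    ),
    beta=0.1,  # strength of the implicit KL constraint toward the reference
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```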


#### How to use
You will need transformers>=4.34.
Check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.

```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", torch_dtype=torch.bfloat16, device_map="auto")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# ...
```
