Join our 30-minute workshop to explore the specialized use of Llama2-70B.
- Specialized LLM Application: Learn how a fine-tuned Llama2-70B model is applied in medical record analysis.
- Practical Demonstration: See a live setup and usage of the LLM on both CPU and GPU instances, and walk through practical scenarios such as keyword extraction and topic modeling.
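To give a flavor of the keyword-extraction scenario, a prompt for the model might be built along these lines (the function name and prompt wording are illustrative assumptions, not code from the workshop notebooks):

```python
def build_keyword_prompt(transcription: str, n_keywords: int = 5) -> str:
    """Ask an LLM for the most important clinical keywords in a transcription.

    Illustrative sketch only; the actual workshop prompts may differ.
    """
    return (
        f"Extract the {n_keywords} most important clinical keywords from the "
        "following medical transcription and return them as a comma-separated "
        "list.\n\n"
        f"Transcription:\n{transcription}"
    )

# Example usage with a made-up transcription snippet:
prompt = build_keyword_prompt(
    "Patient presents with chest pain and shortness of breath."
)
print(prompt)
```

The same template pattern extends to the topic-modeling scenario by swapping the instruction sentence.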
Ideal for IT professionals, this workshop underlines the importance of using fine-tuned LLMs for industry-specific needs.
Before starting the workshop, please download the "Medical Transcriptions" data from Kaggle.
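If you use the Kaggle CLI, the download can look roughly like this (the dataset slug and file name are assumptions; verify them on the dataset's Kaggle page):

```shell
# Requires a Kaggle API token in ~/.kaggle/kaggle.json.
pip install kaggle
# Dataset slug is an assumption -- check the Kaggle page for the exact identifier.
kaggle datasets download -d tboyle10/medicaltranscriptions
unzip medicaltranscriptions.zip
```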
The workshop includes three Jupyter notebooks, which you can access and view directly in this repository:
- llm_cpu.ipynb: Can be run on a standard laptop. Demonstrates basic usage of Mistral-7B on CPU.
- llm_gpu.ipynb: Optimized for running Mistral-7B on Google Colab with a T4 GPU. Focuses on utilizing GPU acceleration.
- adv_llm_gpu.ipynb: Intended for running the fine-tuned Llama2-70B in a cloud environment such as GCP. Requires a machine on the order of n1-standard-16 (16 vCPUs, 60 GB RAM) with 4 NVIDIA T4 GPUs (~1.645 USD per hour, i.e. ~0.005 USD per record, so 5,000 records cost about 25 USD) for advanced processing and analysis.
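As a sanity check on the pricing above, the per-record figure follows from the hourly rate together with an assumed throughput of roughly 330 records per hour (the throughput is an assumption inferred from the quoted numbers, not a benchmark from the notebooks):

```python
# Rough cost estimate for the adv_llm_gpu.ipynb setup (n1-standard-16 + 4x T4).
# Hourly rate is the GCP figure quoted above; throughput is an assumption.
HOURLY_RATE_USD = 1.645
RECORDS_PER_HOUR = 329  # assumed throughput: ~11 seconds per record

cost_per_record = HOURLY_RATE_USD / RECORDS_PER_HOUR
total_cost = cost_per_record * 5_000

print(f"~{cost_per_record:.4f} USD per record")
print(f"~{total_cost:.0f} USD for 5,000 records")
```

Actual throughput (and hence cost) will vary with batch size, prompt length, and generation settings.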
This project is open-sourced under the MIT License.