Welcome to the Scott Logic prompt injection open source project! As generative AI and LLMs become more prevalent, learning about the dangers of prompt injection becomes ever more important. This project aims to teach people about prompt injection attacks on generative AI, and how to defend against them.
This project is presented in two modes:

1. Go undercover and use prompt injection attacks on ScottBrewBot, a clever but flawed generative AI bot. Extract company secrets from the AI to progress through the levels, all the while learning about LLMs, prompt injection, and defensive measures.
2. Activate and configure a number of different prompt injection defence measures to create your own security system. Then talk to the AI and try to crack it!
Ensure you have Node v18+ installed. Clone this repo and run

```
npm ci
```
- Copy the example environment file `.env.example` in the backend directory and rename it to `.env`.
- Replace the `OPENAI_API_KEY` value in the `.env` file with your OpenAI API key.
- Replace the `SESSION_SECRET` value with a random UUID.
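Once those steps are done, the backend `.env` might look something like the sketch below. The values shown are placeholders, not real credentials; substitute your own API key and a freshly generated UUID.

```shell
# backend/.env — placeholder values, replace with your own
OPENAI_API_KEY=sk-your-openai-api-key-here
SESSION_SECRET=123e4567-e89b-12d3-a456-426614174000
```

You can generate a random UUID for `SESSION_SECRET` with `uuidgen` on most Unix-like systems, or `crypto.randomUUID()` in Node.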
- Copy the example environment file `.env.example` in the frontend directory and rename it to `.env`.
- Replace the `VITE_BACKEND_URL` value with the backend endpoint.
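For example, if you run the backend locally, the frontend `.env` might look like the sketch below. The host, port, and path are illustrative assumptions; use whatever endpoint your backend actually serves.

```shell
# frontend/.env — URL below is an example, adjust to match your backend
VITE_BACKEND_URL=http://localhost:3001/api
```

Note that Vite only exposes environment variables prefixed with `VITE_` to the client code, which is why this variable carries that prefix.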
```
npm run start:api
npm run start:ui
```
Note that this project also includes a VS Code launch file, which allows running the API and UI directly from the IDE.
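The repo ships its own launch file, so you don't need to write one; for illustration only, a minimal sketch of what such a compound launch configuration can look like is shown below. The names, paths, and `cwd` values here are assumptions, not the repo's actual configuration.

```jsonc
// .vscode/launch.json — illustrative sketch, not the repo's actual file
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Start API",
      "type": "node",
      "request": "launch",
      "runtimeExecutable": "npm",
      "runtimeArgs": ["run", "start:api"],
      "cwd": "${workspaceFolder}/backend" // assumed location of the API scripts
    },
    {
      "name": "Start UI",
      "type": "node",
      "request": "launch",
      "runtimeExecutable": "npm",
      "runtimeArgs": ["run", "start:ui"],
      "cwd": "${workspaceFolder}/frontend" // assumed location of the UI scripts
    }
  ],
  "compounds": [
    {
      "name": "Start API and UI",
      "configurations": ["Start API", "Start UI"]
    }
  ]
}
```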
The project is configured to be linted and formatted on both the backend and frontend.
If you are using VS Code, we recommend doing the following:
- Get the prettier-eslint extension.
- Set the default formatter to the prettier-eslint one.
- Configure VS Code to format your documents on save.
To manually lint and format, run the following in both the backend and frontend directories:

```
npm run lint
npm run format
```
To run the backend tests:

```
cd backend/
npm run test
```

To run the frontend tests:

```
cd frontend/
npm run test
```
Thank you for considering contributing to this open source project!
Please read our contributing guide and our code of conduct before contributing.