This project demonstrates how to build a video conferencing app with Next.js and AssemblyAI.
It shows how to set up live transcription as well as an LLM integration that makes a personal AI assistant available simply by saying a trigger word.
To get a detailed walk-through of the project, you can follow the blog post.
Features:
- Video calling, powered by Stream
- Realtime transcriptions powered by AssemblyAI
- LLM integration invoked by a spoken trigger word, using AssemblyAI's LeMUR (a minimal sketch of the idea follows this list)
- The LLM can access the meeting history when answering questions
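To illustrate the trigger-word idea, here is a minimal, hypothetical sketch of how an incoming real-time transcript segment could be checked for a trigger word before the rest of the text is handed to an LLM. The function names, the trigger word, and the callback are assumptions for illustration only, not the project's actual implementation:

```typescript
// Hypothetical sketch: detect a trigger word in a live transcript segment
// and pass the remaining text to an LLM callback. The names and the
// trigger word below are illustrative assumptions.
const TRIGGER_WORD = "assistant";

type AskLlm = (question: string, meetingHistory: string[]) => Promise<string>;

export async function handleTranscript(
  text: string,
  meetingHistory: string[],
  askLlm: AskLlm
): Promise<string | undefined> {
  const lower = text.toLowerCase();
  const index = lower.indexOf(TRIGGER_WORD);
  if (index === -1) return undefined; // no trigger word, nothing to do

  // Treat everything after the trigger word as the question.
  const question = text.slice(index + TRIGGER_WORD.length).trim();
  if (!question) return undefined;

  // Delegate to the LLM, passing the meeting history for context.
  return askLlm(question, meetingHistory);
}
```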
Follow these steps to get the project up and running for you.
This project requires Node.js 18.17 or later, which Next.js needs to build and run.
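If you are unsure which version you have installed, you can check it with:

```bash
node --version
```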
Head to the Stream Dashboard and create an account. Then, create a new project for your application (hosted and managed by Stream).
You need two values from this project:
- Your API key
- Your Secret
See the red rectangle in the screenshot below for where to retrieve this information in the Dashboard.
Create a `.env.local` file at the project's root and add the API key and the secret. A template file (`.env.template`) is available as a reference. Ensure you follow the correct naming conventions.
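As a minimal sketch of what the file might contain, assuming hypothetical variable names (check `.env.template` for the exact names the project expects):

```
# Hypothetical variable names -- use the names from .env.template
NEXT_PUBLIC_STREAM_API_KEY=your-stream-api-key
STREAM_SECRET=your-stream-secret
```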
Inside `app/page.tsx`, you must update the values of `userId` and `userName` to actual values instead of `undefined`.
If you forget to do this, the app shows an error telling you what is missing.
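As a rough sketch, the change inside `app/page.tsx` could look like the following; the concrete strings are placeholders of my own, not values from the project:

```typescript
// Inside app/page.tsx: replace the undefined placeholders with real values.
// The example strings below are illustrative assumptions.
const userId = "jane-doe";    // was: undefined
const userName = "Jane Doe";  // was: undefined
```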
Follow the link to create an AssemblyAI account, which is required for the real-time transcription and LLM functionality.
First, install all the dependencies for the project:
```bash
npm install
# or
yarn
```
You're ready to run the app with the command:
```bash
npm run dev
# or
yarn dev
```
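By default, the Next.js development server runs at http://localhost:3000, so open that URL in your browser to see the app.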
If you want to learn more, you can also check out these links: