
Build an AI-powered video conferencing app with Next.js and AssemblyAI

This project demonstrates how to build a video conferencing app with Next.js and AssemblyAI.

It shows how to set up live transcription services, as well as an LLM integration that gives you a personal AI assistant, ready whenever you say a trigger word.

For a detailed walk-through of the project, you can follow the accompanying blog post.

Features:

  • Video calling, powered by Stream
  • Real-time transcription, powered by AssemblyAI
  • LLM integration invoked by a trigger word, using AssemblyAI's LeMUR (see the sketch after this list)
  • The assistant can access the meeting's history when answering questions
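
To give a concrete feel for the trigger-word mechanism, here is a minimal sketch of how incoming transcript text could be scanned for a keyword before handing the rest of the sentence to the LLM. The trigger word, function name, and punctuation handling are assumptions for illustration, not the repository's actual code:

// Minimal sketch: scan a finalized transcript segment for a trigger
// word and extract the question that follows it.
const TRIGGER = 'hey assistant'; // assumed trigger word

function extractQuestion(transcript: string): string | null {
  const at = transcript.toLowerCase().indexOf(TRIGGER);
  if (at === -1) return null; // trigger word not spoken in this segment
  const question = transcript
    .slice(at + TRIGGER.length)
    .replace(/^[\s,.!?]+/, '') // drop punctuation right after the trigger
    .trim();
  return question.length > 0 ? question : null;
}

// extractQuestion('Okay, hey assistant, what did we decide on pricing?')
// -> 'what did we decide on pricing?'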

Running the project locally

Follow these steps to get the project up and running.

Step 0: Project requirements

This project requires Node.js 18.17 or later to build with Next.js.

Step 1: Set up access to a Stream backend

Head to the Stream Dashboard and create an account. Then, create a new project for your application (its infrastructure is handled and managed by Stream).

You need two values from this project:

  1. Your API key
  2. Your Secret

The red rectangle in the screenshot below marks where to retrieve this information in the Dashboard.

[Screenshot: Stream Dashboard with the API key and secret highlighted]
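
The secret must stay on the server: it is used to sign the user tokens that the video client authenticates with. Stream user tokens are JWTs carrying a user_id claim, signed with your API secret, so a minimal server-side sketch (using the jsonwebtoken package; the env variable name is a placeholder matching the .env.local step below) could look like this:

import jwt from 'jsonwebtoken';

// Sketch: mint a Stream user token on the server. Stream tokens are
// HS256-signed JWTs whose payload carries the user's ID.
export function createUserToken(userId: string): string {
  return jwt.sign({ user_id: userId }, process.env.STREAM_API_SECRET!, {
    algorithm: 'HS256',
  });
}

In practice you rarely sign the JWT by hand, since Stream's server-side SDKs ship token helpers that do this for you.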

Create a .env.local file at the project's root and add the API key and the secret. A template file (.env.template) shows the expected format; make sure you follow its naming conventions.
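
Assuming variable names along these lines (check .env.template for the exact names this project expects), .env.local might look like:

NEXT_PUBLIC_STREAM_API_KEY=your-stream-api-key
STREAM_API_SECRET=your-stream-api-secret

Note that in Next.js, only variables prefixed with NEXT_PUBLIC_ are exposed to the browser, which is why the secret must never carry that prefix.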

Inside app/page.tsx, you must update the values of userId and userName to be actual values instead of undefined.
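
For example (both values are placeholders, pick your own):

// app/page.tsx (excerpt)
const userId = 'jane-doe';   // any stable, unique ID for the user
const userName = 'Jane Doe'; // the display name shown in the call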

If you forget this step, the app will show an error telling you what is missing.

Step 2: Set up an AssemblyAI account

Create an AssemblyAI account; it powers both the real-time transcription and the LLM functionality.
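
Like the Stream secret, your AssemblyAI API key should stay server-side. One common pattern, sketched here assuming an ASSEMBLYAI_API_KEY entry in .env.local and a hypothetical route path, is a Next.js route handler that exchanges the key for a short-lived realtime token the browser can use:

// app/api/assemblyai-token/route.ts (hypothetical path)
// Exchanges the server-side API key for a temporary realtime token,
// so the real key is never shipped to the browser.
export async function GET() {
  const res = await fetch('https://api.assemblyai.com/v2/realtime/token', {
    method: 'POST',
    headers: {
      authorization: process.env.ASSEMBLYAI_API_KEY!,
      'content-type': 'application/json',
    },
    body: JSON.stringify({ expires_in: 3600 }), // token valid for one hour
  });
  const { token } = await res.json();
  return Response.json({ token });
}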

Step 3: Run the project

First, install all the dependencies for the project:

npm install
# or
yarn

You're ready to run the app with the command:

npm run dev
# or
yarn dev
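
Once the dev server starts, open http://localhost:3000 (the Next.js default) in your browser to try the app.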

