Welcome to the "OpenAI and Prompt Engineering for React Developers" workshop! In this workshop, we will explore how to leverage OpenAI's capabilities within a React application, focusing particularly on prompt engineering techniques. Our project will be built using Next.js 13.
Before we get started, you'll need to set up an account with OpenAI and generate an API key:
- Visit OpenAI’s API keys page.
- Sign up for an account if you don’t have one already.
- Once logged in, navigate to the API keys section and generate a new API key.
- Securely save your API key; you will need it for the workshop.
Follow these steps to set up your development environment:
This project requires Node.js version 18. Use nvm to switch to the correct version:
nvm use
If you don’t have Node.js version 18 installed, nvm will prompt you to install it:
nvm install 18
Navigate to the project directory in your terminal, then run:
npm install
This command will install all the required dependencies.
Create a .env.local file at the root of your project, and add your OpenAI API key:
OPENAI_API_KEY=your-api-key-here
Make sure to replace your-api-key-here with the actual API key you generated.
Start the development server with:
npm run dev
- Check that your API key is working: go to http://localhost:3000/api/example and you should see the available models in the returned data. TIP: use a JSON formatter extension if you don't already!
- In /pages/api/example.ts we are currently sending the list of models to the client. Change the code so that we send a text completion instead. You should see the data at http://localhost:3000/api/example
- Craft a new prompt (or prompts) that generates startup ideas. Try getting the API to return a completion that includes:
- Product Name
- Idea
- Mission
- Unique Selling Points (USPs)
- Remember, you can also use the playground
- Analyze the API response:
- How many tokens is your prompt using? What is the cost?
- Explore ways to improve the response using the best-practice prompting techniques we talked about.
- Investigate whether changing the model (currently gpt-3.5-turbo) and the hyperparameters (currently the defaults!) makes a significant difference. Check the docs.
- Remember, you can also use the playground, then copy across your prompt and settings.
- Bonus: In /pages/api/generateImage start experimenting with the images endpoint in the OpenAI API. Use your prompting knowledge to generate an image for a given startup idea. For now you can simply return the url and click to view.
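The exercise above can be sketched as follows. This is a minimal illustration, not the workshop's solution: the function names are made up, and it calls the chat completions REST endpoint directly with fetch to stay dependency-free, whereas the workshop repo likely uses the `openai` npm package.

```typescript
// Sketch of replacing the model list in pages/api/example.ts with a text
// completion. Names (buildStartupPrompt, generateCompletion) are illustrative.

// Build a prompt asking for the structured sections the exercise lists.
function buildStartupPrompt(topic: string): string {
  return [
    `Generate a startup idea in the "${topic}" space.`,
    "Include the following sections:",
    "- Product Name",
    "- Idea",
    "- Mission",
    "- Unique Selling Points (USPs)",
  ].join("\n");
}

// Call the OpenAI chat completions REST endpoint directly with fetch
// (the workshop code may use the `openai` npm package instead).
async function generateCompletion(prompt: string, apiKey: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`OpenAI API error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

In the API route you would read OPENAI_API_KEY from the environment, await generateCompletion, and send the resulting text to the client instead of the model list.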
- Navigate to http://localhost:3000/ideaGenerator. This is the page we'll implement.
- Utilize pages/api/generateText.ts for the server-side implementation. You should expect an input to be passed on the request from the client, then use it in your prompt. HINT: check out the code comments in that file!
- Implement your frontend in src/app/ideaGenerator/page.tsx; you'll need to make a POST request to the endpoint at pages/api/generateText that contains the user input to be passed into the prompt.
- Further develop a prompt for generating an image based on the user input in pages/api/generateImage.ts. You will receive a URL in response, where you can view the generated image.
- Integrate the image generation functionality into your startup idea generator in src/app/ideaGenerator/page.tsx.
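The POST request from the frontend can be sketched as below. This is a minimal illustration, not the workshop's implementation: the payload shape (`{ input }`) and the response field name are assumptions, so match whatever your generateText.ts actually expects.

```typescript
// Sketch of the client-side call from src/app/ideaGenerator/page.tsx.
// The payload shape ({ input }) is an assumption — match what generateText.ts expects.

// Build the fetch options separately so the request shape is easy to inspect.
function buildIdeaRequest(userInput: string): RequestInit {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input: userInput }),
  };
}

// Hypothetical helper you might call from the page component.
async function fetchIdea(userInput: string): Promise<string> {
  const res = await fetch("/api/generateText", buildIdeaRequest(userInput));
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = await res.json();
  return data.result; // the response field name is an assumption
}
```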
- In pages/api/GenerateData, implement your existing prompt along with a shape (schema) for the JSON you want the LLM to return. The data should include a product, idea, mission, and an array of unique selling points; feel free to add more! Check the code comments for hints.
- Complete the implementation of the frontend in src/app/ideaGeneratorStructured/page.tsx.
- Bonus: Explore using Zod for your schema creation and validation with zodToJsonSchema; passing the LLM a JSON schema works well. This is also the approach taken by the output parser from LangChain. You could try implementing your own or try using their output parser.
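The structured-output idea can be sketched dependency-free as below. This is a hand-rolled illustration (Zod + zodToJsonSchema is the tidier route): embed the desired JSON shape in the prompt, then validate the parsed response before trusting it. All names here are assumptions, not from the workshop repo.

```typescript
// Hand-rolled sketch of the structured-output exercise: embed a JSON shape in the
// prompt, then validate the parsed reply. (Zod would replace the manual checks.)

interface StartupIdea {
  product: string;
  idea: string;
  mission: string;
  uniqueSellingPoints: string[];
}

// Describe the shape in the prompt so the model returns matching JSON.
function buildStructuredPrompt(topic: string): string {
  const shape = {
    product: "string",
    idea: "string",
    mission: "string",
    uniqueSellingPoints: ["string"],
  };
  return (
    `Generate a startup idea about "${topic}". ` +
    `Respond ONLY with JSON matching this shape:\n${JSON.stringify(shape, null, 2)}`
  );
}

// Parse and check the model's reply before using it.
function parseStartupIdea(raw: string): StartupIdea {
  const data = JSON.parse(raw);
  const ok =
    typeof data.product === "string" &&
    typeof data.idea === "string" &&
    typeof data.mission === "string" &&
    Array.isArray(data.uniqueSellingPoints) &&
    data.uniqueSellingPoints.every((p: unknown) => typeof p === "string");
  if (!ok) throw new Error("Response does not match the StartupIdea shape");
  return data as StartupIdea;
}
```

LLMs follow an explicit JSON shape in the prompt fairly reliably, but the validation step matters because they occasionally return malformed or incomplete objects.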
Note: We will use the Next.js 13 app router for this section.
- Go to src/app/api/completion/route.ts. Implement text streaming on the server to generate a startup idea and enhance the user experience. Vercel has made it super simple; follow the docs: https://sdk.vercel.ai/docs/api-reference/openai-stream
- Go to src/app/streamingText/page.ts. Implement the useCompletion utility hook in the UI and ensure it's working; see the code comments and use the docs: https://sdk.vercel.ai/docs/api-reference/use-completion#usecompletion
- Assuming your streaming text is working, refactor it so that it returns a JSON object as we did in the previous exercise.
- Now we would like to stream that JSON for better UX. Luckily, as this is a common problem, there is another little helper library we can use, called http-streaming-request. Explore and implement JSON streaming using the http-streaming-request library. There is a mostly complete template you can use to try this in src/app/streamingJSON/page.ts; you can see the result at http://localhost:3000/streamingJSON
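To demystify what the streaming helpers are doing for you: OpenAI's streaming responses arrive as server-sent events, i.e. `data: {json}` lines each carrying an incremental piece of the completion. The sketch below extracts the text deltas from one raw SSE chunk; the Vercel AI SDK's OpenAIStream and http-streaming-request wrap this kind of parsing, so you would not normally write it yourself.

```typescript
// Extract the incremental text from one raw SSE chunk of an OpenAI streaming
// response. Each "data: " line holds a JSON event with a delta; "[DONE]" ends
// the stream. Illustrative only — the streaming libraries do this for you.
function extractDeltas(sseChunk: string): string {
  let text = "";
  for (const line of sseChunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data: ")) continue;
    const payload = trimmed.slice("data: ".length);
    if (payload === "[DONE]") continue; // end-of-stream sentinel
    const parsed = JSON.parse(payload);
    // Each event carries an incremental piece of the completion text.
    text += parsed.choices?.[0]?.delta?.content ?? "";
  }
  return text;
}
```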
- Have a look at the example data in scripts/fineTuneData.jsonl. Have a go at preparing your own.
- Have a look at the examples of uploading and fine-tuning a model via the API in createModel.mjs. Have a go at uploading and listing files and training a model. You can run this script with node; don't forget to update the .env with your API key.
Feel free to reach out for support or clarification as you work through these exercises. Happy coding!