diff --git a/.ipynb_checkpoints/Untitled-checkpoint.ipynb b/.ipynb_checkpoints/Untitled-checkpoint.ipynb
new file mode 100644
index 0000000000..363fcab7ed
--- /dev/null
+++ b/.ipynb_checkpoints/Untitled-checkpoint.ipynb
@@ -0,0 +1,6 @@
+{
+ "cells": [],
+ "metadata": {},
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/Lab 1/README.md b/Lab 1/README.md
index 1b81c96b31..6fd7c83a28 100644
--- a/Lab 1/README.md
+++ b/Lab 1/README.md
@@ -1,8 +1,8 @@
-
 # Staging Interaction
-\*\***NAME OF COLLABORATORS HERE**\*\*
+<<<<<<< HEAD
+\*\***Crystal Chong**\*\*

In the original stage production of Peter Pan, Tinker Bell was represented by a darting light created by a small handheld mirror off-stage, reflecting a little circle of light from a powerful lamp. Tinker Bell communicates her presence through this light to the other characters. See more info [here](https://en.wikipedia.org/wiki/Tinker_Bell).

@@ -39,9 +39,11 @@ _Make sure you read all the instructions and understand the whole of the laborat

### The Report

This README.md page in your own repository should be edited to include the work you have done (the deliverables mentioned above). Following the format below, you can delete everything but the headers and the sections between the **stars**. Write the answers to the questions under the starred sentences. Include any material that explains what you did in this lab hub folder, and link it in your README.md for the lab.
+=======
+\*\* Qianxin(Carl) Gan (qg72) & Mingze Gao (mg2454) \*\*
+>>>>>>> 4f162c26a8d8317a1abe097a817e873dc7cee35b

## Lab Overview
-
For this assignment, you are going to:

A) [Plan](#part-a-plan)

@@ -55,101 +57,390 @@ E) [Costume the device](#part-e-costume-the-device)

F) [Record the interaction](#part-f-record)

-Labs are due on Mondays. Make sure this page is linked to on your main class hub page.

## Part A. Plan

-To stage an interaction with your interactive device, think about:
+<<<<<<< HEAD
+# Introducing the Pet Companion Device: Your Furry Friend's Perfect Pal
+
+Welcome to a groundbreaking addition for every household – the **Pet Companion Device**. Engineered to seamlessly fit into bustling city apartments or tranquil countryside abodes, this revolutionary device is always on standby, day or night. A loyal companion that remains by your pet's side, even when you're away.
+
+![](https://hackmd.io/_uploads/B1ifz1iph.jpg)
+
+\*\***Describe your setting, players, activity and goals here.**\*\*
+
+### Tailored for Every Member of the Family
+
+Catering to pet owners, children, and their cherished animal companions, this device offers an array of dynamic interactions. Picture your pet cat engrossed in playful engagement with the device, igniting its screen with a symphony of vibrant colors that mirror its every move (a sketch of this mapping follows the list):
+
+- When your cat emits a soft meow, witness the screen morph into a soothing Blue hue.
+- A gentle lick from your furry companion douses the screen in a playful Purple glow.
+- Inquisitive paws dancing across the device conjure a cheerful Purple and Black radiance.
+- Energetic punches and swats paint the screen in gleaming Black and White brilliance.
+- When your pet cat finds repose on the device, the screen softly transitions to a tranquil Green, reflecting its serene moment of rest.
+- As you draw near home, the screen emanates a warm Yellow, indicating your approach.
+- Should unfamiliar faces approach, the screen bathes the surroundings in a warning Red aura.
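To make the list above concrete, here is a minimal Python sketch of how the event-to-color table might be wired up. The event names and RGB values are illustrative assumptions on our part, not the device's actual implementation.

```python
# Hypothetical mapping from detected pet/household events to screen colors,
# mirroring the list above. Event names and RGB values are illustrative only.
PET_EVENT_COLORS = {
    "meow":          [(0, 0, 255)],                  # soothing Blue
    "lick":          [(128, 0, 128)],                # playful Purple
    "paw_tap":       [(128, 0, 128), (0, 0, 0)],     # Purple and Black
    "swat":          [(0, 0, 0), (255, 255, 255)],   # Black and White
    "rest":          [(0, 128, 0)],                  # tranquil Green
    "owner_near":    [(255, 255, 0)],                # warm Yellow
    "stranger_near": [(255, 0, 0)],                  # warning Red
}

def colors_for(event):
    """Return the color sequence for a detected event, defaulting to screen off."""
    return PET_EVENT_COLORS.get(event, [(0, 0, 0)])

print(colors_for("meow"))  # [(0, 0, 255)]
```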
### Bridging the Gap for Pet Owners

-_Setting:_ Where is this interaction happening? (e.g., a jungle, the kitchen) When is it happening?

-_Players:_ Who is involved in the interaction? Who else is there? If you reflect on the design of current day interactive devices like the Amazon Alexa, it’s clear they didn’t take into account people who had roommates, or the presence of children. Think through all the people who are in the setting.

+For pet owners, the primary objective revolves around nurturing an unbreakable bond and soothing their pets' solitude. Through the Pet Companion Device, they seamlessly provide unending entertainment and companionship, alleviating their pets' loneliness and ensuring their well-being, regardless of their physical absence.

-_Activity:_ What is happening between the actors?

+### A World of Interaction for Pets

-_Goals:_ What are the goals of each player? (e.g., jumping to a tree, opening the fridge).

+Conversely, the device opens up a world of interaction for pets, satisfying their innate curiosity and thirst for engagement. With playful barks, licks, taps, or swipes, pets set off captivating color changes and delightful sounds, bestowing a wellspring of amusement and a semblance of interaction – akin to spending quality time with their human caregivers.

-The interactive device can be anything *except* a computer, a tablet computer or a smart phone, but the main way it interacts needs to be using light.

+\*\***Include pictures of your storyboards here**\*\*
+
+![](https://hackmd.io/_uploads/B14mz1jTn.jpg)
+![](https://hackmd.io/_uploads/Bk2QM1jTh.jpg)
+![](https://hackmd.io/_uploads/rJcDHko62.jpg)
+![](https://hackmd.io/_uploads/H107Hyo62.jpg)
+![](https://hackmd.io/_uploads/SkMrr1ian.jpg)
+![](https://hackmd.io/_uploads/SJ9IH1o6h.jpg)
+
+![6471693498781_ pic](https://github.com/crystalchong0058/Interactive-Lab-Hub/assets/78544539/ff839607-1f08-4894-8d96-cc6c4482b18f)
+=======

\*\***Describe your setting, players, activity and goals here.**\*\*

-Storyboards are a tool for visually exploring a user's interaction with a device. They are a fast and cheap method to understand user flow, and iterate on a design before attempting to build on it. Take some time to read through this explanation of [storyboarding in UX design](https://www.smashingmagazine.com/2017/10/storyboarding-ux-design/). Sketch seven storyboards of the interactions you are planning. **It does not need to be perfect**, but must get across the behavior of the interactive device and the other characters in the scene.

+_Setting:_ In our design, we are implementing scenario-based ambience lighting that corresponds to different modes of interaction. More specifically, it targets the most common spaces one might be in: a workstation, a bedroom, a personal vehicle, etc.

+_Players:_ The design is centered around one user, but is scalable to multiple users depending on the scenario (e.g., in shared spaces). For multiple users, it can be used as a status indicator.

+_Activity:_ The user is the primary actor who interacts with the ambience lighting device. The device changes lighting modes based on the user's activities and preferences.

+_Goals:_ The goal of the player is to maximize their performance and stay in a good mood with the help of our ambience lighting system. For example, the system can help players drive better according to their mood and their desired driving style.
Another example: the lighting system can also reflect the user's sleep quality and cheer them up whenever they are in the room in a low mood.

\*\***Include pictures of your storyboards here**\*\*

-Present your ideas to the other people in your breakout room (or in small groups). You can just get feedback from one another or you can work together on the other parts of the lab.

+![Storyboard 1-3](media/storyboard-1.png)
+![Storyboard 4-6](media/storyboard-2.png)
+![Storyboard 7](media/storyboard-3.png)
+>>>>>>> 4f162c26a8d8317a1abe097a817e873dc7cee35b

\*\***Summarize feedback you got here.**\*\*

+Yifan Yu: I really like the cat your team drew in the storyboard. The cat looks cute. Also, the internal mechanics are particularly detailed, and I really like the concept you designed for the internal structure. Your video is very creative, and the cardboard you chose gives a better sense of the volume of the product. But I would like to see more interaction between people and the product. I also hope that this product will have more features in the future, especially more practical ones, making it more than just a cat with glowing eyes. There should also be some explanation in the video your team provided.
+<<<<<<< HEAD
+Ben: I liked the device a lot! My only negative comment would be that you didn’t really answer all the questions posed in the prompt. I thought your video was very clear and good. The interaction was well demonstrated.
+
+Gilberto: I really liked the illustrations for your device; the only thing I can say is to reorganize the answers into the format the professor wanted.
+
+Gloria: I really like your drawings! Your demo video was very cool, and I liked how the cat's eyes changed as different external actions were performed. One thing I would suggest is that adding explanations/notes next to your drawings would be very helpful for people to follow :)
+
+Kenneth: I really liked the simple but cute and effective drawings for your storyboard. It was very clear to me what your ideas were for your device. I also liked the video you made for showcasing the device; it was clear and visually effective. It also includes subtitles, which makes things even clearer. The concept was cute and creative, and I liked the use of the stuffed elephant to show how an animal can interact with the device :). You could possibly add more features and have some form of human interaction, not just between pets and the device. Also, for Lab 1b, I think adding sound such as purring or growling when the pet does certain actions would be very cool.
+
+We got several pieces of useful feedback. First of all, we had accidentally put the URLs of our two videos in the opposite places, so we fixed that immediately. We were also advised to use simpler language, since readability is more important than sounding 'professional', and to create a preview of our video, which we will look into. Lastly, we were advised to consider adding body-movement and sound reactions in response to the pet's actions to enhance the user experience; that is also next on our to-do list.
+=======
+Crystal: This is so cool! Great scenarios. I think this is a great companion device that would help out a lot in providing insights and keeping track of everyday activities.
+
+Yiming: I really like the sketches of devices you created, especially the mood-based design. I think it will be very useful in the real world.
+
+Ben: The video looks great! Makes the interaction very clear.
+
+Gloria: I love the storyboards, especially the car one! It could also be a good inspiration for people who have road rage, reminding them to be aware of and control their emotions by using the light device. It’s just an expansion idea. Your video and demonstrations were very clear and easy to follow.
+
+Michael: Great video! One question I have about the RoomBuddy is the light feature for sleeping. In the video, the light turns green to indicate good sleep quality. However, I was wondering when this light turns on, as the video makes it seem like the light turns on while the person is still sleeping. Is this after they wake up? If not, I feel like the light might disturb someone's sleep.
+
+Mingzhe Sun: Very impressive device. I like the idea that the RoomBuddy can interact with the user's emotions. I think this interaction is very useful for regulating people's emotions. One suggestion I have would be to provide a positive, supportive color when people are not in a good mood.


## Part B. Act out the Interaction

\*\***Are there things that seemed better on paper than acted out?**\*\*
+While acting, we realized that timing could be an issue in sending messages to the device. For example, if the user's motion is too fast or too slow, it might cause the device to misinterpret the intended action.
+
\*\***Are there new ideas that occur to you or your collaborators that come up from the acting?**\*\*
+We have an idea that would help users interact with the device more effectively: a short user guide containing simple animations or instructional prompts that show how to interact with the device in different scenarios. In addition, we also thought of further blending the system into its surroundings.
+
## Part C. Prototype the device

-You will be using your smartphone as a stand-in for the device you are prototyping. You will use the browser of your smart phone to act as a “light” and use a remote control interface to remotely change the light on that device.

\*\***Give us feedback on Tinkerbelle.**\*\*
+
+The overall process of using Tinkerbelle is very convenient and hassle-free. It would be nice if some more features could be added. For example, when using the phone as a lighting device, we would like to see a full-screen mode that displays only the color, instead of the buttons currently on the screen. In addition, it would be great if it could work without a private network, or even without internet access.
+
+## Part D. Wizard the device
+
+\*\***Include your first attempts at recording the set-up video here.**\*\*
+
+[YouTube Video for the Set-up](https://youtube.com/shorts/tdQswdd_Q1Q?feature=share)
+
+\*\***Show the follow-up work here.**\*\*
+
+After testing out the device in the physical environment, we further improved the design of the in-car display and associated the weather forecasting feature with the windbell. In this way, we can further incorporate our design into the natural surroundings of the prototyped settings and give our device a friendlier touch.
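For the wizarding step, something as simple as the following could stand in for the remote control: the "wizard" pushes an RGB value to a small web server running on the phone that acts as the light. This is a hypothetical sketch — the address and `/color` endpoint are made up for illustration; it is not Tinkerbelle's actual API.

```python
# Hypothetical wizard-of-Oz remote: push a color to the phone acting as the
# "light". The address and /color endpoint are assumptions for illustration;
# this is not how Tinkerbelle itself works.
import requests

PHONE_URL = "http://192.168.1.42:5000/color"  # hypothetical address of the phone

def push_color(r, g, b):
    """Send an RGB triple for the phone's browser to display full-screen."""
    requests.post(PHONE_URL, json={"r": r, "g": g, "b": b}, timeout=2)

push_color(255, 255, 0)  # warm yellow as the actor "arrives home"
```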
+## Part E. Costume the device

\*\***Include sketches of what your devices might look like here.**\*\*
+
+![Costumed Designs](media/costumed_sketches.jpg)
+
\*\***What concerns or opportunities are influencing the way you've designed the device to look?**\*\*
+
+In designing the device, a few concerns influenced how it should look. Firstly, the device will be placed in multiple areas inside vehicles or rooms; therefore, installation is crucial, and the design should facilitate simple and easy installation. Secondly, size and flexibility are also concerns, since the places the device is installed can differ slightly. For example, it might be placed behind a vehicle's door handle or in the corner of a bedroom. With these cases in mind, the device should be made of bendable materials.
+
+## Part F. Record
+
+\*\***Take a video of your prototyped interaction.**\*\*
+
+[Youtube Video for the Prototyped Interactions](https://youtube.com/shorts/-T5DoJo7uYs?feature=share)
+
+In addition, here is a prototype for the weather/mood based windbell design:
+
+![Windbell Prototype](media/windbell_prototype.gif)
+
+\*\***Please indicate anyone you collaborated with on this Lab.**\*\*
+
+Throughout this lab, Qianxin Gan and Mingze Gao collaborated with equal contribution towards different parts of the design. In addition, we would also like to thank Ziyang Wei for his generosity.
+
+# Staging Interaction, Part 2
+
+\*\***NAME OF COLLABORATORS HERE**\*\*
+John Li (jl4239), Shiying Wu (sw2298), Mingze Gao (mg2454), Crystal Chong (cc2795), Qianxin(Carl) Gan (qg72), Mingzhe Sun (ms3636)

-Code for the "Tinkerbelle" tool, and instructions for setting up the server and your phone are [here](https://github.com/FAR-Lab/tinkerbelle).
-We invented this tool for this lab!

## Part A. Plan
+**_Setting:_** The interaction takes place in a kitchen, where the device is typically affixed to the refrigerator door, though it is also detachable. The interaction initiates when a user stands within the range where the device can detect their presence. The interaction can be triggered through either touch or voice commands.
+
+**_Players:_** Anyone standing in front of the device can interact with it. Voice commands are not tied to specific users; in other words, anyone saying the keyword 'Hi' can activate it.
+
+**_Activity:_** 7 different activities are provided by the device:
+
+**1. Dietary Guardian:** This device scans the food items provided by the user and assesses whether this combination is healthy or not, taking into account the user's health records, dietary restrictions, and current medications. It conveys this information through a color-coded system: red for unsafe and green for safe. Additionally, it provides further details through both audio and text projected onto the fridge.
+
+**2. Food Compatibility Advisor:** When the user is uncertain about a specific food combination, they can invoke the device through an audio command, and the device will respond with a color-coded system: red for unsafe and green for safe. Furthermore, it provides additional details through both audio and text projected onto the fridge.
+
+**3. Ingredient Checker:** Upload your grocery shopping list from your phone or let the device scan the receipt, allowing our system to check for potential food safety concerns before your purchase.
Additionally, it will remind you of items already in the fridge, displaying their last purchase date to prevent duplicate purchases.
+
+**4. Interactive Educational Helper:** Users can engage with the device in quiz mode to test their knowledge of food combinations.
+
+**5. Fridge Inventory Condition Checker:** When the user brings in new groceries, they can activate the grocery loading mode to monitor the freshness of their food. As items are placed in the fridge, the device identifies them, verbally confirming the food name and storage date with the user. If the device detects that the food is likely spoiled, it turns red and emits a warning sound to alert the user when they approach the fridge.
+
+**6. Drink & Beverage Tracker:** For bartenders, keeping tabs on drink and beverage inventory is essential to avoid running out of supplies. Each time the user adds a new drink to the fridge, the device updates the inventory count accordingly. Users can also set a minimum threshold for each drink while loading them into the fridge. If the quantity of a drink falls below the set threshold, the device turns red and emits a warning sound.
+
+**7. Monsieur le Chef:** Stumped about dinner options? This device suggests potential meals based on your current fridge inventory and provides recipes to help you decide.
+
-If you run into technical issues with this tool, you can also use a light switch, dimmer, etc. that you can can manually or remotely control.

+**_Goals:_** The goal of the player is to maintain a healthy diet.
+
+![](https://hackmd.io/_uploads/rJdaxf4R3.png)
+![](https://hackmd.io/_uploads/HJJr-fN03.png)
+![](https://hackmd.io/_uploads/Hyu6lfVC2.png)
+![](https://hackmd.io/_uploads/ByC4-MECh.png)
+
\*\***Summarize feedback you got here.**\*\*
+
+Overall, the feedback for our design concepts is very positive: our design concepts are well-thought-out and offer a variety of functionalities aimed at different needs and scenarios. At the same time, we were advised to use simpler language, as readability is also a key factor in addition to professionalism for our design. One of the suggestions is to provide users with a guide to what they can expect from the product. Lastly, we were advised to consider enhancing the user experience by adding more feedback beyond light; for example, we could incorporate sound cues and vibrations into our design to assist users. In addition, it is possible to pair the device with a smartphone application powered by learning algorithms. The application could help the device send timely reminders and facilitate orders initiated by the user, and it could also use data from the household to provide more personalized advice and further optimize the user experience.
+
+>>>>>>> 4f162c26a8d8317a1abe097a817e873dc7cee35b
+
+## Part B. Act out the Interaction
+
+Initially, we envisioned our device as a camera-like toy positioned atop the feeding machine, assuming it would effectively capture pet behavior. However, we came to recognize that this approach might limit the device's capabilities and hinder its ability to comprehensively gather the wide spectrum of information pets convey.
+
+<<<<<<< HEAD
+This is especially crucial given that pets predominantly interact with the feeding machine when they're hungry. Our goal is to ensure our bot can gather a diverse array of data to truly grasp and cater to our lonely pets' needs.
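As a sketch of what "a diverse array of data" could look like in practice, one possible shape for the interaction records the bot might collect is below. The fields are our own illustrative assumptions, not a finalized schema.

```python
# One possible shape for the interaction records the bot could collect.
# The fields are illustrative assumptions, not a finalized schema.
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class PetInteraction:
    timestamp: datetime   # when the interaction happened
    event: str            # e.g. "meow", "lick", "swat"
    duration_s: float     # how long the pet stayed engaged
    near_feeder: bool     # whether it occurred at the feeding machine

log: List[PetInteraction] = []
log.append(PetInteraction(datetime.now(), "meow", 3.5, near_feeder=False))
```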
+
+\*\***Are there new ideas that occur to you or your collaborators that come up from the acting?**\*\*
+
+During the implementation phase, a novel concept emerged in response to emergency scenarios, such as break-ins, fires, or floods. We considered the possibility of the device seamlessly transitioning between red and white colors while emitting a warning noise, effectively alerting both occupants and pets within the household to potential dangers.
+
+This innovative addition would significantly enhance the device's utility, expanding its role from a pet companion to a vital safety feature in the home.
+
+## Part C. Prototype the device

\*\***Give us feedback on Tinkerbelle.**\*\*
+=======
+
+\*\***Are there things that seemed better on paper than acted out?**\*\*
+At first, we planned to use text to display all the information needed by the user. However, when we acted out displaying text on the screen, we found that text is not intuitive and is hard for people to read on such a device. Therefore, we started to think about more interactive ways of conveying that information using light and voice. This finding guides the future design of our device.
+
+\*\***Are there new ideas that occur to you or your collaborators that come up from the acting?**\*\*
+We realized that since the device is designed to be a food advisor, it should also have the capability to provide food recommendations. With this idea in mind, we expanded the device's functionality to turn it into an at-home chef and nutritionist.
+
+## Part C. Prototype the device
+
+\*\***Feedback on Tinkerbelle.**\*\*
+Tinkerbelle is user-friendly and trouble-free. It could be improved if additional functionality were incorporated. For instance, while using the phone as a lighting tool, a full-screen mode that exclusively showcases the color without any buttons would be desirable. In addition, letting users manually input RGB values would increase Tinkerbelle's flexibility.
+>>>>>>> 4f162c26a8d8317a1abe097a817e873dc7cee35b
+There were some issues connecting the device over public Wi-Fi, but it was definitely a great tool, and we were also able to use the sound input box feature. Thank you for designing this!

## Part D. Wizard the device

-Take a little time to set up the wizarding set-up that allows for someone to remotely control the device while someone acts with it. Hint: You can use Zoom to record videos, and you can pin someone’s video feed if that is the scene which you want to record.

\*\***Include your first attempts at recording the set-up video here.**\*\*
+[](https://www.youtube.com/watch?v=TdZUB5HhaLk&ab_channel=AllenSun)
+
+[https://youtube.com/shorts/9iTrWCWAa60?si=6_9f6S8R_mHE0Dvx](https://youtu.be/j6D6OLSrnlQ?si=aAyYXWXkLJLpcdm0)

Now, change the goal within the same setting, and update the interaction with the paper prototype.

\*\***Show the follow-up work here.**\*\*
+See Part F for details.

+<<<<<<< HEAD
## Part E. Costume the device

-Only now should you start worrying about what the device should look like. Develop three costumes so that you can use your phone as this device.

\*\***Include sketches of what your devices might look like here.**\*\*
+
+Minimalist design:
+
+For a device that seamlessly blends into modern interiors, envision a sleek, compact form resembling a polished stone. Its smooth, rounded edges and matte finish would not only be visually appealing but also practical, preventing overheating concerns.
+
+![](https://hackmd.io/_uploads/B192-kjp2.jpg)
+
+Sporty Design:
+
+This design resembles a resilient rubber ball. It would withstand accidental drops and impacts, and it could incorporate a water-resistant seal to safeguard against potential spills.
+![](https://hackmd.io/_uploads/SJS_Q1jTh.jpg)
+
+Pet-Friendly Companion Buddy Design:
+
+Here we craft the device to resemble a small, plush animal friend, akin to a soft toy. The exterior could be covered in a gentle, pet-friendly fabric that's comfortable to touch and cuddle, with embroidered features like eyes, a nose, and a smiling mouth to give the device a friendly expression. The overall size should be compact and lightweight for easy handling by your pet.
+
+![](https://hackmd.io/_uploads/B1oRkJs62.jpg)

-Think about the setting of the device: is the environment a place where the device could overheat? Is water a danger? Does it need to have bright colors in an emergency setting?

-\*\***Include sketches of what your devices might look like here.**\*\*

\*\***What concerns or opportunities are influencing the way you've designed the device to look?**\*\*

+Circular Form: The circular shape not only carries a symbolism of continuity and balance but also aligns with our safety considerations. By avoiding sharp corners, we're actively mitigating potential risks associated with accidental collisions involving pets or children. This approach serves to create a safe and understated presence in your home.
+
+Materials: To adhere to the safety guidelines around pets and kids, we'll carefully select materials that are both pet-friendly and child-safe. We'll prioritize non-toxic options that are also durable and easy to clean, given the device's proximity to these important members of your household.
+
+Thoughtful Interactions: We'll discreetly design touch-sensitive areas on the device's surface. These areas will respond to interactions from pets and children, triggering delicate reactions such as subtle vibrations or faint chimes. This approach fosters curiosity and engagement without causing any discomfort.
+
+In this design, our focus is on seamlessly incorporating the circular shape and employing unobtrusive design elements.

## Part F. Record

\*\***Take a video of your prototyped interaction.**\*\*

+https://youtube.com/shorts/l5hPg9sLFPg?si=iLrlx1Lnfxs2qKOj
+
\*\***Please indicate anyone you collaborated with on this Lab.**\*\*

-Be generous in acknowledging their contributions! And also recognizing any other influences (e.g. from YouTube, Github, Twitter) that informed your design.
+We were inspired by the example in class. Also, we got inspiration from some classmates.

# Staging Interaction, Part 2

-This describes the second week's work for this lab activity.

+\*\***NAME OF COLLABORATORS HERE**\*\*
+John Li (jl4239), Shiying Wu (sw2298), Mingze Gao (mg2454), Crystal Chong (cc2795), Qianxin(Carl) Gan (qg72), Mingzhe Sun (ms3636)
+
+
+## Part A. Plan
+**_Setting:_** The interaction takes place in a kitchen, where the device is typically affixed to the refrigerator door, though it is also detachable. The interaction initiates when a user stands within the range where the device can detect their presence. The interaction can be triggered through either touch or voice commands.
+
+**_Players:_** Anyone standing in front of the device can interact with it. Voice commands are not tied to specific users; in other words, anyone saying the keyword 'Hi' can activate it.
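As an illustration of this kind of voice trigger, here is a minimal sketch assuming the open-source SpeechRecognition package and an attached microphone; the lab does not specify the device's real wake-word pipeline.

```python
# Hypothetical wake-word loop using the SpeechRecognition package; the
# device's real trigger pipeline is not specified in this lab.
import speech_recognition as sr

recognizer = sr.Recognizer()

def wait_for_wake_word(wake_word="hi"):
    """Block until the wake word is heard, then return."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        while True:
            audio = recognizer.listen(source)
            try:
                heard = recognizer.recognize_google(audio).lower()
            except sr.UnknownValueError:
                continue  # speech was unintelligible; keep listening
            if wake_word in heard:
                return  # device activates and greets the user
```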
+ +**_Activity:_** 7 different activities are provided by the device: + +**1. Dietary Guardian:** This device scans the food items provided by the user and assesses whether this combination is healthy or not, taking into account the user's health records, dietary restrictions, and current medications. It conveys this information through a color-coded system: red for unsafe and green for safe. Additionally, it provides further details through both audio and text projected onto the fridge. + +**2. Food Compatibility Advisor:** When the user is uncertain about a specific food combination, they can invoke the device through an audio command, and the device will respond with a color-coded system: red for unsafe and green for safe. Furthermore, it provides additional details through both audio and text projected onto the fridge. + +**3. Ingredient Checker:** Upload your grocery shopping list from your phone or let the device scan the receipt, allowing our system to check for potential food safety concerns before your purchase. Additionally, it will remind you of items already in the fridge, displaying their last purchase date to prevent duplicate purchases. + +**4. Interactive Educational Helper:** Users can engage with the device in quiz mode to test their knowledge of food combinations. +**5. Fridge Inventory Condition Checker:** When the user brings in new groceries, they can activate the grocery loading mode to monitor the freshness of their food. As items are placed in the fridge, the device identifies them, verbally confirming the food name and storage date with the user. If the device detects that the food is likely spoiled, it turns red and emits a warning sound to alert the user when they approach the fridge. -## Prep (to be done before Lab on Wednesday) +**6. Drink & Beverage Tracker:** For bartenders, keeping tabs on drink and beverage inventory is essential to avoid running out of supplies. Each time the user adds a new drink to the fridge, the device updates the inventory count accordingly. Users can also set a minimum threshold for each drink while loading them into the fridge. If the quantity of a drink falls below the set threshold, the device turns red and emits a warning sound. -You will be assigned three partners from another group. Go to their github pages, view their videos, and provide them with reactions, suggestions & feedback: explain to them what you saw happening in their video. Guess the scene and the goals of the character. Ask them about anything that wasn’t clear. +**7. Monsieur le Chef:** Stumped about dinner options? This device suggests potential meals based on your current fridge inventory and provides recipes to help you decide. + + +**_Goals:_** The goal of the player is to maintain a healthy diet. + +![](https://hackmd.io/_uploads/rJdaxf4R3.png) +![](https://hackmd.io/_uploads/HJJr-fN03.png) +![](https://hackmd.io/_uploads/Hyu6lfVC2.png) +![](https://hackmd.io/_uploads/ByC4-MECh.png) + +\*\***Summarize feedback you got here.**\*\* -\*\***Summarize feedback from your partners here.**\*\* +Overall, the feedback for our design concepts is very positive: our design concepts are well-thought-out and offer a variety of functionalities aimed at different needs and scenarios. At the same time, we were advised to use simpler language, as readability is also a key factor in addition to professionalism for our design. One of the suggestions is to provide users with a guide to what they can expect from the product. 
Lastly, we were advised to consider enhancing the user experience by adding more feedback beyond light; for example, we could incorporate sound cues and vibrations into our design to assist users. In addition, it is possible to pair the device with a smartphone application powered by learning algorithms. The application could help the device send timely reminders and facilitate orders initiated by the user, and it could also use data from the household to provide more personalized advice and further optimize the user experience.

## Part B. Act out the Interaction

Try physically acting out the interaction you planned. For now, you can just pretend the device is doing the things you’ve scripted for it.


\*\***Are there things that seemed better on paper than acted out?**\*\*
At first, we planned to use text to display all the information needed by the user. However, when we acted out displaying text on the screen, we found that text is not intuitive and is hard for people to read on such a device. Therefore, we started to think about more interactive ways of conveying that information using light and voice. This finding guides the future design of our device.

\*\***Are there new ideas that occur to you or your collaborators that come up from the acting?**\*\*
We realized that since the device is designed to be a food advisor, it should also have the capability to provide food recommendations. With this idea in mind, we expanded the device's functionality to turn it into an at-home chef and nutritionist.

## Part C. Prototype the device

\*\***Feedback on Tinkerbelle.**\*\*
Tinkerbelle is user-friendly and trouble-free. It could be improved if additional functionality were incorporated. For instance, while using the phone as a lighting tool, a full-screen mode that exclusively showcases the color without any buttons would be desirable. In addition, letting users manually input RGB values would increase Tinkerbelle's flexibility.


## Part D. Wizard the device

\*\***Include your first attempts at recording the set-up video here.**\*\*
[](https://www.youtube.com/watch?v=TdZUB5HhaLk&ab_channel=AllenSun)

Now, change the goal within the same setting, and update the interaction with the paper prototype.

\*\***Show the follow-up work here.**\*\*
See Part F for details.


=======
>>>>>>> 4f162c26a8d8317a1abe097a817e873dc7cee35b
## Part E. Costume the device

\*\***Include sketches of what your devices might look like here.**\*\*

Minimalism Design

Our first design is a coin-sized, portable device built for effortless convenience. This tiny attachment seamlessly affixes to your phone, so you can carry the device around and evaluate food anywhere.
![](https://hackmd.io/_uploads/BkwmD2XRn.jpg)

Wearable Design

Our second design is a sleek, glass-like device that instantly scans food, analyzing its nutritional quality, freshness, and safety. It then illuminates the results in vibrant colors, making it effortless to distinguish between good and bad food choices, ensuring your health and well-being with just a glance.
![](https://hackmd.io/_uploads/rySJDhX0n.jpg)

Seamless Household Design

Our third design is a fridge-sticker-sized smart device that blends into your kitchen. It identifies the quality of the food in your fridge, categorizing it as fresh, near expiry, or spoiled (a small sketch of this categorization follows), and offers personalized meal suggestions based on what's available. It's like having a nutritionist right in your kitchen, ensuring you make the best food choices while minimizing waste.
![](https://hackmd.io/_uploads/SysRLnQ03.jpg)
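To illustrate the fresh / near expiry / spoiled categories, here is a small sketch. The two-day threshold and the reliance on a printed expiry date are assumptions for illustration; the real device would presumably infer freshness from scans rather than labels.

```python
# Sketch of the freshness categories described above. The thresholds and the
# reliance on a printed expiry date are illustrative assumptions only.
from datetime import date

def freshness(expiry, today=None):
    """Classify an item as 'fresh', 'near expiry', or 'spoiled'."""
    today = today or date.today()
    days_left = (expiry - today).days
    if days_left < 0:
        return "spoiled"       # device turns red and plays a warning sound
    if days_left <= 2:
        return "near expiry"   # good candidate for a meal suggestion
    return "fresh"

print(freshness(date(2023, 9, 30), today=date(2023, 9, 29)))  # near expiry
```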
\*\***What concerns or opportunities are influencing the way you've designed the device to look?**\*\*

In designing the device, a few concerns influenced how it looks. Firstly, the device can be carried to different places; therefore, accessibility and flexibility have to be carefully considered, and the design should be simple and portable. Secondly, the size of the product needs to suit every situation; for example, we designed three different formats of the product to provide alternative ways of accessing the device.


## Part F. Record

\*\***Take a video of your prototyped interaction.**\*\*

[](https://youtu.be/kUUe0X8RDBg?si=FGWo7o_-pEecAE_G)

[](https://youtu.be/qtcV7ecHLRY?si=ChWoWfhIzqL5sroH)

[](https://youtu.be/SsefpLb3oVo?si=knwxaKX2xwUP7Si2)


\*\***Please indicate anyone you collaborated with on this Lab.**\*\*
John Li (jl4239), Shiying Wu (sw2298), Mingze Gao (mg2454), Crystal Chong (cc2795), Qianxin(Carl) Gan (qg72), Mingzhe Sun (ms3636)
<<<<<<< HEAD
=======

-Do last week’s assignment again, but this time:
-1) It doesn’t have to (just) use light,
-2) You can use any modality (e.g., vibration, sound) to prototype the behaviors! Again, be creative! Feel free to fork and modify the tinkerbell code!
-3) We will be grading with an emphasis on creativity.

\*\***Document everything here. (Particularly, we would like to see the storyboard and video, although photos of the prototype are also great.)**\*\*


>>>>>>> 4f162c26a8d8317a1abe097a817e873dc7cee35b
diff --git a/Lab 1/costume1.jpg b/Lab 1/costume1.jpg
new file mode 100644
index 0000000000..ae0a0f499b
Binary files /dev/null and b/Lab 1/costume1.jpg differ
diff --git a/Lab 1/costume2.jpg b/Lab 1/costume2.jpg
new file mode 100644
index 0000000000..a79a9ecb95
Binary files /dev/null and b/Lab 1/costume2.jpg differ
diff --git a/Lab 1/costume3.jpg b/Lab 1/costume3.jpg
new file mode 100644
index 0000000000..f6118bf580
Binary files /dev/null and b/Lab 1/costume3.jpg differ
diff --git a/Lab 1/media/costumed_sketches.jpg b/Lab 1/media/costumed_sketches.jpg
new file mode 100644
index 0000000000..6ed5eb0119
Binary files /dev/null and b/Lab 1/media/costumed_sketches.jpg differ
diff --git a/Lab 1/media/storyboard-1.png b/Lab 1/media/storyboard-1.png
new file mode 100644
index 0000000000..914a262390
Binary files /dev/null and b/Lab 1/media/storyboard-1.png differ
diff --git a/Lab 1/media/storyboard-2.png b/Lab 1/media/storyboard-2.png
new file mode 100644
index 0000000000..d85c56d2e0
Binary files /dev/null and b/Lab 1/media/storyboard-2.png differ
diff --git a/Lab 1/media/storyboard-3.png b/Lab 1/media/storyboard-3.png
new file mode 100644
index 0000000000..5755e08480
Binary files /dev/null and b/Lab 1/media/storyboard-3.png differ
diff --git a/Lab 1/media/windbell_prototype.gif b/Lab 1/media/windbell_prototype.gif
new file mode 100644
index 0000000000..7d7275f55e
Binary files /dev/null and b/Lab 1/media/windbell_prototype.gif differ
diff --git a/Lab 1/part2_costume1.jpeg b/Lab 1/part2_costume1.jpeg
new file mode 100644
index 0000000000..9b25a92f13
Binary files /dev/null and b/Lab 1/part2_costume1.jpeg differ
diff --git
a/Lab 1/part2_costume2.jpeg b/Lab 1/part2_costume2.jpeg new file mode 100644 index 0000000000..4dbb221c1f Binary files /dev/null and b/Lab 1/part2_costume2.jpeg differ diff --git a/Lab 1/part2_costume3.jpeg b/Lab 1/part2_costume3.jpeg new file mode 100644 index 0000000000..e0a3924300 Binary files /dev/null and b/Lab 1/part2_costume3.jpeg differ diff --git a/Lab 1/part2_storyboard1.png b/Lab 1/part2_storyboard1.png new file mode 100644 index 0000000000..07c7a59e83 Binary files /dev/null and b/Lab 1/part2_storyboard1.png differ diff --git a/Lab 1/part2_storyboard2.png b/Lab 1/part2_storyboard2.png new file mode 100644 index 0000000000..147236f581 Binary files /dev/null and b/Lab 1/part2_storyboard2.png differ diff --git a/Lab 1/part2_storyboard3.png b/Lab 1/part2_storyboard3.png new file mode 100644 index 0000000000..d0a7b96621 Binary files /dev/null and b/Lab 1/part2_storyboard3.png differ diff --git a/Lab 1/part2_storyboard4.png b/Lab 1/part2_storyboard4.png new file mode 100644 index 0000000000..cbd9b38d2d Binary files /dev/null and b/Lab 1/part2_storyboard4.png differ diff --git a/Lab 1/prototype1.jpg b/Lab 1/prototype1.jpg new file mode 100644 index 0000000000..7cb55d0ef3 Binary files /dev/null and b/Lab 1/prototype1.jpg differ diff --git a/Lab 1/prototype2.jpg b/Lab 1/prototype2.jpg new file mode 100644 index 0000000000..6501d20008 Binary files /dev/null and b/Lab 1/prototype2.jpg differ diff --git a/Lab 1/prototype3.jpg b/Lab 1/prototype3.jpg new file mode 100644 index 0000000000..3b81db0ce0 Binary files /dev/null and b/Lab 1/prototype3.jpg differ diff --git a/Lab 1/prototype_interaction.jpg b/Lab 1/prototype_interaction.jpg new file mode 100644 index 0000000000..86f3b18662 Binary files /dev/null and b/Lab 1/prototype_interaction.jpg differ diff --git a/Lab 1/setup.jpg b/Lab 1/setup.jpg new file mode 100644 index 0000000000..0c4e49e9a2 Binary files /dev/null and b/Lab 1/setup.jpg differ diff --git a/Lab 1/storyboard1.png b/Lab 1/storyboard1.png new file mode 100644 index 0000000000..185198ae6a Binary files /dev/null and b/Lab 1/storyboard1.png differ diff --git a/Lab 1/storyboard2.png b/Lab 1/storyboard2.png new file mode 100644 index 0000000000..9717437e8c Binary files /dev/null and b/Lab 1/storyboard2.png differ diff --git a/Lab 1/storyboard3.png b/Lab 1/storyboard3.png new file mode 100644 index 0000000000..d12cce37ef Binary files /dev/null and b/Lab 1/storyboard3.png differ diff --git a/Lab 1/storyboard4.png b/Lab 1/storyboard4.png new file mode 100644 index 0000000000..e2c2c319d3 Binary files /dev/null and b/Lab 1/storyboard4.png differ diff --git a/Lab 2/README.md b/Lab 2/README.md index a50b366b1f..b49ae1b88c 100644 --- a/Lab 2/README.md +++ b/Lab 2/README.md @@ -1,6 +1,8 @@ # Interactive Prototyping: The Clock of Pi **NAMES OF COLLABORATORS HERE** +John Li (jl4239), Shiying Wu (sw2298), Mingze Gao (mg2454), Crystal Chong (cc2795), Qianxin(Carl) Gan (qg72), Mingzhe Sun (ms3636) + Does it feel like time is moving strangely during this semester? For our first Pi project, we will pay homage to the [timekeeping devices of old](https://en.wikipedia.org/wiki/History_of_timekeeping_devices) by making simple clocks. @@ -77,20 +79,14 @@ Labs are due on Mondays. Make sure this page is linked to on your main class hub Just like you did in the lab prep, ssh on to your pi. Once you get there, create a Python environment (named venv) by typing the following commands. ``` -ssh pi@ +ssh johnli@100.110.133.141 ... 
-pi@raspberrypi:~ $ python -m venv venv
-pi@raspberrypi:~ $ source venv/bin/activate
-(venv) pi@raspberrypi:~ $
+johnli@johnli:~ $ python -m venv venv
+johnli@johnli:~ $ source venv/bin/activate
+(venv) johnli@johnli:~ $
```

### Setup Personal Access Tokens on GitHub
-Set your git name and email so that commits appear under your name.
-```
-git config --global user.name "Your Name"
-git config --global user.email "yourNetID@cornell.edu"
-```
-
The support for password authentication on GitHub was removed on August 13, 2021. That is, in order to link and sync your own lab-hub repo with your Pi, you will have to set up a "Personal Access Token" to act as the password for your GitHub account on your Pi when using git commands such as `git clone` and `git push`.

Follow the steps listed [here](https://docs.github.com/en/github/authenticating-to-github/keeping-your-account-and-data-secure/creating-a-personal-access-token) from GitHub to set up a token. Depending on your preference, you can select the scopes, or permissions, you would like to grant the token. This token will act as your GitHub password later when you use the terminal on your Pi to sync files with your lab-hub repo.

@@ -101,7 +97,7 @@ Following the steps listed [here](https://docs.github.com/en/github/authenticati

Clone your own lab-hub repo for this assignment to your Pi and change the directory to the Lab 2 folder (remember to replace the following command line with your own GitHub ID):
```
-(venv) pi@raspberrypi:~$ git clone https://github.com//Interactive-Lab-Hub.git
+(venv) pi@raspberrypi:~$ git clone https://github.com/zezhili/Interactive-Lab-Hub.git
(venv) pi@raspberrypi:~$ cd Interactive-Lab-Hub/Lab\ 2/
```
Depending on the setup, you might be asked to provide your GitHub user name and password. Remember to use the "Personal Access Token" you just set up as the password instead of your account one!

@@ -125,7 +121,7 @@ We have asked you to equip the [Adafruit MiniPiTFT](https://www.adafruit.com/pro

-The Raspberry Pi 4 has a variety of interfacing options. When you plug the pi in the red power LED turns on. Any time the SD card is accessed the green LED flashes. It has standard USB ports and HDMI ports. Less familiar it has a set of 20x2 pin headers that allow you to connect a various peripherals.
+The Raspberry Pi 3 has a variety of interfacing options. When you plug the Pi in, the red power LED turns on. Any time the SD card is accessed, the green LED flashes. It has standard USB and HDMI ports. Less familiar is its set of 20x2 pin headers that allow you to connect various peripherals.

To learn more about any individual pin and what it is for go to [pinout.xyz](htt

### Hardware (you have already done this in the prep)

-From your kit take out the display and the [Raspberry Pi 4](https://cdn-shop.adafruit.com/970x728/3775-07.jpg)
+From your kit take out the display and the [Raspberry Pi 3](https://cdn-shop.adafruit.com/970x728/3775-07.jpg)

Line up the screen and press it on the headers. The hole in the screen should match up with the hole on the raspberry pi.

@@ -220,10 +216,17 @@ After that, Git will ask you to login to your GitHub account to push the updates

## Make a short video of your modified barebones PiClock

\*\*\***Take a video of your PiClock.**\*\*\*
+[](https://youtu.be/mkWBqAWszes)
+

## Part G.
## Sketch and brainstorm further interactions and features you would like for your clock for Part 2.
-
+![](https://hackmd.io/_uploads/ryMNrBTC3.jpg)
+![](https://hackmd.io/_uploads/HkxNrHSaC3.jpg)
+![](https://hackmd.io/_uploads/SyhHHSaAn.jpg)
+It will interact with the external environment; the display will change based on the weather.
+![](https://hackmd.io/_uploads/B1-IBBpRn.jpg)
+When you click both buttons, it will play classical music representing the region.

# Prep for Part 2

2. Look at and give feedback on Part G for at least 2 other people in the class (and get 2 people to comment on your Part G!)

-# Lab 2 Part 2
+**Feedback:**
+
+_Gloria_: I really liked your displays and button interaction!! So creative and appealing. I also like how you demo the screen with the real clock time on the side in the video. Ngl, this is the best design I've seen so far! It’s fascinating to see how you actually implemented those representative and famous places in each country. Well done!
+I’m looking forward to seeing your next step!
+
+_Yifan_: Overall, very good visuals and backgrounds, which provide an intuitive understanding of the Pi. It would be better if the text font/size/color blended into the background.
+
+_Crystal_: I love how the button can switch between 12-hour and 24-hour formats. It is interesting to see different time zones from various countries.
+
+## Introduction
+
+In Part 2 of our project, we've developed an interactive game that allows users to engage with time in a fun and educational way. The game consists of three distinct phases, each with its own unique features and challenges.
+
+## Phases
+
+### Start Phase
+
+In the "Start Phase," the display shows the current time as a traditional clock. Two triangular arrows at the bottom of the screen indicate that the game mode can be invoked by pressing both buttons simultaneously.
+
+### Game Phase
+
+The "Game Phase" is the core of our interactive experience. The screen is divided into two halves, corresponding to the two buttons available. In this phase, various rectangles fall from the top of the screen, and the user's goal is to collect them. Each type of rectangle represents a different unit of time:
+
+- **Green Rectangles**: Represent hours.
+- **Blue Rectangles**: Represent minutes.
+- **Red Rectangles**: Represent seconds.
+
+As these rectangles reach the edge of the screen, the user must press the appropriate button to "collect" them. When successful, a yellow half-ellipse is displayed on the corresponding side of the screen, indicating a successful collection.
+
+### End Phase
+
+The "End Phase" marks the conclusion of the game. There are two possible outcomes:
+
+1. **User Collects All Rectangles**: If the user successfully collects all the falling rectangles, this phase displays the time at which the user invoked the game. It serves as a rewarding conclusion to the game.
+
+2. **User Loses Some Rectangles**: If the user fails to collect some of the falling rectangles, this phase also displays the time when the game was invoked. However, it serves as a gentle reminder of the missed opportunities during gameplay.
+
+After a few seconds, we switch the device back to the start phase. A simplified sketch of this three-phase flow follows.
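The sketch below summarizes the start/game/end cycle described above, kept independent of the ST7789 display code included later in this diff. The `both_buttons_pressed` and `play_round` callables are stand-ins for the real button polling and gameplay loop in `lab2_part2.py`.

```python
# Simplified sketch of the start -> game -> end cycle described above.
# `both_buttons_pressed` and `play_round` are stand-ins for the real
# button polling and gameplay loop in lab2_part2.py.
import time

def run_clock_game(both_buttons_pressed, play_round):
    phase = "start"
    invoked_at = None
    while True:
        if phase == "start":
            # Show the normal clock face until both buttons are held.
            if both_buttons_pressed():
                invoked_at = time.strftime("%m/%d/%Y %H:%M:%S")
                phase = "game"
        elif phase == "game":
            hits, targets = play_round()  # returns (collected, total)
            phase = "end"
        else:
            # End phase: show the result plus the invocation time, then reset.
            if hits == targets:
                print("Congrats! You have won the game!", invoked_at)
            else:
                print(f"GG! You have missed {targets - hits} hit(s)", invoked_at)
            time.sleep(7)
            phase = "start"
```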
## Video

-Pull Interactive Lab Hub updates to your repo.

[](https://youtu.be/8llVNj1WjJY)

-Modify the code from last week's lab to make a new visual interface for your new clock. You may [extend the Pi](Extending%20the%20Pi.md) by adding sensors or buttons, but this is not required.
-As always, make sure you document contributions and ideas from others explicitly in your writeup.

## Contribution List

-You are permitted (but not required) to work in groups and share a turn in; you are expected to make equal contribution on any group work you do, and N people's group project should look like N times the work of a single person's lab. What each person did should be explicitly documented. Make sure the page for the group turn in is linked to your Interactive Lab Hub page.

+Our collaborative effort in developing this interactive time game was a team endeavor, with each team member contributing in various ways:
+- **John Li**: Idea brainstorming, end page design and implementation, and assistance with debugging.
+- **Shiying (Sophie) Wu**: Idea brainstorming, game mode implementation, README documentation, and debugging support.
+- **Mingze (Kevin) Gao**: Idea brainstorming, initial game setup and testing, end page design and implementation, and debugging assistance.
+- **Crystal Chong**: Idea brainstorming, game mode design, end product verification and testing, and debugging support.
+- **Qianxin (Carl) Gan**: Idea brainstorming, game mode design, end product verification and testing, and debugging support.
+- **Mingzhe (Allen) Sun**: Idea brainstorming, game mode implementation, and debugging assistance.
+This collaborative effort allowed us to create an engaging and educational interactive game that explores the concept of time in a playful way.
diff --git a/Lab 2/__pycache__/binary_time.cpython-39.pyc b/Lab 2/__pycache__/binary_time.cpython-39.pyc
new file mode 100644
index 0000000000..59c5b37c87
Binary files /dev/null and b/Lab 2/__pycache__/binary_time.cpython-39.pyc differ
diff --git a/Lab 2/__pycache__/screen_test.cpython-311-pytest-7.4.0.pyc b/Lab 2/__pycache__/screen_test.cpython-311-pytest-7.4.0.pyc
new file mode 100644
index 0000000000..164dc609e6
Binary files /dev/null and b/Lab 2/__pycache__/screen_test.cpython-311-pytest-7.4.0.pyc differ
diff --git a/Lab 2/au.jpg b/Lab 2/au.jpg
new file mode 100644
index 0000000000..84a7f83adc
Binary files /dev/null and b/Lab 2/au.jpg differ
diff --git a/Lab 2/bg.jpg b/Lab 2/bg.jpg
new file mode 100644
index 0000000000..c2dd621886
Binary files /dev/null and b/Lab 2/bg.jpg differ
diff --git a/Lab 2/binary_time.py b/Lab 2/binary_time.py
new file mode 100644
index 0000000000..db86336a7b
--- /dev/null
+++ b/Lab 2/binary_time.py
@@ -0,0 +1,22 @@
+import time
+
+# Convert the current local time into 6-bit binary strings for hours, minutes, and seconds.
+def binary_time_conversion():
+    current_time = time.localtime()
+    hour_binary = bin(current_time.tm_hour)[2:].zfill(6)
+    minute_binary = bin(current_time.tm_min)[2:].zfill(6)
+    second_binary = bin(current_time.tm_sec)[2:].zfill(6)
+    return current_time, minute_binary, second_binary, hour_binary
+
+# Print the current time as MM/DD/YYYY HH:MM:SS.
+def print_time(current_time):
+    time_print = f"{current_time.tm_mon:02}/{current_time.tm_mday:02}/{current_time.tm_year} {current_time.tm_hour:02}:{current_time.tm_min:02}:{current_time.tm_sec:02}"
+    print("Current time:", time_print)
+
+# Return a formatted time string plus a 2D list of bits (minutes, seconds, hours) for the display grid.
+def convert_binary_to_2d():
+    current_time, minute_binary, second_binary, hour_binary = binary_time_conversion()
+    current_time = time.strftime("%m/%d/%Y \n %H:%M:%S")
+    return current_time, [[int(m) for m in minute_binary], [int(s) for s in second_binary], [int(h) for h in hour_binary]]
+
+
diff --git a/Lab 2/china.jpg b/Lab 2/china.jpg
new file mode 100644
index 0000000000..f56350ec54
Binary files /dev/null and b/Lab 2/china.jpg differ
diff --git a/Lab 2/france.jpg b/Lab 2/france.jpg
new file mode 100644
index 0000000000..64901722cc
Binary
files /dev/null and b/Lab 2/france.jpg differ diff --git a/Lab 2/future_font.ttf b/Lab 2/future_font.ttf new file mode 100644 index 0000000000..a49ac2f165 Binary files /dev/null and b/Lab 2/future_font.ttf differ diff --git a/Lab 2/lab2_game_over.py b/Lab 2/lab2_game_over.py new file mode 100644 index 0000000000..064b185579 --- /dev/null +++ b/Lab 2/lab2_game_over.py @@ -0,0 +1,184 @@ +import textwrap +import time +import pytz +import datetime +import subprocess +import digitalio +import board +from PIL import Image, ImageDraw, ImageFont +import adafruit_rgb_display.st7789 as st7789 + +# Configuration for CS and DC pins (these are FeatherWing defaults on M0/M4): +cs_pin = digitalio.DigitalInOut(board.CE0) +dc_pin = digitalio.DigitalInOut(board.D25) +reset_pin = None + +# Config for display baudrate (default max is 24mhz): +BAUDRATE = 64000000 + +# Setup SPI bus using hardware SPI: +spi = board.SPI() + +# Create the ST7789 display: +disp = st7789.ST7789( + spi, + cs=cs_pin, + dc=dc_pin, + rst=reset_pin, + baudrate=BAUDRATE, + width=135, + height=240, + x_offset=53, + y_offset=40, +) + +# Create blank image for drawing. +# Make sure to create image with mode 'RGB' for full color. +height = disp.width # we swap height/width to rotate it to landscape! +width = disp.height +image = Image.new("RGB", (width, height)) +rotation = 90 + +# Get drawing object to draw on image. +draw = ImageDraw.Draw(image) + +# Draw a black filled box to clear the image. +draw.rectangle((0, 0, width, height), outline=0, fill=(0, 0, 0)) +disp.image(image, rotation) +# Draw some shapes. +# First define some constants to allow easy resizing of shapes. +padding = -2 +top = padding +bottom = height - padding +# Move left to right keeping track of the current x position for drawing shapes. +x = 0 + +# Alternatively load a TTF font. Make sure the .ttf font file is in the +# same directory as the python script! 
+# Some other nice fonts to try: http://www.dafont.com/bitmap.php +font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 18) + +# Turn on the backlight +backlight = digitalio.DigitalInOut(board.D22) +backlight.switch_to_output() +backlight.value = True +buttonA = digitalio.DigitalInOut(board.D23) +buttonB = digitalio.DigitalInOut(board.D24) +buttonA.switch_to_input() +buttonB.switch_to_input() + +background_list = ["liberty.jpg", "paris.jpg", "au.jpg", "china.jpg"] + +background_image = [] +for image_name in background_list: + bg_image = Image.open(image_name) + bg_image = bg_image.resize((width, height)) + background_image.append(bg_image) + +def get_current_time_in_timezone(timezone_str): + """Return the current time in the specified timezone.""" + local_timezone = pytz.timezone(timezone_str) + local_time = datetime.datetime.now(local_timezone) + return local_time + +location_list = ['America/New_York', + 'Europe/Paris', + 'Australia/Canberra', + 'Asia/Shanghai' + ] + +index = 0 +count = 0 +intended_score = 30 +user_score = 25 +def update_display(background_image, index, format_12_hour=True): + """Updates the display with the given background and time format.""" + image.paste(background_image[index], (0, 0)) + if format_12_hour: + Game_over(intended_score, user_score) + else: + Game_over(1, 1) + + + +index = 0 +format_12_hour = True + +buttonA_pressed_last = False +buttonB_pressed_last = False + +game_state = False + +game_state = False + +def Game_over(intended_score, user_score): + """Updates the display with the given futuristic space-themed text format.""" + # Define text_offset and other variables + text_offset = 10 + bg_image = Image.open("bg.jpg") + bg_image = bg_image.resize((width, height)) + bg_image = bg_image.rotate(90, expand=True) + + # Define the text to display + if intended_score == user_score: + message = "Congrats! You have Won the Game!" + else: + mis_Score = str(intended_score - user_score) + message = "GG! 
You have missed " + mis_Score + " hit" + + current_time = time.strftime("%m/%d/%Y \n %H:%M:%S") + + # Create a new blank image with the desired dimensions + text_image = bg_image + + # Create a draw object for the text image + text_draw = ImageDraw.Draw(text_image) + + # Define a space-themed font (you can replace 'path_to_your_font.ttf' with your font file) + space_font_t = ImageFont.truetype('future_font.ttf', size=18) + space_font = ImageFont.truetype('future_font.ttf', size=25) + # Define font color (you can use any color that matches your space theme) + font_color = "#FFFFFF" + + # Define background color (you can use any color that matches your space theme) + background_color = "#000000" # Black + + # Draw the current_time text at the top of the screen + text_draw.text((10, 0), current_time, font=space_font_t, fill=font_color) + + # Break the message into new lines at every space + message_lines = message.split(" ") + wrapped_message = "\n".join(message_lines) + + # Draw the wrapped message below the current_time + text_draw.multiline_text((10, height // 2 - text_offset), wrapped_message, font=space_font, fill=font_color) + + # Rotate the text image by 270 degrees + text_image = text_image.transpose(Image.ROTATE_270) + + # Paste the rotated text image onto the original image + y = top + draw.text((x, y), "", fill="#000000") # Clear any previous text + image.paste(text_image, (x, y)) + + # Display the result on your screen + disp.image(image, rotation) + + game_state = False + +while True: + + if not buttonB.value and not buttonB_pressed_last: + index = (index + 1) % len(background_list) + buttonB_pressed_last = True + elif buttonB.value: + buttonB_pressed_last = False + + if not buttonA.value and not buttonA_pressed_last: + format_12_hour = not format_12_hour + buttonA_pressed_last = True + elif buttonA.value: + buttonA_pressed_last = False + + update_display(background_image, index, format_12_hour) + time.sleep(0.1) \ No newline at end of file diff --git a/Lab 2/lab2_part2.py b/Lab 2/lab2_part2.py new file mode 100644 index 0000000000..3ba9dc71ce --- /dev/null +++ b/Lab 2/lab2_part2.py @@ -0,0 +1,290 @@ +import time +import pytz +import datetime +import subprocess +import digitalio +import board +from PIL import Image, ImageDraw, ImageFont +import adafruit_rgb_display.st7789 as st7789 +from binary_time import * + +# Configuration for CS and DC pins (these are FeatherWing defaults on M0/M4): +cs_pin = digitalio.DigitalInOut(board.CE0) +dc_pin = digitalio.DigitalInOut(board.D25) +reset_pin = None + +# Config for display baudrate (default max is 24mhz): +BAUDRATE = 64000000 + +# Setup SPI bus using hardware SPI: +spi = board.SPI() + +# Create the ST7789 display: +disp = st7789.ST7789( + spi, + cs=cs_pin, + dc=dc_pin, + rst=reset_pin, + baudrate=BAUDRATE, + width=135, + height=240, + x_offset=53, + y_offset=40, +) + +# Create blank image for drawing. +# Make sure to create image with mode 'RGB' for full color. +height = disp.width # we swap height/width to rotate it to landscape! +width = disp.height +image = Image.new("RGB", (width, height)) +rotation = 90 + +# Get drawing object to draw on image. +draw = ImageDraw.Draw(image) + +# Draw a black filled box to clear the image. +draw.rectangle((0, 0, width, height), outline=0, fill=(0, 0, 0)) +disp.image(image, rotation) +# Draw some shapes. +# First define some constants to allow easy resizing of shapes. 
+
+padding = -2
+top = padding
+bottom = height - padding
+# Move left to right keeping track of the current x position for drawing shapes.
+x = 0
+
+# Alternatively load a TTF font.  Make sure the .ttf font file is in the
+# same directory as the python script!
+# Some other nice fonts to try: http://www.dafont.com/bitmap.php
+font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 18)
+
+# Turn on the backlight
+backlight = digitalio.DigitalInOut(board.D22)
+backlight.switch_to_output()
+backlight.value = True
+buttonA = digitalio.DigitalInOut(board.D23)
+buttonB = digitalio.DigitalInOut(board.D24)
+buttonA.switch_to_input()
+buttonB.switch_to_input()
+
+rect_x = width
+# Height of the rect
+rect_height = 65
+# Width of the rect
+rect_width = 25
+
+hour_list = [1,0,0,1,1,1]
+min_list = [1,1,1,1,1,1]
+second_list = [0,1,0,1,1,1]
+
+binary_list = [[1,1,1,1,1,1],[0,1,0,1,1,1],[1,0,0,1,1,1]]
+
+rect_spacing = 100
+rect_init_position = 240
+second_rect_init_position = 240 + 6 * rect_spacing
+score = [[0]*6, [0]*6, [0]*6]
+position = [[0]*6, [0]*6, [0]*6]
+game_status = 1
+
+def sum2d(grid):
+    """Sum all entries of a 2-D list."""
+    my_sum = 0
+    for row in grid:
+        my_sum += sum(row)
+    return my_sum
+
+def init_position(min_list, second_list, hour_list, position):
+    for i in range(len(min_list)):
+        position[0][i] = (rect_init_position + i * rect_spacing)
+    for i in range(len(second_list)):
+        position[1][i] = (rect_init_position + i * rect_spacing)
+    for i in range(len(hour_list)):
+        position[2][i] = (second_rect_init_position + i * rect_spacing)
+
+def restore_position(position, score):
+    for i in range(len(position)):
+        for j in range(len(position[i])):
+            position[i][j] = -100
+            score[i][j] = 0
+
+def update_position(position_list):
+    update_pixel = 2
+    for i in range(len(position_list)):
+        for j in range(len(position_list[i])):
+            position_list[i][j] -= update_pixel
+
+def Game_over(intended_score, user_score, dead_time):
+    """Updates the display with the given futuristic space-themed text format."""
+    global game_status
+    # Define text_offset and other variables
+    text_offset = 10
+    bg_image = Image.open("bg.jpg")
+    bg_image = bg_image.resize((width, height))
+    bg_image = bg_image.rotate(90, expand=True)
+
+    # Define the text to display
+    if intended_score == user_score:
+        message = "Congrats! You have Won the Game!"
+    else:
+        mis_Score = str(intended_score - user_score)
+        message = "GG! You have missed " + mis_Score + " hit"
+
+    current_time = dead_time
+
+    # Use the rotated background image as the canvas for the text
+    text_image = bg_image
+
+    # Create a draw object for the text image
+    text_draw = ImageDraw.Draw(text_image)
+
+    # Define a space-themed font (you can replace 'path_to_your_font.ttf' with your font file)
+    space_font_t = ImageFont.truetype('future_font.ttf', size=18)
+    space_font = ImageFont.truetype('future_font.ttf', size=25)
+
+    # Define background color (you can use any color that matches your space theme)
+    background_color = "#000000"  # Black
+
+    # Draw the current_time text at the top of the screen
+    text_draw.text((10, 0), current_time, font=space_font_t, fill="#FFFFFF")
+
+    # Break the message into new lines at every space
+    message_lines = message.split(" ")
+    wrapped_message = "\n".join(message_lines)
+
+    # Draw the wrapped message below the current_time
+    text_draw.multiline_text((10, height // 2 - text_offset), wrapped_message, font=space_font, fill="#FFFFFF")
+
+    # Rotate the text image by 270 degrees
+    text_image = text_image.transpose(Image.ROTATE_270)
+
+    # Paste the rotated text image onto the original image
+    y = top
+    draw.text((x, y), "", fill="#000000")  # Clear any previous text
+    image.paste(text_image, (x, y))
+
+    # Display the result on your screen
+    disp.image(image, rotation)
+    time.sleep(7)
+    game_status = 3
+
+def draw_rec_list(position, binary_list):
+    global game_status
+    global score
+    for i in range(len(position)):
+        for j in range(len(position[i])):
+            # check if the rect is on screen
+            if binary_list[i][j] == 1 and position[i][j] < 240 and position[i][j] > -rect_width:
+                if i == 0:
+                    draw.rectangle((position[i][j], center_y_rotated - rect_height, position[i][j] + rect_width, center_y_rotated), fill="blue", outline="blue")
+
+                elif i == 1:
+                    draw.rectangle((position[i][j], center_y_rotated, position[i][j] + rect_width, center_y_rotated + rect_height), fill="red", outline="red")
+
+                else:
+                    draw.rectangle((position[i][j], center_y_rotated - rect_height, position[i][j] + rect_width, center_y_rotated), fill="green", outline="green")
+                    draw.rectangle((position[i][j], center_y_rotated, position[i][j] + rect_width, center_y_rotated + rect_height), fill="green", outline="green")
+
+            if position[i][j] < 0:
+                if i == 0:
+                    if not buttonA.value:
+                        # pop the light
+                        draw.ellipse((15, center_y_rotated - 125, 80, center_y_rotated - 50), fill="yellow", outline="yellow")
+                        score[i][j] = 1
+                if i == 1:
+                    if not buttonB.value:
+                        # pop the light
+                        draw.ellipse((15, center_y_rotated + 50, 80, center_y_rotated + 125), fill="yellow", outline="yellow")
+                        score[i][j] = 1
+
+                if i == 2:
+                    if not buttonA.value and not buttonB.value:
+                        draw.ellipse((15, center_y_rotated - 125, 80, center_y_rotated - 50), fill="yellow", outline="yellow")
+                        draw.ellipse((15, center_y_rotated + 50, 80, center_y_rotated + 125), fill="yellow", outline="yellow")
+                        score[i][j] = 1
+
+    if position[2][5] < -rect_width:
+        global num_block
+        global total_score
+        num_block = sum2d(binary_list)
+        print(score)
+        total_score = sum2d(score)
+        print(num_block)
+        print(total_score)
+        game_status = 2
+        restore_position(position, score)
+        #print(game_status)
+        print(position)
+
+def start_screen():
+    bg_image = Image.open("bg.jpg")
+    bg_image = bg_image.resize((width, height))
+
+    draw = ImageDraw.Draw(bg_image)
+
+    arrow_lower_left = [(30, height//4 - 10), (10, height//4), (30, height//4 + 10)]
+    arrow_upper_left = [(30, 3*height//4 - 10), (10, 3*height//4), (30, 3*height//4 + 10)]
+    draw.polygon(arrow_lower_left, outline="white", fill="white")
+    draw.polygon(arrow_upper_left, outline="white", fill="white")
+
+    current_time = time.strftime("%m/%d/%Y \n %H:%M:%S")
+    space_font_t = ImageFont.truetype('future_font.ttf', size=18)
+    draw.text((70, 50), current_time, font=space_font_t, fill="#FFFFFF")
+    disp.image(bg_image, rotation)
+
+    bg_image = Image.open("bg.jpg")
+    bg_image = bg_image.resize((width, height))
+    draw = ImageDraw.Draw(bg_image)
+    draw.text((70, 50), current_time, font=space_font_t, fill="#FFFFFF")
+
+    #disp.image(bg_image, rotation)
+
+center_y_rotated = height // 2
+
+while True:
+    # Clear the screen
+    draw.rectangle((0, 0, width, height), outline=0, fill=(0, 0, 0))
+
+    # Draw the link, breaking the screen into two parts
+
+    # Draw the rectangles on both screens
+    #draw.rectangle((rect_x, center_y_rotated - rect_height, rect_x + rect_width, center_y_rotated), fill="blue", outline="blue")
+    #draw.rectangle((rect_x, center_y_rotated, rect_x + rect_width, center_y_rotated + rect_height), fill="red", outline="red")
+    print("GAME", game_status)
+    if game_status == 0:
+        draw.line((0, center_y_rotated, width, center_y_rotated), fill="white", width=1)
+        draw_rec_list(position, binary_list)
+        update_position(position)
+        disp.image(image, rotation)
+    elif game_status == 1:
+        start_screen()
+        if not buttonA.value and not buttonB.value:
+            dead_time, binary_list = convert_binary_to_2d()
+            init_position(binary_list[0], binary_list[1], binary_list[2], position)
+            game_status = 0
+    elif game_status == 2:
+        Game_over(num_block, total_score, dead_time)
+    elif game_status == 3:
+        game_status = 1
+    else:
+        pass
+
+    #rect_x -= 5
+    #if rect_x < -rect_width:
+    #    rect_x = width
+    time.sleep(0.00001)
+
diff --git a/Lab 2/liberty.jpg b/Lab 2/liberty.jpg
new file mode 100644
index 0000000000..0e77b3d459
Binary files /dev/null and b/Lab 2/liberty.jpg differ
diff --git a/Lab 2/paris.jpg b/Lab 2/paris.jpg
new file mode 100644
index 0000000000..10dc2becc9
Binary files /dev/null and b/Lab 2/paris.jpg differ
diff --git a/Lab 2/parte.py b/Lab 2/parte.py
new file mode 100644
index 0000000000..d3d5559d80
--- /dev/null
+++ b/Lab 2/parte.py
@@ -0,0 +1,125 @@
+import time
+import pytz
+import datetime
+import subprocess
+import digitalio
+import board
+from PIL import Image, ImageDraw, ImageFont
+import adafruit_rgb_display.st7789 as st7789
+
+# Configuration for CS and DC pins (these are FeatherWing defaults on M0/M4):
+cs_pin = digitalio.DigitalInOut(board.CE0)
+dc_pin = digitalio.DigitalInOut(board.D25)
+reset_pin = None
+
+# Config for display baudrate (default max is 24mhz):
+BAUDRATE = 64000000
+
+# Setup SPI bus using hardware SPI:
+spi = board.SPI()
+
+# Create the ST7789 display:
+disp = st7789.ST7789(
+    spi,
+    cs=cs_pin,
+    dc=dc_pin,
+    rst=reset_pin,
+    baudrate=BAUDRATE,
+    width=135,
+    height=240,
+    x_offset=53,
+    y_offset=40,
+)
+
+# Create blank image for drawing.
+# Make sure to create image with mode 'RGB' for full color.
+height = disp.width  # we swap height/width to rotate it to landscape!
+width = disp.height
+image = Image.new("RGB", (width, height))
+rotation = 90
+
+# Get drawing object to draw on image.
+draw = ImageDraw.Draw(image)
+
+# Draw a black filled box to clear the image.
+draw.rectangle((0, 0, width, height), outline=0, fill=(0, 0, 0))
+disp.image(image, rotation)
+# Draw some shapes.
+# First define some constants to allow easy resizing of shapes.
+
+padding = -2
+top = padding
+bottom = height - padding
+# Move left to right keeping track of the current x position for drawing shapes.
+x = 0
+
+# Alternatively load a TTF font.  Make sure the .ttf font file is in the
+# same directory as the python script!
+# Some other nice fonts to try: http://www.dafont.com/bitmap.php
+font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 18)
+
+# Turn on the backlight
+backlight = digitalio.DigitalInOut(board.D22)
+backlight.switch_to_output()
+backlight.value = True
+buttonA = digitalio.DigitalInOut(board.D23)
+buttonB = digitalio.DigitalInOut(board.D24)
+buttonA.switch_to_input()
+buttonB.switch_to_input()
+
+background_list = ["liberty.jpg", "paris.jpg", "au.jpg", "china.jpg"]
+
+background_image = []
+for image_name in background_list:
+    bg_image = Image.open(image_name)
+    bg_image = bg_image.resize((width, height))
+    background_image.append(bg_image)
+
+def get_current_time_in_timezone(timezone_str):
+    """Return the current time in the specified timezone."""
+    local_timezone = pytz.timezone(timezone_str)
+    local_time = datetime.datetime.now(local_timezone)
+    return local_time
+
+location_list = ['America/New_York',
+                 'Europe/Paris',
+                 'Australia/Canberra',
+                 'Asia/Shanghai'
+                 ]
+
+index = 0
+count = 0
+
+def update_display(background_image, index, format_12_hour=True):
+    """Updates the display with the given background and time format."""
+    image.paste(background_image[index], (0, 0))
+    if format_12_hour:
+        current_time = get_current_time_in_timezone(location_list[index]).strftime("%m/%d/%Y %I:%M:%S %p")
+    else:
+        current_time = get_current_time_in_timezone(location_list[index]).strftime("%m/%d/%Y %H:%M:%S")
+    text_offset = 10
+    text_box = [(0, height // 2 - text_offset), (x + width, height // 2 + text_offset)]
+    draw.rectangle(text_box, fill="#FFFFFF")
+    draw.text((10, height // 2 - text_offset), current_time, font=font, fill="#000000")
+    disp.image(image, rotation)
+
+format_12_hour = True
+
+buttonA_pressed_last = False
+buttonB_pressed_last = False
+
+while True:
+    if not buttonB.value and not buttonB_pressed_last:
+        index = (index + 1) % len(background_list)
+        buttonB_pressed_last = True
+    elif buttonB.value:
+        buttonB_pressed_last = False
+
+    if not buttonA.value and not buttonA_pressed_last:
+        format_12_hour = not format_12_hour
+        buttonA_pressed_last = True
+    elif buttonA.value:
+        buttonA_pressed_last = False
+
+    update_display(background_image, index, format_12_hour)
+    time.sleep(0.1)
\ No newline at end of file
diff --git a/Lab 2/screen_clock.py b/Lab 2/screen_clock.py
index 1b676dad71..11781f217e 100644
--- a/Lab 2/screen_clock.py
+++ b/Lab 2/screen_clock.py
@@ -62,10 +62,14 @@ while True:
 
     # Draw a black filled box to clear the image.
-    draw.rectangle((0, 0, width, height), outline=0, fill=400)
+    draw.rectangle((0, 0, width, height), outline=0, fill=0)
+    current_time = time.strftime("%m/%d/%Y %H:%M:%S")
+    y = top
+    draw.text((x, y), current_time, font=font, fill="#FFFFFF")
+    #y += font.getsize(current_time)[1]
 
     #TODO: Lab 2 part D work should be filled in here. You should be able to look in cli_clock.py and stats.py
-
+
 
     # Display image.
     disp.image(image, rotation)
     time.sleep(1)
diff --git a/Lab 2/usa.jpg b/Lab 2/usa.jpg
new file mode 100644
index 0000000000..13579cc3da
Binary files /dev/null and b/Lab 2/usa.jpg differ
diff --git a/Lab 3/README.md b/Lab 3/README.md
index fc55aa1f3b..893a3f205a 100644
--- a/Lab 3/README.md
+++ b/Lab 3/README.md
@@ -1,10 +1,11 @@
 # Chatterboxes
 **NAMES OF COLLABORATORS HERE**
+John Li (jl4239), Shiying Wu (sw2298), Mingze Gao (mg2454), Crystal Chong (cc2795), Qianxin(Carl) Gan (qg72), Mingzhe Sun (ms3636)
+
 [![Watch the video](https://user-images.githubusercontent.com/1128669/135009222-111fe522-e6ba-46ad-b6dc-d1633d21129c.png)](https://www.youtube.com/embed/Q8FWzLMobx0?start=19)
 
 In this lab, we want you to design interaction with a speech-enabled device--something that listens and talks to you. This device can do anything *but* control lights (since we already did that in Lab 1). First, we want you first to storyboard what you imagine the conversational interaction to be like. Then, you will use wizarding techniques to elicit examples of what people might say, ask, or respond. We then want you to use the examples collected from at least two other people to inform the redesign of the device.
-We will focus on **audio** as the main modality for interaction to start; these general techniques can be extended to **video**, **haptics** or other interactive mechanisms in the second part of the Lab.
 
 ## Prep for Part 1: Get the Latest Content and Pick up Additional Parts
 
@@ -18,11 +19,11 @@ Students who have not already received a web camera will receive their [IMISES w
 As always, pull updates from the class Interactive-Lab-Hub to both your Pi and your own GitHub repo. There are 2 ways you can do so:
 
-**\[recommended\]**Option 1: On the Pi, `cd` to your `Interactive-Lab-Hub`, pull the updates from upstream (class lab-hub) and push the updates back to your own GitHub repo. You will need the *personal access token* for this.
+**[recommended]** Option 1: On the Pi, `cd` to your `Interactive-Lab-Hub`, pull the updates from upstream (class lab-hub) and push the updates back to your own GitHub repo. You will need the *personal access token* for this.
 
 ```
 pi@ixe00:~$ cd Interactive-Lab-Hub
-pi@ixe00:~/Interactive-Lab-Hub $ git pull upstream Fall2022
+pi@ixe00:~/Interactive-Lab-Hub $ git pull upstream Fall2023
 pi@ixe00:~/Interactive-Lab-Hub $ git add .
 pi@ixe00:~/Interactive-Lab-Hub $ git commit -m "get lab3 updates"
 pi@ixe00:~/Interactive-Lab-Hub $ git push
@@ -68,6 +69,9 @@ You can also play audio files directly with `aplay filename`. Try typing `aplay
 \*\***Write your own shell file to use your favorite of these TTS engines to have your Pi greet you by name.**\*\* (This shell file should be saved to your own repo for this lab.)
 
+Here's the [shell file](./speech-scripts/greetings.sh) that greets us with our favorite TTS engine, Google TTS.
+
+The greeting shell file is located at `./speech-scripts/greetings.sh`.
 
 ---
 Bonus:
@@ -110,6 +114,11 @@ python test_microphone.py -m en
 \*\***Write your own shell file that verbally asks for a numerical based input (such as a phone number, zipcode, number of pets, etc) and records the answer the respondent provides.**\*\*
 
+Here's the [shell file](./speech-scripts/transcribe.sh) that acts as a voicemail message: it asks for a callback number, records the input, and transcribes it to a numerical output. The shell file also invokes a python script, [ask_and_record.py](./speech-scripts/ask_and_record.py), that uses `vosk` to transcribe the audio.
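+
+As a minimal sketch of the digit-extraction step (the real logic lives in [ask_and_record.py](./speech-scripts/ask_and_record.py), which runs the `word2number` package over the full Vosk transcript; the helper below is illustrative only and assumes `word2number` is installed):
+
+```
+# Illustrative sketch: map Vosk's spoken-word transcript to a digit string.
+from word2number import w2n  # assumed installed: pip install word2number
+
+def words_to_digits(transcript):
+    digits = []
+    for word in transcript.split():
+        try:
+            digits.append(str(w2n.word_to_num(word)))  # e.g. "six" -> 6
+        except ValueError:
+            pass  # skip words that are not numbers
+    return "".join(digits)
+
+print(words_to_digits("six five zero one two three"))  # prints 650123
+```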
+
+[![Voice Mail](https://hackmd.io/_uploads/SyaHD7Je6.jpg)](https://www.youtube.com/watch?v=cD3JbQLfFVg)
+
+
 
 ### Serving Pages
 
@@ -134,31 +143,87 @@ From a remote browser on the same network, check to make sure your webserver is
 Storyboard and/or use a Verplank diagram to design a speech-enabled device. (Stuck? Make a device that talks for dogs. If that is too stupid, find an application that is better than that.)
 
 \*\***Post your storyboard and diagram here.**\*\*
+![](https://hackmd.io/_uploads/B1RiW_yep.jpg)
+
+
 
 Write out what you imagine the dialogue to be. Use cards, post-its, or whatever method helps you develop alternatives or group responses.
 
 \*\***Please describe and document your process.**\*\*
+The Hotel Robot Butler, affectionately known as "TT", is an advanced speech-enabled device designed to revolutionize the guest experience within hospitality establishments, particularly hotels and resorts. TT is a fusion of artificial intelligence and robotics, seamlessly integrated into the hotel's environment to provide personalized and efficient service to guests.
+
+To interact with TT, the customer needs to say a sentence that includes the keyword "TT." We chose this name because it is not a word commonly used in conversation, so the device will not be activated accidentally. The device can provide any service offered by the hotel, such as breakfast options, locating objects in the room, ordering towels, room service, and more. It has an embedded AI language model that enables it to process and respond to conversations.
+
 
 ### Acting out the dialogue
 
 Find a partner, and *without sharing the script with your partner* try out the dialogue you've designed, where you (as the device designer) act as the device you are designing. Please record this interaction (for example, using Zoom's record feature).
+Intended Script:
+Question: What is the weather today?
+Robot: Today is sunny, with a temperature of 75°F.
+Question: Can you replace the towel?
+Robot: For sure, an attendant will come shortly.
+Question: What is the time now?
+Robot: It is now 9:30 AM, Sep 23rd.
+Question: Can you tell me a joke?
+Robot: For sure, XXXX hahahahaha
+Question: When does breakfast time end?
+Robot: Breakfast hours are 6:30 to 9:30 every day on the 2nd floor.
+Question: Can you connect me to the front desk?
+Robot: Calling Front Desk...
+
+Actual Script
+Kevin: Morning TT.
+Robot Butler: Morning Kevin, how can I help you?
+Kevin: What do you have for breakfast today?
+Robot Butler: For breakfast today, we have a selection of options. You can choose from a variety of cereals, fresh fruits, and toast with your choice of spreads. Additionally, there are some hot options like scrambled eggs, bacon, and pancakes. Please let me know your preference, and I'll prepare it for you.
+Kevin: Can you give me a French toast with whipped cream and two strips of bacon, and some blueberries please.
+Robot Butler: No problem, I will send them your way. What else can I help you with?
+Kevin: Yes TT, can you let me know where my glasses are?
+Robot Butler: Let me check. Your glasses are on the bathroom sink.
+Kevin: Thank you, TT.
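+
+As an aside, the "TT" keyword gating described above could be prototyped with the same `vosk` and `sounddevice` pipeline used elsewhere in this lab. This is only a rough sketch (it assumes a 16 kHz default microphone and the `en-us` model, and is not our final implementation):
+
+```
+# Rough sketch: block until the wake word appears in a recognized utterance.
+import json
+import queue
+import sounddevice as sd
+from vosk import Model, KaldiRecognizer
+
+q = queue.Queue()
+rec = KaldiRecognizer(Model(lang="en-us"), 16000)
+
+def callback(indata, frames, time_info, status):
+    q.put(bytes(indata))
+
+with sd.RawInputStream(samplerate=16000, blocksize=8000, dtype="int16",
+                       channels=1, callback=callback):
+    while True:
+        if rec.AcceptWaveform(q.get()):
+            heard = json.loads(rec.Result()).get("text", "")
+            if "tt" in heard.split():  # crude wake-word check
+                print("Wake word heard, start the butler dialogue")
+                break
+```
+
+In practice, a short, uncommon wake word like "TT" may be transcribed inconsistently, which is exactly the pronunciation issue we reflect on below.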
+
+Click to watch the video:
+[![Act Out](https://hackmd.io/_uploads/rk0TPmke6.jpg)](https://www.youtube.com/watch?v=MXm7EFBcMv8)
+
 \*\***Describe if the dialogue seemed different than what you imagined when it was acted out, and how.**\*\*
+
+Before the dialogue, our script was prepared for requests like replacing a towel or setting do-not-disturb, and for questions about things like the weather or the temperature. But when we acted it out, the first question was about what the hotel serves for breakfast, and none of the questions were the ones we expected, which was really surprising. We figured it would be extremely hard to 'hard code' a script that fulfills the device's designed purpose. NLP or a large data-driven model will be essential for the chatbot to be comprehensive in practice.
+
 ### Wizarding with the Pi (optional)
 In the [demo directory](./demo), you will find an example Wizard of Oz project. In that project, you can see how audio and sensor data is streamed from the Pi to a wizard controller that runs in the browser. You may use this demo code as a template. By running the `app.py` script, you can see how audio and sensor data (Adafruit MPU-6050 6-DoF Accel and Gyro Sensor) is streamed from the Pi to a wizard controller that runs in the browser `http://:5000`. You can control what the system says from the controller as well!
 
 \*\***Describe if the dialogue seemed different than what you imagined, or when acted out, when it was wizarded, and how.**\*\*
 
+
 # Lab 3 Part 2
 
 For Part 2, you will redesign the interaction with the speech-enabled device using the data collected, as well as feedback from part 1.
 
 ## Prep for Part 2
 
-1. What are concrete things that could use improvement in the design of your device? For example: wording, timing, anticipation of misunderstandings...
+1. What are concrete things that could use improvement in the design of your device? For example: wording, timing, anticipation of misunderstandings
+
+With our Part 1 creation, the Hotel Robot Butler, users need some prior knowledge or instructions to engage with the device effectively. Moreover, it can be unclear what range of services the device is equipped to handle. In light of the upcoming Halloween season, we have chosen to craft a device that blends seamlessly into Halloween decorations and offers an intuitive, straightforward interaction experience for users.
+
+
+2. What are other modes of interaction _beyond speech_ that you might also use to clarify how to interact?
+
+For our Halloween device, we are contemplating additional interaction modes beyond speech to enhance user clarity. One approach uses a sensor that detects when someone approaches our door, which then triggers the speech detection feature. We are also considering a servo motor mechanism to deliver an element of surprise or scare, complementing the overall interaction experience.
+
+3. Make a new storyboard, diagram and/or script based on these reflections.
+![](https://hackmd.io/_uploads/BJN_X2_la.jpg)
+
+
+Brainstorming Ideas:
+- A doorbell trick-or-treat
 
 ## Prototype your system
 
 The system should:
 
 *Document how the system works*
 
+**The system operates as follows:**
+
+**Distance Sensor Activation:** A distance sensor is employed to monitor the proximity of individuals approaching the door.
When a person comes within 0.5 meters of the door, where the device is securely attached, this sensor triggers the activation of the interaction system. + +**Speech Detection:** Upon activation, the system initiates its speech detection feature, actively listening for the specific keyword, "trick or treat." This keyword serves as the trigger for further interaction. + +**Random Puzzle Generation:** Subsequently, the system randomly selects one of the 100 available puzzles from its database. This selected puzzle is then vocalized by the system, providing the user with a unique Halloween-themed challenge. + +**User Input:** After hearing the puzzle, the user is prompted to input their solution through the keyboard interface provided by the device. The keyboard allows the user to type in their answer. + +**Answer Validation:** Once the user submits their answer, the system performs an immediate validation check to determine its correctness. If the user's answer aligns with the correct solution, the system triggers a rewarding response. + +**Reward or Haunting:** In the event that the user's answer is correct, the system dispenses candy to the user as a Halloween treat, enhancing the interactive experience. However, if the answer is incorrect, the system engages a spooky or haunting response to add an element of surprise and excitement to the Halloween encounter. + *Include videos or screencaptures of both the system and the controller.* +*Click to watch the video: Introduction of the device* +[![Act Out](https://hackmd.io/_uploads/H1tFAlFga.png)](https://www.youtube.com/watch?v=yqGF0PsgBKE) + +*Click to watch the video: Acting out* +[![Act Out](https://hackmd.io/_uploads/rk0Ozbtlp.png)](https://youtu.be/3kqRtDJEGQg) + + ## Test the system Try to get at least two people to interact with your system. (Ideally, you would inform them that there is a wizard _after_ the interaction, but we recognize that can be hard.) Answer the following: ### What worked well about the system and what didn't? -\*\**your answer here*\*\* + + +The distance detection feature effectively prevented speech recognition misinterpretation. This feature allows us to restrict false triggers from other voices, ensuring that only individuals in close proximity to the device can activate the entire system. + +However, our current method for answering questions has limitations. We are restricted to using a keypad for input, which means that our system can only handle numerical questions. ### What worked well about the controller and what didn't? -\*\**your answer here*\*\* +The controller's integration of multiple sensors was impressive, especially its 500mm range which felt just right for detecting someone's proximity without being intrusive. This allowed for a fluid transition from sensing someone nearby to awaiting a voice command. However, a noticeable challenge was the voice recognition with individuals. Given the variety in pronunciations, there were instances where the controller struggled to accurately recognize the "trick or treat" phrase. Improvements in its ability to discern and adapt to diverse voice inputs would greatly enhance its reliability and user experience. ### What lessons can you take away from the WoZ interactions for designing a more autonomous version of the system? -\*\**your answer here*\*\* +Through WoZ interactions, we gain valuable insights into a wide range of potential user behaviors when interacting with the system. 
For instance, before actually implementing our system, we started with a WoZ interaction. We discovered that user responses to our questions can be highly unpredictable. Consequently, we restricted user input to a hardware resource, the keypad.
+Furthermore, designing a more autonomous system is an ongoing, iterative journey. We continually uncover opportunities for enhancement, ensuring that our system evolves to meet user needs and expectations effectively.
 
 ### How could you use your system to create a dataset of interaction? What other sensing modalities would make sense to capture?
 
-\*\**your answer here*\*\*
-
+Recording the distances at which participants initiate interactions might reveal nuances in their approach behavior. Additionally, each individual's voice will be recorded and consolidated into a dataset. By applying machine learning to this dataset, the system can learn the myriad ways people pronounce "trick or treat," ensuring more accurate and inclusive activation. To deepen this interaction dataset, introducing additional sensors would be insightful: a camera could observe facial expressions and body language during interactions, while an ambient light sensor might indicate whether the device's visibility or attraction changes under different lighting conditions.
diff --git a/Lab 3/ask_for_puzzle.sh b/Lab 3/ask_for_puzzle.sh
new file mode 100644
index 0000000000..4bac101a70
--- /dev/null
+++ b/Lab 3/ask_for_puzzle.sh
@@ -0,0 +1,31 @@
+#!/bin/bash
+# https://elinux.org/RPi_Text_to_Speech_(Speech_Synthesis)
+
+say() { local IFS=+;/usr/bin/mplayer -ao alsa -really-quiet -noconsolecontrols "http://translate.google.com/translate_tts?ie=UTF-8&client=tw-ob&q=$*&tl=en"; }
+#say $*
+say " Only the smart kid gets candy! Enter the answer to this puzzle on the keyboard."
+
+mapfile -t lines < 'puzzles.txt'
+
+num_puzzles=$((${#lines[@]} / 2))
+
+random_puzzle=$((RANDOM % num_puzzles))
+
+puzzle_index=$((random_puzzle * 2))
+puzzle="${lines[puzzle_index]}"
+answer_index=$((puzzle_index + 1))
+answer="${lines[answer_index]}"
+
+say "$puzzle"
+
+read -p "Enter your answer: " user_answer
+
+expected_answer="${answer##*: }"
+
+if [ "$user_answer" -eq "$expected_answer" ]; then
+    say "Congratulations! You answered correctly. Here is your candy"
+else
+    say "Sorry, your answer is incorrect. The correct answer is $expected_answer."
+fi + +python puzzle_prompting.py \ No newline at end of file diff --git a/Lab 3/main.py b/Lab 3/main.py new file mode 100644 index 0000000000..64c451ecd0 --- /dev/null +++ b/Lab 3/main.py @@ -0,0 +1,24 @@ +import time +import qwiic_vl53l1x +import subprocess + +def main(): + mySensor = qwiic_vl53l1x.QwiicVL53L1X() + + mySensor.sensor_init() + print(mySensor.address) + while True: + try: + mySensor.start_ranging() # Write configuration bytes to initiate measurement + time.sleep(.005) + distance = mySensor.get_distance() # Get the result of the measurement from the sensor + time.sleep(.005) + mySensor.stop_ranging() + print("Distance(mm): %s" % distance) + if (distance < 500): + subprocess.call(['python', 'puzzle_prompting.py']) + except Exception as e: + print(e) + +if __name__ == "__main__": + main() diff --git a/Lab 3/pad.py b/Lab 3/pad.py new file mode 100644 index 0000000000..dddc78248a --- /dev/null +++ b/Lab 3/pad.py @@ -0,0 +1,45 @@ +import qwiic_keypad +import time +import sys + +def runExample(): + myKeypad = qwiic_keypad.QwiicKeypad(0x4b) + + if myKeypad.is_connected() == False: + print("The Qwiic Keypad device isn't connected to the system. Please check your connection", file=sys.stderr) + return + + myKeypad.begin() + + button = 0 + while True: + + # necessary for keypad to pull button from stack to readable register + myKeypad.update_fifo() + button = myKeypad.get_button() + + if button == -1: + print("No keypad detected") + time.sleep(1) + + elif button != 0: + + # Get the character version of this char + charButton = chr(button) + if charButton == '#': + print() + elif charButton == '*': + print(" ", end="") + else: + print(charButton, end="") + + # Flush the stdout buffer to give immediate user feedback + sys.stdout.flush() + + time.sleep(.25) +runExample() + + + + + diff --git a/Lab 3/puzzle_prompting.py b/Lab 3/puzzle_prompting.py new file mode 100644 index 0000000000..0f26a6add7 --- /dev/null +++ b/Lab 3/puzzle_prompting.py @@ -0,0 +1,104 @@ +#!/usr/bin/env python3 + +# prerequisites: as described in https://alphacephei.com/vosk/install and also python module `sounddevice` (simply run command `pip install sounddevice`) +# Example usage using Dutch (nl) recognition model: `python test_microphone.py -m nl` +# For more help run: `python test_microphone.py -h` + +import argparse +import queue +import sys +import sounddevice as sd +import subprocess + +from vosk import Model, KaldiRecognizer + +########### Added for Part 2 ######################################### +trick_or_treat_detected = False +subprocess_executed = False +########### Added for Part 2 Ends #################################### + +q = queue.Queue() + +def int_or_str(text): + """Helper function for argument parsing.""" + try: + return int(text) + except ValueError: + return text + +def callback(indata, frames, time, status): + """This is called (from a separate thread) for each audio block.""" + if status: + print(status, file=sys.stderr) + q.put(bytes(indata)) + +parser = argparse.ArgumentParser(add_help=False) +parser.add_argument( + "-l", "--list-devices", action="store_true", + help="show list of audio devices and exit") +args, remaining = parser.parse_known_args() +if args.list_devices: + print(sd.query_devices()) + parser.exit(0) +parser = argparse.ArgumentParser( + description=__doc__, + formatter_class=argparse.RawDescriptionHelpFormatter, + parents=[parser]) +parser.add_argument( + "-f", "--filename", type=str, metavar="FILENAME", + help="audio file to store recording to") +parser.add_argument( + 
"-d", "--device", type=int_or_str, + help="input device (numeric ID or substring)") +parser.add_argument( + "-r", "--samplerate", type=int, help="sampling rate") +parser.add_argument( + "-m", "--model", type=str, help="language model; e.g. en-us, fr, nl; default is en-us") +args = parser.parse_args(remaining) + +try: + if args.samplerate is None: + device_info = sd.query_devices(args.device, "input") + # soundfile expects an int, sounddevice provides a float: + args.samplerate = int(device_info["default_samplerate"]) + + if args.model is None: + model = Model(lang="en-us") + else: + model = Model(lang=args.model) + + if args.filename: + dump_fn = open(args.filename, "wb") + else: + dump_fn = None + + with sd.RawInputStream(samplerate=args.samplerate, blocksize = 8000, device=args.device, + dtype="int16", channels=1, callback=callback): + print("#" * 80) + print("Press Ctrl+C to stop the recording") + print("#" * 80) + + rec = KaldiRecognizer(model, args.samplerate) + while True: + data = q.get() + if rec.AcceptWaveform(data): + print(rec.Result()) + else: + print(rec.PartialResult()) + if dump_fn is not None: + dump_fn.write(data) + + ########### Added for Part 2 ######################################### + if "trick or treat" in rec.PartialResult(): + trick_or_treat_detected = True + if trick_or_treat_detected and not subprocess_executed: + subprocess.call(['python', 'puzzle_reader.py']) + subprocess_executed = True + break + ########### Added for Part 2 Ends #################################### + +except KeyboardInterrupt: + print("\nDone") + parser.exit(0) +except Exception as e: + parser.exit(type(e).__name__ + ": " + str(e)) diff --git a/Lab 3/puzzle_reader.py b/Lab 3/puzzle_reader.py new file mode 100644 index 0000000000..6bdf5b69b7 --- /dev/null +++ b/Lab 3/puzzle_reader.py @@ -0,0 +1,79 @@ +import qwiic_keypad +import time +import sys +import subprocess +import random + +def say(text): + subprocess.call(['/usr/bin/mplayer', '-ao', 'alsa', '-really-quiet', '-noconsolecontrols', + f'http://translate.google.com/translate_tts?ie=UTF-8&client=tw-ob&q={text}&tl=en']) + +def get_puzzle_and_answer(): + with open('puzzles.txt', 'r') as file: + lines = file.readlines() + + num_puzzles = len(lines) // 2 + random_puzzle = random.randint(0, num_puzzles - 1) + + puzzle_index = random_puzzle * 2 + puzzle = lines[puzzle_index].strip() + + answer_index = puzzle_index + 1 + answer = lines[answer_index].strip().split(":")[1].strip() # Extracting the answer after the colon + + return puzzle, answer + +def runExample(): + + print("\nSparkFun qwiic Keypad Example\n") + myKeypad = qwiic_keypad.QwiicKeypad(0x4b) + + if myKeypad.is_connected() == False: + print("The Qwiic Keypad device isn't connected to the system. Please check your connection", + file=sys.stderr) + return + + myKeypad.begin() + + button = 0 + user_answer = "" + + puzzle, expected_answer = get_puzzle_and_answer() + say(" Only the smart kid gets candy! Press the answer of this puzzle on the keyboard.") + time.sleep(0.5) + say(puzzle) + say("Enter your answer and press pound key:") + + while True: + # necessary for keypad to pull button from stack to readable register + myKeypad.update_fifo() + button = myKeypad.get_button() + + if button == -1: + print("No keypad detected") + time.sleep(1) + elif button != 0: + # Get the character version of this char + charButton = chr(button) + if charButton == '#': # Assuming '#' denotes end of input + print(user_answer) + if user_answer == expected_answer: + say("Congratulations! You answered correctly. 
Here is your candy.") + subprocess.call(['python', 'main.py']) + else: + say(f"Sorry, your answer ({user_answer}) is incorrect. The correct answer is {expected_answer}.") + subprocess.call(['python', 'main.py']) + elif charButton == '*': + user_answer = "" # Assuming '*' clears the input + print(" ", end="") + else: + print(charButton, end="") + user_answer += charButton + + # Flush the stdout buffer to give immediate user feedback + sys.stdout.flush() + + time.sleep(0.25) + +if __name__ == "__main__": + runExample() diff --git a/Lab 3/puzzles.txt b/Lab 3/puzzles.txt new file mode 100644 index 0000000000..8468a19c16 --- /dev/null +++ b/Lab 3/puzzles.txt @@ -0,0 +1,200 @@ +Puzzle 1: What is 6 plus 7? +Answer 1: 13 +Puzzle 2: What is 9 minus 3? +Answer 2: 6 +Puzzle 3: What is 8 times 2? +Answer 3: 16 +Puzzle 4: How many legs does a cat have? +Answer 4: 4 +Puzzle 5: How many days are there in a week? +Answer 5: 7 +Puzzle 6: What is 5 plus 8? +Answer 6: 13 +Puzzle 7: What is 7 times 7? +Answer 7: 49 +Puzzle 8: How many colors are there in a rainbow? +Answer 8: 7 +Puzzle 9: What is 3 plus 9? +Answer 9: 12 +Puzzle 10: What is 9 times 3? +Answer 10: 27 +Puzzle 11: How many fingers does a human have? +Answer 11: 10 +Puzzle 12: What is 4 plus 6? +Answer 12: 10 +Puzzle 13: How many months are in a year? +Answer 13: 12 +Puzzle 14: What is 8 plus 4? +Answer 14: 12 +Puzzle 15: What is 7 times 8? +Answer 15: 56 +Puzzle 16: How many seconds are in a minute? +Answer 16: 60 +Puzzle 17: What is 2 plus 9? +Answer 17: 11 +Puzzle 18: What is 10 times 5? +Answer 18: 50 +Puzzle 19: How many days are in a normal February? +Answer 19: 28 +Puzzle 20: What is 12 minus 4? +Answer 20: 8 +Puzzle 21: How many toes does a human have? +Answer 21: 10 +Puzzle 22: What is 3 times 6? +Answer 22: 18 +Puzzle 23: What is 9 plus 3? +Answer 23: 12 +Puzzle 24: How many continents are there? +Answer 24: 7 +Puzzle 25: What is 10 plus 7? +Answer 25: 17 +Puzzle 26: What is 6 times 5? +Answer 26: 30 +Puzzle 27: How many vowels are in the English alphabet? +Answer 27: 5 +Puzzle 28: What is 8 minus 3? +Answer 28: 5 +Puzzle 29: What is 7 plus 7? +Answer 29: 14 +Puzzle 30: How many planets are in our solar system? +Answer 30: 8 +Puzzle 31: What is 5 times 4? +Answer 31: 20 +Puzzle 32: What is 11 plus 2? +Answer 32: 13 +Puzzle 33: How many sides does a rectangle have? +Answer 33: 4 +Puzzle 34: What is 12 plus 5? +Answer 34: 17 +Puzzle 35: What is 5 times 9? +Answer 35: 45 +Puzzle 36: How many legs does a dog have? +Answer 36: 4 +Puzzle 37: What is 10 minus 3? +Answer 37: 7 +Puzzle 38: What is 3 times 9? +Answer 38: 27 +Puzzle 39: How many letters are there in the English alphabet? +Answer 39: 26 +Puzzle 40: What is 9 plus 6? +Answer 40: 15 +Puzzle 41: What is 2 times 7? +Answer 41: 14 +Puzzle 42: How many sides does a square have? +Answer 42: 4 +Puzzle 43: What is 11 minus 5? +Answer 43: 6 +Puzzle 44: How many wheels does a car have? +Answer 44: 4 +Puzzle 45: What is 7 plus 4? +Answer 45: 11 +Puzzle 46: What is 4 times 8? +Answer 46: 32 +Puzzle 47: How many years are there in a decade? +Answer 47: 10 +Puzzle 48: What is 10 plus 9? +Answer 48: 19 +Puzzle 49: What is 2 times 8? +Answer 49: 16 +Puzzle 50: How many eyes does a human have? +Answer 50: 2 +Puzzle 51: What is 7 plus 6? +Answer 51: 13 +Puzzle 52: What is 4 times 6? +Answer 52: 24 +Puzzle 53: How many points does a triangle have? +Answer 53: 3 +Puzzle 54: What is 8 plus 5? +Answer 54: 13 +Puzzle 55: What is 3 times 7? +Answer 55: 21 +Puzzle 56: How many wings does a bird have? 
+Answer 56: 2 +Puzzle 57: What is 6 plus 8? +Answer 57: 14 +Puzzle 58: What is 9 times 4? +Answer 58: 36 +Puzzle 59: How many legs does a spider have? +Answer 59: 8 +Puzzle 60: What is 10 plus 8? +Answer 60: 18 +Puzzle 61: What is 4 times 5? +Answer 61: 20 +Puzzle 62: How many wheels does a bicycle have? +Answer 62: 2 +Puzzle 63: What is 7 plus 5? +Answer 63: 12 +Puzzle 64: What is 8 times 3? +Answer 64: 24 +Puzzle 65: How many petals does a typical flower have? +Answer 65: 5 +Puzzle 66: What is 5 plus 9? +Answer 66: 14 +Puzzle 67: What is 3 times 8? +Answer 67: 24 +Puzzle 68: How many legs does an octopus have? +Answer 68: 8 +Puzzle 69: What is 6 plus 9? +Answer 69: 15 +Puzzle 70: What is 2 times 9? +Answer 70: 18 +Puzzle 71: How many ears does a human have? +Answer 71: 2 +Puzzle 72: What is 9 plus 7? +Answer 72: 16 +Puzzle 73: What is 7 times 5? +Answer 73: 35 +Puzzle 74: How many sides does a pentagon have? +Answer 74: 5 +Puzzle 75: What is 8 plus 9? +Answer 75: 17 +Puzzle 76: What is 5 times 7? +Answer 76: 35 +Puzzle 77: How many paws does a cat have? +Answer 77: 4 +Puzzle 78: What is 11 plus 6? +Answer 78: 17 +Puzzle 79: What is 4 times 9? +Answer 79: 36 +Puzzle 80: How many days are in a year? +Answer 80: 365 +Puzzle 81: What is 10 plus 5? +Answer 81: 15 +Puzzle 82: What is 6 times 6? +Answer 82: 36 +Puzzle 83: How many fingers are on one hand? +Answer 83: 5 +Puzzle 84: What is 7 plus 8? +Answer 84: 15 +Puzzle 85: What is 3 times 5? +Answer 85: 15 +Puzzle 86: How many oceans are there on Earth? +Answer 86: 5 +Puzzle 87: What is 6 plus 5? +Answer 87: 11 +Puzzle 88: What is 2 times 6? +Answer 88: 12 +Puzzle 89: How many legs does a horse have? +Answer 89: 4 +Puzzle 90: What is 9 plus 5? +Answer 90: 14 +Puzzle 91: What is 8 times 4? +Answer 91: 32 +Puzzle 92: How many teeth does an adult human usually have? +Answer 92: 32 +Puzzle 93: What is 7 plus 9? +Answer 93: 16 +Puzzle 94: What is 5 times 8? +Answer 94: 40 +Puzzle 95: How many noses does a human have? +Answer 95: 1 +Puzzle 96: What is 11 plus 7? +Answer 96: 18 +Puzzle 97: What is 4 times 7? +Answer 97: 28 +Puzzle 98: How many stripes does a zebra have? +Answer 98: Countless +Puzzle 99: What is 10 plus 6? +Answer 99: 16 +Puzzle 100: How many humps does a Bactrian camel have? 
+Answer 100: 2 \ No newline at end of file diff --git a/Lab 3/sensor.py b/Lab 3/sensor.py new file mode 100644 index 0000000000..71d3ec1881 --- /dev/null +++ b/Lab 3/sensor.py @@ -0,0 +1,22 @@ +import qwiic_vl53l1x +import time +import sys + +def runExample(): + + mySensor = qwiic_vl53l1x.QwiicVL53L1X() + + mySensor.sensor_init() + print(mySensor.address) + while True: + try: + mySensor.start_ranging() # Write configuration bytes to initiate measurement + time.sleep(.005) + distance = mySensor.get_distance() # Get the result of the measurement from the sensor + time.sleep(.005) + mySensor.stop_ranging() + print("Distance(mm): %s" % distance) + except Exception as e: + print(e) + +# runExample() \ No newline at end of file diff --git a/Lab 3/speech-scripts/ask_and_record.py b/Lab 3/speech-scripts/ask_and_record.py new file mode 100755 index 0000000000..d9fdc95603 --- /dev/null +++ b/Lab 3/speech-scripts/ask_and_record.py @@ -0,0 +1,113 @@ +#!/usr/bin/env python3 + +import argparse +import queue +import sys +import sounddevice as sd +import numpy as np +import json +import re +from word2number import w2n + +from vosk import Model, KaldiRecognizer + +def convert_speech_text_to_numbers(speech_text): + # Define a regular expression pattern for matching words representing numbers + pattern = re.compile(r'\b(?:zero|one|two|three|four|five|six|seven|eight|nine|ten|eleven|twelve|thirteen|fourteen|fifteen|sixteen|seventeen|eighteen|nineteen|twenty|thirty|forty|fifty|sixty|seventy|eighty|ninety|hundred|thousand|million|billion|trillion)\b', re.IGNORECASE) + + # Find all matching words in the input speech text + matches = pattern.findall(speech_text) + + # Convert the matched words to numbers and store them in a list + numbers = [] + for match in matches: + try: + number = w2n.word_to_num(match) + numbers.append(number) + except ValueError as e: + print(f"Failed to convert {match} to a number: {e}") + + return numbers + +q = queue.Queue() + +def int_or_str(text): + """Helper function for argument parsing.""" + try: + return int(text) + except ValueError: + return text + +def callback(indata, frames, time, status): + """This is called (from a separate thread) for each audio block.""" + if status: + print(status, file=sys.stderr) + q.put(bytes(indata)) + +parser = argparse.ArgumentParser(add_help=False) +parser.add_argument( + "-l", "--list-devices", action="store_true", + help="show list of audio devices and exit") +args, remaining = parser.parse_known_args() +if args.list_devices: + print(sd.query_devices()) + parser.exit(0) +parser = argparse.ArgumentParser( + description=__doc__, + formatter_class=argparse.RawDescriptionHelpFormatter, + parents=[parser]) +parser.add_argument( + "-f", "--filename", type=str, metavar="FILENAME", + help="audio file to store recording to") +parser.add_argument( + "-d", "--device", type=int_or_str, + help="input device (numeric ID or substring)") +parser.add_argument( + "-r", "--samplerate", type=int, help="sampling rate") +parser.add_argument( + "-m", "--model", type=str, help="language model; e.g. 
en-us, fr, nl; default is en-us") +args = parser.parse_args(remaining) + +try: + if args.samplerate is None: + device_info = sd.query_devices(args.device, "input") + args.samplerate = int(device_info["default_samplerate"]) + + if args.model is None: + model = Model(lang="en-us") + else: + model = Model(lang=args.model) + + rec = KaldiRecognizer(model, args.samplerate) + + # Ask for input verbally + print("Please provide your numerical input after the beep, for example, your phone number.") + sd.play(np.sin(2 * np.pi * 440 * np.arange(args.samplerate) / args.samplerate), samplerate=args.samplerate) + + with sd.RawInputStream(samplerate=args.samplerate, blocksize = 8000, device=args.device, dtype="int16", channels=1, callback=callback): + print("#" * 80) + print("Recording for 5 seconds...") + print("#" * 80) + + for _ in range(5 * args.samplerate // 8000): # Record for 5 seconds + data = q.get() + if rec.AcceptWaveform(data): + print(rec.Result()) + else: + print(rec.PartialResult()) + if args.filename: + with open(args.filename, "wb") as dump_fn: + dump_fn.write(data) + + print("Done recording. Here is your Number:") + result = json.loads(rec.FinalResult()) + result = result.get("text", "") + result = convert_speech_text_to_numbers(result) + result = ''.join([str(item) for item in result]) + print(result) + +except KeyboardInterrupt: + print("\nDone") + parser.exit(0) +except Exception as e: + parser.exit(type(e).__name__ + ": " + str(e)) diff --git a/Lab 3/speech-scripts/ask_for_puzzle.sh b/Lab 3/speech-scripts/ask_for_puzzle.sh new file mode 100755 index 0000000000..4bac101a70 --- /dev/null +++ b/Lab 3/speech-scripts/ask_for_puzzle.sh @@ -0,0 +1,31 @@ +#https://elinux.org/RPi_Text_to_Speech_(Speech_Synthesis) + +#!/bin/bash +say() { local IFS=+;/usr/bin/mplayer -ao alsa -really-quiet -noconsolecontrols "http://translate.google.com/translate_tts?ie=UTF-8&client=tw-ob&q=$*&tl=en"; } +#say $* +say " Only the smart kid gets candy! Press the answer of this puzzle in the keyboard." + +mapfile -t lines < 'puzzles.txt' + +num_puzzles=$((${#lines[@]} / 2)) + +random_puzzle=$((RANDOM % num_puzzles)) + +puzzle_index=$((random_puzzle * 2)) +puzzle="${lines[puzzle_index]}" +answer_index=$((puzzle_index + 1)) +answer="${lines[answer_index]}" + +say "$puzzle" + +read -p "Enter your answer: " user_answer + +expected_answer="${answer##*: }" + +if [ "$user_answer" -eq "$expected_answer" ]; then + say "Congratulations! You answered correctly. Here is your candy" +else + say "Sorry, your answer is incorrect. The correct answer is $expected_answer." +fi + +python puzzle_prompting.py \ No newline at end of file diff --git a/Lab 3/speech-scripts/greetings.sh b/Lab 3/speech-scripts/greetings.sh new file mode 100755 index 0000000000..8090febea6 --- /dev/null +++ b/Lab 3/speech-scripts/greetings.sh @@ -0,0 +1,7 @@ +#https://elinux.org/RPi_Text_to_Speech_(Speech_Synthesis) + +#!/bin/bash +say() { local IFS=+;/usr/bin/mplayer -ao alsa -really-quiet -noconsolecontrols "http://translate.google.com/translate_tts?ie=UTF-8&client=tw-ob&q=$*&tl=en"; } +#say $* +say "Hello World! I am a voicebot created by Qianxin Gan, Shiying Wu, John Li, Crystal Chang, Mingze Gao, and Mingzhe Sun, a team of awesome designers and innovators." 
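+# Note: say() streams synthesized speech from Google Translate through mplayer,
+# so this script assumes network access and that mplayer is installed.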
+ diff --git a/Lab 3/speech-scripts/puzzle_prompting.py b/Lab 3/speech-scripts/puzzle_prompting.py new file mode 100755 index 0000000000..7a691c4ddd --- /dev/null +++ b/Lab 3/speech-scripts/puzzle_prompting.py @@ -0,0 +1,104 @@ +#!/usr/bin/env python3 + +# prerequisites: as described in https://alphacephei.com/vosk/install and also python module `sounddevice` (simply run command `pip install sounddevice`) +# Example usage using Dutch (nl) recognition model: `python test_microphone.py -m nl` +# For more help run: `python test_microphone.py -h` + +import argparse +import queue +import sys +import sounddevice as sd +import subprocess + +from vosk import Model, KaldiRecognizer + +########### Added for Part 2 ######################################### +trick_or_treat_detected = False +subprocess_executed = False +########### Added for Part 2 Ends #################################### + +q = queue.Queue() + +def int_or_str(text): + """Helper function for argument parsing.""" + try: + return int(text) + except ValueError: + return text + +def callback(indata, frames, time, status): + """This is called (from a separate thread) for each audio block.""" + if status: + print(status, file=sys.stderr) + q.put(bytes(indata)) + +parser = argparse.ArgumentParser(add_help=False) +parser.add_argument( + "-l", "--list-devices", action="store_true", + help="show list of audio devices and exit") +args, remaining = parser.parse_known_args() +if args.list_devices: + print(sd.query_devices()) + parser.exit(0) +parser = argparse.ArgumentParser( + description=__doc__, + formatter_class=argparse.RawDescriptionHelpFormatter, + parents=[parser]) +parser.add_argument( + "-f", "--filename", type=str, metavar="FILENAME", + help="audio file to store recording to") +parser.add_argument( + "-d", "--device", type=int_or_str, + help="input device (numeric ID or substring)") +parser.add_argument( + "-r", "--samplerate", type=int, help="sampling rate") +parser.add_argument( + "-m", "--model", type=str, help="language model; e.g. 
en-us, fr, nl; default is en-us") +args = parser.parse_args(remaining) + +try: + if args.samplerate is None: + device_info = sd.query_devices(args.device, "input") + # soundfile expects an int, sounddevice provides a float: + args.samplerate = int(device_info["default_samplerate"]) + + if args.model is None: + model = Model(lang="en-us") + else: + model = Model(lang=args.model) + + if args.filename: + dump_fn = open(args.filename, "wb") + else: + dump_fn = None + + with sd.RawInputStream(samplerate=args.samplerate, blocksize = 8000, device=args.device, + dtype="int16", channels=1, callback=callback): + print("#" * 80) + print("Press Ctrl+C to stop the recording") + print("#" * 80) + + rec = KaldiRecognizer(model, args.samplerate) + while True: + data = q.get() + if rec.AcceptWaveform(data): + print(rec.Result()) + else: + print(rec.PartialResult()) + if dump_fn is not None: + dump_fn.write(data) + + ########### Added for Part 2 ######################################### + if "trick or treat" in rec.PartialResult(): + trick_or_treat_detected = True + if trick_or_treat_detected and not subprocess_executed: + subprocess.call(['bash', 'ask_for_puzzle.sh']) + subprocess_executed = True + break + ########### Added for Part 2 Ends #################################### + +except KeyboardInterrupt: + print("\nDone") + parser.exit(0) +except Exception as e: + parser.exit(type(e).__name__ + ": " + str(e)) diff --git a/Lab 3/speech-scripts/puzzles.txt b/Lab 3/speech-scripts/puzzles.txt new file mode 100755 index 0000000000..8468a19c16 --- /dev/null +++ b/Lab 3/speech-scripts/puzzles.txt @@ -0,0 +1,200 @@ +Puzzle 1: What is 6 plus 7? +Answer 1: 13 +Puzzle 2: What is 9 minus 3? +Answer 2: 6 +Puzzle 3: What is 8 times 2? +Answer 3: 16 +Puzzle 4: How many legs does a cat have? +Answer 4: 4 +Puzzle 5: How many days are there in a week? +Answer 5: 7 +Puzzle 6: What is 5 plus 8? +Answer 6: 13 +Puzzle 7: What is 7 times 7? +Answer 7: 49 +Puzzle 8: How many colors are there in a rainbow? +Answer 8: 7 +Puzzle 9: What is 3 plus 9? +Answer 9: 12 +Puzzle 10: What is 9 times 3? +Answer 10: 27 +Puzzle 11: How many fingers does a human have? +Answer 11: 10 +Puzzle 12: What is 4 plus 6? +Answer 12: 10 +Puzzle 13: How many months are in a year? +Answer 13: 12 +Puzzle 14: What is 8 plus 4? +Answer 14: 12 +Puzzle 15: What is 7 times 8? +Answer 15: 56 +Puzzle 16: How many seconds are in a minute? +Answer 16: 60 +Puzzle 17: What is 2 plus 9? +Answer 17: 11 +Puzzle 18: What is 10 times 5? +Answer 18: 50 +Puzzle 19: How many days are in a normal February? +Answer 19: 28 +Puzzle 20: What is 12 minus 4? +Answer 20: 8 +Puzzle 21: How many toes does a human have? +Answer 21: 10 +Puzzle 22: What is 3 times 6? +Answer 22: 18 +Puzzle 23: What is 9 plus 3? +Answer 23: 12 +Puzzle 24: How many continents are there? +Answer 24: 7 +Puzzle 25: What is 10 plus 7? +Answer 25: 17 +Puzzle 26: What is 6 times 5? +Answer 26: 30 +Puzzle 27: How many vowels are in the English alphabet? +Answer 27: 5 +Puzzle 28: What is 8 minus 3? +Answer 28: 5 +Puzzle 29: What is 7 plus 7? +Answer 29: 14 +Puzzle 30: How many planets are in our solar system? +Answer 30: 8 +Puzzle 31: What is 5 times 4? +Answer 31: 20 +Puzzle 32: What is 11 plus 2? +Answer 32: 13 +Puzzle 33: How many sides does a rectangle have? +Answer 33: 4 +Puzzle 34: What is 12 plus 5? +Answer 34: 17 +Puzzle 35: What is 5 times 9? +Answer 35: 45 +Puzzle 36: How many legs does a dog have? +Answer 36: 4 +Puzzle 37: What is 10 minus 3? +Answer 37: 7 +Puzzle 38: What is 3 times 9? 
+Answer 38: 27 +Puzzle 39: How many letters are there in the English alphabet? +Answer 39: 26 +Puzzle 40: What is 9 plus 6? +Answer 40: 15 +Puzzle 41: What is 2 times 7? +Answer 41: 14 +Puzzle 42: How many sides does a square have? +Answer 42: 4 +Puzzle 43: What is 11 minus 5? +Answer 43: 6 +Puzzle 44: How many wheels does a car have? +Answer 44: 4 +Puzzle 45: What is 7 plus 4? +Answer 45: 11 +Puzzle 46: What is 4 times 8? +Answer 46: 32 +Puzzle 47: How many years are there in a decade? +Answer 47: 10 +Puzzle 48: What is 10 plus 9? +Answer 48: 19 +Puzzle 49: What is 2 times 8? +Answer 49: 16 +Puzzle 50: How many eyes does a human have? +Answer 50: 2 +Puzzle 51: What is 7 plus 6? +Answer 51: 13 +Puzzle 52: What is 4 times 6? +Answer 52: 24 +Puzzle 53: How many points does a triangle have? +Answer 53: 3 +Puzzle 54: What is 8 plus 5? +Answer 54: 13 +Puzzle 55: What is 3 times 7? +Answer 55: 21 +Puzzle 56: How many wings does a bird have? +Answer 56: 2 +Puzzle 57: What is 6 plus 8? +Answer 57: 14 +Puzzle 58: What is 9 times 4? +Answer 58: 36 +Puzzle 59: How many legs does a spider have? +Answer 59: 8 +Puzzle 60: What is 10 plus 8? +Answer 60: 18 +Puzzle 61: What is 4 times 5? +Answer 61: 20 +Puzzle 62: How many wheels does a bicycle have? +Answer 62: 2 +Puzzle 63: What is 7 plus 5? +Answer 63: 12 +Puzzle 64: What is 8 times 3? +Answer 64: 24 +Puzzle 65: How many petals does a typical flower have? +Answer 65: 5 +Puzzle 66: What is 5 plus 9? +Answer 66: 14 +Puzzle 67: What is 3 times 8? +Answer 67: 24 +Puzzle 68: How many legs does an octopus have? +Answer 68: 8 +Puzzle 69: What is 6 plus 9? +Answer 69: 15 +Puzzle 70: What is 2 times 9? +Answer 70: 18 +Puzzle 71: How many ears does a human have? +Answer 71: 2 +Puzzle 72: What is 9 plus 7? +Answer 72: 16 +Puzzle 73: What is 7 times 5? +Answer 73: 35 +Puzzle 74: How many sides does a pentagon have? +Answer 74: 5 +Puzzle 75: What is 8 plus 9? +Answer 75: 17 +Puzzle 76: What is 5 times 7? +Answer 76: 35 +Puzzle 77: How many paws does a cat have? +Answer 77: 4 +Puzzle 78: What is 11 plus 6? +Answer 78: 17 +Puzzle 79: What is 4 times 9? +Answer 79: 36 +Puzzle 80: How many days are in a year? +Answer 80: 365 +Puzzle 81: What is 10 plus 5? +Answer 81: 15 +Puzzle 82: What is 6 times 6? +Answer 82: 36 +Puzzle 83: How many fingers are on one hand? +Answer 83: 5 +Puzzle 84: What is 7 plus 8? +Answer 84: 15 +Puzzle 85: What is 3 times 5? +Answer 85: 15 +Puzzle 86: How many oceans are there on Earth? +Answer 86: 5 +Puzzle 87: What is 6 plus 5? +Answer 87: 11 +Puzzle 88: What is 2 times 6? +Answer 88: 12 +Puzzle 89: How many legs does a horse have? +Answer 89: 4 +Puzzle 90: What is 9 plus 5? +Answer 90: 14 +Puzzle 91: What is 8 times 4? +Answer 91: 32 +Puzzle 92: How many teeth does an adult human usually have? +Answer 92: 32 +Puzzle 93: What is 7 plus 9? +Answer 93: 16 +Puzzle 94: What is 5 times 8? +Answer 94: 40 +Puzzle 95: How many noses does a human have? +Answer 95: 1 +Puzzle 96: What is 11 plus 7? +Answer 96: 18 +Puzzle 97: What is 4 times 7? +Answer 97: 28 +Puzzle 98: How many stripes does a zebra have? +Answer 98: Countless +Puzzle 99: What is 10 plus 6? +Answer 99: 16 +Puzzle 100: How many humps does a Bactrian camel have? 
+Answer 100: 2
\ No newline at end of file
diff --git a/Lab 3/speech-scripts/test_microphone.py b/Lab 3/speech-scripts/test_microphone.py
old mode 100644
new mode 100755
diff --git a/Lab 3/speech-scripts/transcribe.sh b/Lab 3/speech-scripts/transcribe.sh
new file mode 100755
index 0000000000..bfcdd5713d
--- /dev/null
+++ b/Lab 3/speech-scripts/transcribe.sh
@@ -0,0 +1,8 @@
+#!/bin/bash
+# https://elinux.org/RPi_Text_to_Speech_(Speech_Synthesis)
+
+say() { local IFS=+;/usr/bin/mplayer -ao alsa -really-quiet -noconsolecontrols "http://translate.google.com/translate_tts?ie=UTF-8&client=tw-ob&q=$*&tl=en"; }
+#say $*
+say "Hi, this is Carl's assistant. Please leave a number here after the beep and Carl will reach out to you later. Thanks!"
+
+python ask_and_record.py
diff --git a/Lab 4/GG.mp3 b/Lab 4/GG.mp3
new file mode 100644
index 0000000000..d121ed6218
Binary files /dev/null and b/Lab 4/GG.mp3 differ
diff --git a/Lab 4/README.md b/Lab 4/README.md
index d66eccb056..83dca990d8 100644
--- a/Lab 4/README.md
+++ b/Lab 4/README.md
@@ -1,7 +1,7 @@
# Ph-UI!!!
**NAMES OF COLLABORATORS HERE**
-
+John Li (jl4239), Shiying Wu (sw2298), Mingze Gao (mg2454), Crystal Chong (cc2795), Qianxin(Carl) Gan (qg72), Mingzhe Sun (ms3636)
For lab this week, we focus both on sensing, to bring in new modes of input into your devices, as well as prototyping the physical look and feel of the device. You will think about the physical form the device needs to perform the sensing as well as present the display or feedback about what was sensed.
@@ -11,7 +11,8 @@ For lab this week, we focus both on sensing, to bring in new modes of input into
As always, pull updates from the class Interactive-Lab-Hub to both your Pi and your own GitHub repo. As we discussed in the class, there are 2 ways you can do so:
-**\[recommended\]**Option 1: On the Pi, `cd` to your `Interactive-Lab-Hub`, pull the updates from upstream (class lab-hub) and push the updates back to your own GitHub repo. You will need the personal access token for this.
+Option 1: On the Pi, `cd` to your `Interactive-Lab-Hub`, pull the updates from upstream (class lab-hub) and push the updates back to your own GitHub repo. You will need the personal access token for this.
+
```
pi@ixe00:~$ cd Interactive-Lab-Hub
pi@ixe00:~/Interactive-Lab-Hub $ git pull upstream Fall2022
@@ -25,7 +26,6 @@ Option 2: On your own GitHub repo, [create pull request](https://github.com/FAR-
Option 3: (preferred) use the Github.com interface to update the changes.
### Start brainstorming ideas by reading:
-
* [What do prototypes prototype?](https://www.semanticscholar.org/paper/What-do-Prototypes-Prototype-Houde-Hill/30bc6125fab9d9b2d5854223aeea7900a218f149)
* [Paper prototyping](https://www.uxpin.com/studio/blog/paper-prototyping-the-practical-beginners-guide/) is used by UX designers to quickly develop interface ideas and run them by people before any programming occurs.
* [Cardboard prototypes](https://www.youtube.com/watch?v=k_9Q-KDSb9o) help interactive product designers to work through additional issues, like how big something should be, how it could be carried, where it would sit.
@@ -41,10 +41,8 @@ Option 3: (preferred) use the Github.com interface to update the changes.
* Cutting board
* Cutting tools
* Markers
-
* New hardware for your kit will be handed out. Update your parts list.
-
(We do offer shared cutting board, cutting tools, and markers on the class cart during the lab, so do not worry if you don't have them!)
## Deliverables \& Submission for Lab 4 @@ -77,7 +75,6 @@ F) [Camera Test](#part-f) G) [Record the interaction](#part-g) - ## The Report (Part 1: A-D, Part 2: E-F) ### Part A @@ -88,14 +85,12 @@ We want to introduce you to the [capacitive sensor](https://learn.adafruit.com/a

-

Plug in the capacitive sensor board with the QWIIC connector. Connect your Twizzlers with either the copper tape or the alligator clips (the clips work better). Install the latest requirements from your working virtual environment: ``` (circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ pip install -r requirements.txt - ``` @@ -113,9 +108,8 @@ Twizzler 6 touched! #### Light/Proximity/Gesture sensor (APDS-9960) We here want you to get to know this awesome sensor [Adafruit APDS-9960](https://www.adafruit.com/product/3595). It is capable of sensing proximity, light (also RGB), and gesture! - + - Connect it to your pi with Qwiic connector and try running the three example scripts individually to see what the sensor is capable of doing! @@ -137,8 +131,6 @@ You can go the the [Adafruit GitHub Page](https://github.com/adafruit/Adafruit_C A rotary encoder is an electro-mechanical device that converts the angular position to analog or digital output signals. The [Adafruit rotary encoder](https://www.adafruit.com/product/4991#technical-details) we ordered for you came with separate breakout board and encoder itself, that is, they will need to be soldered if you have not yet done so! We will be bringing the soldering station to the lab class for you to use, also, you can go to the MakerLAB to do the soldering off-class. Here is some [guidance on soldering](https://learn.adafruit.com/adafruit-guide-excellent-soldering/preparation) from Adafruit. When you first solder, get someone who has done it before (ideally in the MakerLAB environment). It is a good idea to review this material beforehand so you know what to look at.
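+Once the encoder is soldered, a minimal read loop in the spirit of the Adafruit seesaw examples is a good first test. This is a sketch rather than the shipped example script; the I2C address `0x36` and switch pin `24` are the documented defaults for this breakout, so adjust them if yours differs:
+
+```
+import board
+from adafruit_seesaw import seesaw, rotaryio, digitalio
+
+i2c = board.I2C()
+ss = seesaw.Seesaw(i2c, addr=0x36)   # assumed default address for the encoder breakout
+
+encoder = rotaryio.IncrementalEncoder(ss)
+ss.pin_mode(24, ss.INPUT_PULLUP)     # pin 24 is the encoder's push switch on this board
+button = digitalio.DigitalIO(ss, 24)
+
+last_position = None
+while True:
+    position = -encoder.position     # negate so clockwise turns count up
+    if position != last_position:
+        last_position = position
+        print("Position:", position)
+    if not button.value:
+        print("Button pressed")
+```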

- -

@@ -153,7 +145,6 @@ You can go to the [Adafruit Learn Page](https://learn.adafruit.com/adafruit-i2c-
#### Joystick (optional)
-
A [joystick](https://www.sparkfun.com/products/15168) can be used to sense and report the input of the stick for its pivoting angle or direction. It also comes with a button input!
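+As a quick sanity check before wiring the joystick into anything bigger, a trimmed read loop using the same `sparkfun-qwiic-joystick` library that the later `joystick.py` example relies on (a sketch, not the official example; the 0.2 s poll interval is an arbitrary choice):
+
+```
+import time
+import qwiic_joystick
+
+joystick = qwiic_joystick.QwiicJoystick()
+if not joystick.connected:
+    raise SystemExit("Qwiic Joystick not found - check the Qwiic connector")
+joystick.begin()
+
+while True:
+    # horizontal and vertical are raw ADC readings; on our board, button reads 0 while pressed
+    print("X: %d  Y: %d  Button: %d" % (joystick.horizontal, joystick.vertical, joystick.button))
+    time.sleep(0.2)
+```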

@@ -170,12 +161,10 @@ You can go to the [SparkFun GitHub Page](https://github.com/sparkfun/Qwiic_Joyst
#### Distance Sensor
-
Earlier, we asked you to play with the proximity sensor, which is able to sense objects within a short distance. Here, we offer the [Sparkfun Proximity Sensor Breakout](https://www.sparkfun.com/products/15177), with the ability to detect objects up to 20 cm away.

-

Connect it to your pi with Qwiic connector and try running the example script to see how it works!
@@ -184,30 +173,69 @@ Connect it to your pi with Qwiic connector and try running the example script to
```
(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python qwiic_distance.py
```
-You can go to the [SparkFun GitHub Page](https://github.com/sparkfun/Qwiic_Proximity_Py) to learn more about the sensor and see other examples
+You can go to the [SparkFun GitHub Page](https://github.com/sparkfun/Qwiic_Proximity_Py) to learn more about the sensor and see other examples!
### Part C
### Physical considerations for sensing
-
Usually, sensors need to be positioned in specific locations or orientations to make them useful for their application. Now that you've tried a bunch of the sensors, pick one that you would like to use, and an application where you use the output of that sensor for an interaction. For example, you can use a distance sensor to measure someone's height if you position it overhead and get them to stand under it.
-
**\*\*\*Draw 5 sketches of different ways you might use your sensor, and how the larger device needs to be shaped in order to make the sensor useful.\*\*\***
+We picked the distance sensor as our primary sensor for the following ideas:
+
+1. Handheld Device for Visually Impaired Individuals (detects the distance of obstacles)![](https://hackmd.io/_uploads/BJ7vE-fZa.jpg)
+
+2. Modern Doorbell System (detects a person approaching within a certain distance of the door)
+
+![](https://hackmd.io/_uploads/SkcqOjm-p.png)
+
+3. Dance Motion Capturer / Detector
+Distance and gesture sensors are used to help detect and capture motions for a dancing game that can be played on the TV.
+![](https://hackmd.io/_uploads/SkM43uAxT.png)
+
+4. Food Delivery Defender (detects the absence of food)
+
+![](https://hackmd.io/_uploads/rk-MTomWa.png)
+
+5. Home Light Assistant
+During the night, or when the outdoor light is dim, turn on the room light when users are within the detectable area.
+![](https://hackmd.io/_uploads/H19kgKCep.png)
+
**\*\*\*What are some things these sketches raise as questions? What do you need to physically prototype to understand how to answer those questions?\*\*\***
+- What are the maximum and minimum ranges at which the distance sensor is effective for the user?
+- How is feedback provided to the user (auditory, haptic)?
+- Is the device ergonomic and easy to hold for extended periods?
+- What is the optimal range to detect a person approaching without causing false alarms?
+- How should the system notify the homeowner: sound, light, or a combination?
+- How does the system handle varying sizes of individuals (e.g., children)?
+
+Prototyping Needs:
+
+- Set up a prototype on a table or countertop to simulate real-world scenarios.
+- Test with various objects and food items to gauge accuracy.
+- Test the system in different lighting conditions, including complete darkness and dim light.
+- Evaluate the sensor's response time when someone enters or exits the detection zone.
+- Evaluate sensor accuracy in real-time motion tracking and feedback.
+
**\*\*\*Pick one of these designs to prototype.\*\*\***
+![](https://hackmd.io/_uploads/rkzSx3XWa.png)
+
+This is a handheld device for visually impaired people: pressing the button starts a scan of the surrounding environment, and the speaker in the center describes the surroundings to the user.
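+To get a feel for the raw readings this handheld design would depend on, here is a rough sketch built on the same `sparkfun-qwiic-proximity` library that `qwiic_distance.py` uses. The alert threshold of 800 is a made-up starting point to tune against real obstacles:
+
+```
+import time
+import qwiic_proximity
+
+prox = qwiic_proximity.QwiicProximity()
+if not prox.connected:
+    raise SystemExit("Proximity sensor not found - check the Qwiic connector")
+prox.begin()
+
+ALERT_THRESHOLD = 800  # hypothetical raw value for "obstacle is close"; tune on the real device
+
+while True:
+    reading = prox.get_proximity()  # raw reading grows as an object gets closer
+    if reading > ALERT_THRESHOLD:
+        print("Obstacle ahead! (reading = %d)" % reading)
+    time.sleep(0.1)
+```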
+ + ### Part D ### Physical considerations for displaying information and housing parts - Here is a Pi with a paper faceplate on it to turn it into a display interface: - @@ -217,12 +245,12 @@ Here is another prototype for a paper display: - Your kit includes these [SparkFun Qwiic OLED screens](https://www.sparkfun.com/products/17153). These use less power than the MiniTFTs you have mounted on the GPIO pins of the Pi, but, more importantly, they can be more flexibly mounted elsewhere on your physical interface. The way you program this display is almost identical to the way you program a Pi display. Take a look at `oled_test.py` and some more of the [Adafruit examples](https://github.com/adafruit/Adafruit_CircuitPython_SSD1306/tree/master/examples). +`pip install adafruit-circuitpython-ssd1306` +
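+After installing, a stripped-down smoke test in the same vein as `oled_test.py` (the 128x32 geometry matches the kit's screens; change it if yours differs):
+
+```
+import board
+import busio
+import adafruit_ssd1306
+from PIL import Image, ImageDraw, ImageFont
+
+i2c = busio.I2C(board.SCL, board.SDA)
+oled = adafruit_ssd1306.SSD1306_I2C(128, 32, i2c)
+
+# draw into a 1-bit PIL image, then push the whole frame to the panel
+image = Image.new("1", (oled.width, oled.height))
+draw = ImageDraw.Draw(image)
+draw.text((0, 10), "Hello, Lab 4!", font=ImageFont.load_default(), fill=255)
+
+oled.image(image)
+oled.show()
+```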

-

@@ -243,17 +271,51 @@ Think about how you want to present the information about what your sensor is se
**\*\*\*Sketch 5 designs for how you would physically position your display and any buttons or knobs needed to interact with it.\*\*\***
+1. Home center control panel
+2. Integrated task completion reminder
+3. Car racing game
+4. Flappy Bird game
+5. 3D map generator
+
+![](https://hackmd.io/_uploads/SyP8Gc4-p.jpg)
+Home Center Control Panel
+Position the knobs on the right side of the screen that displays the light information. Use the knobs to change the light level and color temperature. Push the knobs to turn the light on and off.
+
+![](https://hackmd.io/_uploads/HyKwz9EZ6.jpg)
+Integrated Task Completion Reminder
+Position the knobs on the right side of the screen that displays the task list. Use the knobs to scroll through the list and push the knobs to check off an existing task.
+
+![](https://hackmd.io/_uploads/B1l4uR4ba.jpg)
+Car Racing Game
+Modern refrigerators sometimes come with built-in screens for various smart features. Integrate the game to display and play directly from the refrigerator while cooking, so that you can monitor the stove top without getting bored in the kitchen, using the little joystick on the fridge.
+
+![](https://hackmd.io/_uploads/BkUUTREWa.jpg)
+Flappy Bird
+Coffee brewers with built-in touch displays can be perfect for Flappy Bird. While waiting for your awesome cup of coffee, enjoy this fun little game using the buttons on the machine. The game automatically ends and shows a "WIN" screen if the player keeps the bird alive until the coffee finishes brewing.
+
+![](https://hackmd.io/_uploads/SJywp5EbT.jpg)
+3D Map Generator
+The 3D Map Generator adeptly identifies surrounding objects and terrains. Users can either hold the generator or position it on a flat surface, prompting the device to scan the entirety of the view and subsequently display a 3D map on the screen. The device is equipped with two buttons and a joystick controller to facilitate interaction. The joystick enables users to rotate the map a full 360 degrees, providing a comprehensive view of the area. Meanwhile, the buttons permit users to either generate a new map or browse through existing maps with ease.
+
**\*\*\*What are some things these sketches raise as questions? What do you need to physically prototype to understand how to answer those questions?\*\*\***
-**\*\*\*Pick one of these display designs to integrate into your prototype.\*\*\***
+We need to further study the use cases for our designs, and research how users behave when they do not have a similar device at hand, to judge the feasibility, ergonomics, and practicality of our designs. In addition, when integrating a game into certain appliances, especially those that generate heat or have moving parts, we need to consider whether it poses any safety risks. We should also consider whether the game's integration enhances the user's experience with the appliance or becomes a nuisance. Different appliances have different screen sizes and resolutions, so testing how the game looks and feels on these various displays will be crucial. Getting feedback from potential users will provide insights into usability, user experience, and any unforeseen issues.
**\*\*\*Explain the rationale for the design.\*\*\***
(e.g. Does it need to be a certain size or form or need to be able to be seen from a certain distance?)
-Build a cardboard prototype of your design.
-
+- The device's size must be compact enough for easy portability, allowing users to scan various environments without it being cumbersome.
+- A handheld form ensures that users can elevate the device if necessary to capture a wider view, especially in terrains with obstructions.
+- A joystick offers intuitive control for rotating the 3D map. Its 360-degree maneuverability provides users with a full, panoramic view of the scanned area.
+- Compared to touch gestures or buttons, a joystick offers precision, especially when viewing intricate details of a 3D map.
+- The device needs to integrate advanced sensors to accurately detect surrounding objects and terrains. The sensor's capability determines the fidelity and accuracy of the 3D map.
+- The scanning mechanism should be quick, capturing a full view in a short duration to enhance user convenience.
+Build a cardboard prototype of your design.
+**\*\*\*Pick one of these display designs to integrate into your prototype.\*\*\***
**\*\*\*Document your rough prototype.\*\*\***
+![](https://hackmd.io/_uploads/BySKq04-T.png)
+The prototype has a button to start scanning, a display that shows the scanning status, and a sensor at the top of the gun.
LAB PART 2
@@ -263,12 +325,10 @@ Following exploration and reflection from Part 1, complete the "looks like," "wo
### Part E (Optional)
### Servo Control with Joystick
-
> **_NOTE:_** Not in the kit yet.
In the class kit, you should be able to find the [Qwiic Servo Controller](https://www.sparkfun.com/products/16773) and [Micro Servo Motor SG51](https://www.adafruit.com/product/2201). The Qwiic Servo Controller needs an external power supply to drive the servos, which is included in your kit. Connect the servo controller to the miniPiTFT through the Qwiic connector and connect the external battery to the 2-pin JST port (power port) on the servo controller. Connect your servo to channel 2 on the controller, making sure the brown wire is connected to GND and the orange wire is connected to PWM.
-
In this exercise, we will be using the nice [ServoKit library](https://learn.adafruit.com/16-channel-pwm-servo-driver/python-circuitpython) developed by Adafruit!
We will continue to use the `circuitpython` virtual environment we created. Activate the virtual environment and make sure to install the latest required libraries by running:
@@ -279,10 +339,8 @@ In this exercise, we will be using the nice [ServoKit library](https://learn.ada
A servo motor is a rotary actuator or linear actuator that allows for precise control of angular or linear position. The position of a servo motor is set by the width of an electrical pulse, that is, we can use PWM (pulse-width modulation) to set and control the servo motor position. You can read [this](https://learn.adafruit.com/adafruit-arduino-lesson-14-servo-motors/servo-motors) to learn a bit more about how exactly a servo motor works.
-
Now that you have a basic idea of what a servo motor is, look into the script `servo_test.py` we provide. In line 14, you should see that we have set up the min_pulse and max_pulse corresponding to the servo turning 0 - 180 degrees. Try running the servo example code now and see what happens:
-
```
(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python servo_test.py
```
@@ -295,7 +353,6 @@ You can then call whichever control you like rather than setting a fixed value f
We encourage you to try using these controls, **while** paying particular attention to how the interaction changes depending on the position of the controls.
For example, if you have your servo rotating a screen (or a piece of cardboard) from one position to another, what changes about the interaction if the control is on the same side of the screen, or the opposite side of the screen? Trying and retrying different configurations generally helps reveal what a design choice changes about the interaction -- _make sure to document what you tried_!
-
### Part F (Optional)
### Camera
You can use the inputs and outputs from the video camera in the kit.
@@ -318,11 +375,48 @@ The following resources are good starts on how to stream video:
* [OpenCV – Stream video to web browser/HTML page](https://pyimagesearch.com/2019/09/02/opencv-stream-video-to-web-browser-html-page/)
* [Live video streaming over network with OpenCV and ImageZMQ](https://pyimagesearch.com/2019/04/15/live-video-streaming-over-network-with-opencv-and-imagezmq/)
### Part G
-
### Record
+We are building a Russian-roulette-like device.
+
Document all the prototypes and iterations you have designed and worked on! Again, deliverables for this lab are writings, sketches, photos, and videos that show what your prototype:
+
+**Components used:**
+- Camera with speaker: plays sounds
+- Red LED button: starts the game
+- OLED screen: shows the start and end screens; in game mode, displays the number of triggers pulled
+- Joystick: trigger of the gun
+
+ +![](https://hackmd.io/_uploads/B190AEMM6.png) + +
+
+**Detailed Descriptions:**
+
+![](https://hackmd.io/_uploads/S1pYaBEGa.png)
+
 * "Looks like": shows how the device should look, feel, sit, weigh, etc.
+
+Our prototype is a handheld device resembling a futuristic gun. It has a compact and ergonomic design, allowing users to hold it comfortably. The device is constructed from sturdy cardboard for the main body, with copper tape for the capacitive sensor and various other sensors integrated. The addition of an OLED screen on the body provides a sleek and modern touch. The screen displays essential information, such as the start and end of the game, and during gameplay it shows the count of bullets fired. The device features a distinctive red LED button on the outside that serves as the game-initiation trigger. The camera inside the device adds an interactive element, playing sound effects when the device is "shot." The prototype has been designed to be lightweight and easy to handle, with the added feature of being pushable and pullable for additional tactile engagement.
+
 * "Works like": shows what the device can do
+
+The prototype is designed to provide a simulated and safe version of the classic game. The joystick is configured to simulate the trigger-pulling action, enhancing the overall user experience. Users can load simulated bullets into the rotating chamber, with the OLED screen indicating the number of bullets fired during the game. The chamber can be manually rotated, mimicking the randomness of the Russian roulette game. The camera inside captures the action and triggers sound effects when "shot." The red LED button outside the device is a multifunctional control, allowing users to start the game with a press. Importantly, the device lacks any functional firing mechanism, ensuring it cannot cause harm.
+
 * "Acts like": shows how a person would interact with the device
+
+To start the game, users press the red LED button on the outside of the device. The OLED screen displays the game status and prompts users to load the simulated bullets into the rotating chamber. Users can then push or pull the device's body to engage with the tactile features. Interacting with the joystick simulates pulling the trigger, initiating the suspenseful sequence. The OLED screen updates in real time to show the count of bullets fired during gameplay. Auditory cues, including sounds triggered by the internal camera, add to the immersive experience. Safety is emphasized through the design and messaging, ensuring users understand that it is a toy and not a functional weapon.
+
+**Video:**
+*Click the image below to watch the video:*
+[![](https://hackmd.io/_uploads/rJSOkrffp.jpg)](https://www.youtube.com/watch?v=PHGshE1JnpA)
diff --git a/Lab 4/button.py b/Lab 4/button.py
new file mode 100644
index 0000000000..54fe8258c1
--- /dev/null
+++ b/Lab 4/button.py
@@ -0,0 +1,33 @@
+import qwiic_button
+import pygame
+import sys
+import time
+
+def play_sound(file_path):
+    # load and start the clip, then pause briefly before returning
+    pygame.mixer.init()
+    pygame.mixer.music.load(file_path)
+    pygame.mixer.music.play()
+    time.sleep(0.5)
+
+def main():
+    button = qwiic_button.QwiicButton()
+    if button.begin() == False:
+        print("The Qwiic Button isn't connected to the system. 
Please check your connection", \ + file=sys.stderr) + return + + print("Press the button to play the sound!") + + while True: + if button.is_button_pressed(): + print("Button Pressed!") + play_sound("gunshot.wav") + time.sleep(1) + +if __name__ == "__main__": + try: + main() + except Exception as e: + print(e) + except KeyboardInterrupt: + pass diff --git a/Lab 4/cap_test.py b/Lab 4/cap_test.py index cdb7f6037a..a38a58b32f 100644 --- a/Lab 4/cap_test.py +++ b/Lab 4/cap_test.py @@ -1,4 +1,3 @@ - import time import board import busio diff --git a/Lab 4/game_display.py b/Lab 4/game_display.py new file mode 100644 index 0000000000..fb625e7b3d --- /dev/null +++ b/Lab 4/game_display.py @@ -0,0 +1,76 @@ +# SPDX-FileCopyrightText: 2021 ladyada for Adafruit Industries +# SPDX-License-Identifier: MIT + +import board +import busio +import time +import qwiic_button +import adafruit_ssd1306 +from PIL import Image, ImageDraw, ImageFont + +i2c = busio.I2C(board.SCL, board.SDA) + +oled = adafruit_ssd1306.SSD1306_I2C(128, 32, i2c) +start_button = qwiic_button.QwiicButton() + +width = oled.width +height = oled.height +image = Image.new("1", (width, height)) + +draw = ImageDraw.Draw(image) + +padding = -2 +top = padding +bottom = height - padding + +font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 12) + +oled.fill(0) +oled.show() + +def draw_start(): + global image + image = Image.new("1", (width, height)) + draw = ImageDraw.Draw(image) + x = 16 + y = height // 2 - 5 + draw.text((x, y), "Russian Roulette", font=font, fill="#DFC58D") + oled.image(image) + oled.show() + +def draw_end(): + global image + image = Image.new("1", (width, height)) + draw = ImageDraw.Draw(image) + draw.text((8, height // 2 - 10), "You just got served.", font=font, fill="#DFC58D") + draw.text((30, height // 2 + 2), "Game over.", font=font, fill="#DFC58D") + oled.image(image) + oled.show() + +def draw_game(pull): + global image + image = Image.new("1", (width, height)) + draw = ImageDraw.Draw(image) + x = 15 + y = height // 2 - 5 + draw.text((x, y), "Triggers pulled: " + str(pull), font=font, fill="#DFC58D") + oled.image(image) + oled.show() + +# states = ["start", "game", "end"] +curr_state = "end" + +while True: + print(curr_state) + if curr_state == "start": + draw_start() + elif curr_state == "game": + draw_game(1) + elif curr_state == "end": + draw_end() + if start_button.is_button_pressed(): + if curr_state == "start": + curr_state = "game" + if curr_state == "end": + curr_state = "start" + time.sleep(0.1) \ No newline at end of file diff --git a/Lab 4/gunshot.mp3 b/Lab 4/gunshot.mp3 new file mode 100644 index 0000000000..696c02142c Binary files /dev/null and b/Lab 4/gunshot.mp3 differ diff --git a/Lab 4/joystick.py b/Lab 4/joystick.py new file mode 100644 index 0000000000..66851eb355 --- /dev/null +++ b/Lab 4/joystick.py @@ -0,0 +1,36 @@ +from __future__ import print_function +import qwiic_joystick +import time +import sys + +def runExample(): + print("\nSparkFun qwiic Joystick Example 1\n") + myJoystick = qwiic_joystick.QwiicJoystick() + + if myJoystick.connected == False: + print("The Qwiic Joystick device isn't connected to the system. Please check your connection", \ + file=sys.stderr) + return + + myJoystick.begin() + + print("Initialized. 
Firmware Version: %s" % myJoystick.version) + + pull = False + while True: + if myJoystick.vertical < 10: + print("pulled") + pull = True + else: + pull = False + print("X: %d, Y: %d, Button: %d" % ( \ + myJoystick.horizontal, \ + myJoystick.vertical, \ + myJoystick.button)) + +if __name__ == '__main__': + try: + runExample() + except (KeyboardInterrupt, SystemExit) as exErr: + print("\nEnding Example 1") + sys.exit(0) diff --git a/Lab 4/oled_test.py b/Lab 4/oled_test.py index d6e96ff59e..cfa8c44d62 100644 --- a/Lab 4/oled_test.py +++ b/Lab 4/oled_test.py @@ -1,4 +1,3 @@ - # SPDX-FileCopyrightText: 2021 ladyada for Adafruit Industries # SPDX-License-Identifier: MIT @@ -85,5 +84,4 @@ def draw_circle(xpos0, ypos0, rad, col=1): # draw the new circle draw_circle(center_x, center_y, radius) # show all the changes we just made - oled.show() \ No newline at end of file diff --git a/Lab 4/reload.mp3 b/Lab 4/reload.mp3 new file mode 100644 index 0000000000..48e60bb4af Binary files /dev/null and b/Lab 4/reload.mp3 differ diff --git a/Lab 4/requirements.txt b/Lab 4/requirements.txt index f044d70b8e..a10c861efa 100644 --- a/Lab 4/requirements.txt +++ b/Lab 4/requirements.txt @@ -1,4 +1,3 @@ - Adafruit-Blinka adafruit-circuitpython-busdevice adafruit-circuitpython-framebuf @@ -24,4 +23,3 @@ RPi.GPIO spidev sysv-ipc sparkfun-qwiic-proximity - diff --git a/Lab 4/russian_roulette.py b/Lab 4/russian_roulette.py new file mode 100644 index 0000000000..a3e745ac62 --- /dev/null +++ b/Lab 4/russian_roulette.py @@ -0,0 +1,152 @@ +from __future__ import print_function +import qwiic_joystick +import random +import sys +import time +import pygame +import board +import busio +import time +import qwiic_button +import adafruit_ssd1306 +from PIL import Image, ImageDraw, ImageFont + +i2c = busio.I2C(board.SCL, board.SDA) + +oled = adafruit_ssd1306.SSD1306_I2C(128, 32, i2c) +myButton = qwiic_button.QwiicButton() + +width = oled.width +height = oled.height +image = Image.new("1", (width, height)) + +draw = ImageDraw.Draw(image) + +padding = -2 +top = padding +bottom = height - padding + +font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 12) + +oled.fill(0) +oled.show() + +def draw_start(): + global image + image = Image.new("1", (width, height)) + draw = ImageDraw.Draw(image) + x = 12 + y = height // 2 - 4 + draw.text((x, y), "Russian Roulette", font=font, fill="#DFC58D") + oled.image(image) + oled.show() + +def draw_end(): + global image + image = Image.new("1", (width, height)) + draw = ImageDraw.Draw(image) + draw.text((8, height // 2 - 10), "You just got served.", font=font, fill="#DFC58D") + draw.text((30, height // 2 + 2), "Game over.", font=font, fill="#DFC58D") + oled.image(image) + oled.show() + +def draw_game(pull): + global image + image = Image.new("1", (width, height)) + draw = ImageDraw.Draw(image) + x = 15 + y = height // 2 - 5 + draw.text((x, y), "Triggers pulled: " + str(pull), font=font, fill="#DFC58D") + oled.image(image) + oled.show() + +def play_sound(file_path, time_to_sleep=1): + pygame.mixer.music.load(file_path) + pygame.mixer.music.play() + time.sleep(time_to_sleep) + +def runGame(): + pygame.mixer.init() + + myJoystick = qwiic_joystick.QwiicJoystick() + if myJoystick.connected == False: + print("The Qwiic Joystick device isn't connected to the system. Please check your connection", \ + file=sys.stderr) + sys.exit(1) + myJoystick.begin() + print("Joystick Initialized. 
Firmware Version: %s" % myJoystick.version)
+
+    draw_start()
+    chambers_input = input("Please enter the number of chambers (default = 6): ")
+    if chambers_input and chambers_input.isdigit():
+        chambers = int(chambers_input)
+        if chambers < 2 or chambers > 10:
+            print("You have entered an invalid number of chambers. Please enter a number between 2 and 10.")
+            sys.exit(1)
+        print("You have chosen to play with " + str(chambers) + " chambers.")
+    else:
+        chambers = 6
+        print("No valid input, the game has been set to play with 6 chambers by default.")
+
+    fatal_bullet = random.randint(0, chambers - 1)
+    current_bullet = 0
+    current_shots = 0
+    brightness = 250 # The maximum brightness of the pulsing LED. Can be between 0 and 255
+    cycle_time = 1000 # The total time for the pulse to take. Set to a bigger number for a slower pulse or a smaller number for a faster pulse
+    off_time = 200 # The total time to stay off between pulses. Set to 0 to be pulsing continuously.
+
+    trigger_pulled = False
+
+    print("Press the Button to start!")
+
+    myButton.LED_config(brightness, cycle_time, off_time)
+
+    while True:
+        if myButton.is_button_pressed():
+            myButton.LED_off()
+            play_sound("reload.mp3", 1.0)
+            time.sleep(0.1)
+            break
+
+    while True:
+
+        draw_game(current_shots)
+
+        if myJoystick.vertical < 10 and not trigger_pulled:
+            trigger_pulled = True  # latch until the stick returns to rest
+            current_bullet = (current_bullet + 1) % chambers
+            current_shots += 1
+            print("You have pulled the trigger " + str(current_shots) + " time(s) in the current round.")
+            if current_bullet == fatal_bullet:
+                print("You just got served!")
+                play_sound("gunshot.mp3", 1)
+                print("Game Over")
+                draw_end()
+                break
+            print("You will live to see another day")
+            draw_game(current_shots)
+            time.sleep(2)
+
+        elif myJoystick.vertical > 500 and trigger_pulled:
+            trigger_pulled = False  # stick released; allow the next pull
+
+        elif myJoystick.horizontal < 400:
+            print("You spun the chamber!")
+            play_sound("reload.mp3", 1.4)
+            current_shots = 0
+            fatal_bullet = random.randint(0, chambers - 1)
+
+    play_sound("GG.mp3", 214)  # let the full track play out
+
+if __name__ == '__main__':
+    try:
+        runGame()
+    except (KeyboardInterrupt, SystemExit) as exErr:
+        print("\nEnding Game")
+        myButton.LED_off()
+        sys.exit(0)
\ No newline at end of file
diff --git a/Lab 5/README.md b/Lab 5/README.md
index 999328e5ca..dca719d058 100644
--- a/Lab 5/README.md
+++ b/Lab 5/README.md
@@ -1,6 +1,7 @@
# Observant Systems
**NAMES OF COLLABORATORS HERE**
+John Li (jl4239), Shiying Wu (sw2298), Mingze Gao (mg2454), Crystal Chong (cc2795), Qianxin(Carl) Gan (qg72), Mingzhe Sun (ms3636)
For lab this week, we focus on creating interactive systems that can detect and respond to events or stimuli in the environment of the Pi, like the Boat Detector we mentioned in lecture.
@@ -71,6 +72,12 @@ The first 2 inferences will be slower. Now, you can try placing several objects
Read the `infer.py` script, and get familiar with the code. You can change the video resolution and frames per second (fps). You can also easily use the weights of other pre-trained models. You can see examples of other models [here](https://pytorch.org/tutorials/intermediate/realtime_rpi.html#model-choices).
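+Swapping models is usually a one- or two-line change. A sketch of the idea, assuming the torchvision build on your Pi ships these quantized weights (the 224x224 input size is the standard for these classifiers):
+
+```
+import torch
+import torchvision
+
+# quantized MobileNetV3 is a fast choice on a Pi; other torchvision models drop in the same way
+model = torchvision.models.quantization.mobilenet_v3_large(pretrained=True, quantize=True)
+model.eval()
+
+with torch.no_grad():
+    frame = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed camera frame
+    scores = model(frame)[0].softmax(dim=0)
+    print("Top class id:", int(scores.argmax()))
+```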
+**Testing out PyTorch for object detection**
+
+![](https://hackmd.io/_uploads/r17IELiMa.png)
+
### Machine Vision With Other Tools
The following sections describe tools ([MediaPipe](#mediapipe) and [Teachable Machines](#teachable-machines)).
@@ -97,6 +104,23 @@ Consider how you might use this position based approach to create an interaction
(You might also consider how this notion of percentage control with hand tracking might be used in some of the physical UI you may have experimented with in the last lab, for instance in controlling a servo or rotary encoder.)
+Hand Tracking Interactions:
+Leveraging the intuitive nature of hand movements offers an immersive experience in controlling media. Adjusting volume can be as simple as moving one's hand up or down, while a pinch might serve to mute. For navigation, a hand swipe left or right can replace traditional scrolling, and pushing forward can act as a 'press' on digital buttons.
+
+Face Pose Interactions:
+The face, laden with expressiveness, can be a tool for interaction. Simple nods or shakes of the head can translate to affirmative or negative responses in apps. Blinking might replace a mouse click, enhancing accessibility. Even opening the mouth could be harnessed, perhaps activating voice assistants or initiating recordings.
+
+Body Pose Interactions:
+The full body as a control instrument opens avenues for immersive applications. Dance games can detect and respond to entire body movements. Meanwhile, the system could provide real-time feedback on one's posture during work. Gestures can also transition into the realm of home automation, like raising arms to illuminate a room or crossing them to secure doors.
+
+Physical UI Integration:
+Integrating pose controls with tangible interfaces bridges the digital and physical. Hand elevation might dictate a servo's angle, and twisting one's hand can mirror the action of a rotary encoder. Furthermore, the intuitive pinch action for percentage control can be repurposed, adjusting ambient lighting in a room, for instance.
+
+**Testing out MediaPipe**
+
+Click to view video:
+
+[](https://youtu.be/1QZkdNUnvrc)
#### Teachable Machines
@@ -121,6 +145,28 @@ Next train your own model. Visit [TeachableMachines](https://teachablemachine.wi
Include screenshots of your use of Teachable Machines, and write how you might use this to create your own classifier. Include what different affordances this method brings, compared to the OpenCV or MediaPipe options.
+![](https://hackmd.io/_uploads/HylsRFpG6.png)
+
+![](https://hackmd.io/_uploads/BkV30K6fp.png)
+
+![](https://hackmd.io/_uploads/B1Oh0K6GT.png)
+
+![](https://hackmd.io/_uploads/Bkj2RK6z6.png)
+
+![](https://hackmd.io/_uploads/H10nCFaMa.png)
+
+![](https://hackmd.io/_uploads/HJzp0YTzT.png)
+
+![](https://hackmd.io/_uploads/Sy46CtTza.png)
+
+Simplicity and Accessibility: Teachable Machines provides a user-friendly interface, making the training process more accessible to beginners. There's no need for extensive code to train and test the model.
+
+Real-time Feedback: The platform allows for instantaneous testing and tweaking, which can expedite the model refinement process.
+
+Customization: Teachable Machines allows you to customize your model to specific needs. While OpenCV and MediaPipe come with pre-trained models and predefined capabilities, Teachable Machines lets you define and train classes based on your unique requirements.
+
+Limitations: However, while Teachable Machines simplifies the training process, it might not offer the depth of customization or optimization available in more advanced tools like OpenCV or MediaPipe. For more complex or nuanced applications, one might need the extensive functionalities and controls that OpenCV and MediaPipe offer.
+
#### (Optional) Legacy audio and computer vision observation approaches
In an earlier version of this class students experimented with observing through audio cues. Find the material here: [Audio_optional/audio.md](Audio_optional/audio.md).
@@ -136,42 +182,102 @@ In an earlier version of this class students experimented with foundational comp
* This can be as simple as the boat detector shown in a previous lecture from Nikolas Martelaro.
* Try out different interaction outputs and inputs.
-
**\*\*\*Describe and detail the interaction, as well as your experimentation here.\*\*\***
+We want to use the MediaPipe hand-pose model to perform gesture detection and control the laptop's mouse.
+
+To do this, we need to install the `mouse` package with `pip install mouse`.
+
+We then modified `hand_pose.py` to test out different fingertips for moving the mouse.
+
+To move the mouse, we used `numpy` to map coordinates in the webcam frame to screen coordinates, and moved the mouse based on the converted position.
+
+Click to view a video of an experimental interaction using the index fingertip to control the mouse:
+[](https://youtu.be/mkKc_FbBc1o)
+
+Comparing the different fingertips, we found that the index finger is the most intuitive one to ask a user to move the mouse with.
+
### Part C
### Test the interaction prototype
Now flight test your interactive prototype and **note down your observations**:
For example:
1. When does it do what it is supposed to do?
-1. When does it fail?
-1. When it fails, why does it fail?
-1. Based on the behavior you have seen, what other scenarios could cause problems?
+- The system successfully detects and responds to hand gestures for mouse control.
+- Moving the index finger in front of the camera effectively moves the mouse cursor on the laptop.
+
+2. When does it fail?
+- The system may fail when the hand is not properly detected or when there is confusion in the hand gestures.
+- Rapid or erratic movements can lead to inaccurate mouse control.
+- The system might struggle if there are multiple hands in the frame, as it is currently designed to track one hand.
+3. When it fails, why does it fail?
+- Failures are often related to the limitations of the hand pose detection model. If the model misinterprets the hand pose or fails to identify the fingertips accurately, it can lead to erratic mouse movements.
+4. Based on the behavior you have seen, what other scenarios could cause problems?
+- Changes in lighting conditions might affect the hand pose detection.
+- Occlusion of the hand or fingers, even partially, could lead to misinterpretation.
+- Background clutter or other objects resembling hand gestures might cause confusion.
**\*\*\*Think about someone using the system. Describe how you think this will work.\*\*\***
1. Are they aware of the uncertainties in the system?
-1. How bad would they be impacted by a miss classification?
-1. How could change your interactive system to address this?
-1. Are there optimizations you can try to do on your sense-making algorithm.
-
+
+- Users may not be fully aware of the system's limitations, especially regarding potential misinterpretations of hand gestures.
+
+2. How bad would they be impacted by a misclassification?
+- A misclassification could result in unintended mouse movements, potentially causing the user to click on the wrong elements or activate unintended functions.
+3. How could you change your interactive system to address this?
+- Provide visual or audio feedback when the system is uncertain or when it detects a potential misclassification.
+- Implement a calibration or initialization step to ensure the system understands the user's hand gestures accurately.
+4. Are there optimizations you can try to do on your sense-making algorithm?
+- Fine-tune the hand pose detection model to improve accuracy.
+- Implement more sophisticated filtering to reduce noise and erratic movements.
+- Include a brief tutorial or onboarding process to familiarize users with the system's capabilities and limitations.
+- Provide visual cues or instructions on how to perform gestures effectively.
### Part D
### Characterize your own Observant system
Now that you have experimented with one or more of these sense-making systems **characterize their behavior**.
During the lecture, we mentioned questions to help characterize a material:
* What can you use X for?
+  * Users can navigate menus, control playback, or play games using hand gestures. It is also possible for users to answer calls, take photos, or navigate screens using gestures. This could be very useful in consumer electronics, or for navigating an infotainment system while driving. In addition, it lets users control electronics when their hands are dirty (e.g., when cooking or eating with their hands), or when they do not have access to a keyboard or mouse.
* What is a good environment for X?
+  * An environment with adequate lighting
+  * A spacious area where the camera's field of view can capture the entire hand gesture
* What is a bad environment for X?
+  * A low-light environment
+  * An extremely bright environment
+  * An environment with a lot of disturbance (e.g., a picture of a hand on the wall, other hands waving around)
* When will X break?
+  * When the hand moves too fast for the camera to capture the motion, or when the lighting conditions keep the device from functioning properly
+  * When too many hands appear in the same frame
+  * When the hand is partially covered
+  * When the hand has more or fewer than five fingers
* When it breaks how will X break?
+  * It will fail to recognize the tracking points of the fingers.
+  * The tracking points will be placed on the incorrect finger(s).
* What are other properties/behaviors of X?
+  * It tracks a maximum of two hands, with the most recently recognized one treated as the dominant hand.
* How does X feel?
+  * Responsive
+  * Smooth
+  * Frame rate could be further improved
**\*\*\*Include a short video demonstrating the answers to these questions.\*\*\***
+Click to view video:
+
+[](https://youtu.be/2V9fs32dfkc)
+
### Part 2.
Following exploration and reflection from Part 1, finish building your interactive system, and demonstrate it in use with a video.
**\*\*\*Include a short video demonstrating the finished result.\*\*\***
+
+For part 2 of the lab, we implemented a pinball machine. The system consists of a 3D-printed pinball machine, gesture recognition running on the Raspberry Pi, and a pair of flippers driven by two servos. Users play the game with their hand gestures; a rough sketch of the gesture-to-flipper mapping is included below.
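+This is only a sketch of the mapping, not the code we shipped: it assumes the same hypothetical `HandTrackingModule` wrapper as `part1.py`, and the servo channels, angles, and trigger line are placeholder values to tune on the real machine.
+
+```
+import cv2
+import HandTrackingModule as htm          # hypothetical wrapper, as in part1.py
+from adafruit_servokit import ServoKit
+
+kit = ServoKit(channels=16)
+LEFT, RIGHT = 1, 2                        # placeholder servo channels for the two flippers
+REST_ANGLE, FLIP_ANGLE = 20, 80           # placeholder flipper angles
+FLIP_LINE = 200                           # placeholder: fingertip above this row fires the flippers
+
+cap = cv2.VideoCapture(0)
+detector = htm.handDetector(detectionCon=int(0.7))
+
+while True:
+    ok, img = cap.read()
+    if not ok:
+        break
+    img = detector.findHands(cv2.flip(img, 1))
+    lm = detector.findPosition(img, draw=False)
+    if lm:
+        # landmark 8 is the index fingertip; y grows downward in image coordinates
+        fired = lm[8][2] < FLIP_LINE
+        angle = FLIP_ANGLE if fired else REST_ANGLE
+        kit.servo[LEFT].angle = angle
+        kit.servo[RIGHT].angle = angle
+```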
+
+Click to view the video of the setup and gameplay:
+
+[](https://youtu.be/0dFlCIhwrw8?si=X3tShrgpo4UMblCZ)
diff --git a/Lab 5/part1.py b/Lab 5/part1.py
new file mode 100644
index 0000000000..ee2e14285e
--- /dev/null
+++ b/Lab 5/part1.py
@@ -0,0 +1,60 @@
+import cv2
+import time
+import numpy as np
+import HandTrackingModule as htm
+import mouse
+
+################################
+wCam, hCam = 640, 480         # webcam capture resolution
+wScreen, hScreen = 1276, 791  # screen resolution the cursor maps onto
+################################
+
+cap = cv2.VideoCapture(0)
+cap.set(3, wCam)
+cap.set(4, hCam)
+pTime = 0
+
+detector = htm.handDetector(detectionCon=int(0.7))
+
+while True:
+    success, img = cap.read()
+    img = cv2.flip(img, 1)
+    img = detector.findHands(img)
+    lmList = detector.findPosition(img, draw=False)
+    if len(lmList) != 0:
+
+        thumbX, thumbY = lmList[4][1], lmList[4][2]      # thumb tip
+        pointerX, pointerY = lmList[8][1], lmList[8][2]  # index fingertip
+
+        middleX, middleY = lmList[12][1], lmList[12][2]
+        ringX, ringY = lmList[16][1], lmList[16][2]
+        pinkyX, pinkyY = lmList[20][1], lmList[20][2]
+
+        cx, cy = (thumbX + pointerX) // 2, (thumbY + pointerY) // 2
+
+        cv2.circle(img, (thumbX, thumbY), 15, (255, 0, 255), cv2.FILLED)
+        cv2.circle(img, (pointerX, pointerY), 15, (0, 255, 255), cv2.FILLED)
+        cv2.circle(img, (middleX, middleY), 15, (255, 0, 255), cv2.FILLED)
+        cv2.circle(img, (ringX, ringY), 15, (255, 0, 255), cv2.FILLED)
+        cv2.circle(img, (pinkyX, pinkyY), 15, (255, 0, 255), cv2.FILLED)
+        cv2.line(img, (thumbX, thumbY), (pointerX, pointerY), (255, 0, 255), 3)
+        cv2.circle(img, (cx, cy), 15, (255, 0, 255), cv2.FILLED)
+
+        # map the fingertip from camera coordinates to screen coordinates
+        conv_x = int(np.interp(pointerX, (0, wCam), (0, wScreen)))
+        conv_y = int(np.interp(pointerY, (0, hCam), (0, hScreen)))
+
+        mouse.move(conv_x, conv_y)
+
+    cTime = time.time()
+    fps = 1 / (cTime - pTime)
+    pTime = cTime
+
+    cv2.imshow("Img", img)
+    if cv2.waitKey(1) & 0xFF == ord('q'):
+        break
+
+cap.release()
+cv2.destroyAllWindows()
\ No newline at end of file
diff --git a/README.md b/README.md
index 3f69b682d2..4c0250c5af 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,8 @@
-# [Your name here]'s-Lab-Hub
+# Crystal Chong's Lab-Hub
for [Interactive Device Design](https://github.com/FAR-Lab/Developing-and-Designing-Interactive-Devices/)
Please place links to the README.md's for each of your labs here: