+ We are creating a Unity VR application that allows
+ users to explore real-life building scans in true scale using a
+ Meta Quest headset. The application will focus on photogrammetry
+ scans sourced primarily from OpenHeritage3D, providing detailed
+ and accurate models of culturally significant buildings.
-
- What if we could build a system that…
-
- …provides a richer text summary of a virtual environment, complete
- with descriptions of how objects compose each other, are placed
- within/next to/on top of each other?
-
- …also describes how you, the user, is interacting with that
- environment at any moment? Could we assign additional text to
- describe that you are pointing at a specific object, or reaching
- out for one?
-
- …runs in real time, that is, can constantly update every frame to
- provide an updated description. That way, we wouldn't have to
- wait for text generation, and we could create a live captioning
- system? …runs entirely on-device, meaning this information is
- never sent to the cloud?
+
+
+ The project will involve importing and optimising these 3D scans,
+ integrating VR interaction systems, and ensuring smooth
+ performance on Meta Quest devices. The end goal is to deliver an
+ immersive and realistic virtual experience where users can walk
+ through and interact with real-world buildings as if they were
+ physically present.
-
- If we created this, we could use it for…
-
- …in-application virtual assistants that make use of a rich text
- summary for high-accuracy responses
-
- …virtual science labs where users could receive detailed
- auto-generated scientific explanations about tools and objects
- they interact with
-
- …dynamic VR scene descriptions for the visually impaired,
- describing layout and objects, or even what they're holding,
- pointing at or nearby to
-
- …and so much more
-
- Universal Text aims to explore this. We are creating a structured
- software package for Unity that allows for real time captioning of
- a VR user's interactions with their virtual environment. In
- other words, tools provided by our package aim to describe in
- natural language "what's happening" in a VR application at
- any moment in time, as if recounted by a third party observer.
- This textual description will be rich in detail and generated
- on-the-fly, providing seamless integration of tutorials, live
- captioning for accessibility, or virtual assistants into VR
- applications.
+
+ From the user's perspective, they would download the Anima
+ application from the Meta Quest storefront or through other
+ means, select one of the preloaded scans or import one into the
+ application, and then enter a rendering of the building, viewing
+ it from multiple angles through the VR headset. This includes a
+ 360-degree view of the rendered building, with the application
+ responding to the orientation of the headset.
,
]}
- link="https://github.com/uwrealitylabs/universal-gestures-unity"
+ // link="https://github.com/uwrealitylabs/universal-gestures-unity"
/>