
Ray casting in Unity without prebaking #375

Open

LVamos opened this issue Sep 19, 2024 · 18 comments

@LVamos

LVamos commented Sep 19, 2024

Hi,
I'm blind, and due to significant accessibility issues in the Unity editor I have to create all game objects programmatically at runtime. For the same reason I'm not able to perform pre-baking. Is it possible to make all objects dynamic and let Steam Audio do all the calculations at runtime? I'm desperate; I've asked this question in several discussions but no one was able to help me with it.

@lakulish
Collaborator

@LVamos Could you provide a little more information about which features of Steam Audio you are currently using? The feasibility of doing everything at runtime depends on this.

In general, you would have to use a C# script to call the same function that the Steam Audio Unity plugin calls when you try to export a scene in the editor. This is SteamAudioManager.ExportScene. You will have to modify this function so that instead of writing the scene data to an asset file, it directly creates a Steam Audio Static Mesh and adds it to the current scene. After this, you should be able to use real-time simulation options to avoid having to bake anything.
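A minimal sketch of that approach, assuming the plugin's `ExportScene(Scene, bool)` signature (the exact signature may differ between plugin versions, so verify it against SteamAudioManager.cs in your copy of the plugin):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;
using SteamAudio;

// Minimal sketch: call the export entry point from a script instead of
// the "Steam Audio > Export Active Scene" menu item.
public class RuntimeSceneExport : MonoBehaviour
{
    void Start()
    {
        // Second argument selects OBJ export; false exports scene data.
        // As described above, you would modify ExportScene itself so it
        // builds a Steam Audio Static Mesh in memory instead of writing
        // the data to an asset file.
        SteamAudioManager.ExportScene(SceneManager.GetActiveScene(), false);
    }
}
```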

One potential complication is that if your project is using static batching for optimizing mesh rendering, the geometry data may not be accessible at run-time, so you may have to use run-time static batching to work around this: https://docs.unity3d.com/Manual/static-batching.html.
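If that applies to your project, a sketch of the runtime alternative using Unity's documented StaticBatchingUtility.Combine could look like this (the geometryRoot reference is a placeholder):

```csharp
using UnityEngine;

// Sketch: leave editor-time static batching off for audio geometry and
// batch at runtime instead, after Steam Audio has read the mesh data.
public class RuntimeBatcher : MonoBehaviour
{
    public GameObject geometryRoot; // placeholder: parent of the level geometry

    void Start()
    {
        // Combines all eligible child meshes under geometryRoot for
        // rendering. Run this only after the Steam Audio scene export,
        // so the original, unbatched mesh data is still readable then.
        StaticBatchingUtility.Combine(geometryRoot);
    }
}
```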

If all you need is occlusion (and not reflections, reverb, etc.), then you can also consider using Unity's built-in ray tracer, which does not require anything to be exported in the editor. See https://valvesoftware.github.io/steam-audio/doc/unity/settings.html for more.

Hope this helps!

@LVamos
Author

LVamos commented Oct 2, 2024

@lakulish
Thanks a lot for your reply. I want to take full advantage of Steam Audio’s capabilities, including reflections, occlusion, reverb, pathing, and transmission. I'd like all computations to be performed at runtime. In the current state of my project, it seems that reflections and occlusion are working, but I’m having trouble with the reverb. I can hear something that could either be reflections or a weak reverb, but if it’s the reverb, it doesn’t seem appropriate for the size of the room I created. The dimensions are 25 x 25 x 4, but the reverb sounds more like a small room. I could swear I followed all the instructions from the official documentation, but it’s still not working as expected. I'm concerned that I may have created the geometry in a non-standard way or made a mistake when setting up the audio mixer. Could you please help me with this? We can discuss it via email or chat if that's more convenient for you. Here’s what I’ve tried so far:

All steps, except setting up the mixer, were done programmatically in an editor script. The script runs only once; after the first execution I added a return statement at its start so it doesn't run again.

  1. I created a mixer with a dedicated group and attached the SteamAudioReverb to it. ApplyHRTF is set to true, and ReverbType to RealTime.
  2. I added an AudioListener and SteamAudioListener to the main camera and set SteamAudioListener.ApplyReverb to true.
  3. I created a simple room (four walls, a floor, and a ceiling) and moved the camera to the center. Each object was created using GameObject.CreatePrimitive(PrimitiveType.Cube), and each has a BoxCollider and SteamAudioGeometry attached (see the sketch after this list).
  4. I added an object with a SteamAudioStaticMesh component to the scene.
  5. I added an object with the SteamAudioManager component to the scene.
  6. I executed the "Export Active Scene" command from the Steam Audio menu. The asset was successfully created, and at runtime, the SteamAudioStaticMesh.Asset.Data array was populated, and SteamAudioStaticMesh.Asset.ExportedScene was set to the name of the scene.
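For context, a minimal sketch of how step 3 can be done for a single wall (positions and scales are placeholders; CreatePrimitive already attaches the BoxCollider):

```csharp
using UnityEngine;
using SteamAudio;

// Sketch of step 3: build one wall programmatically. CreatePrimitive
// already attaches a BoxCollider to a Cube, so only SteamAudioGeometry
// needs to be added explicitly.
public static class RoomBuilder
{
    public static GameObject CreateWall(Vector3 position, Vector3 scale)
    {
        var wall = GameObject.CreatePrimitive(PrimitiveType.Cube);
        wall.transform.position = position;
        wall.transform.localScale = scale;
        wall.AddComponent<SteamAudioGeometry>();
        return wall;
    }
}
```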

Then, at runtime:

  1. I created an empty object without a MeshFilter, MeshRenderer, or Collider and attached an AudioSource and SteamAudioSource to it.
  2. I played a sound this way (see the sketch after this list):
    https://pastebin.com/1dKVHP73
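For readers who can't open the pastebin, a minimal version of that playback step might look like this (the clip reference is a placeholder):

```csharp
using UnityEngine;
using SteamAudio;

// Sketch of the runtime playback step: an empty GameObject carrying only
// an AudioSource and a SteamAudioSource, with no mesh or collider.
public class SpatializedPlayback : MonoBehaviour
{
    public AudioClip clip; // placeholder: assign or load the clip yourself

    void Start()
    {
        var emitter = new GameObject("Emitter");
        emitter.transform.position = transform.position;

        var audioSource = emitter.AddComponent<AudioSource>();
        audioSource.spatialize = true;   // route through the spatializer plugin
        audioSource.spatialBlend = 1.0f; // fully 3D
        audioSource.clip = clip;

        emitter.AddComponent<SteamAudioSource>();
        audioSource.Play();
    }
}
```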

I tried disabling reverb on the SteamAudioListener and setting AudioSource.OutputMixerGroup to null, but the resulting sound remains the same.
Here’s my mixer: https://pastebin.com/48E7Xske
Here’s how I set up the scene: https://pastebin.com/QYdWVzu0
At runtime, I also added a large number of game objects without MeshFilters, MeshRenderers, BoxColliders, or SteamAudioGeometry. These objects shouldn’t affect the spatial sound effects.

@bytecauldron

Hi! We're sort of going through a similar issue where a lot of stuff has to be performed at runtime. I'm not sure how much I'll be able to help, but I wanted to provide a sanity check.

> I can hear something that could either be reflections or a weak reverb, but if it’s the reverb, it doesn’t seem appropriate for the size of the room I created.

It's not just you, nor do I believe it's your geometry. From our experience, the Unity implementation for real-time reverb appears to be very weak or non-existent. What you're likely hearing is just reflections. We ran into the exact same issue with our FMOD/Unity 2021 implementation. As soon as we switched to baked (and added a baked listener and probe batches to the scene *before* runtime), reverb started working great. Obviously, that's a problem when you're trying to build something that handles everything at runtime. I'm finding it difficult to do what you're describing (at least as the current implementation intends; the docs don't have much info on how to do this programmatically).

> I added an object with the SteamAudioManager component to the scene.

SteamAudioManager is automatically created, so you might have two managers in your scene if you're doing this.

What would be helpful is to clarify whether real-time reverb is supposed to sound exactly the same as baked reverb. The difference for us was night and day. I can try to put together a small test project to demonstrate/double-check, but I might need some time.

@LVamos
Author

LVamos commented Oct 7, 2024

@bytecauldron
Thanks for the response. Well, that's bad news. I've tried tweaking it every conceivable way and still nothing. I don't know if I can use baked listener batches, because in my adventure game the listener is anything but static: the player can move freely around the game world. That would probably require a lot of batches. But even so, could you please write out the exact list of steps for creating the baked reverb? Do I have to create SteamAudioProbeBatches as well?

@LVamos
Author

LVamos commented Oct 7, 2024

@bytecauldron I tried adding a SteamAudioProbeBatch to the scene, generating probes, and baking, but the Unity editor crashed right after game startup.

@bytecauldron

Yeah, I figure you're trying to get things to bake via a script. I just don't think that will work out of the box. I couldn't get it to work either, but I'm pretty new to Steam Audio myself. The only way I was able to get baked reverb to work was by using the Generate Probes and Bake buttons available on the Probe Batch component itself, not programmatically. So I'm not sure how useful it would be to list the steps for you if you're trying to do this exclusively through scripting.
That said, I do believe the root of your issue stems from real-time reverb not behaving as expected. We should clarify whether real-time reverb is working as intended or whether there's something wrong with it. I only brought up baking because I noticed how different it sounds compared to real-time. It would be nice to hear what real-time reverb sounds like in another engine (UE) to narrow down whether it's a Unity problem.

@LVamos
Author

LVamos commented Oct 8, 2024

@bytecauldron
You're right that I'm trying to do everything via scripts, but those are editor scripts executed in edit mode. And I call exactly the same methods that are called from the Steam Audio editor scripts (e.g. SteamAudioProbeBatch.GenerateProbes). That's why I assumed it would work the same way, but I'm such a naive man. :-D

@bytecauldron

So, from my limited testing (grain of salt, I'm using FMOD), I'm finding that regardless of whether you use real-time or baked, reverb in isolation is just extremely subtle. The reason it's more pronounced when baked is pathing, which is only available with probes. Without reflections and pathing, it can be much more subtle depending on the sound and surrounding geometry. I have to really crank the reverb mixer to hear any noticeable changes.
So, I don't think there is a difference in quality between real-time and baked reverb (aside from performance/accuracy), but baked has access to pathing, which allows reverb to become much more audible.
I figure you're using Unity's native audio mixer? I haven't played with this, so I'm just spitballing here. Is it possible for you to add Unity's native reverb effect first, before channeling it to the Steam Audio Reverb effect in the mixer group? I don't know if Steam Audio Reverb is intended to replace the native effect entirely or to work with it, but nothing in the docs says you "can't" do that. Could you try that with real-time?

@LVamos
Author

LVamos commented Oct 8, 2024

@bytecauldron
You know, the reason I decided on Steam Audio was that it promised realistic spatial effects, including reverb. I'm afraid that combining Steam Audio reflections with the simple reverb from Unity would end up sounding weird and unrealistic. The Unity reverb also simulates some reflections of its own, so the two technologies would probably work against each other.
Could you make a stereo recording of the baked reverb with pathing for me? I'd like to hear it and consider whether it's worth it. By the way, do you have any experience with other similar plugins for advanced spatial effects with ray tracing? What about Meta XR Audio?

@bytecauldron

bytecauldron commented Oct 9, 2024

OK, a couple of things. You're correct in saying that using both native Unity reverb alongside Steam Reverb in the mixer leads to unrealistic sounds. More precisely, it just sounds muddier to me, at least when I did it with FMOD. However, I would say there's nothing wrong with using the native Unity option combined with the Steam Spatializer (without Steam Reverb) in cases where you really want to ham up the reverb but don't have the surrounding geometry/reflections/materials to justify it. If you have dynamic sounds where the surrounding geometry changes, then I would just use Steam Reverb.

No idea how helpful this will be to you since I set this all up via scene view and dragging components in, but maybe it will give you a baseline of what I think it's supposed to sound like.

  1. Here is a test with a static emitter (placed in the scene view) in a 7x8x11 meter room. The first video is just a dry sound with the spatializer plugin. Reflections have not been enabled, but occlusion is.
dry_no_reverb.mp4
  2. The second video is realtime reverb with reflections enabled (no pathing). In our other project, our footsteps were very dry when we tried to programmatically create a Steam Audio Source/FMOD emitter for realtime reverb, so this is different from what I originally experienced. Interesting to note. When I rotate my camera and get closer to the wall, you can hear the delay in reflections trying to catch up to the listener.
realtime_reverb_reflections.mp4
  3. The last test is what you asked for. This is baked everything: baked probes, baked source, baked reverb, with reflections/pathing enabled. Overall, the difference is still subtle compared to realtime, at least in the context of reverb. Updating reflections for the listener is a much smoother transition, and the sudden shift in volume between occluded/non-occluded sources is smoother as well. Increasing pathing in the mix seems to help with that a lot.
baked_reverb_reflections_pathing.mp4

Pathing really doesn't make "that" big a difference when comparing baked reverb to realtime. It seems to help a lot with occluded sounds reaching the listener more accurately, and with reflections updating when the listener changes orientation quickly, but it isn't going to have a dramatic effect on reverb.

Sorry I can't be more helpful; I'm not familiar with other audio plugins. I'm sure someone smarter can chime in, but I would say you're better off trying to get realtime to work first and foremost with scripting, because the extra steps necessary to get other forms of baking (probe batches and static source baking) to work via scripting at runtime are a tall order.

@lakulish
Collaborator

@LVamos @bytecauldron There seems to be some confusion here regarding how various Steam Audio features work together, so I'll try to clarify. First, a couple of potential issues I noticed in the files you provided:

  • As @bytecauldron mentioned, you don't need to explicitly create a Steam Audio Manager. It is automatically created when the game starts or when entering play mode.
  • When setting up the scene, it looks like you have steamListener.applyReverb = false;. This is fine assuming you eventually set that to true, otherwise no reverb calculations will be performed.
  • The mixer might be set up incorrectly. You have the SFX mixer nested under the Reverb mixer, so all output from SFX will be sent to Reverb regardless of the Send/Receive you've configured. A more typical setup would be to have SFX and Reverb nested at the same level under Master.
  • The absorption coefficients of the materials are a little high, and that might be impacting the amount of reverb you hear. You could try reducing them to something very low, like 0.1 or 0.05.

Also, there seems to be some confusion about what reflections, reverb, and pathing do. Briefly:

  • Reflections means using ray tracing to calculate how sound bounces from the source to the listener, after one or more reflections off of geometry. This is the most compute-intensive option for simulating sound propagation, because ray tracing needs to be done for each source.
  • Reverb means using ray tracing to calculate how sound bounces from the listener back to itself, after one or more reflections off of geometry. This is much less compute-intensive than reflections, because it only has to be done once for the listener, rather than for every source.
  • Pathing does not use ray tracing at all. It involves finding the shortest path from the source to the listener that passes through a series of probes, without being occluded by geometry. This is a much cheaper approximation to sound propagation, which is useful when you mainly care about ensuring that the sound appears to come from the correct direction, through doors, etc. While it has to be done once per source, it is much cheaper than reflections.

So you usually don't want to combine reflections with either reverb or pathing. You can combine reverb and pathing, of course.

As for why baked reverb might sound different from real-time reverb: the ray tracing involves calculating repeated reflections of rays off of geometry. Each extra bounce we model increases the CPU usage. So in real-time reverb, we model far fewer bounces (by default), whereas in baked reverb, we model many more bounces. Again, these are just the default settings, so you can change the number of bounces used by real-time reverb (by setting "Real-Time Bounces" to be equal to "Baking Bounces") and see if that helps.
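If you want to try that from a script rather than the inspector, something like the following sketch may work. The field and accessor names here are assumptions inferred from the inspector labels, so verify them against SteamAudioSettings.cs in your plugin version:

```csharp
using SteamAudio;

// Assumption: SteamAudioSettings exposes realTimeBounces/bakingBounces
// fields matching the "Real-Time Bounces"/"Baking Bounces" inspector
// labels, plus a Singleton accessor. Verify these names before using.
public static class BounceTuning
{
    public static void MatchRealTimeToBaked()
    {
        var settings = SteamAudioSettings.Singleton; // assumed accessor
        settings.realTimeBounces = settings.bakingBounces;
    }
}
```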

@bytecauldron

@lakulish This is extremely helpful and clears up a lot, thank you so much. Definitely check that your applyReverb is true, @LVamos. I missed that.

> So you usually don't want to combine reflections with either reverb or pathing. You can combine reverb and pathing, of course.

I have no idea why I thought you could mix and match these, but it makes sense. Two separate strategies for sound propagation.
Sort of off-topic but do you know why the Unity plugin allows you to check both reflections and pathing if it's generally not something you should combine? Is there a circumstance where you would want both of those on for one source?

@LVamos
Author

LVamos commented Oct 10, 2024

@lakulish @bytecauldron
Thanks for the explanation and the great tips. I’ll definitely try increasing the number of bounces and decreasing the absorption factor. Sorry, I set ApplyReverb to false just for testing, and then I forgot to turn it back on. Could I ask one of you to fix that mixer for me? Rearranging groups is one of those things I just can’t manage blind yet. Otherwise, my main issue with the reverb is that it’s not creating that tail effect that gradually fades out. I mean the kind of effect you’d hear in a cathedral or a long hallway. I tried creating a 150-meter-long narrow corridor, but it still sounded more like a faint echo and didn’t have the tail.
I’m still not entirely clear on the difference between reverb and reflections. I’m no physicist, so I’ll just describe what I’m trying to achieve:

  1. I want to simulate what I know as early reflections. That means the player can tell where they are in relation to a wall or another obstacle, based on how the sounds of their footsteps bounce, and from which side the obstacle is. In fact, it should imitate echolocation - a skill typical for the blind.
  2. I need a realistic reverb, so the player can understand how big and how filled the room is. I need to combine both approaches.
  3. Occlusion, so the player can tell if, for example, there’s a piece of furniture like a wardrobe standing between them and another character.
  4. Lastly, I want to use pathing, so that music from a nearby room sounds like it’s coming through open doors.

I’m really short on time, and my game development is stalled because of these issues, so I’ve been experimenting with Meta XR Audio and Resonance Audio at the same time. But I still think Steam Audio is the best tool. Hopefully, I’ll get the hang of it soon.

@lakulish
Collaborator

@LVamos I'm attaching an updated mixer file that should match the setup I described. Let me know if you run into any issues.

TestMixer.mixer.txt

Reverb and reflections are similar in that the fundamental computations needed for both are the same: ray tracing. They are different in that reflections are source-to-listener, and reverb is listener-to-listener.

So in your example, assuming footsteps can be considered to be emitted from the listener's position, you need reverb + occlusion + pathing. The occlusion and pathing will model how sounds get from distant sources to the immediate vicinity of the listener, and the reverb will model how sounds bounce off of geometry around the listener.
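A sketch of that per-source configuration; the occlusion/pathing/reflections field names on SteamAudioSource are assumptions inferred from the inspector checkboxes (verify against SteamAudioSource.cs), while applyReverb on the listener is the same field discussed earlier in this thread:

```csharp
using UnityEngine;
using SteamAudio;

// Sketch: configure a distant source for occlusion + pathing (no
// reflections), with reverb enabled once on the listener.
public class DistantSourceSetup : MonoBehaviour
{
    void Start()
    {
        var source = GetComponent<SteamAudioSource>();
        source.occlusion = true;    // direct path can be blocked by geometry
        source.pathing = true;      // shortest-path propagation via probes
        source.reflections = false; // skip per-source ray tracing

        var listener = FindObjectOfType<SteamAudioListener>();
        listener.applyReverb = true; // listener-centric reverb
    }
}
```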

Note that the tail length of a reverb is a function of how many times sound bounces around within a space, so you will need to increase the number of bounces in order to get a more realistic reverb for spaces like a cathedral.

Hope this helps!

@LVamos
Author

LVamos commented Oct 11, 2024

@lakulish Thanks a lot! Now it sounds better. We'll see how it behaves once I create the geometry for each game object. I still have a few questions.

  1. I tried turning off ApplyReverb and enabling Reflections on the SteamAudioSource because I assume I’ll need it for things like playing sounds from the position of other NPCs. However, then it started acting weird again. The effect didn’t kick in right at the start of the sound playback, but about half a second later. And once the original dry sound stopped playing, the wet sound cut off as well. I played the sound from a position just below the listener, near the floor, and it was related to walking, so the listener might have been moving during playback. What could be causing this? Here’s an example: https://filetransfer.io/data-package/aqiuXa3u#link
  2. Can I easily load predefined materials at runtime?
  3. I want to learn how to use pathing as well. Where should I place probe batches, for example, in every room in the game? I assume I can perform baking by calling the GenerateProbes and BeginBake methods on each probe batch from an editor script before running the game. Is that correct? Do I then have to set each SteamAudioSource to follow a specific probe batch?

@lakulish
Collaborator

@LVamos To answer your questions:

  1. Are you using the same Steam Audio Source for multiple NPCs? You will typically want to use a separate Steam Audio Source for each physically distinct source of sound, because Steam Audio needs to track the positions and orientations over time, and smoothly vary the reflections.
  2. I'm not sure I fully understand the question, but you should be able to create any material as an asset (look at the built-in set of material assets for examples), load it via standard Unity asset loading APIs, and use the material properties to create a scene.
  3. You will typically want to place probe batches over the entire scene or over large portions of the scene. Basically, Steam Audio will calculate paths between any pair of probes in the same probe batch, so your probe batch should be defined over a volume that contains all the relevant doors, windows, etc. through which you want sound to travel. You should then be able to use GenerateProbes, BeginBake, etc. before running the game. And then you will have to point the Steam Audio Source to the appropriate probe batch within which you want to find paths (a sketch of this and the material-loading step follows this list).
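A combined sketch of answers 2 and 3. The Resources path and asset name are hypothetical, and the SteamAudioMaterial type name is assumed from the plugin's built-in material assets; GenerateProbes/BeginBake are the method names discussed in this thread and should run from an editor script before the game starts:

```csharp
using UnityEngine;
using SteamAudio;

// Sketch for answers 2 and 3 above.
public static class PathingSetup
{
    // 2. Load a predefined Steam Audio material at runtime. The asset
    //    must live under a Resources/ folder for Resources.Load to find it.
    public static SteamAudioMaterial LoadMaterial()
    {
        return Resources.Load<SteamAudioMaterial>("SteamAudioMaterials/Carpet");
    }

    // 3. Editor-script baking: generate probes, then bake, for each batch.
    public static void BakeAllProbeBatches()
    {
        foreach (var batch in Object.FindObjectsOfType<SteamAudioProbeBatch>())
        {
            batch.GenerateProbes();
            batch.BeginBake();
        }
    }
}
```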

Hope this helps!

@LVamos
Author

LVamos commented Oct 15, 2024

@lakulish Does that mean objects with a SteamAudioSource should always move gradually, without big leaps? I use object pooling for all sounds in the game. Whenever I want to play a sound, I take a GameObject from the pool, set the appropriate Steam Audio parameters, move the containing object to the target position, and start playback. So it is likely that a particular SteamAudioSource will be moved over long distances.

@lakulish
Collaborator

@LVamos Strictly speaking, it's fine for an object with a Steam Audio Source component to move with big leaps, as long as it's the same game object. So it's fine for a teleporting NPC, but you shouldn't use the same Steam Audio Source for different NPCs that are active at the same time. Note that there will be some smoothing of audio parameters to prevent glitches and artifacts due to the sudden change.
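To illustrate that constraint, here is a pure design sketch (not plugin API) of a pool that never shares one emitter between two simultaneously active sources; the sequential-reuse limitation described below still applies:

```csharp
using System.Collections.Generic;
using UnityEngine;
using SteamAudio;

// Design sketch: dedicate one pooled emitter per active sound, so a
// single SteamAudioSource is never shared by two simultaneously active
// NPCs. Reusing an emitter for a different NPC later is still subject
// to the Unity limitation described below.
public class EmitterPool : MonoBehaviour
{
    readonly Stack<GameObject> free = new Stack<GameObject>();

    GameObject Create()
    {
        var go = new GameObject("PooledEmitter");
        go.AddComponent<AudioSource>();
        go.AddComponent<SteamAudioSource>();
        go.SetActive(false);
        return go;
    }

    public GameObject CheckOut(Vector3 position)
    {
        var go = free.Count > 0 ? free.Pop() : Create();
        go.transform.position = position; // big leaps on one object are fine
        go.SetActive(true);
        return go;
    }

    public void Return(GameObject go)
    {
        go.SetActive(false);
        free.Push(go);
    }
}
```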

Having said that, the Steam Audio API also provides functions to allow the same audio effects to be reused for different objects, which is needed when using pooling. However, there is a bug/limitation with Unity where it does not correctly call these API functions. I reported this issue to Unity, but was told that this was by design. So if you're using Unity's built-in audio engine, it will be difficult to get completely artifact-free pooling to work.
