Jesse Humphry

Diving into UE5 MetaSounds

Updated: Sep 8, 2023

Today, we're going to jump into one of the new features of Unreal Engine 5: MetaSounds. MetaSounds are the UE5 "replacement" for typical sound cues. It's worth noting that my knowledge of this side of Unreal is limited to what I've attempted to do with a particular project. You'll see a video later on that demonstrates one of the capabilities I've planned for a WIP title called ISOSCAPE.


So let's get a quick outline of the desired behavior, see it in action, and then discover how we could implement this in MetaSounds. At the end, we'll go over a couple things, including what I'd personally like to see Epic do with MetaSounds, some interesting possibilities from the system, and a few features I noticed but didn't use.


TASK OUTLINE

What we'd like to do is use gently changing music to signal level progression in our game ISOSCAPE. As the player solves puzzles, their progress can be heard in the changing tone and depth of the music.


Musically, I've implemented this as four music files with varying chord progressions, designed to loop and exported at the same track length. All four files are intended to (and usually will) seamlessly loop.


We also want a low-pass filter that increases its range as the music progresses, but we can't (or rather shouldn't) bake that into the music directly, for a couple of reasons we'll get into once we see how this fits into responsive game design.


In addition, we need to change from one .wav file to another at the right time, gradually.


So let's see how we achieve all of this.


BLUEPRINTS

Let's start with the Blueprints. There's a lot less here, so we'll see where we're going to end up, then come back to it later to see how it all links together.


Here's the BP_GameMode.



MainMusic is a variable that represents a component of type AudioComponent attached to the GameMode, and its sound is set to our MetaSound, MS_I_MainLevelMusic. I like prefixes. So we play it on BeginPlay; then, when we receive a call to the event MoveToNextMusicSection, we make sure we have a valid sound and message the component via the interface AudioParameterControllerInterface. The MetaSound will receive that interface message and attempt to execute a trigger named TransitionToNext.
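
To make the flow concrete, here's a rough C++ equivalent of that Blueprint logic. The AMusicGameMode class and its MainMusic member are my own naming for illustration (the project does this in Blueprint); SetTriggerParameter is, to my understanding, the C++ side of that interface message.

```cpp
// Hypothetical C++ version of BP_GameMode's music logic.
#include "GameFramework/GameModeBase.h"
#include "Components/AudioComponent.h"

void AMusicGameMode::BeginPlay()
{
    Super::BeginPlay();
    if (MainMusic) // AudioComponent whose Sound is set to MS_I_MainLevelMusic
    {
        MainMusic->Play();
    }
}

void AMusicGameMode::MoveToNextMusicSection()
{
    if (MainMusic && MainMusic->Sound)
    {
        // UAudioComponent implements the audio parameter controller interface,
        // so this sends the message that fires the MetaSound trigger input
        // named "TransitionToNext".
        MainMusic->SetTriggerParameter(TEXT("TransitionToNext"));
    }
}
```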


Next is an object of type BP_MusicTransitionActor, which inherits from Actor.


TriggeringActor is an instance-editable reference to one of our custom classes, SEInteractableActor. But you could set up this kind of code with any actor, as long as you have a dispatcher to fire and bind to.


So we bind to the dispatcher. When the dispatcher fires, we get the game mode and call that event from above, MoveToNextMusicSection. The rest is all for cleanup.
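
The same actor in rough C++ terms; OnTriggered stands in for whatever dispatcher the real SEInteractableActor exposes, and all names here are hypothetical:

```cpp
// Hypothetical C++ version of BP_MusicTransitionActor.
void AMusicTransitionActor::BeginPlay()
{
    Super::BeginPlay();
    if (TriggeringActor)
    {
        // Bind to the actor's dispatcher (a dynamic multicast delegate in C++ terms).
        TriggeringActor->OnTriggered.AddDynamic(
            this, &AMusicTransitionActor::HandleTriggered);
    }
}

void AMusicTransitionActor::HandleTriggered()
{
    if (AMusicGameMode* GameMode = GetWorld()->GetAuthGameMode<AMusicGameMode>())
    {
        GameMode->MoveToNextMusicSection();
    }

    // The "cleanup" part: unbind and remove this actor once it has fired.
    TriggeringActor->OnTriggered.RemoveDynamic(
        this, &AMusicTransitionActor::HandleTriggered);
    Destroy();
}
```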


With that done, let's dive into the MetaSound.


METASOUND

Before we get started, it's worth noting that the logic inside of MetaSounds is not the same kind of logic you'd find in Blueprints. In BP, you could set variable x to the value of x * .25 by getting the variable, multiplying, and then setting the same variable.


You can't do that in MetaSounds.

You can also only have a single Set node for a given variable in the entire graph.

This "connection causes loop" error will pop up a lot as you use MetaSounds if you try to recreate behavior you could normally achieve in Blueprint. You'll find that logic that doesn't actually create an infinite loop will still pop this error. That's because MetaSounds is looking for circular node references, not necessarily testing for an infinite-loop situation.
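
In other words, the check is structural. Conceptually it's an ordinary cycle search over node references, sketched below for illustration (this is not MetaSounds' actual validation code):

```cpp
#include <vector>

enum class EVisitState { Unvisited, InStack, Done };

// Depth-first search for a back-edge: returns true if any cycle is reachable
// from Node. This is what "circular node reference" detection boils down to,
// whether or not the cycle would ever actually run forever at execution time.
bool HasCycleFrom(int Node, const std::vector<std::vector<int>>& Edges,
                  std::vector<EVisitState>& State)
{
    State[Node] = EVisitState::InStack;
    for (int Next : Edges[Node])
    {
        if (State[Next] == EVisitState::InStack)
        {
            return true; // the wire closes a circular reference
        }
        if (State[Next] == EVisitState::Unvisited &&
            HasCycleFrom(Next, Edges, State))
        {
            return true;
        }
    }
    State[Node] = EVisitState::Done;
    return false;
}
```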


So let's take a look at the implementation of this idea.


What a shitshow

Wires in MetaSounds are an absolute mess, for a couple of reasons. Firstly, all of these macro-like functions (Trigger Route, Trigger Sequence) really add to the confusion. There are also no reroute nodes, so it's even harder to organize your thoughts and present them, like I'm trying to do here.


In a forthcoming post, we'll try to abstract a lot of this logic away (which is done differently in MetaSounds than it is in a typical Blueprint); for now, let's go over what's actually happening.


Even with notation it's difficult to get your head around

When we call Play on the AudioComponent from BP_GameMode on BeginPlay, the MetaSound receives the OnPlay execution, shown in the top left of A above. That plays our WavePlayer, which executes OnPlay from its own node in turn.


WavePlayer's OnPlay executes TriggerAttack in section B on our ADSR Envelope (Audio). If you don't know what an "envelope" is, it's not a bad idea to read about envelopes in sound design, but suffice it to say that it's just a float curve.


The AttackTime tells us how long in seconds this envelope will take to go from a volume (rather, an 'amplitude') of 0 to 1.0. The variable AttackCurve describes the shape of that interpolation, with 1.0 being linear.
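
As a rough mental model (the node's exact internal math is an assumption on my part), the attack phase maps elapsed time to amplitude with a power curve, which matches 1.0 being linear:

```cpp
// Hedged sketch of an envelope attack: amplitude rises from 0.0 to 1.0 over
// AttackTime seconds. An AttackCurve of 1.0 is a straight (linear) ramp;
// other exponents bend the curve.
float EnvelopeAttack(float ElapsedSeconds, float AttackTime, float AttackCurve)
{
    const float Alpha = FMath::Clamp(ElapsedSeconds / AttackTime, 0.0f, 1.0f);
    return FMath::Pow(Alpha, AttackCurve);
}
```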


It returns the envelope in the form of an Audio variable. We need to multiply the OutLeft and OutRight outputs of the wave player by OutEnvelope in order to hear the effect. Then we push that to the mixer.


As for C, we want to get the current playtime so that we can start the next wave player at the same point. In order for GetWaveDuration to work, your .wav must use the ADPCM compression format. We multiply the duration by the PlaybackLocation (0 to 1.0) to get the point of playback in seconds. You can see in A that there's a StartTime input that requires a "Time" variable, which is what the light blue pin represents.
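
The handoff itself boils down to one line (variable names here are mine):

```cpp
// PlaybackLocation is normalized (0.0 to 1.0); WaveDurationSeconds comes from
// GetWaveDuration, which only works on ADPCM-compressed .wav assets.
const float StartTimeSeconds = PlaybackLocation * WaveDurationSeconds;
```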


So how do we move to the next step of the music?


Recall that in the BP_GameMode, we're sending a message to the audio component via an interface to execute a trigger named TransitionToNext. In A, you can see where that starts to work.


The execution input fires a "Trigger Sequence", which increments which output it fires each time it receives an input. So the first time we call it, it'll fire Out0; the next time, Out1, and so on. For the sake of clarity, I've put a red circle over every pin that Out0 connects to. We'll discuss them from top to bottom, right after a quick look at what a Trigger Sequence actually does.
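
Conceptually, a Trigger Sequence behaves like this simplified model (not the node's actual source; int32 is the engine's integer type):

```cpp
// Simplified model of a Trigger Sequence node: each incoming trigger fires
// the next output in order (Out0, then Out1, then Out2, ...).
struct FTriggerSequenceModel
{
    int32 NumOutputs = 4;
    int32 NextIndex  = 0;

    // Returns the index of the output pin that fires for this trigger.
    int32 OnTrigger()
    {
        const int32 Fired = NextIndex;
        // Wraps for illustration; the real node's end-of-sequence behavior
        // is configurable on the node itself.
        NextIndex = (NextIndex + 1) % NumOutputs;
        return Fired;
    }
};
```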


The first pin stops the previous sound once the release of its envelope has completed. You can't do this from the envelope's "OnDone" pin, because that causes a loop.


The second pin triggers the release of that envelope, so that the sound fades out.


The third pin starts the next wave player. Note that C is the output from C in the previous image. This is where our calculated time is used. Also note that this wave player sends an execution from OnPlay to TriggerAttack for its own envelope. The result is a crossfade from the previous sound to this one.


The fourth pin sets our FilterRange via InterpTo, and we'll see how this is used in a moment.


Note that for D, a "TriggerAny" is used on both the TransitionToNext and OnPlay executions (please note the correction to this below). TriggerAny will fire its execution whenever any execution comes in from the connected pins. This ensures that we set the filter value on the first play command as well as on subsequent transition executions.


Note for B that the delay and envelope both use the same ReleaseTime variable. This is to ensure that we don't stop the sound until after the envelope has been fully released.


This entire process is unreadable to someone who didn't make it, so abstraction here will be key moving forward. Lastly, though, we need to look at the mixer and the filter.


Small correction incoming: I discovered that the code above caused a bug if I ever put more than three music-changing actors into a level, because my logic was flawed. I needed to change D to the below instead of using TriggerAny.

Yeah that makes WAY more sense.


Ahhhhhhhhh. So clean...

The MonoMixers take in the Left and Right outputs, respectively, from each wave player's output multiplied by its envelope. So the Out from each mixer boils down to just the sound we need.


The filter, however, needs a dynamic cutoff frequency, but it can't be a snappy change, or it'll alarm the player. That's the reason for the logic in D in the previous section. We get the filter range based on the current 'step' in the music, interp the filter so it moves properly, and then we apply the filter before finally passing the Outs to the final MetaSounds output.
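
To picture that glide, here's a hedged model assuming FInterpTo-style easing toward the new target (the 400 Hz starting value and 0.5 InterpSpeed are made-up tuning numbers, not the project's):

```cpp
// Hedged model of the cutoff glide: FilterRange jumps on each transition,
// but the applied cutoff eases toward it so the player never hears a snap.
float CurrentCutoffHz = 400.0f; // illustrative starting cutoff

void TickFilterCutoff(float TargetCutoffHz, float DeltaSeconds)
{
    CurrentCutoffHz = FMath::FInterpTo(CurrentCutoffHz, TargetCutoffHz,
                                       DeltaSeconds, /*InterpSpeed=*/0.5f);
    // CurrentCutoffHz then drives the low-pass filter's cutoff frequency input.
}
```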


So, big question... does it work?


Take a listen. Though you may not be able to tell if the music track is actually changing, you should definitely hear the gradual release of the filter.




EVALUATION OF METASOUNDS

While the system undoubtedly has promise, it doesn't yet feel intuitive enough to replace the likes of FMOD (though as of this writing [May 17, 2022], FMOD still has not released a UE 5.0.1 plugin). FMOD Studio has a slightly more user-friendly interface, and while it can take a tutorial or two to figure out how to utilize it, at the least it makes sense in BP and has a lot of options for sound design in the studio itself.


The problem at the moment with MetaSounds is how disconnected it is from its implementation in BP. Calling an interface with a parameter name feels like a hacky way to execute triggers or change properties in MetaSounds; it feels flimsy. I'd much rather be able to cast an audio component's "sound" property to my exact MetaSound class and call functions on it directly. At least that way, any changes would break compilation, which makes debugging far easier.


That being said, MetaSounds still feels very powerful in terms of how you can shape your sound with envelopes and filters, and many of the logic tools have real utility, difficult as they may be to use.


However, given how much of the code's complexity comes from repetition, there's undoubtedly some abstraction possible.


EDIT: Here's the follow-up post where we make this Blueprint a bit more readable.

