The problems I had using Wwise
My first exposure to Wwise was during my final months at Full Sail University. We were given a crash course on composing music for games and implementing it in a game. The first two weeks were terrible, but the final two weeks pulled everything together and made it click.
Fast forward a couple of months: I was exposed to ambisonics (during my post-production internship) and its amazing B-format. It's rather old tech, but the cost of capturing B-format recordings has dropped dramatically over the years. So I decided I wanted to revisit games and make sfx/music for spatial audio.
Great, so I got together a couple of my friends and started work on a VR game. On the side, I worked with Kent and a couple of other designers, and that's where I realized: we didn't know how to call audio events from Wwise, or at least didn't have a pipeline that allowed me to work in parallel with them. I've had a couple of conversations during the Game Audio LA meetups, and people mentioned that Wwise was more of a tool for AAA companies, which already have programmers implementing the audio calls for whatever middleware the sound designers will use.
Ok then, great: there should be resources for audio programmers to learn how to call audio events if they ever need to work with middleware. Nothing. The only thing I learned was that it was the audio programmer's responsibility to provide states for the audio events.
I tried the Wwise–Unity integration, but at that point the project asked little of the audio, and we had neither the expertise nor the time to fully implement the little intricacies we wanted. So I was like, let's just do it within Unity. And that's where I started to learn the process of game development. I'm liking it at the moment, but I still have the audio side as my main goal. I guess now I'll know how to communicate with my fellow game developers.