Brief list of credits:
Killzone 2, Killzone 3, Killzone: Shadow Fall
Tell us a little about yourself and what you do for a living?
I’m Anton Woldhek. I am a senior sound designer at Guerrilla Games in Amsterdam, the Netherlands. Guerrilla Games is part of Sony World Wide Studios. So far, I’ve shipped three games at Guerrilla: Killzone 2, Killzone 3 and, most recently, Killzone: Shadow Fall, a PS4 launch title. Before my time at Guerrilla I was a student at the HKU in Hilversum, and during my time there I was an intern at Jamey Scott’s studio Dramatic Audio.
What is your niche or speciality that makes you stand out from the rest of the audio professionals?
My passion comes from a long-time interest in perception and how it is manipulated with sound. I might be wrong about this, but I feel like I’m at the start of a generation of sound designers that set out to get into games. By that I mean that my studies and my interests in life have been about making interactive experiences, and that I did not come from the musician/composer-turned-sound-designer background that the generation before me generally seems to come from. I’ve always looked at sound in a game as a very dynamic system, something ready to (ab)use for your artistic intentions. One of the deep joys of game sound design is building dynamic systems. This can be a collaboration between me and others from different specialities within the studio, or it can be a completely sound-driven system. I thrive on knowing as much about the inner workings of an engine as possible; there are usually gems of untapped power in there that inspire and enable sound.
Can you give us a brief summary of the equipment you use regularly?
Hardware-wise, things are pretty simple. We have a MADI-based setup at Guerrilla, built around SSL converters and the MADI X-8 router. That, combined with the RME MADI PCI Express cards, is so rock solid that it’s boring to talk about. Since we moved into our studios three years ago we’ve had zero downtime (knock on wood). The studios were designed and built by Mutrox. Other than that, I’ve been using a Sound Devices 722 since my student days, usually combined with an M/S rig (MKH60/30). I recently got a Line Audio CM3 and I’ve been using that in my blimp, especially indoors.
When do you find you are most creative?
I think my most creative output comes when I’ve bitten through some frustration, had to struggle through some slump where things just seem impossible, and then they suddenly click. The hard part is to not get too frustrated; it’s very tempting to get distracted by anything in those moments. I like to try a lot of ideas at lightning speed and then filter, so the more I can smooth my workflow to keep the flow going without interruption, the better. The goal is to prevent yourself from getting too judgemental. Sure, you steer your direction, but I try not to reflect too much. Instead I ask myself: what else can I try? Once you have enough raw material, you stop. Take a break. And set yourself the task of finding the gem in what you’ve just created.
What is your usual process for creating audio content?
Being an in-house sound designer in games means that the process is quite collaborative. It starts by talking to game designers about the intent of a certain feature of the game, or by going to the concept artists behind certain pieces to ascertain the precise nature of a machine or environment. From that you get an idea of what the sound should do. Then I create a quick mockup and get that sound into the game as soon as possible. Once it’s in and you’ve heard it a few times, you know what’s needed for a first pass. Rinse and repeat until it works to satisfaction. Although for production reasons it’s not always possible, I prefer to let a sound rest for a while before returning to it. Hearing something in the context of the rest of the game always helps. As is true in all software development, it’s never really finished; you just have to stop at some point.
Are there any particular secrets to your creativity?
My creativity really comes from thinking in dynamic systems. Of course this really shines when it comes to in-game assets, but it also shows in how I approach the creation of the raw audio itself. Back at the conservatory I was always busy developing processes for generating sound in Pure Data or Max/MSP. That is one aspect of my approach. The other is more performance-based, in that I like to record and play around with physical things to obtain a sound. If no instrument or object is available, I just switch to my body, voice and mind.
Do you have any audio creation techniques that resulted in something interesting?
Randy Thom once wrote about randomly picking sounds from your library. In my case, I press the random button in Basehead, put the first result straight into my current sound and see how I can fit it in. It works well for sounds that are otherwise perhaps a bit bland.
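As a rough sketch of the same idea outside a dedicated tool like Basehead (the directory walk and the helper name below are my own illustration, not part of any actual product):

```python
import random
from pathlib import Path

def random_library_pick(library_dir, extensions=(".wav", ".aif", ".flac")):
    """Pick one random audio file from a sound library folder,
    mimicking a 'random' button: no searching or filtering by
    keyword, just take whatever comes up and try to fit it in.
    (Hypothetical helper for illustration only.)"""
    files = [p for p in Path(library_dir).rglob("*")
             if p.suffix.lower() in extensions]
    if not files:
        return None  # empty library: nothing to pick
    return random.choice(files)
```

The point of the technique is precisely that you do not curate the result up front; you commit to the first pick and let the friction of making it fit generate the interesting texture.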
Another technique I like to employ occasionally is vocalisation: I just scream, sputter and grunt into a microphone in various ways and chop that up for relevant bits. I have on occasion recorded others and put them into sounds this way as well. They don’t end up sounding recognizably human in the end, but it helps me get a certain character and flavor into a sound.
Any specific “lessons learned” on a project that you could share?
To be very specific to games, I would say that testing a lot is quite important: understanding what happens in your engine when you take an asset from Nuendo into the game, and what causes it to sound a certain way. Although as a sound designer you create sound assets, these are not the soundtrack of the game; that is created at runtime by the player. Therefore everybody who creates parts of the game that are affected by how the player plays is a software developer. A software developer learns early on to develop and test constantly, because when you’re working on big logic structures and there is an issue somewhere, you want to identify the source of that issue quickly. That becomes much more complex if you don’t test as you go. So as a game sound designer, even when your sound sounds great by itself, you need to hear it in context to have confidence in it. This is also why a very short iteration time is so key for game developers: all the time wasted waiting for a game to restart can be a real drag on your creative flow.
What were your main audio responsibilities in Killzone: Shadow Fall?
I was responsible for the in-game weapon sounds. I was also responsible for tech, which meant that in collaboration with the lead, Lewis James, and audio programmer Andreas Varga, I devised the features for the sounds; MADDER is a good example of that. I was also ‘first user’ for our toolset and hardware. We were on PS4 very early, and this meant lots of bug finding at the system level as well as the engine level.
How much time did you have for designing your sounds and what was the hardest sound to design?
The great thing about being an in-house game sound designer is that you are there in the trenches as things are developed. You play the game as much as possible and notice the new things that come up. Not a lot of other audio folk have the luxury of being on one project from pretty much its inception until the finish line. So the short answer is that I had about three years to do my sounds. The truth, however, as with a lot of these things, is that you do most of the work in the last six months of production. That’s when the feel of things starts to come together overall and when you can really add something.
The hardest sound to get right was the Sta409 assault rifle. Getting that to be both futuristic and grounded enough took quite some time.
Did you do a lot of location recordings for the game? Any special stories you could share?
We did quite a few, and most of them have special stories. In retrospect, I just think we never do enough of them. Two stories stand out for me from this project. On one recording session we went to a shipping container harbor. The guys there were terrific: basically they let us use a container as a prop, dragging it and dropping it, at first with a forklift and later on with a massive crane. Pinar Temiz, Lewis and I went on top of the crane, one of the scariest places I’ve been in the last few years. Until the operator invited me to go to the very top of the thing, because that’s where this cool electric engine was. Eeek.
Another recording session, with fellow senior sound designer Lucas van Tol, was of a 16th-century windmill, still fully operational. Another helpful chap, a guy who used to work as a technician in a theatre, offered to put my microphone setup, the MKH60/30 combo in a Rode blimp, up high near a special wood-to-wood transfer cog, a sound quite unlike the metal-to-metal connections of “modern” machines. After recording a few minutes of intricate wood connections I started to hear this thumping noise. I looked up and saw my setup come tumbling down as if on a gallows. Then it dangled there, limp, swaying back and forth not an inch away from the crushing wooden frame that operates the gigantic saws used in this mill. I immediately disconnected my cables, as I was attached to the microphone and could have been dragged inside this vicious machine. And then they turned on the emergency brakes; mind you, it still takes minutes before the thing comes to a full stop, because if you do it too fast the whole thing can catch fire. It was at that point I realized why everybody who worked at the mill was missing a body extremity or had a non-functional one. Fortunately everybody and the gear were OK, and we got to record the emergency brakes. Unfortunately, just not on my channels.
You had to adapt to a new way of working with programmers in Killzone: Shadow Fall. Can you tell us more about this state-driven system that you developed and how did it change your workflow / way of working?
It changed a lot; in fact, Andreas Varga, who built that system to a large extent, and I did a GDC talk about this subject. From an interface perspective it looks like data-flow programming, as in Pure Data and Max/MSP. However, there is a key difference: the interface is not the place where the code runs, so when you hit play in our tool it converts the asset and then places it in the game. Compared to traditional game audio tools, our iteration time is very short.
Can you tell us a little bit about the MADDER (Material Dependent Environmental Reactions) system that you used for audio in Killzone: Shadow Fall? What makes it unique and how difficult was it to use?
MADDER came about because we felt guns didn’t have enough presence in the environment. The thing is, when you fire a real weapon, you notice how much power it has and how it influences the environment: not just the bullet, but the shock wave at the firing position will resonate any objects that are somewhat loose in its vicinity. In the game, MADDER tries to give you that sense of grounding in your environment. And whenever you fire the gun there will be these details that never sound truly the same. Lewis James, our Audio Lead, summarized it as being impossible to get bored with the sound.
MADDER is about knowing how far away the walls around you are, what material they are made of and at what angle they are from you. In Shadow Fall we used four ray casts: front, back, left and right. When you fire a gun you hear the gunshot, but additionally a MADDER logic unit kicks off that will play up to four additional voices per shot, depending on where the walls are. Once the system was in place it was actually very easy to use. It is one block of logic and one block of content that is referenced from each gun. If a gun required specific content, because of its size or an odd firing mechanic for instance, we would replace the MADDER content bank, but the logic stayed the same. This meant that you could update the logic for all guns at once without having to check each one individually.
For instance, there is a check for whether a gun is a silenced version of a weapon. If it is, it uses slightly different fall-off ranges on MADDER, because the firing sound is so much lower in volume and shorter in length. If you didn’t do that, the MADDER content would overpower the firing sound. We built that in the last month or so of production without any real stress. In any other environment that would have been a huge and risky undertaking; in our case it hardly registered on the scale.
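A minimal Python sketch of that per-shot decision, assuming four ray-cast results per gunshot; the material names, bank contents and fall-off distances are placeholders of my own, not Guerrilla's actual data or engine API:

```python
import random

# Hypothetical per-material reflection banks (illustrative names only).
MATERIAL_BANKS = {
    "concrete": ["concrete_slap_01", "concrete_slap_02"],
    "metal":    ["metal_ring_01", "metal_ring_02"],
    "wood":     ["wood_knock_01", "wood_knock_02"],
}

# Assumed fall-off ranges in metres; silenced weapons use a shorter
# range so the reflections don't overpower the quieter firing sound.
FALLOFF_NORMAL = 30.0
FALLOFF_SILENCED = 12.0

def madder_reflections(ray_hits, silenced=False):
    """For up to four ray casts (front, back, left, right), return the
    (sample, gain) pairs for the extra reflection voices to trigger.
    ray_hits: list of (direction, distance, material) tuples; a ray
    that hit nothing is represented with distance=None."""
    max_range = FALLOFF_SILENCED if silenced else FALLOFF_NORMAL
    voices = []
    for direction, distance, material in ray_hits:
        if distance is None or distance > max_range:
            continue  # no wall close enough in this direction
        bank = MATERIAL_BANKS.get(material)
        if not bank:
            continue  # unknown material: no content bank to draw from
        gain = 1.0 - distance / max_range  # simple linear fall-off
        voices.append((random.choice(bank), round(gain, 2)))
    return voices
```

In the real system this logic runs in the engine's sound logic units rather than in script, but the shape of the decision described above is the same: a range check per ray, material-dependent content, and a shorter range when the weapon is silenced, with the random bank pick giving the never-quite-the-same detail per shot.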
The interesting thing about having all this information in the sound system is that you can create so many things with it that don’t exist in the game itself, only in sound. If you think about it, the sounds that are playing tell you a great deal about what’s going on in the game, so as a sound designer you can really start to connect the dots and build systems that add depth to the sonic experience: systems that do not require huge amounts of code support, and that you can develop, experiment with and play with yourself. This is one of the many reasons why it’s so awesome to be making games at Guerrilla.
Any tips, hints or motivational speeches for the readers?
Be an active member of your professional community, especially when you’re starting out but also as you get more experienced. Don’t just join a Facebook group on game audio; post on there too. The passive lurking experience isn’t going to get you into a conversation. I find that when you engage in the process and speak your mind, you get further much more quickly. And these days there are so many communities to join that there’s bound to be one that fits you. Going to conferences and meeting up with like-minded sound folk is also very inspiring, and there are plenty of those going on all the time. Damian Kastbauer and I started the Game Audio Podcast back in 2009 for this reason, so we could extend the conversations we were having at conferences or in online places.