LightBlog

Monday, November 6, 2017

Google Launches Resonance Audio SDK for Immersive Virtual Reality Positional Audio

We constantly use sound to navigate the world in ways we often do not even realize, which is why inaccurate positional audio is one of the key factors that can break immersion in virtual reality (VR). Today, Google announced the launch of its new Resonance Audio SDK, a step toward improving both the quality and the ease of implementation of spatial audio.

Resonance Audio builds on Google's earlier work with the Google VR Audio SDK, with the aim of further reducing the computational cost and latency of processing spatial audio. Extended reality (XR) applications must keep latency low across every sense they stimulate to remain comfortable and immersive, and all of those processes compete for the same limited computational budget. This is especially true for VR applications on smartphones, such as Google Daydream, where processing power is relatively limited.

Resonance Audio uses Ambisonic techniques to accurately position hundreds of simultaneous audio sources without degrading audio quality, even on smartphones. This lets developers model how audio changes as you walk around a room or turn your head, simulating the way sound spreads out, reflects off surfaces, and is blocked by objects, depending on the environment you are in.
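To make that concrete, here is a minimal sketch using the web version of the SDK. The room dimensions, material choices, and asset path below are illustrative assumptions on our part; the calls follow the Web SDK's documented usage:

```typescript
// Minimal sketch using the Resonance Audio Web SDK. Assumes the SDK's
// script build is loaded on the page, exposing a global ResonanceAudio.
declare const ResonanceAudio: any;

const audioContext = new AudioContext();
const scene = new ResonanceAudio(audioContext);
scene.output.connect(audioContext.destination);

// Describe the room so reflections and reverb can be simulated.
// Dimensions are in meters; the material names are SDK presets.
scene.setRoomProperties(
  { width: 4, height: 2.5, depth: 5 },
  {
    left: 'brick-bare', right: 'curtain-heavy',
    front: 'marble', back: 'glass-thin',
    up: 'transparent', down: 'grass',
  }
);

// Spatialize an ordinary <audio> element as a point source.
const audioElement = document.createElement('audio');
audioElement.src = 'speech.wav'; // placeholder asset
const elementSource = audioContext.createMediaElementSource(audioElement);

const source = scene.createSource();
elementSource.connect(source.input);
source.setPosition(-1, 0, 0); // one meter to the listener's left

audioElement.play();
```

Note that changing the room description changes the simulated reflections and reverb for every source at once, without touching the sources themselves.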

Unity Technologies has partnered closely with Google on this project, allowing developers to integrate the tool with their existing environments and immediately benefit from the enhanced reverb and audio propagation simulation that it brings. However, Google is aiming for full cross-platform support for Resonance Audio, and has designed the tool to make it easy to export sound files from Unity for use anywhere that supports Ambisonic soundfield playback. While Unity is a first-class partner on this project, Google has also built Resonance Audio to integrate with Unreal Engine, FMOD, Wwise, and various digital audio workstations (DAWs), with APIs for C/C++, Java, Objective-C, and web applications across Android, iOS, Windows, macOS, and Linux.

With this extensive cross-platform support, Resonance Audio aims to let developers change environments without changing their audio workflow, thereby speeding up development and reducing the number of new skills and techniques to learn. Interestingly, Resonance Audio will have first-class support in web browsers thanks to its integration with the W3C's recently-updated Web Audio API. Sounds built with Resonance Audio are also fully supported by YouTube's backend for 360-degree videos and by any app developed with the Resonance Audio SDK, in addition to the aforementioned ability to integrate anywhere that supports Ambisonic soundfield playback.
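As a rough sketch of what that browser integration looks like: the scene's output is an ordinary Web Audio node, so it slots into an existing audio graph, and head tracking reduces to updating the listener pose. The gain value and pose-update helper below are illustrative assumptions:

```typescript
// Sketch of Web Audio integration; assumes the Resonance Audio Web SDK
// script is loaded, exposing a global ResonanceAudio class.
declare const ResonanceAudio: any;

const ctx = new AudioContext();
const scene = new ResonanceAudio(ctx);

// scene.output is a standard AudioNode, so it composes with the rest of
// the graph -- here, an ordinary master gain stage before the speakers.
const masterGain = ctx.createGain();
masterGain.gain.value = 0.8; // illustrative level
scene.output.connect(masterGain);
masterGain.connect(ctx.destination);

// Head tracking: feed the listener's pose (position in meters, plus
// forward and up vectors) into the scene each frame.
function updateListener(pos: number[], fwd: number[], up: number[]): void {
  scene.setListenerPosition(pos[0], pos[1], pos[2]);
  scene.setListenerOrientation(fwd[0], fwd[1], fwd[2], up[0], up[1], up[2]);
}
```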

Google is currently making a push to create new tools and libraries that make VR and AR development easier, with the Resonance Audio SDK joining last week's launch of Google's 3D object database, Poly.

Check out the codebase on the Resonance Audio GitHub page! We can't wait to see the improvements these new tools will bring to virtual reality.

Are you planning on using Google’s new tools for extended reality development? Which tool are you most excited to try? Are there any areas that you wish Google would create a tool for? Let us know in the comments!



from xda-developers http://ift.tt/2Aeluzx
