Ever had that awkward hesitation before asking the person sitting in front of you in a noisy restaurant to repeat their sentence for the third time?
Facebook’s researchers are working on a better solution than chancing it and tentatively offering a vague response, hoping it fits the conversation.
Instead, a new technology developed by Facebook Reality Labs (FRL), the research unit creating ever-more sophisticated AR and VR headsets, would let you “zoom in” on and enhance the sounds you care about in a real-life situation, while dimming loud background noises that might get in the way of a clear conversation.
SEE: Magic Leap 1 augmented reality headset: A cheat sheet (TechRepublic download)
The research team, led by Ravish Mehra, calls this “perceptual superpowers”. The technology, for now, only exists as a prototype in-ear monitor, which is paired with an off-the-shelf eye-movement tracking device that picks up what the user takes interest in, before turning up the volume on whatever grabs their attention.
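The idea of pairing eye tracking with selective amplification can be sketched in a few lines. The mixer below is purely illustrative: FRL's actual pipeline is not public, and the source names, gain values, and function signature here are assumptions invented for the example.

```python
def mix_sources(sources, attended_id, boost=2.0, duck=0.25):
    """Toy 'perceptual superpowers' mixer: amplify the sound source
    the eye tracker reports the user is attending to, and duck the
    rest before summing them into a single output signal.

    sources: dict mapping a source id to a list of audio samples
    attended_id: the id the gaze tracker says the user is focused on
    """
    out = None
    for source_id, samples in sources.items():
        gain = boost if source_id == attended_id else duck
        scaled = [gain * s for s in samples]
        out = scaled if out is None else [a + b for a, b in zip(out, scaled)]
    return out


# Two hypothetical sources: a conversation partner and background HVAC.
mixed = mix_sources(
    {"talker": [1.0, 1.0], "hvac": [1.0, 1.0]},
    attended_id="talker",
)
```

A real system would of course operate on streaming audio after separating the sources, but the core gesture is the same: attention from the eye tracker drives per-source gain.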
Facebook’s interest in sound rendering goes back years, and has mostly focused on creating believable AR and VR acoustic experiences for Oculus Quest and Rift. “As we started doing this research in VR and as that morphed into AR, we realized that all of the technologies that we’re building here can serve a higher purpose, which is to improve human hearing,” said Mehra.
Ultimately, Mehra and his team hope to deliver perceptual “superpowers” on AR glasses, to integrate audio capabilities with the visual environment on a single platform. For example, the technology could be combined with Facebook’s LiveMaps, to create a virtual map of both physical objects and sounds surrounding the user.
Upon walking into a restaurant, for instance, the AR glasses could identify different types of events happening around the wearer, such as people having a conversation, the air-conditioning noise, or dishes and silverware clanking. Using contextualized AI, the device could then remove distracting noises while enhancing the sounds that the user should focus on.
If it sounds a lot like a hearing aid, it’s because it is – and Facebook has plans to explore, in parallel to the work researchers are carrying out on AR glasses, how the technology could help those who suffer from hearing loss. The company has welcomed hearing scientist Thomas Lunner to explore this research path further.
FRL also revealed advances in what the company calls “audio presence”, a concept that lies at the core of Facebook’s AR and VR project, and which consists of providing users with the feeling that they are in the same room as the person they are hearing virtually.
In other words, the goal is to re-create in VR the eerie sensation of not knowing whether the beeping you are hearing in your headphones is coming from a phone ringing in the series you are watching on your laptop, or from a household appliance in your flat.
This is much easier said than done. Sounds in a room interact with the environment, bounce off the walls, and affect the listener in ways that depend on the shape and size of their ears. Re-creating all of these interactions in virtual reality demands considerable computing power.
One way to do so is to capture a head-related transfer function (HRTF) – a digital representation of how an individual’s head and ears filter incoming sound. But the current methods of capturing personal HRTFs are too complex to scale, so the FRL team is considering novel approaches, such as developing an algorithm that could work out an approximate HRTF from a photograph of the user’s ears.
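In practice, once an HRTF has been captured or estimated, spatializing a sound amounts to convolving the mono source with a pair of head-related impulse responses, one per ear. The sketch below shows that step only; the toy impulse responses stand in for measured ones (or, as FRL hopes, ones estimated from a photo of the ear).

```python
def convolve(x, h):
    """Direct-form convolution of signal x with impulse response h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y


def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono source with per-ear head-related impulse
    responses (HRIRs) to produce a left/right binaural pair."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)


# Toy HRIRs for a source off to the listener's left: the sound
# arrives first and loudest at the left ear, delayed and
# attenuated at the right ear.
source = [1.0, 0.5, 0.25]
left, right = render_binaural(source, [1.0, 0.0], [0.0, 0.6])
```

Real HRIRs are hundreds of samples long and vary with the direction of the source, which is why production renderers use FFT-based convolution rather than this direct form.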
The hyper-realistic sound renderings that Facebook’s researchers are working on could one day fool users’ brains into believing that the sounds they are hearing through their VR headset are coming from the room they are sitting in. This could enable next-generation telepresence – the ability to feel present in a location other than your own, in real time, for example during a video call with friends or a team meeting.
The research is still in the very early stages, but FRL’s scientists are already thinking about the privacy implications of their project to “redefine human hearing”. From generating deepfakes to eavesdropping on private conversations, there is no lack of harmful applications that could stem from the technology if it were to fall into malicious hands.
“The goal is to put guardrails around our innovation to do it responsibly, so we’re already thinking about potential safeguards we can put in place,” said Mehra. “For example, before I can enhance someone’s voice, there could be a protocol in place that my glasses can follow to ask someone else’s glasses for permission.”
Whether that – and the prospect of dodging awkward conversations in busy restaurants – will be enough to convince users to jump on the technology remains to be seen.