Inside Dolby Laboratories, and the Future of Sound and Vision

Dolby Laboratories recently invited more than 40 international journalists, some from as far away as China and India, to its San Francisco headquarters for an inside peek at Dolby's biophysical lab, where staff study the science of sound and vision and their effects on the human body.

Dolby's lobby set the scene—full of trippy imagery curated within the Dolby Art Series. As we waited for the day to start, journalists knocked back caffeine and recovered from jet lag next to a video installation, Substance: a Study of Matter, created by Javier Cruz and Kamil Nawratil to show off the power of Dolby Atmos.

Dolby wants you to know that it's much more than a cool sound promo in the movie theater.

"At Dolby we have something unique to offer," Kevin Yeaman, President and CEO, told us. "By focusing on the science of sight and sound, we can create and enable these immersive experiences. With a channel-based system, you might have five channels, maybe with 50 speakers, but [in other theaters] they are grouped. With Dolby Atmos, [creators can utilize], the world's first object-based audio, with up to 128 sound objects at a time, in a three-dimensional soundscape where the sound moves around you with pinpoint accuracy."

In a demo, 3D sounds shot across the darkened cinema in a choreographed flow that reminded me why, sophisticated home entertainment systems aside, there's nothing like a state-of-the-art theater.
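For context: conceptually, object-based audio attaches positional metadata to each sound instead of baking it into fixed channel feeds, and the cinema processor then renders those objects to whatever speaker layout a given room actually has. Here is a minimal Python sketch of that idea; the class, the naive distance-weighted renderer, and the speaker layout are purely illustrative, not Dolby's actual format or algorithm.

```python
from dataclasses import dataclass

@dataclass
class AudioObject:
    """A sound carrying its own room position, rather than a fixed channel assignment."""
    samples: list    # mono audio samples for this object
    position: tuple  # (x, y, z) in normalized room coordinates

def render_to_speakers(obj, speakers):
    """Naive renderer: weight each speaker feed by its proximity to the object.
    A real Atmos renderer is far more sophisticated; this only illustrates why
    one object-based mix can adapt to any speaker layout."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    weights = {name: 1.0 / (1.0 + dist(obj.position, pos)) for name, pos in speakers.items()}
    total = sum(weights.values())
    return {name: [s * w / total for s in obj.samples] for name, w in weights.items()}

# A hypothetical fly-over sound rendered to a tiny three-speaker layout
layout = {
    "front_left": (0.0, 1.0, 0.0),
    "front_right": (1.0, 1.0, 0.0),
    "top_middle": (0.5, 0.5, 1.0),
}
flyover = AudioObject(samples=[0.2, 0.4, 0.1], position=(0.5, 0.6, 0.9))
print(render_to_speakers(flyover, layout))
```

The same object data could be rendered to five speakers or fifty; that is the practical difference from a channel-based mix, where the speaker assignments are fixed at mastering time.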

Which is probably why AMC has partnered with Dolby on Dolby Cinema at 80 US venues to date. Dolby is also keen to expand its international footprint (hence the presence of journalists from around the globe that day), and has partnerships in countries including China, the United Arab Emirates, France, and Spain.

Dolby Laboratories, founded in London in 1965 by engineer and physicist Dr. Ray Dolby, moved its HQ to San Francisco in 1976 and has been associated with Hollywood ever since Star Wars came out in ear-thrilling Dolby Stereo. In 1992, Batman Returns was released in Dolby Digital, and in 1999, Star Wars: Episode I – The Phantom Menace debuted in Dolby Surround EX. Next out of the gate is Blade Runner 2049, mastered in Dolby Vision and Dolby Atmos.

"We sit side by side with directors, colorists, and sound engineers," said Yeaman. "We seek to understand whether our innovation is, in fact, a palette they can work with and what tools do they need to be successful."

Another demo showed off Dolby Vision's laser projection system, which uses high dynamic range (HDR) and a wide color gamut (WCG) to deliver colors that pop, true deep blacks, and luminous whites.

"You need dark blacks and colors that come alive," David Leitch, director of Atomic Blonde, said in a video featuring testimonials from Hollywood directors. Dolby Vision and Dolby Atmos "add another level to the complexity of storytelling. It's immersive. You want to exhibit your work in the highest quality—it allows the audience to see it in its purest form—and I'm going to take advantage of that."

It was time to head upstairs, to the labs, where Dolby is looking at the effects of sound, vision, and VFX. There, Poppy Crum, Chief Scientist, had a willing participant hooked up to a bunch of biosensors, including a 64-channel EEG, watching two fire dancers battle a vivid conflagration on a high-end monitor.

You guessed it: the fire was shot in Dolby Vision, so the biosensors went crazy. We got to see the participant's physical responses tracked and analyzed in real time on multiple screens. Thanks to the high-fidelity visuals, she evidently perceived the intense heat as real.
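Dolby didn't show or name the tooling behind that real-time dashboard, but the basic idea of streaming biosensor samples and reducing them to a live response score is straightforward. Here is a rough Python sketch under assumed, illustrative names; the synthetic skin-response source and the simple "arousal" metric are hypothetical stand-ins, not Dolby's pipeline.

```python
import random
from collections import deque

def read_gsr_sample():
    """Stand-in for a real galvanic-skin-response sensor read; generates synthetic data."""
    return 2.0 + random.gauss(0.0, 0.15)   # microsiemens

def stream_arousal(window_size=64, samples=256):
    """Keep a rolling window of sensor samples and yield a simple 'arousal'
    score: how far the latest reading sits above the window's running mean."""
    window = deque(maxlen=window_size)
    for _ in range(samples):
        value = read_gsr_sample()
        window.append(value)
        baseline = sum(window) / len(window)
        yield value - baseline             # positive spikes suggest a response

for score in stream_arousal():
    if score > 0.3:                        # arbitrary demo threshold
        print(f"possible response spike: +{score:.2f} uS over baseline")
```

A real rig like the one in the lab would fuse many channels (the 64-channel EEG among them) and far more careful signal processing, but the shape of the loop, sample, baseline, deviation, display, is the same.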

"As a neurophysiologist, my focus has been on the bi-directional interplay between tech innovation and the sensory experience," Crum explained. "We really are at a point where tech is enabling us to engage our sensory systems in such authentic ways, whether it's in the cinema or virtual/augmented reality.

"Our computational neuroscientists, here in the biophysical lab are looking at how human experience can be modeled in different ways, for immersive technologies. We want to take content creators' intent and amplify it, using our tools to create better insights and results."

Sadly, Dolby isn't offering its biophysical research to the public as a service; it's more of an internal collaboration with those in the sound and vision business, like Hollywood. Still, it was fascinating stuff, and it would be wild to get hooked up to their system while watching movies to see exactly how easily our neuroendocrine systems can be manipulated.

Next up? We didn't see a demo of this, but Dolby is looking to lead in VR/AR/MR too: the latest release of its Dolby Atmos authoring tools supports high-end projection mapping and 3D video on various head-mounted displays.

At the end of the day, as we all drifted out past the video installation, the smart soundscape and vivid visuals felt like a preview of a futuristic off-world colony transit hub. So perhaps Dolby Laboratories is indeed on its way to next-gen sound and vision. In fact, just before we exited the main lobby, executives told us to be back at 10:30 a.m. because, surprise, they'd be taking us to Skywalker Sound, a key Dolby Laboratories partner. But that's another story.

