
The next time you see a movie or TV show that was dubbed from a foreign language, the voices you hear may not belong to actors who rerecorded dialogue in a sound booth. In fact, they may not belong to actors at all. From a report: Highly sophisticated digital voice manufacturing is coming, and entertainment executives say it could bring a revolution in sound as industry-changing as computer graphics were for visuals. New companies are using artificial intelligence to create humanlike voices from samples of a living actor’s voice — models that not only can sound like specific performers, but can speak any language, cry, scream, laugh, even talk with their mouths full. At the same time, companies are refining the visual technology so actors look like they are really speaking.

As streaming services export American fare globally and foreign markets send their hits to the U.S., dubbing is a bigger business than ever. But the uses of synthetic voices extend well beyond localizing foreign films. AI models can provide youthful voices for aging actors. The technology can resurrect audio from celebrities who have died or lost the ability to speak. And it can tweak dialogue in postproduction without the need for actors. All the tinkering raises thorny ethical questions. Where is the line between creating an engrossing screen experience and fabricating an effect that leaves audiences feeling duped?

The technology is set to reach a new milestone in the coming months, when foreign-language dubbed versions of the 2019 indie horror movie "Every Time I Die" are released in South America. Those releases will mark one of the first times an entire movie has been dubbed with computerized voice clones modeled on the original English-speaking cast. So when the film comes out abroad, audiences will hear the original actors "speaking" Spanish or Portuguese. Deepdub created the replicas from five-minute recordings of each actor speaking English.
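For a concrete sense of the kind of pipeline the article describes, here is a minimal sketch of cross-lingual voice cloning using the open-source Coqui TTS library and its XTTS v2 model. This is an illustration only, not Deepdub's proprietary system: the file paths, placeholder dialogue, and sample lengths below are assumptions, not details from the story.

```python
# Illustrative sketch of cross-lingual voice cloning with the open-source
# Coqui TTS library (XTTS v2). This is NOT Deepdub's system; the paths and
# dialogue below are hypothetical placeholders.
from TTS.api import TTS

# Load a multilingual model capable of zero-shot voice cloning.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A short English recording of the original actor serves as the voice
# reference -- analogous to the five-minute samples mentioned in the article.
reference_clip = "actor_english_sample.wav"  # hypothetical path

# Synthesize a dubbed line in Spanish, in the original actor's voice.
tts.tts_to_file(
    text="No deberíamos haber venido aquí.",  # placeholder dialogue
    speaker_wav=reference_clip,
    language="es",
    file_path="dubbed_line_es.wav",
)

# The same reference clip can drive Portuguese output as well.
tts.tts_to_file(
    text="Não devíamos ter vindo aqui.",  # placeholder dialogue
    speaker_wav=reference_clip,
    language="pt",
    file_path="dubbed_line_pt.wav",
)
```

In this sketch a single reference recording conditions speech synthesis in multiple languages, which is the same basic idea behind hearing the original cast "speak" Spanish or Portuguese; production dubbing systems would add lip-sync and much longer, cleaner training samples.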

Read more of this story at Slashdot.