OpenAI’s Voice Cloning AI Model Requires Just a 15-Second Sample to Operate
April 8, 2024 · 3 min read
OpenAI is rolling out limited access to its text-to-voice generation platform called Voice Engine, as reported by The Verge. This innovative platform can synthesize a voice based on a 15-second audio clip, enabling the creation of realistic-sounding artificial voices. These AI-generated voices are capable of reading text prompts in multiple languages and have potential applications across various industries, according to OpenAI’s blog post.
Among the companies granted access to Voice Engine are Age of Learning, HeyGen, Dimagi, Livox, and Lifespan. OpenAI has showcased samples demonstrating how Age of Learning is using the technology to produce pre-scripted voice-over content and to deliver personalized, GPT-4-generated responses to students.
Voice Engine development commenced in late 2022 and has since powered preset voices for text-to-speech APIs and ChatGPT’s Read Aloud feature. Jeff Harris from OpenAI’s Voice Engine product team revealed to TechCrunch that the model was trained on a combination of licensed and publicly available data. The platform will be limited to approximately 10 developers, according to OpenAI’s disclosure to the publication.
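For context, the preset-voice text-to-speech API mentioned above is already publicly available through OpenAI's Python SDK. The minimal sketch below uses the public TTS endpoint as documented around the time of this article, not the restricted 15-second cloning preview; the model name ("tts-1"), voice name ("alloy"), and output filename are illustrative, and an OPENAI_API_KEY environment variable is assumed.

```python
# Minimal sketch: generate speech with one of the preset voices that,
# per the article, Voice Engine powers in OpenAI's public text-to-speech API.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="tts-1",   # public TTS model, not the restricted Voice Engine preview
    voice="alloy",   # one of the preset voices
    input="Voice Engine can read text prompts aloud in multiple languages.",
)

# Save the returned audio (MP3 by default) to disk.
speech.write_to_file("voice_engine_demo.mp3")
```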
While AI text-to-audio generation continues to advance, voice generation has received less attention due to various concerns, as OpenAI has highlighted. Companies such as Podcastle and ElevenLabs are nonetheless building AI voice cloning technologies, a topic previously covered on The Vergecast.
Simultaneously, the US government is taking measures to regulate unethical applications of AI voice technology. The Federal Communications Commission recently prohibited robocalls utilizing AI voices after instances of spam calls impersonating President Joe Biden’s voice.
OpenAI’s partners have committed to usage policies that prohibit impersonation without consent, require explicit and informed consent from the original speakers, and require disclosure to listeners when a voice is AI-generated. To support accountability, OpenAI watermarks the audio clips and actively monitors how they are used.
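OpenAI has not published how its audio watermark works, so the snippet below is purely a hypothetical illustration of the general idea behind spread-spectrum watermarking: a keyed, low-amplitude pseudo-random pattern is mixed into the waveform at generation time and later detected by correlating against the same key. The seed, gain, and threshold are invented for the demo, and the gain is exaggerated so the toy detector separates cleanly; production systems aim for inaudibility and robustness to compression.

```python
# Hypothetical sketch of spread-spectrum audio watermarking.
# This is NOT OpenAI's scheme, which has not been disclosed.
import numpy as np

SECRET_SEED = 1234   # invented key, known only to the provider
GAIN = 0.01          # exaggerated so the toy demo separates cleanly

def _pattern(length: int, seed: int = SECRET_SEED) -> np.ndarray:
    """Keyed pseudo-random noise pattern, reproducible from the secret seed."""
    return np.random.default_rng(seed).standard_normal(length)

def embed(audio: np.ndarray) -> np.ndarray:
    """Mix the keyed pattern into the waveform at low amplitude."""
    return audio + GAIN * _pattern(audio.shape[0])

def detect(audio: np.ndarray) -> bool:
    """Correlate against the keyed pattern; a watermark pushes the score toward GAIN."""
    score = float(np.dot(audio, _pattern(audio.shape[0]))) / audio.shape[0]
    return score > GAIN / 2

# Demo on one second of synthetic 16 kHz "speech".
clean = 0.1 * np.random.default_rng(0).standard_normal(16_000)
print(detect(clean), detect(embed(clean)))  # expected: False True
```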
OpenAI suggests several measures to mitigate risks associated with such tools, including phasing out voice-based authentication for bank accounts, implementing policies safeguarding the use of individuals’ voices in AI, enhancing education on AI deepfakes, and developing AI content tracking systems.
Related Posts
Identity Fraud on the Rise: Insights from Sumsub’s Annual Fraud Report
Sumsub’s latest identity fraud report reveals a 121% rise in APAC identity fraud and a 194% surge in deepfake incidents. Explore the growing FaaS threat and strategies to combat digital fraud challenges.
Beware of AI Scams in Gmail: How to Prevent Phishing Attacks
Have you ever received a notification about a Google account recovery attempt? Be careful! It could be the start of a new AI-driven scam. Recently, a Gmail user fell victim to such a meticulously crafted scam where fraudsters used AI-generated human-like voices combined with phishing emails to gradually lure the victim into providing sensitive information. …
IT Leaders are Fast-Tracking Post-Quantum Cryptography: Building a Future-Proof Cybersecurity Strategy
As we transition into a digital-first era, technological advancements in quantum computing pose both incredible opportunities and new cybersecurity threats. Quantum computers, capable of solving complex computations much faster than traditional computers, have the potential to break current encryption standards that protect sensitive information. In response, IT leaders are fast-tracking the development and implementation of …