A security firm says deepfaked audio is being used to steal millions of pounds.
Symantec said it had seen three cases of seemingly deepfaked audio of different chief executives used to trick senior financial controllers into transferring cash.
Deepfakes use artificial intelligence to create convincing fake footage.
The AI system could be trained using the "huge amount" of audio the average chief executive would have innocently made available, Symantec said.
Corporate videos, earnings calls and media appearances, as well as conference keynotes and presentations, would all be useful for fakers looking to build a model of someone's voice, chief technology officer Dr Hugh Thompson said.
"The model can probably be almost perfect," he said.
And the attackers had used background noise to cleverly mask the least convincing syllables and words, he said.
"Really," said Dr Thompson, "who would not fall for something like that?"
Dr Alexander Adam, a data scientist at AI specialist Faculty, said it would take a substantial investment of time and money to produce good audio fakes.
"Training the models costs thousands of pounds," he said.
"This is because you need a lot of compute power and the human ear is very sensitive to a wide range of frequencies, so getting the model to sound truly realistic takes a lot of time."
Typically, he said, hours of good quality audio was needed to help capture the rhythms and intonation of a target's speech patterns.