(Lightly edited version of an e-mail sent to RISKS and published on Volume 31, Issue 48.)

My landlady sent me this news article the other day:

From encrypted passwords to padlocked doors, Canadians will go to extreme lengths to avoid scammers.

Now it may not be safe to pick up the phone.

A new scam relies on your voice to answer a simple question: "Can you hear me now?" The scammers try to bait callers into answering "yes."

Anti-fraud agencies say that simple acknowledgment can be used to make it sound as if you signed on for a purchase or service. "They're trying to get a recording of you saying yes," said Ron Mycholuk, a spokesman with the Better Business Bureau of Central and Northern Alberta. "They're going to take that recorded yes, play around with that audio and make it seem to you, or a representative of a business, that you have paid for some advertising, a cruise or a big ticket item, and send you the bill."

At this point I don't pick up the phone if I don't recognize the number. Voicemail is quite useful, and I can always call back if the message turns out not to be spam, which is rarely the case.

However, I then remembered this other article (which, incidentally, I haven't been able to find in the RISKS archive, but I'd be surprised if it hasn't been sent before):

Criminals are using AI-generated audio to impersonate a CEO's voice and con subordinates into transferring funds to a scammer's account. So-called deepfake voice attacks could be the next frontier in a scam that's cost US businesses almost $2bn over the past two years using fraudulent email.

The Wall Street Journal reports that the CEO of an unnamed UK-based energy company thought he was talking on the phone with his boss, the CEO of the German parent company, who'd asked him to urgently transfer [the equivalent of] $243,000 to a Hungarian supplier.

However, the UK CEO was in fact taking instructions from a scammer who'd used AI-powered voice technology to impersonate the German CEO. It's the voice equivalent of deepfake videos that are causing alarm for their potential to manipulate public opinion and cause social discord.

So of course at this point one would expect the first scam (the method) and the second one (the technology) to be a match made in heaven. Let's see if that starts happening. I'm betting on "sure, what else is there to expect?"