
Playing ‘whack-a-mole’ with Meta over my fraudulent avatars

How is it possible that a company with such huge resources, including artificial intelligence tools, cannot deal with this?
Examples of deepfake avatars of Martin Wolf promoted in adverts on Facebook and Instagram

I have an alter ego or, as it is now known on the internet, an avatar. My avatar looks like me and sounds at least a bit like me. He pops up constantly on Facebook and Instagram. Colleagues who understand social media far better than I do have tried to kill this avatar. But so far at least they have failed.

Why are we so determined to terminate this plausible-seeming version of myself? Because he is a fraud — a “deepfake”. Worse, he is also literally a fraud: he tries to get people to join an investment group that I am allegedly leading. Somebody has designed him to cheat people, by exploiting new technology, my name and reputation and that of the FT. He must die. But can we get him killed?

I was first introduced to my avatar on March 11 2025. A former colleague brought his existence to my attention and I brought him at once to that of experts at the FT.

It turned out that he was in an advertisement on Instagram for a WhatsApp group supposedly run by me. That means Meta, which owns both platforms, was indirectly making money from the fraud. This was a shock. Someone was running a financial fraud in my name. It was just as bad that Meta was profiting from it.

My expert colleague contacted Meta and, after a little “to-ing and fro-ing”, managed to get the offending adverts taken down. Alas, that was far from the end of the affair. In subsequent weeks a number of other people, some of whom I knew personally and others who knew who I am, brought further posts to my attention. On each occasion, after being notified, Meta told us that the posts had been taken down. Furthermore, I have also recently been enrolled in a new Meta system that uses facial recognition technology to identify and remove such scams.

In all, we felt that we were getting on top of this evil. Yes, it had been a bit like “whack-a-mole”, but the number of molehills we were seeing seemed to be low and falling. This has since turned out to be wrong. After examining the relevant data, another expert colleague recently told me there were at least three different deepfake videos and multiple Photoshopped images running in over 1,700 advertisements, with slight variations, across Facebook and Instagram. The data, from Meta’s Ad Library, shows the ads reached over 970,000 users in the EU alone — where regulations require tech platforms to report such figures.

“Since the ads are all in English, this likely represents only part of their overall reach,” my colleague noted. Presumably many more UK accounts saw them as well.

These ads were purchased by ten fake accounts, with new ones appearing after some were banned. This is like fighting the Hydra!

That is not all. There is a painful difference, I find, between knowing that social media platforms are being used to defraud people and being made an unwitting part of such a scam myself. This has been quite a shock. So how, I wonder, is it possible that a company like Meta with its huge resources, including artificial intelligence tools, cannot identify and take down such frauds automatically, particularly when informed of their existence? Is it really that hard or are they not trying, as Sarah Wynn-Williams suggests in her excellent book Careless People?

We have been in touch with officials at the Department for Culture, Media and Sport, who directed us towards Meta’s ad policies, which state that “ads must not promote products, services, schemes or offers using identified deceptive or misleading practices, including those meant to scam people out of money or personal information”. Similarly, the Online Safety Act requires platforms to protect users from fraud.

A spokesperson for Meta itself said: “It’s against our policies to impersonate public figures and we have removed and disabled the ads, accounts, and pages that were shared with us.”

Meta said in self-exculpation that “scammers are relentless and continuously evolve their tactics to try to evade detection, which is why we’re constantly developing new ways to make it harder for scammers to deceive others — including using facial recognition technology.” Yet I find it hard to believe that Meta, with its vast resources, could not do better. It should simply not be disseminating such frauds.

In the meantime, beware. I never offer investment advice. If you see such an advertisement, it is a scam. If you have been the victim of this scam, please share your experience with the FT at visual.investigations@ft.com. We need to get all the ads taken down and to find out whether Meta is getting on top of this problem.

Above all, this sort of fraud has to stop. If Meta cannot do it, who will?

martin.wolf@ft.com

Follow Martin Wolf with myFT and on X
