Deepfake attack: ‘Many people could have been cheated’
2 March 2026
Gideon Long and Ed Butler
Sundararaman Ramamurthy says it is impossible to know how many people saw the fake video
At the start of this year, a video popped up on social media sites in India showing the chief executive of the Bombay Stock Exchange, Sundararaman Ramamurthy, giving investors advice on which stocks to buy.
Viewers were promised handsome returns if they heeded his advice.
The only problem was, it was not Ramamurthy speaking. It was a deepfake video of him, made using artificial intelligence.
“It was in the public domain where many people could see it, and get cheated into buying or selling stocks, as if I’d recommended them,” explains Ramamurthy.
“When we see an incident like this, we immediately lodge a complaint. We go to Instagram and other places where it’s posted to get the video taken down. And we regularly write to the market warning people not to believe in fake videos.”
Ramamurthy adds: “We don’t know how many people have seen this video, it’s really difficult to find out, so we can’t really judge if it’s had a big impact or not.
“What we want is for it to have had no impact at all. No one should incur a loss because they believe something that is untrue.”
Ramamurthy and the Bombay Stock Exchange are not alone.
“The latest data shows that over the past two years or so, we’ve seen an increase of almost 3,000% in the number of deepfakes being utilized,” says Karim Toubba, the chief executive of US-based password security company LastPass.
Toubba himself was deepfaked in 2024.
“One of our employees in Europe received an audio message and a text message from someone alleging to be me, urgently requesting some help from me,” he says.
Fortunately for Toubba - and LastPass - the employee was suspicious.
“The message was on WhatsApp, which for us is not a sanctioned communication channel,” says Toubba. “Also, we have corporate sanctioned mobile devices and this came in via his personal phone. So that made him think this was potentially a little murky, a little fishy.”
The employee reported the incident to LastPass’s cyber-security team and no harm was done.
It is not known how many people were affected by the attack on the boss of the Bombay Stock Exchange
British engineering firm Arup was not so lucky. In 2024 it was hit by one of the most sophisticated deepfake attacks ever seen in the corporate world.
According to Hong Kong police, an Arup employee working there received a message purporting to come from the firm’s chief financial officer (CFO), who was based in London, regarding a “confidential transaction”.
The employee got onto a video call with the CFO and other staff. On the basis of that call, the employee then transferred $25m (£18.5m) of Arup money to five different bank accounts, as instructed. It only later emerged that the people on the call, including the CFO, were deepfakes.
“You would never want to simply jump on a video call with someone and transfer $25m,” says Stephanie Hare, a tech researcher and co-presenter of the BBC’s AI Decoded TV programme.
“Companies are having to take extra steps to secure these types of communications. That’s the brave new world we’re in now.”
The rapid evolution of AI means that these videos are becoming more lifelike all the time.
“Deepfakes are becoming very, very easy to do,” says Matt Lovell, co-founder and CEO of UK-based cyber-security company CloudGuard. “To generate video and audio quality of extremely accurate specifications - it takes minutes.”
It is also becoming cheaper.
“For, say, a simple, single individual-led attack, you’re looking at $500 to $1,000 with the use of largely free tools,” says Lovell. “For a more sophisticated attack, you’re looking at between $5,000 and $10,000.”
While deepfake videos are becoming more sophisticated, so are the tools used to thwart them. Companies can now use verification software that can assess a person’s facial expressions, the way they turn their head and even the way the blood flows through their face to establish whether it really is them or a deepfake version of them.
“In your cheeks or just underneath your eyelids, we’ll be looking for changes in blood flow when a person is talking or presenting,” Lovell says. “That’s really where we can tease out whether it’s AI-generated or it’s real.”
AI is allowing cyber criminals to make deepfake videos far more easily
But firms are in a constant battle to stay one step ahead of the fraudsters.
“It’s a race, between who can deploy a technology and who can thwart that technology as quickly as possible,” says LastPass’s Toubba. “Luckily, there seems to be quite a bit of money flowing into this, which will only accelerate the pace with which organisations will develop technologies to detect and ultimately block these things.”
At CloudGuard, CEO Matt Lovell is more downbeat.
“Attack vectors are accelerating faster than we can accelerate defence automation and protection,” he says. “Are people moving fast enough to respond to the speed the threat is developing? Absolutely not.”
Hare says the proliferation of deepfake attacks means that people with the skills to combat fraudsters are in high demand. “We have a shortage of cybersecurity professionals worldwide. We need more people to get into this.”
And she says companies are waking up to the threat, albeit slowly.
“In the past it was not considered a priority to secure your operations in quite the same way as it is now,” she points out.
“Now that we have these types of risks, with the leaders at companies, with CEOs, being deepfaked, I think company executives will be spending more time with their chief information security officers and teams than before. And that is a good thing.”