On 22 May, a fake photo created by artificial intelligence (AI), showing an explosion at the Pentagon, was posted on Twitter and immediately went viral.
Automated stock-trading programs sold off and markets plunged briefly, sending the Standard & Poor’s 500 down 0.3 percent within minutes. Markets normalized shortly after the Pentagon announced that no explosion had happened.
Stock-trading algorithms, many now controlled by AI, had been snookered by AI.
The fake photo might have been just a prank—or it might have been someone who had sold short, then tricked the market into a sell-off and made a pile of money for a few minutes’ work.
Regardless, the experience sparked new worries among trading houses.
Highly automated trading programs constantly scour the Internet for market-moving news and have gotten better at screening out false information.
However, “AI opens the door to all manner of mischief in the information environment which is becoming harder to manage,” Doug Greenig, founder of the Florin Court Capital hedge fund, told the Financial Times.
Trading algorithms face two new challenges: phony stories or photos that fool a reputable reporter into repeating the information, and information that fools the trading algorithm itself, Peter Hafez, chief data scientist at RavenPack, told the FT.
The company uses AI to read data for financial firms.
An AI might treat such reports as verified fact “and produce corresponding analytics,” which would trigger trades based on false information, he explained.
The new kind of “info-threat” could drive trading firms to hire data companies to aggregate data and rate the likelihood that a piece of news is real.
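As a rough illustration of what such a rating service might do, here is a minimal, hypothetical Python sketch that combines the track records of the outlets repeating a story into a crude probability that the story is real. The source names, reliability weights, and the independence assumption are all illustrative, not any vendor’s actual method.

```python
# Hypothetical sketch: score how likely a breaking story is real by
# combining the track records of the sources repeating it.
# Source names and reliability weights are illustrative, not real data.

SOURCE_RELIABILITY = {
    "major_newswire": 0.95,
    "verified_local_reporter": 0.80,
    "anonymous_social_account": 0.20,
}

def story_credibility(sources_reporting: list[str]) -> float:
    """Rough probability the story is real: 1 minus the chance that
    every reporting source is independently wrong (a naive assumption)."""
    p_all_wrong = 1.0
    for source in sources_reporting:
        reliability = SOURCE_RELIABILITY.get(source, 0.10)  # unknown sources score low
        p_all_wrong *= (1.0 - reliability)
    return 1.0 - p_all_wrong

# A story carried only by anonymous social accounts scores low...
print(round(story_credibility(["anonymous_social_account"]), 2))                      # 0.2
# ...while confirmation by a major newswire pushes it near certainty.
print(round(story_credibility(["anonymous_social_account", "major_newswire"]), 2))    # 0.96
```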
“There’s a huge tug-of-war going on between bulls and bears,” Charles-Henry Monchau, chief investment officer at Syz Bank, said in an FT interview.
When trading algorithms see any sharp market move unexplained by market data, “they’re going to react and force some selling, accelerating the move,” he added.
“In a sense, it’s back to the past when we didn’t have accurate, fast news,” Kit Juckes, a strategist at Société Générale, told the FT. “This is another step down the road to easy disinformation, made possible by technology and laziness.”
Traders do have some safeguards in place against market lurches.
Many algorithms trade on market patterns that play out over longer periods than a few moments. Traders routinely place a large number of small bets, which tends to even out any sudden market jolt. Also, many traders place “stop orders” that automatically sell a stock they own once it falls by a set percentage or dollar amount, capping their losses.
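For illustration, here is a minimal, hypothetical Python sketch of how a stop-order rule works; the ticker, prices, and five-percent threshold are made-up values, and in practice stop orders are executed by brokers and exchanges rather than by a trader’s own script.

```python
# Hypothetical sketch of a stop-order rule: sell automatically once a
# position drops a set percentage below its purchase price.
# Ticker, prices, and the 5% threshold are illustrative values only.

from dataclasses import dataclass

@dataclass
class Position:
    ticker: str
    purchase_price: float
    stop_pct: float  # e.g. 0.05 means "sell if down 5% from purchase"

    def stop_triggered(self, current_price: float) -> bool:
        floor = self.purchase_price * (1.0 - self.stop_pct)
        return current_price <= floor

position = Position(ticker="SPY", purchase_price=415.00, stop_pct=0.05)

for price in (414.0, 410.0, 393.0):  # simulated quotes during a sudden lurch
    if position.stop_triggered(price):
        print(f"Stop hit at {price:.2f}: sell {position.ticker}")
    else:
        print(f"{price:.2f}: hold {position.ticker}")
```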
Still, “there will be more of these [Pentagon-type] stories for a long time and the perpetrators will attempt to extract value from the markets as a result,” Mike Zigmont, chief trader at Harvest Volatility Management, told the FT.
TRENDPOST: As we noted in “It’s Official: Social Media Is Now Worthless” (4 Apr 2023), some experts are urging AI developers to embed watermarks or other identifiers in whatever their AIs create, so anything generated by an AI will be automatically identified as such.
That could prevent AIs from being fooled by other AIs.
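As a toy illustration of that idea, the hypothetical sketch below has a cooperating generator stamp its output with an identifying tag that a consumer, such as a trading algorithm’s news filter, checks before reacting. The tag name is invented for this example; real watermarking proposals rely on statistical or cryptographic marks that are much harder to strip than a metadata field.

```python
# Toy illustration of the watermark idea: a cooperating generator stamps
# its output with an identifying tag, and a consumer checks for that tag
# before trusting the content. The "ai-generated-by" key is a made-up
# convention; real proposals use statistical or cryptographic watermarks
# that are far harder to remove than a metadata field.

AI_TAG_KEY = "ai-generated-by"

def stamp(metadata: dict, generator_name: str) -> dict:
    """What a cooperating AI image generator might add to its output's metadata."""
    return {**metadata, AI_TAG_KEY: generator_name}

def looks_ai_generated(metadata: dict) -> bool:
    """What a trading algorithm's news filter might check before reacting."""
    return AI_TAG_KEY in metadata

photo_metadata = stamp({"caption": "Explosion near the Pentagon"}, "example-image-model")

if looks_ai_generated(photo_metadata):
    print("Flagged as AI-generated; do not trade on this image.")
```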
However, anyone with ill intent wouldn’t bother with a watermark or other ID.
It will take AI experts an indefinite amount of time to create a fail-safe way to instantly identify something as artificially generated. Until then, trust no information or images that you have not independently verified yourself.