In recent earnings calls, executives at Klarna and Zoom introduced a new way to communicate: appearing via AI avatars. The shift underscores the growing role of AI in corporate life, and it also exposes the unsettled regulatory landscape surrounding these digital representations.
In the United States, there isn’t a single federal law covering AI avatars. Instead, oversight largely falls to the Federal Trade Commission alongside various state impersonation laws. By contrast, the European Union is taking a proactive lead with its new AI Act, which demands clear disclosure when AI-generated content is in use.
Of course, the rise of AI avatars brings its own challenges. As these digital duplicates become more common in high-stakes settings, the same technology that creates them can be turned to deepfakes, opening new avenues for fraud and misrepresentation.
For example, Klarna’s CEO, Sebastian Siemiatkowski, recently presented the company’s earnings through his AI avatar, disclosing up front that viewers were watching his digital double rather than the man himself. Similarly, Zoom’s CEO, Eric Yuan, demonstrated his firm’s avatar-creation service during an earnings call, underlining the need for safeguards against misuse.
In May, the Financial Times reported that UBS was using AI avatars built with technology from OpenAI and Synthesia to deliver research to clients, a move that highlights how digital tools are reshaping client interactions. Nvidia’s CEO, Jensen Huang, has embraced the trend as well, using an AI avatar for product announcements since 2021.
Brian Jackson from Info-Tech Research Group views AI avatars as the logical next step in generative AI development, evolving from simple chatbots to sophisticated digital personas capable of real-time interaction. He also warns of the risks, such as AI hallucinations and difficulties in navigating complex, live conversations.
There are also growing concerns about fraud. A PYMNTS Intelligence report points out that, while advanced cybersecurity measures are available, many accounts payable departments still rely on outdated anti-fraud techniques that are unlikely to withstand AI-driven impersonation.
Legal frameworks are struggling to keep pace with the technology. In the U.S., the Securities and Exchange Commission requires companies to disclose their use of AI, yet it has issued no rules specific to avatars. Meanwhile, the EU’s AI Act demands transparency around AI-generated content, explicitly recognising the risks posed by deepfakes.
Taras Tymoshchuk of Geniusee raises important legal questions, such as who should be held accountable when an AI avatar shares incorrect information. There’s also the risk that clients might feel deceived if they later learn they were interacting with a digital representation rather than a human.
Similarly, Reality Defender’s CEO, Ben Colman, cautions that normalising AI avatars in executive communications might inadvertently train stakeholders to trust these synthetic personas, thereby increasing the risk of fraud.
As organisations continue to adopt AI avatars, the opportunities on offer are matched by significant risks. Clear, robust regulatory frameworks are needed to ensure these technologies are harnessed safely and ethically.