In an exciting development for AI security, SplxAI, a Croatian startup, has successfully raised $7 million to tackle vulnerabilities in artificial intelligence systems. This funding round was led by Launchhub Ventures and saw contributions from Rain Capital, Runtime Ventures, Inovo, DNV Ventures, and South Central Ventures. The goal? To strengthen the security framework of AI technologies.
SplxAI is on a mission to identify and mitigate security vulnerabilities within AI systems. Their innovative approach involves tweaking system prompts—those guidelines that dictate how AI models respond—so there’s less need for extra security measures down the line.
Kristian Kamber, CEO of SplxAI, put it simply: “GenAI technologies give cyber attackers an edge, making early vetting of AI systems crucial.” The company is changing the game for businesses by testing AI systems for security issues before threats can even take hold.
As AI becomes a staple in boosting productivity and profits, the risks of things going wrong—like data poisoning or adversarial attacks—are on the rise. A 2023 survey by the World Economic Forum found that over half of business leaders think generative AI will benefit cyber attackers in the coming years.
Another survey by Accenture, involving 600 banking cybersecurity executives, showed that 80% believe generative AI is helping hackers outpace banks. SplxAI aims to counter this with tools like Agentic Radar, which maps vulnerabilities across multiple AI agents.
SplxAI’s strategy is robust. They run over 2,000 attacks and 17 scans in under an hour to test AI systems for biases, harmful content, and potential misuse. They’ve uncovered significant vulnerabilities, like data leaks in productivity tools and healthcare chatbots giving incorrect medical advice.
The startup doesn’t just stop at identifying issues. They generate detailed reports outlining vulnerabilities and offer practical recommendations for resolution. Their standout feature? “Hardening” system prompts to boost security. Kamber explains, “We focus on remediation because no one would invest in a platform that only offers testing and offensive security advice.”
Recently, SplxAI helped an Arabic chatbot avoid sensitive topics, such as Abu Dhabi’s royal family. Kamber noted the industry’s rapid shift towards recognizing AI’s risks, saying, “Last year, AI red-teaming wasn’t widely understood. Now, demand is soaring.”
For businesses concerned about AI security, SplxAI’s proactive measures offer a strong solution to potential threats, ensuring AI systems remain safe and effective.