A recent 10‑month trial led by CSIRO, Australia’s national science agency, in collaboration with global cybersecurity firm eSentire, offers a fresh look at how ChatGPT‑4 can ease the daily grind in Security Operations Centres (SOCs). By automating routine tasks, the model frees analysts to concentrate on more critical decisions.
During the trial in Ireland and Canada, 45 analysts turned to ChatGPT‑4 more than 3,000 times. They used the AI to interpret alerts, analyse malware code, and refine reports—tasks that are essential but repetitive. Dr Mohan Baruwal Chhetri from CSIRO’s Data61 explained, “ChatGPT‑4 supported analysts by handling the routine work while leaving the judgment calls to human experts.” This blend of human oversight and AI assistance can help reduce fatigue and sharpen focus.
Security teams face an avalanche of alerts, many of which turn out to be false positives. Tellingly, only a handful of interactions in this study sought direct answers from ChatGPT‑4; most requests asked the model for context and evidence to inform the analysts’ own decision-making. Dr Martin Lochner, the study’s data scientist and research coordinator, pointed out that this represents the first long‑term industrial exploration of large language models like ChatGPT‑4 in a real‑world cybersecurity setting.
Looking ahead, the research moves into a two‑year phase that will gather qualitative feedback from analysts, helping to optimise these AI tools for broader use in SOC environments. If you’ve ever felt overwhelmed by a relentless influx of alerts, this approach underscores that the goal isn’t for AI to replace human expertise, but to support it so analysts can focus on what truly matters.
Part of CSIRO’s Collaborative Intelligence (CINTEL) programme, the study marks a significant step toward harnessing the strengths of both humans and machines. It’s a move that could lead to better well‑being and increased efficiency across cybersecurity teams.