Microsoft’s recent announcement of its Security Copilot, a new chatbot designed to help security professionals manage and respond to security incidents, has garnered a lot of attention in the tech community. The chatbot is powered by GPT-4, the latest large language model from OpenAI, and a security-specific model built by Microsoft using daily activity data. The resulting generative AI software can be “usefully wrong,” as Microsoft puts it, but the company is proceeding with the project as it seeks to expand its cybersecurity business.
The Security Copilot is designed to assist security analysts in response to a typed text prompt: it can compose PowerPoint slides summarizing security incidents, describe an organization’s exposure to active vulnerabilities, or identify the accounts involved in an exploit. The chatbot draws on knowledge of a given customer’s security environment, though that data is not used to train the underlying models.
Users can confirm an answer if it’s correct or select an “off-target” button to flag a mistake, feedback that helps the service improve. Engineers inside Microsoft have been using the Security Copilot in their own work with promising results: it has processed 1,000 alerts and surfaced the two incidents that mattered within seconds, and it has even reverse-engineered a piece of malicious code for an analyst who didn’t know how to do that.
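The confirm/"off-target" mechanism described above is, at its core, a feedback-collection loop. The following minimal Python sketch illustrates how such a loop could work in principle; all names here (`FeedbackLog`, `confirm`, `off_target`) are hypothetical illustrations, not part of any Microsoft API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a confirm / "off-target" feedback loop.
# None of these names come from Microsoft's product; they only
# illustrate how user feedback could be collected as a quality signal.

@dataclass
class FeedbackLog:
    records: list = field(default_factory=list)

    def confirm(self, prompt: str, answer: str) -> None:
        # The user confirmed the answer was correct.
        self.records.append(
            {"prompt": prompt, "answer": answer, "label": "correct"}
        )

    def off_target(self, prompt: str, answer: str) -> None:
        # The user flagged the answer as a mistake.
        self.records.append(
            {"prompt": prompt, "answer": answer, "label": "off-target"}
        )

    def off_target_rate(self) -> float:
        # Fraction of answers flagged as mistakes -- a signal the
        # service could use to prioritize review or model adjustments.
        if not self.records:
            return 0.0
        flagged = sum(1 for r in self.records if r["label"] == "off-target")
        return flagged / len(self.records)

log = FeedbackLog()
log.confirm("Summarize incident 4311", "Summary: phishing campaign ...")
log.off_target("Which accounts were involved?", "No accounts found")
print(f"off-target rate: {log.off_target_rate():.2f}")  # 0.50
```

In a real system, the flagged prompt/answer pairs would feed back into evaluation or fine-tuning pipelines rather than a simple in-memory list.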
The Security Copilot is aimed at companies that struggle to hire security experts and end up relying on less experienced employees. “There’s a learning curve, and it takes time,” said Vasu Jakkal, corporate vice president of security, compliance, identity, management, and privacy at Microsoft. “And now Security Copilot with the skills built-in can augment you. So it is going to help you do more with less.”
Microsoft has not disclosed how much the Security Copilot will cost, but Jakkal hopes many workers inside a given company will use it, rather than just a handful of executives. The service will work with Microsoft security products such as Sentinel for tracking threats. Microsoft will decide whether to add support for third-party tools such as Splunk based on input from early users over the next few months.
The Security Copilot chatbot will be available to a small set of Microsoft clients in a private preview before wider release at a later date. The hope is that over time, the tool will be capable of holding discussions in a wider variety of domains.
Frank Dickson, group vice president for security and trust at technology industry researcher IDC, said the Security Copilot may be the single biggest announcement in security this calendar year. He added that Microsoft’s decision to get out first may give it a head start over its competitors, such as Palo Alto Networks.
However, if Microsoft were to require customers to use Sentinel or other Microsoft products if they want to turn on the Security Copilot, that could very well influence purchasing decisions, Dickson said.
Overall, the Security Copilot chatbot is an exciting development in the world of cybersecurity. It has the potential to help companies manage and respond to security incidents more efficiently, and it could be a valuable tool for analysts who are new to the field. As Microsoft continues to invest in AI and machine learning, we can expect to see more innovations like this in the future.