Microsoft has integrated its generative AI security solution, Microsoft Security Copilot, with the suite of security tools available in Microsoft 365 E5. The integration is designed to assist security professionals by providing AI-powered capabilities directly within their existing workflows. The solution uses large language models (LLMs) to streamline incident response, threat hunting, and threat intelligence gathering for security teams working within the Microsoft ecosystem.
By drawing on security signals from across the Microsoft portfolio, Security Copilot provides analysis and guidance for security analysts. It is built on a foundation of threat intelligence informed by Microsoft’s analysis of trillions of signals each day, and it works to augment the capabilities of human security experts.
Streamlining Security Operations with AI
Microsoft Security Copilot is designed to function as an analytical tool that enhances the capabilities of security teams. It processes data from services included in Microsoft 365 E5, such as Microsoft Defender XDR and Microsoft Sentinel, to provide summaries of security incidents in natural language. The tool automates and simplifies complex tasks, such as reverse-engineering malware and analyzing malicious scripts. This allows security operations center (SOC) analysts to understand and address threats with greater speed and precision, reducing the time spent on manual investigation and data correlation.
Core Capabilities and Integration Points
The integration of Security Copilot into the Microsoft 365 E5 security stack provides specific functionalities to aid defenders. Within the Microsoft Defender portal, Security Copilot can summarize device timelines, analyze files, and provide information on security incidents. It assists in creating Kusto Query Language (KQL) queries for advanced threat hunting in Microsoft Sentinel, making sophisticated data interrogation accessible to a broader range of security personnel. The platform operates on Azure’s infrastructure and adheres to Microsoft’s Responsible AI principles, ensuring that interactions are processed with security and privacy controls in place.
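To make the hunting scenario concrete, the following is a minimal sketch of the kind of KQL query Security Copilot can help an analyst draft. It assumes the standard DeviceProcessEvents advanced hunting table exposed through Microsoft Defender XDR and Microsoft Sentinel; the specific filter values are illustrative rather than taken from any real investigation or actual Copilot output.

```kusto
// Illustrative hunting query: find PowerShell launched with an encoded
// command line in the last 7 days, a pattern commonly associated with
// obfuscated or malicious scripts.
DeviceProcessEvents
| where Timestamp > ago(7d)
| where FileName =~ "powershell.exe"
| where ProcessCommandLine has_any ("-enc", "-EncodedCommand")
| project Timestamp, DeviceName, AccountName,
          InitiatingProcessFileName, ProcessCommandLine
| order by Timestamp desc
```

In a typical workflow, an analyst would describe the intent in natural language, ask Security Copilot to generate or refine a query along these lines, and then review and run it in the advanced hunting or Microsoft Sentinel query editor before acting on the results.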