This is the seventh in a series of blogs that focus on sections of Demisto’s State of SOAR Report, 2019. You can read the other blogs by visiting the links below:
- Overview of the report
- Hidden security challenges
- Incident ingestion and enrichment
- Case management
- Incident investigation
- Response and enforcement
About the Report
Demisto commissioned a study with 552 respondents to find out specific challenges at each stage of the incident response lifecycle, how current product capabilities help overcome these challenges, and what capabilities are missing within security products today.
Defining the Security Incident Response Lifecycle
Every security team has its own set of security tools, competencies, common use cases, and compliance requirements. One of the few common threads that weaves through all these elements is the set of steps followed while responding to a security incident.
We defined the security incident response lifecycle as a continuous and cyclical process of incident ingestion and enrichment, incident management, deeper investigation, enforcement of response actions, performance measurement, and the adoption of lessons learned to improve operational efficiency going forward.
Below, we will use phishing response as an example to outline each step in the lifecycle. This is merely illustrative; each lifecycle step can have more actions than the ones listed below.
Incident ingestion and enrichment: The email gets forwarded by a concerned employee to the organization’s quarantine mailbox. The security team studies the email and checks the reputation of indicators attached to the email (sender name and address, IP, domain, etc.).
Case management: The security team opens a ticket to capture the status for the phishing email. They mail the end user, confirming receipt of the forwarded phishing email. They add notes and comments to record their findings from the incident, measure SLAs for each step of investigation and response, and generate reports once the incident is resolved.
Incident investigation: The suspected phishing email has a PDF attachment. The security team detonates this file using a malware analysis tool and captures the results. They also check whether other end users were affected by the same phishing email, or emails that look like they’re part of the same phishing campaign.
Response and enforcement: Based on the data gathered during enrichment and investigation, the security team decides that the email is a verified phishing attempt. They send an email to the end user with this update, delete the email from all inboxes they can find, add IOCs to dynamic blocklists, and update the ticket assigned to the phishing email.
Performance measurement: The security team measures the Mean Time to Respond (MTTR) to the phishing incident and checks whether this time is within organizational SLA requirements. They also hold a debrief to discuss lessons learned from this incident: which actions were useful, which actions took the most time, and how they can better respond to similar incidents in the future.
Once incidents have been driven to resolution, it’s vital that security teams measure their performance to ensure they can repeat what worked and avoid what didn’t work or took too much time. This measurement ideally spans across use cases (what’s our average response time for phishing incidents?), personnel (what’s Bob’s average response time for phishing incidents?), incident phase (which step of phishing investigation is taking the most time?), and more.
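The slicing described above (by use case, analyst, and phase) amounts to computing the same MTTR metric over different subsets of incident records. The sketch below shows this in a few lines of Python; the record fields, names, and the 6-hour SLA threshold are illustrative assumptions, not figures from the report:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records; field names and values are illustrative.
incidents = [
    {"type": "phishing", "analyst": "Bob", "opened": datetime(2019, 5, 1, 9, 0),
     "resolved": datetime(2019, 5, 1, 13, 30)},
    {"type": "phishing", "analyst": "Alice", "opened": datetime(2019, 5, 2, 10, 0),
     "resolved": datetime(2019, 5, 2, 12, 0)},
    {"type": "malware", "analyst": "Bob", "opened": datetime(2019, 5, 3, 8, 0),
     "resolved": datetime(2019, 5, 3, 16, 0)},
]

def mttr_hours(records):
    """Mean Time to Respond, in hours, over a set of incident records."""
    durations = [(r["resolved"] - r["opened"]).total_seconds() / 3600
                 for r in records]
    return mean(durations)

# Slice the same metric along different dimensions.
phishing_mttr = mttr_hours([r for r in incidents if r["type"] == "phishing"])
bob_mttr = mttr_hours([r for r in incidents if r["analyst"] == "Bob"])

print(f"Phishing MTTR: {phishing_mttr:.2f} h")
print(f"Bob's MTTR:    {bob_mttr:.2f} h")

# Check against an assumed organizational SLA of 6 hours.
SLA_HOURS = 6
print("Within SLA" if phishing_mttr <= SLA_HOURS else "SLA breached")
```

In practice these records would come from a ticketing system or SOAR platform rather than hard-coded dictionaries, but the aggregation logic is the same.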
Common Tools Used
We asked respondents what tools they commonly used for performance measurement.
Figure 1: Common tools used for performance management
SIEMs ruled the roost again, with 66% of respondents favoring them for performance measurement (Figure 1). Interestingly, Excel spreadsheets ranked second with 44% of responses. This should be an eye-opener for security vendors and hammer home two things:
- SIEMs can’t take care of all performance measurement needs, whether because of the limited scope of the security data they ingest or because of missing product capabilities.
- Security teams don’t just use security tools; they use whatever tools solve their problems.
Wish List of Capabilities
For performance measurement, we asked respondents to highlight product capabilities their tools possessed and create wish lists of capabilities their tools lacked.
Results showed that respondents desired ‘measurement multipliers’: features that keep improving their efficiency over time (Figure 2). Roughly 61% of respondents wished for ‘machine learning recommendations’ for improving security operations (with only 30% of respondents claiming that this feature was already present in their security products). Around 49% of respondents also included ‘customizable dashboards for each user’ in their wish lists, underlining the need to provide security teams with the flexibility to personalize the data at their disposal.
Figure 2: Tool capabilities and wish lists for performance management
We hope you've enjoyed our coverage of the State of SOAR Report, 2019. If you’re interested in learning more about SOAR, you can download our eBook – Security Orchestration for Dummies – below.