Northeastern University Incident Response Standard

Document Metadata

Related Policy: Northeastern University Information Security Policy

Responsible Office: Office of Information Security (OIS)

Purpose and Scope

Northeastern University is committed to securing its data and providing clear and concise guidance on protecting the many information technology (IT) systems we use. Given the widespread use and diversity of the types of IT systems employed within Northeastern University, it is paramount that a technology-agnostic set of standards is in place and uniformly applied across all IT systems.

This standard establishes the minimum incident response criteria to carry out and meet the intent of the directives within Northeastern University’s Information Security Policy. This standard applies to all organizations (e.g., academic entities, entities other than Colleges and Departments, legally separate but wholly owned entities) of Northeastern University.

IT systems are considered in the scope of this standard if they utilize any of the following: Northeastern’s Network, ITS troubleshooting or administration, OIS incident response or investigation, or a Northeastern Microsoft account (e.g., @northeastern.edu).

Incident Response Overview

This domain focuses on security Incident Response (IR). IR is the process of preparing for, detecting, containing, eradicating, and recovering from security incidents. IR focuses on mitigating the risks from security incidents by responding to incidents effectively and efficiently and restoring systems to normal operations.

Roles and Responsibilities

The following high-level functional roles support the incident response processes for IT systems. In some cases, there may be more than one functional role associated with a specific process or task; similarly, more than one person may perform some roles. The following describe the roles and responsibilities associated with incident response within the Northeastern University environment.

Chief Information Security Officer (CISO)

Individual responsible for the overall Northeastern University information security program.

Incident Manager

Member of the Incident Response Team, responsible for managing the IR process once an incident has been declared and developing response plays.

Incident Response Team

A group or organization responsible for responding to security incidents. Members of the IR team are responsible for assisting the System Administrator in the analysis of security events and working with the Incident Manager to implement remediation actions to address security incidents.

System Administrator

An organization or individual responsible for setting up and maintaining an IT system, appliance, or specific IT system elements. This role revolves around hands-on management of the IT system and is usually more technical in nature than the System Owner. System Administrators are also responsible for implementing approved secure baseline configurations, incorporating secure configuration settings for IT products, and conducting/assisting with configuration monitoring activities as needed.

Depending on the size of the IT system, these responsibilities can be split across multiple skill-based domains listed below. These domains can be managed by separate teams across Northeastern University depending on the skills necessary to carry out the listed responsibilities.

  • Infrastructure: manages any servers that are not aligned to a specific skill-based domain listed below.
  • Network: manages all hardware and IT systems related to managing network communications.
  • Security: manages all IT systems that ensure and confirm security of the environment (e.g., Sentinel, Defender, Tenable, Azure, Intune, Windows Cloud PC).
  • Desktop: manages the physical workstations and the software installed on them.
  • Identity: manages IT systems that control identity-based access, like Entra ID.

System Owner

An individual or organization responsible for the development, procurement, integration, modification, operation, maintenance, and/or final disposition of an IT system.

Standard

This standard is scoped primarily around a subset of the National Institute of Standards and Technology (NIST) 800-171 controls to protect the confidentiality, integrity, and availability of information. The related NIST controls have been tagged (e.g., 3.6.1) in the text below to identify where each listed responsibility inherits its requirements from.

As the incident response capability is matured over time, additional controls may be considered to augment confidentiality and address the availability and integrity of information. Additionally, when implementing the criteria of this document, organizations may choose to implement stricter criteria; however, the criteria cannot be lessened without formal exception by the Northeastern University Chief Information Security Officer (CISO) as described in the Compliance section of this standard.

Preparation for Security Incident Handling

(3.6.1) Incident Response Capabilities

To ensure overall preparedness to detect, analyze, contain, eradicate, and recover from security incidents, the System Owner is responsible for identifying all key stakeholders (internal and external) required to support the IR process. The System Owner is also responsible for ensuring the deployment of supporting technologies for tracking (e.g., incident management software) incidents and monitoring (e.g., SIEM) the Northeastern University information system. Additionally, the System Owner is responsible for establishing the relevant tools (e.g., storage, forensics), processes (e.g., communications), and procedures (e.g., response plan/playbook) which support the IR program.

Detection and Analysis

Security incidents are categorized by type and subtype. The types identify the high-level security incident type while the subtypes identify the most likely vector that may lead to the incident. The subtypes may be useful in determining the response type for a given security incident, particularly for the more common types of incidents. This approach also allows for developing standard processes defined by incident response plays.

Table 1. Security Incident Types
Type | Subtype | Examples
Social Engineering | Phishing | Gift card purchase scam, IRS late payment calls
Social Engineering | Spear Phishing | Email specifically targeted at employees (e.g., impersonation of IT staff)
Malicious Code | Ransomware | CryptoLocker
Malicious Code | Malware | Computer viruses, worms, Trojan horses, spyware, adware, botnets
Malicious Code | Exploits | Shellcode, zero days
Interruption of Service | Denial of Service (DoS) | DDoS, DNS flood, SYN flood
Interruption of Service | Natural Disaster | Earthquake, tornado, etc.
Interruption of Service | Loss of Power or Other Services | Loss of HVAC causes system overheating and failure
Interruption of Service | Misconfiguration / Malfunction | Equipment failure results in loss of connectivity; misconfiguration causes inability to push AV updates and virus definitions; known vulnerability detected
Unauthorized Access | Data Breach | Data exfiltration
Unauthorized Access | Account Compromise / Privilege Escalation | Stolen credentials, exploit
Unauthorized Access | Covert Channels | Rogue wireless access point
Improper Usage | Violation of Policy | Violation of acceptable use policy, access control policy, etc.
Improper Usage | Hosting Personal Services | Peer-to-peer file sharing, game server, etc.
Improper Usage | Insider Threat | Purposeful malicious use of IT resources by an employee
Scans / Probes / Attempted Access | Brute Force | Password dictionary attacks
Scans / Probes / Attempted Access | Network / Port Scans | NMAP, OS fingerprinting, etc.
Scans / Probes / Attempted Access | Fuzzing | Web app fuzzing, SQL injection
Loss of Equipment | Lost / Stolen Equipment | Lost or stolen laptop, server, or other equipment; misplaced thumb drive with sensitive data
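The type/subtype taxonomy in Table 1 lends itself to a simple lookup structure for validating incident classifications and routing them to a response play. The sketch below is purely illustrative; the data structure, function names, and play-naming convention are assumptions, not part of this standard.

```python
# Illustrative sketch of the Table 1 taxonomy as a lookup structure.
# The play-naming convention in classify() is hypothetical.
INCIDENT_TYPES = {
    "Social Engineering": ["Phishing", "Spear Phishing"],
    "Malicious Code": ["Ransomware", "Malware", "Exploits"],
    "Interruption of Service": [
        "Denial of Service (DoS)", "Natural Disaster",
        "Loss of Power or Other Services", "Misconfiguration / Malfunction",
    ],
    "Unauthorized Access": [
        "Data Breach", "Account Compromise / Privilege Escalation",
        "Covert Channels",
    ],
    "Improper Usage": [
        "Violation of Policy", "Hosting Personal Services", "Insider Threat",
    ],
    "Scans / Probes / Attempted Access": [
        "Brute Force", "Network / Port Scans", "Fuzzing",
    ],
    "Loss of Equipment": ["Lost / Stolen Equipment"],
}

def classify(incident_type: str, subtype: str) -> str:
    """Validate a (type, subtype) pair and return a response-play key."""
    if subtype not in INCIDENT_TYPES.get(incident_type, []):
        raise ValueError(f"Unknown type/subtype: {incident_type} / {subtype}")
    # Hypothetical convention: key the standard response play on the subtype.
    return subtype.lower().replace(" ", "-")
```

A routing layer could then map each play key (e.g., `classify("Social Engineering", "Phishing")` returns `"phishing"`) to the documented incident response play for that vector.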

Once a security incident is formally declared, a severity level must be determined based on the criteria below and expert judgment of the System Administrator in consultation with the System Owner.

  • Operational Impact: The size, scope, and current negative operational impact of the security incident (e.g., unauthorized user-level access to data) on the affected systems. In addition to the current realized impact, the likely future impact of the security incident (e.g., administrator/root compromise) if it is not immediately contained must be considered as part of the impact analysis.
  • Technical Resource Criticality: Technical resources (e.g., applications, databases, firewalls, servers, network connectivity, workstations, mobile devices) by the nature of their functions and scope have varying criticalities within an environment. Therefore, a security incident against a more critical resource will have greater overall impact on an organization than one with a less critical function. The criticality of a resource is based on its data or services, users, trust relationships, and interdependencies with other resources.
  • Data Risk Level: Data within a system or environment may have differing Risk Level classifications (High Risk, Medium Risk, Low Risk, No Risk) in accordance with the Data Classification Guidelines as referenced in the Northeastern University Policy on Confidentiality of University Records and Information. The overall impact on the confidentiality, integrity, and availability of data to the organization will largely depend on the data’s Risk Level. As such, a security incident involving High Risk Level data, such as Social Security numbers or medical records, would pose a greater risk than one involving data of a lower Risk Level.

The University has adopted four severity levels (Critical, High, Moderate, Low) which drive prioritization for responding to incidents. The severity level is determined by combining the criticality of the affected technical resources, risk level of the data, and the current and potential operational impact of the security incident. Once established, the security incident severity drives prioritization and response times for addressing security incidents.

The response times identified in the table below are measured on weekdays during normal working hours (e.g., 8 a.m. to 5 p.m.). The response times start at declaration of a security incident.

Table 2. Security Incident Severity
Severity | Response Time | Description | Examples
Critical (Priority 1) | Within 1 hour of incident declaration | A security incident with a severe or detrimental impact on the organization’s operations, assets, or users. | High Risk Level data exfiltration, critical application unavailable
High (Priority 2) | Within 4 hours of incident declaration | A security incident with a serious impact on the organization’s operations, assets, or users. | Compromised user account, breach of High Risk Level data
Moderate (Priority 3) | Within 8 hours of incident declaration | A security incident with moderate impact on the organization’s operations, assets, or users. | Lost/stolen laptop with disk encryption enabled, Low Risk Level data unavailable
Low (Priority 4) | Within 48 hours of incident declaration | A security incident with limited impact on the organization’s operations, assets, or users. | Disruption of non-critical services, suspicious email
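The severity-to-response-time mapping in Table 2 can be expressed as a small lookup for computing a response deadline from the declaration time. This is an illustrative sketch only; the names are assumptions, and it uses simple wall-clock arithmetic rather than the weekday working-hours measurement the standard specifies.

```python
from datetime import datetime, timedelta

# Illustrative mapping of Table 2 severity levels to response windows.
RESPONSE_WINDOWS = {
    "Critical": timedelta(hours=1),   # Priority 1
    "High": timedelta(hours=4),       # Priority 2
    "Moderate": timedelta(hours=8),   # Priority 3
    "Low": timedelta(hours=48),       # Priority 4
}

def response_deadline(severity: str, declared_at: datetime) -> datetime:
    """Return the response deadline for a declared incident.

    Note: the standard measures response times during weekday working
    hours (e.g., 8 a.m. to 5 p.m.); this sketch uses wall-clock time
    and would need a business-hours calendar for production use.
    """
    return declared_at + RESPONSE_WINDOWS[severity]
```

For example, a Critical incident declared at 9:00 a.m. would have a response deadline of 10:00 a.m. the same day under this simplified calculation.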

Containment, Eradication, and Recovery

(3.6.2) Incident Tracking and Communication

The Incident Manager is the primary point of contact responsible for directing, tracking, and overseeing all incident response and recovery activities. All security incidents must be formally tracked and documented throughout the incident lifecycle. The Incident Manager is responsible for updating the ticket associated with the security incident to record key activities taken by the Incident Response Team. During the Computer Evidence Recovery (CER) execution, the Incident Manager will maintain contact with the System Owner.

The System Owner maintains responsibility for identifying relevant stakeholders (e.g., system owner, system users) affected by the security incident and relevant authorities (e.g., government sponsor, law enforcement) to be notified, as appropriate, within 72 hours. The System Owner is responsible for handling communications regarding incident response and recovery activities with relevant stakeholders based on the severity of the incident. These include communications with internal stakeholders and teams as well as third parties assisting in incident response activities.

The Incident Manager is responsible for notifying the System Owner upon completion of the associated Computer Evidence Recovery (CER) activities.

The System Owner is responsible for reviewing the activities taken by the Incident Response Team during CER activities and declaring the security incident resolved, as appropriate.

Incident Response Testing

(3.6.3) Testing Requirements

The Incident Manager is responsible for testing the IR process on an annual basis, or when significant changes to the program are instituted, to validate the process as documented in the relevant security incident response plan. The testing should include all relevant roles (internal and external to the organization) across the security incident response process to ensure the process is fully exercised. Testing should include both tabletop exercise scenarios and real-world exercises (e.g., taking down a network to simulate an outage). Lessons learned as part of the testing process should be leveraged in developing or modifying systems, policies, training, and resources to improve the overall IR process.

Definitions

The following definitions have been derived from industry standard definitions provided by the National Institute of Standards and Technology (NIST) Computer Security Resource Center Glossary and, where appropriate, tailored for Northeastern University’s IT environment.

Event
Any observable occurrence in an information system. Events include a user connecting to a file share, a server receiving a request for a web page, a user sending email, and a firewall blocking a connection attempt.
Information Technology (IT)
Computing and/or communications hardware and/or software components and related resources that can collect, store, process, maintain, share, transmit, or dispose of data. IT components include computers and associated peripheral devices, computer operating systems, utility/support software, and communications hardware and software.
Organization
An entity of any size, complexity, or positioning within an organizational structure (e.g., school, department, lab, operational elements).
Security Incident
An occurrence that actually or potentially jeopardizes the confidentiality, integrity, or availability of an information system or the information the system processes, stores, or transmits or that constitutes a violation or imminent threat of violation of security policies, security procedures, or acceptable use policies.
Security Incident Response
The mitigation of violations of security policies and recommended practices.

Compliance

This standard complies with the directives defined in the Northeastern University Information Security Policy. The university recognizes that on rare occasions there might be compelling reasons to consider allowing an organization to operate outside of the criteria defined in this standard, as derived from the Northeastern University Information Security Policy. To facilitate this consideration, the System Owner must submit a petition for a risk-based policy exception in writing, including supporting rationale, and forward it to the Northeastern University CISO for review and approval. All approved risk-based policy exceptions must be formally documented by the Northeastern University CISO and indicate the exception duration (e.g., temporary, long-term). The Northeastern University CISO is responsible for disseminating and communicating all risk-based exception approvals and rescissions to the relevant stakeholders in a timely manner.

Change and Review Log

Document Version History
Date | Description | Version | Editor
01/08/2025 | Initial draft for Stakeholder Review | 0.1 | Kwaku Danquah
01/24/2025 | Manager review before stakeholder review | 0.2 | Brad Wing
09/03/2025 | Final draft approved by CISO | 1.0 | Brad Wing

Appendix A. Northeastern University Incident Response Standard Summary

The table below summarizes the minimum criteria for enabling incident response capabilities within Northeastern University IT system environments.

  • The first column, “Northeastern University Practice ID”, identifies the related Northeastern University practice ID as defined in NIST 800-171.
  • The “Northeastern University Practice Statement” column includes the Northeastern University practices required to be met for that control.
  • The third column, “Derived Requirement”, provides a description of the requirement derived from the high-level Northeastern University practice statement. Derived requirements were developed from analysis of the intent of the practice and the logical components required to satisfy the practice. In some instances, a Northeastern University practice statement may be derived into several requirements to be addressed to satisfy the Northeastern University practice.
  • The final column, “Northeastern University IT System Environment Criteria”, defines the minimum criteria (e.g., configurations, actions, responsibilities, practices) which the university will implement to satisfy the related Northeastern University practice.
Incident Response Standards Summary Table
Northeastern University Practice ID | Northeastern University Practice Statement | Derived Requirement | Northeastern University IT System Environment Criteria
3.6.1 | Establish an operational incident-handling capability for organizational systems that includes preparation, detection, analysis, containment, recovery, and user response activities. | Preparation | The System Owner is responsible for establishing IR capabilities, including: identifying all key stakeholders (internal and external) required to support the IR process; ensuring the deployment of supporting technologies for tracking (e.g., incident management software) incidents and monitoring (e.g., SIEM) the NU information system; and establishing the relevant tools (e.g., storage, forensics), processes (e.g., communications), and procedures (e.g., response plan/playbook) which support the IR program.
3.6.2 | Track, document, and report incidents to designated officials and/or authorities both internal and external to the organization. | Track & Document Incident | The Incident Manager is the primary point of contact responsible for: directing, tracking, and overseeing all incident response and recovery activities; updating the ticket associated with the security incident to record key activities taken by the Incident Response Team; maintaining contact with the System Owner during Computer Evidence Recovery (CER) execution; and notifying the System Owner upon completion of the associated CER activities.
3.6.2 | (continued) | Incident Communications | The System Owner is responsible for: identifying relevant stakeholders (e.g., system owner, system users) affected by the security incident and relevant authorities (e.g., government sponsor, law enforcement) to be notified, as appropriate, within 72 hours; handling communications regarding incident response and recovery activities with relevant stakeholders based on the severity of the incident; and reviewing the activities taken by the Incident Response Team during CER activities and declaring the security incident resolved, as appropriate.
3.6.3 | Test the organizational incident response capability. | Incident Response Testing | The Incident Manager is responsible for testing the IR process on an annual basis, or when significant changes to the program are instituted, to validate the process as documented in the relevant security incident response plan.

Appendix B. Security Incident Response References

The following list of references is composed of common industry standards used to carry out the incident response criteria defined within this standard.

  1. National Institute of Standards and Technology (NIST) Special Publication (SP) 800-61 Rev. 2, Computer Security Incident Handling Guide. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r2.pdf
  2. NIST SP 800-86, Guide to Integrating Forensic Techniques into Incident Response. https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-86.pdf
  3. NIST SP 800-101 Rev. 1, Guidelines on Mobile Device Forensics. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-101r1.pdf
  4. NIST SP 800-161 Rev. 1, Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations. https://doi.org/10.6028/NIST.SP.800-161r1