How to Complete the Springfield Power Plant Network Breach Diagram and Defensive Controls Mapping Exercise
A structured guide for cybersecurity students tackling the Frank Grimes attack chain exercise — covering how to diagram each stage of the breach from spear phishing through SCADA compromise, and how to map defensive toolsets to each attack step with the technical depth the exercise requires.
The Springfield Power Plant candidate exercise is one of the more creative — and technically demanding — assignments you’ll see in a cybersecurity program. It wraps a full attack chain through a fictional Simpsons universe, but every step maps to real attacker techniques you need to know cold: spear phishing with macro-enabled Office documents, C2 channel establishment, privilege escalation with credential harvesting tools, lateral movement via shared local admin credentials, SSH key abuse for network pivoting, SCADA access over insecure protocols, and ransomware deployment across a compromised network. The humor is there to make you relax. The technical requirements are not. This guide walks through how to approach both deliverables — the network diagram and the defensive controls mapping — so you know what each section needs to contain, what level of technical specificity is expected, and where most students lose marks.
This is not a creative writing exercise dressed up in yellow cartoon characters. The diagram deliverable tests whether you can visually represent an attack chain with enough technical precision that a security team could reconstruct what happened — showing network segments, host identifiers, IP addresses, protocols, and attack tools at each step. The defensive controls mapping tests whether you can identify specific, named security controls and explain — not just list — how each one would have detected or prevented each attack stage. Generic answers like “use a firewall” or “implement security awareness training” will lose you marks. The exercise expects named products, framework references, and paragraph-form explanations of the mechanism by which each control operates. Treat each attack step as a technical problem with specific technical solutions.
How to Build the Network Diagram
The diagram is not a pretty flowchart of cartoon characters. It is a technical network diagram that happens to use the names from the scenario. Every host needs to appear with its correct hostname and IP address as specified in the exercise. Every connection between hosts needs to show the protocol, tool, or method used. Network segments need to be visually distinguished — the corporate LAN, the jump box, and the SCADA network are three different environments, and your diagram needs to make that segmentation obvious.
The Minimum Elements Your Diagram Must Show
The attacker’s machine (Frank Grimes / external, via EC2 at 52.85.89.218), the C2 infrastructure (d35fkdjh4gt99.cloudfront.net), Homer’s workstation (HS-CRBNBLB / 172.16.22.4), Smithers’ workstation (WS-ULLMAN / 172.16.10.42), the SSH jump box (SCRATCHY / 10.253.65.85), and the SCADA reactor system (SIDESHOW-90 / 1.1.1.230). On each connection between nodes, label the attack action: spear phishing email, PowerShell C2 beacon, Mimikatz credential dump, pass-the-hash lateral movement, PuTTY SSH authentication, Nmap scan, Telnet connection, malware placement, and ransomware deployment. The direction of each arrow matters: make sure each one shows which host initiates the connection and what it connects to.
Network boundaries are as important as the nodes themselves. Draw a clear boundary around the corporate network (172.16.0.0/16), a separate zone for the jump box, and a clearly isolated segment for the SCADA network (1.1.0.0/23). The SCADA segment should visually communicate that it is supposed to be isolated — then show how Frank gets into it anyway via the unprotected SSH key. That visual tells the story of the control failure more clearly than any paragraph of text.
Choosing and Using Your Diagramming Tool
The exercise explicitly permits Visio, PowerPoint, LucidChart, and GraphViz. Each has real trade-offs for this specific diagram.
LucidChart (Free Tier)
The fastest option for most students. The free tier supports enough shapes and connections to complete this diagram without trouble. Use the Network shape library for routers, firewalls, servers, and workstations. Drag your network segments into swim lanes or bordered containers to show segmentation. Export as PNG or PDF for submission. The collaborative share link also lets you get quick peer feedback.
Microsoft Visio
The professional-grade option if your institution provides a license. Use the Network and Security stencils. Visio produces the cleanest output for this type of diagram but has a steeper learning curve. If you’re using it for the first time, budget an extra hour just for setup. The main advantage: your diagram will look like something a real SOC analyst would produce, which is the implicit standard the exercise is measuring against.
GraphViz (Free)
Code-driven, so more work upfront but very precise. Define your nodes and edges in DOT language, run the renderer, and get a clean directed graph. Useful if you want to be certain every connection direction is explicit. The visual output is functional but not polished — fine for this exercise since technical precision matters more than aesthetics here. Use the digraph type since you’re representing directed attack flow.
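If you go the GraphViz route, you don't even need to hand-write the DOT file. The sketch below generates DOT source for the attack chain from plain Python data structures, so every node and edge label lives in one place. The hostnames and IPs come from the exercise; the edge labels here are abbreviated examples, not the full set you'd submit.

```python
# Sketch: generate GraphViz DOT source for the attack-chain diagram.
# Hostnames/IPs are from the exercise spec; edge labels are abbreviated
# examples -- your submitted diagram needs the full protocol/tool labels.

def dot_attack_chain() -> str:
    nodes = {
        "grimes":   "Frank Grimes\\nEC2 52.85.89.218",
        "c2":       "d35fkdjh4gt99.cloudfront.net",
        "homer":    "HS-CRBNBLB\\n172.16.22.4",
        "smithers": "WS-ULLMAN\\n172.16.10.42",
        "jumpbox":  "SCRATCHY\\n10.253.65.85",
        "scada":    "SIDESHOW-90\\n1.1.1.230",
    }
    edges = [
        ("grimes",   "homer",    "Spear phish (SMTP)"),
        ("homer",    "c2",       "PowerShell C2 beacon (HTTPS/443)"),
        ("homer",    "smithers", "Pass-the-hash (SMB/445)"),
        ("smithers", "jumpbox",  "PuTTY SSH, stolen key (TCP/22)"),
        ("jumpbox",  "scada",    "Telnet, no password (TCP/666)"),
    ]
    lines = ["digraph breach {", "  rankdir=LR;"]
    for name, label in nodes.items():
        lines.append(f'  {name} [shape=box, label="{label}"];')
    for src, dst, label in edges:
        lines.append(f'  {src} -> {dst} [label="{label}"];')
    lines.append("}")
    return "\n".join(lines)

print(dot_attack_chain())
```

Pipe the output into `dot -Tpng -o diagram.png` and you get a directed left-to-right graph with every connection direction explicit. Adding the network-segment boundaries is a matter of wrapping node groups in DOT `subgraph cluster_*` blocks.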
Every attack stage in this exercise maps directly to a MITRE ATT&CK technique. The framework is publicly available at https://attack.mitre.org and is the industry-standard taxonomy for both documenting attacker behavior and mapping defensive controls. Referencing specific ATT&CK technique IDs (e.g., T1566.001 for spear phishing with a malicious attachment, T1059.001 for PowerShell execution, T1003.001 for LSASS credential dumping via Mimikatz) in your defensive controls mapping demonstrates the technical fluency the exercise is designed to evaluate. MITRE ATT&CK is not a textbook reference — it’s a living knowledge base used by real security teams, and citing it correctly signals that you know how practitioners actually work.
The Complete Frank Grimes Attack Chain at a Glance
Before getting into each step individually, it helps to see the full chain in one place. Every node in your diagram corresponds to a row in this table. Every row in this table corresponds to a section in your defensive controls mapping.
| Step | What Happens | Key Hosts / Infrastructure | ATT&CK Technique |
|---|---|---|---|
| 1 | Spear phishing email with malicious Word attachment; Homer enables macros | Homer’s inbox → HS-CRBNBLB (172.16.22.4) | T1566.001 / T1204.002 |
| 2 | Macro runs PowerShell; C2 beacon to d35fkdjh4gt99.cloudfront.net, fronting an EC2 instance (52.85.89.218) | HS-CRBNBLB → d35fkdjh4gt99.cloudfront.net → EC2 | T1059.001 / T1071.001 |
| 3 | Privilege escalation; Mimikatz dumps NTLM hashes from Homer’s machine | HS-CRBNBLB (local admin) | T1078 / T1003.001 |
| 4 | Lateral movement using shared local admin hash to Smithers’ machine | HS-CRBNBLB → WS-ULLMAN (172.16.10.42) | T1021 / T1550.002 |
| 5 | Unprotected SSH private key found on Smithers’ machine; PuTTY auth to jump box | WS-ULLMAN → SCRATCHY (10.253.65.85) | T1552.004 / T1021.004 |
| 6 | Nmap scan of SCADA network (1.1.0.0/23) for TCP/666; Telnet to reactor without password | SCRATCHY → SIDESHOW-90 (1.1.1.230) | T1046 / T1021 |
| 7 | Malware placed on reactor system to alter core temperature within 30 days | SIDESHOW-90 | T1105 / T0839 |
| 8 | Ransomware deployed across compromised hosts on retreat through attack chain | SIDESHOW-90 → SCRATCHY → WS-ULLMAN → HS-CRBNBLB | T1486 |
Step 1 — The Flaming Moe’s Email: Spear Phishing With a Malicious Attachment
Frank’s first move is the oldest trick in the attacker playbook, but it works because it’s targeted. This is not a mass phishing campaign — it’s a spear phish. The subject line is social engineering: who at a power plant wouldn’t open an email promising free Flaming Moe’s in the cafeteria? Homer opens the attachment. The attachment prompts macro enablement. Homer clicks Enable. Game over — for the moment.
What to Show in Your Diagram
Draw Frank Grimes’ external attacker node with an outbound arrow to Homer’s email client on HS-CRBNBLB (172.16.22.4). Label the arrow: Spear phishing email — malicious macro-enabled .doc/.docm attachment (SMTP). (Note that the modern .docx format cannot carry macros; a macro-bearing lure is a legacy .doc or macro-enabled .docm file.) Show the email traversing your perimeter (firewall/email gateway) to reach Homer’s workstation. This makes your diagram’s “entry point” explicit — and sets up the question of what controls should have stopped it at the perimeter before it ever reached Homer’s screen.
The primary control for this stage is email security gateway filtering — products like Proofpoint, Mimecast, or Microsoft Defender for Office 365 use attachment sandboxing to detonate suspicious Office files in an isolated environment before delivery. A properly configured sandbox would have executed the macro in a controlled environment, observed the PowerShell invocation, and quarantined the email before Homer ever saw it. Separately, macro execution policy should be enforced via Group Policy to block macros in documents received from external sources — Microsoft’s Attack Surface Reduction rules include a specific rule (ASR rule GUID: d4f940ab-401b-4efc-aadc-ad5f3c50688a) that blocks Office applications from creating child processes, which would have prevented the PowerShell execution even if Homer enabled the macro. Security awareness training is a supporting control, not a primary one — it reduces the probability of Homer clicking Enable, but it cannot be the only layer when the attacker has specifically crafted the lure to be believable to this user.
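One coarse layer a gateway applies before any sandbox detonation is simple attachment triage: quarantine file types that can carry VBA macros at all. The sketch below shows only that filtering idea in miniature; real products like Proofpoint or Defender for Office 365 inspect content, not just extensions, and the extension list here is an illustrative subset.

```python
# Sketch of a gateway-style pre-filter: flag attachments whose extensions
# can carry VBA macros. Real email gateways also do content inspection and
# sandbox detonation; this shows only the coarse triage layer.

MACRO_CAPABLE = {".doc", ".docm", ".xls", ".xlsm", ".ppt", ".pptm"}

def flag_attachment(filename: str) -> bool:
    """Return True if the attachment should be quarantined for analysis."""
    dot = filename.rfind(".")
    ext = filename[dot:].lower() if dot != -1 else ""
    return ext in MACRO_CAPABLE

print(flag_attachment("free_flaming_moes.docm"))  # True
print(flag_attachment("cafeteria_menu.pdf"))      # False
```

Note what this layer cannot do: it would not catch a macro lure if the policy allowed .doc files through for business reasons, which is exactly why the ASR child-process rule and sandboxing exist as additional layers behind it.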
Step 2 — The C2 Channel: PowerShell Beacon to Amazon’s CloudFront
Once the macro executes, it spawns a PowerShell command that phones home to d35fkdjh4gt99.cloudfront.net, which resolves to 52.85.89.218 — an EC2 instance Frank controls. This is a classic living-off-the-land C2 setup: PowerShell is a trusted, built-in Windows tool, and CloudFront is a legitimate CDN that most firewalls trust by default. The C2 channel gives Frank persistent, interactive access to Homer’s machine.
What to Show in Your Diagram
Draw an outbound arrow from HS-CRBNBLB to the CloudFront domain/IP (52.85.89.218), labeled: PowerShell C2 beacon — HTTPS/443 to d35fkdjh4gt99.cloudfront.net. Show that this traffic passes outbound through your perimeter. Connect the CloudFront node to Frank’s EC2-hosted attacker machine to complete the C2 relay. This three-node chain (Homer → CloudFront CDN → EC2) is the relay architecture Frank uses to obscure the true C2 endpoint.
DNS filtering (e.g., Cisco Umbrella, Infoblox BloxOne Threat Defense) can block or flag lookups for newly registered or suspicious domains even when they sit behind legitimate CDNs — this is one of the few controls with a realistic shot at catching CloudFront-fronted C2. Egress filtering on the perimeter firewall should restrict which internal hosts can initiate outbound HTTPS connections and to which destinations, using allowlisting rather than blocklisting. Network traffic analysis tools (Zeek, Darktrace, ExtraHop) can identify beaconing behavior — the regular, low-interval outbound connections that C2 channels produce — even when the traffic uses encrypted HTTPS. PowerShell logging via Script Block Logging and Constrained Language Mode, enforced through AppLocker, Windows Defender Application Control, or Group Policy, would produce the evidence needed for detection even if prevention failed. A SIEM (Splunk, Microsoft Sentinel) correlating PowerShell script block logs with DNS query logs would surface this activity for a SOC analyst to investigate.
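The beaconing heuristic those traffic-analysis tools apply is worth understanding mechanically: a C2 implant checks in on a near-fixed interval, so the gaps between a host's outbound connections to one destination show abnormally low jitter, while human browsing is bursty. A minimal sketch of that statistic, with an illustrative threshold:

```python
# Sketch of a beaconing heuristic: flag a connection series whose
# inter-arrival times are suspiciously regular (low coefficient of
# variation). The 0.1 threshold is illustrative, not a vendor value.
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_cv=0.1, min_events=5):
    """timestamps: sorted connection times (seconds) host -> one destination."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(gaps)
    return m > 0 and (pstdev(gaps) / m) < max_cv

beacon   = [0, 60, 121, 180, 241, 300]  # ~60 s check-ins, slight jitter
browsing = [0, 5, 90, 95, 400, 410]     # bursty human traffic
print(looks_like_beacon(beacon))    # True
print(looks_like_beacon(browsing))  # False
```

Real implants add randomized jitter precisely to defeat this, which is why production tools combine interval analysis with destination rarity, session sizes, and DNS intelligence rather than relying on any single signal.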
Step 3 — Privilege Escalation and Credential Harvesting With Mimikatz
Frank now has a foothold on Homer’s machine. His next problem: Homer might only be a standard user. Frank escalates to local admin — the scenario states he gains administrative access — and then runs Mimikatz to extract NTLM password hashes from the LSASS process. The critical detail here is the shared local administrator password. This is the specific vulnerability that enables the next step. If every machine on the network has the same local admin password hash, compromising one gives you all of them.
What to Show in Your Diagram
This step happens entirely on HS-CRBNBLB, so your diagram shows an action node on that host rather than a new network connection. Label it: Mimikatz — LSASS dump — NTLM hash extraction (local admin credentials). Use a different visual treatment — a dashed self-referential arrow or an annotation box on the host node — to show that this is a local action rather than a lateral movement. The Mimikatz tool name should appear explicitly; generic labels like “credential theft” lose you precision marks.
Microsoft Local Administrator Password Solution (LAPS) is the single most important control for this step. LAPS automatically generates and manages unique local administrator passwords for each domain-joined machine, storing them in Active Directory with access controls. If LAPS had been deployed, the hash Frank stole from Homer’s machine would only unlock Homer’s machine — it would not be valid on Smithers’ machine or anywhere else. Without LAPS, a shared local admin password is a skeleton key for the entire domain. For the Mimikatz-specific vector, Windows Credential Guard (a Virtualization-Based Security feature) prevents credential material from being extracted from LSASS memory even by code running with administrative privileges — this would have stopped Mimikatz in its tracks even after Frank gained local admin access. Endpoint Detection and Response (EDR) tools (CrowdStrike Falcon, SentinelOne, Microsoft Defender for Endpoint) have Mimikatz-specific signatures and behavioral detections that flag LSASS process access from non-system processes; this is one of the most heavily detected attacker techniques in the EDR ecosystem.
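The EDR detection logic for this step reduces to a simple rule over process-access telemetry: almost nothing legitimate reads LSASS memory, so any handle open on lsass.exe from outside a short allowlist is an alert. The event shape and allowlist below are illustrative stand-ins; real EDRs consume kernel-level telemetry, not dictionaries.

```python
# Sketch of EDR-style LSASS-access detection: flag any process touching
# lsass.exe that is not on a small allowlist of expected system/AV readers.
# Event records and the allowlist are illustrative, not real telemetry.

LSASS_READERS_ALLOWLIST = {"csrss.exe", "wininit.exe", "MsMpEng.exe"}

def detect_lsass_access(events):
    """Return source processes that opened lsass.exe and are not allowlisted."""
    return [
        e["source"] for e in events
        if e["target"].lower() == "lsass.exe"
        and e["source"] not in LSASS_READERS_ALLOWLIST
    ]

events = [
    {"source": "MsMpEng.exe",    "target": "lsass.exe"},   # AV engine: expected
    {"source": "mimikatz.exe",   "target": "lsass.exe"},   # alert
    {"source": "powershell.exe", "target": "notepad.exe"}, # irrelevant
]
print(detect_lsass_access(events))  # ['mimikatz.exe']
```

In practice Mimikatz is rarely this polite about its process name, so the production versions of this rule key on handle access rights and call stacks rather than filenames — but the allowlist-the-readers structure is the same.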
Step 4 — Lateral Movement to Waylon Smithers’ Machine
With the shared local admin hash in hand, Frank passes the hash to authenticate to WS-ULLMAN (172.16.10.42) — Waylon Smithers’ workstation. Pass-the-hash attacks work because NTLM authentication doesn’t require Frank to know the plaintext password — he only needs the hash, which Mimikatz already gave him. Smithers’ machine is the pivot point Frank needs because it contains something Homer’s doesn’t: an unprotected SSH private key.
What to Show in Your Diagram
Draw an arrow from HS-CRBNBLB (172.16.22.4) to WS-ULLMAN (172.16.10.42). Label it: Pass-the-hash — NTLM authentication — shared local admin credential (SMB/445 or WinRM/5985). Both machines are on the corporate LAN segment. This connection should be shown as staying within your corporate network boundary — which is the problem. Frank didn’t have to cross a security perimeter to reach Smithers’ machine because both machines are on the same flat network segment with no east-west controls.
Network segmentation is the architectural control here. A flat network where Homer’s workstation can directly authenticate to Smithers’ workstation over SMB or WinRM is inherently permissive. Implementing micro-segmentation — for example, using host-based firewalls enforced by Group Policy to restrict which hosts can connect to which services on which ports — would have blocked Frank’s lateral movement even after he had valid credentials. East-west network detection using tools like Vectra AI or Darktrace identifies anomalous internal authentication patterns, such as a workstation initiating SMB connections to multiple other workstations in a short timeframe. LAPS again is relevant here as the structural fix that would have made the stolen credential useless on the second machine. The principle of least privilege applied to local accounts — ensuring that local administrator accounts cannot authenticate remotely, enforced via the “Deny access to this computer from the network” Group Policy setting for local admin accounts — is a specific hardening measure that would have stopped pass-the-hash lateral movement regardless of whether LAPS was deployed.
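The east-west detection pattern those tools implement can be sketched as a fan-out rule: a workstation authenticating to several distinct peers over SMB inside a short window is a lateral-movement signal, because ordinary workstations almost never do that. The log format, window, and threshold below are illustrative assumptions.

```python
# Sketch of an east-west fan-out heuristic: alert when one source hits
# >= `threshold` distinct SMB (445) targets within `window` seconds.
# Event tuples and thresholds are illustrative, not a vendor rule.
from collections import defaultdict

def smb_fanout_alerts(auth_events, window=300, threshold=3):
    by_source = defaultdict(list)
    for ts, src, dst, port in auth_events:
        if port == 445:
            by_source[src].append((ts, dst))
    alerts = []
    for src, hits in by_source.items():
        hits.sort()
        for start_ts, _ in hits:
            targets = {d for t, d in hits if start_ts <= t < start_ts + window}
            if len(targets) >= threshold:
                alerts.append(src)
                break
    return alerts

log = [
    (10, "172.16.22.4", "172.16.10.42", 445),  # Homer -> Smithers
    (40, "172.16.22.4", "172.16.10.43", 445),  # Homer -> two more peers
    (70, "172.16.22.4", "172.16.10.44", 445),
    (90, "172.16.10.9", "172.16.10.50", 445),  # single connection: normal
]
print(smb_fanout_alerts(log))  # ['172.16.22.4']
```

In the scenario Frank only needed one pivot, so a pure fan-out rule might miss him; that is why the mapping pairs detection with the structural fixes (LAPS, deny-network-logon for local admins) that make the single pivot fail outright.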
Step 5 — SSH Key Abuse and the Jump Box That Wasn’t Jumping Anywhere Safe
On Smithers’ machine, Frank finds an unprotected SSH private key file. No passphrase. Full access to SCRATCHY (10.253.65.85), the SSH jump box that bridges the corporate network to the SCADA systems network. Frank opens PuTTY, points it at SCRATCHY, authenticates with Smithers’ private key, and he’s in. He’s now one network hop away from the reactor.
What to Show in Your Diagram
Draw an arrow from WS-ULLMAN (172.16.10.42) to SCRATCHY (10.253.65.85). Label it: PuTTY SSH authentication — unprotected private key (SSH/TCP 22). Show SCRATCHY in a separate network zone — it sits between the corporate LAN and the SCADA network, and your diagram needs to represent that boundary. SCRATCHY should be the only node that bridges the two network segments, which makes the absence of compensating controls on that bridge point a visible design failure in your diagram.
SSH private keys stored in cleartext on user workstations without passphrases represent a fundamental credential management failure. The fix has two layers: first, SSH private keys used for privileged access should always be passphrase-protected, reducing their usefulness even if stolen. Second, secrets management practices — policies and tooling (HashiCorp Vault, CyberArk) that control where privileged credentials can be stored and accessed — should have prevented Smithers from keeping an unmanaged private key on his endpoint at all. The jump box itself should be hardened: Privileged Access Workstation (PAW) architecture means that access to SCRATCHY should only be possible from a dedicated, hardened administrative workstation — not from a standard user workstation like WS-ULLMAN. Privileged Access Management (PAM) solutions (CyberArk, BeyondTrust) can enforce session recording and credential vaulting for all jump box sessions, meaning every login to SCRATCHY would have generated a logged, auditable session that a SIEM could alert on. Multi-factor authentication required at the jump box would have stopped Frank even with a valid private key.
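A compliance scan can find Smithers' exact mistake mechanically: classic PEM-format private keys carry a `Proc-Type: 4,ENCRYPTED` header when passphrase-protected, so a key file without it is stored in the clear. The sketch below covers only that PEM case (newer OpenSSH-format keys need a different check), and the key bodies are fabricated samples.

```python
# Sketch of a credential-hygiene check: a classic PEM private key without
# the "ENCRYPTED" header has no passphrase. Covers PEM format only; the
# sample key texts below are fabricated for illustration.

def pem_key_is_unprotected(key_text: str) -> bool:
    return "PRIVATE KEY" in key_text and "ENCRYPTED" not in key_text

unprotected = (
    "-----BEGIN RSA PRIVATE KEY-----\n"
    "MIIEfakekeybody\n"
    "-----END RSA PRIVATE KEY-----"
)
protected = (
    "-----BEGIN RSA PRIVATE KEY-----\n"
    "Proc-Type: 4,ENCRYPTED\n"
    "DEK-Info: AES-128-CBC,fakesalt\n"
    "-----END RSA PRIVATE KEY-----"
)
print(pem_key_is_unprotected(unprotected))  # True
print(pem_key_is_unprotected(protected))    # False
```

A scheduled scan like this across user home directories is a cheap detective control, but it is secondary to the structural fix: keys for privileged access belong in a vault (CyberArk, HashiCorp Vault), not on an endpoint at all.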
Step 6 — Scanning the SCADA Network and Connecting to the Reactor via Telnet
From SCRATCHY, Frank runs Nmap against the SCADA network range (1.1.0.0/23) scanning for open TCP/666. He finds SIDESHOW-90 (1.1.1.230) listening. He connects via Telnet — no password required. He now has unauthenticated, interactive access to a nuclear reactor control system. This is the moment the exercise stops being funny.
What to Show in Your Diagram
Draw two elements for this step. First, show an Nmap discovery sweep from SCRATCHY across the 1.1.0.0/23 range — a broad scan arrow labeled Nmap port scan — TCP/666 — SCADA network. Second, draw a specific connection from SCRATCHY (10.253.65.85) to SIDESHOW-90 (1.1.1.230) labeled: Telnet/TCP 666 — unauthenticated access — no password. The SCADA network segment (1.1.0.0/23) should be visually isolated in your diagram — a clearly bounded, separated network zone that is only reachable via SCRATCHY. That isolation is supposed to be the security control. Frank just walked through it.
The SCADA/ICS security failures here are multiple and compounding. The foundational failure is that the SCADA network is reachable from a jump box that is itself reachable from the corporate LAN — even via jump box, this architecture violates the principle of defense-in-depth for critical infrastructure. ICS networks controlling physical processes should be air-gapped or at minimum defended by a unidirectional security gateway (data diode) rather than a bidirectional SSH-accessible jump box. The use of Telnet — an unencrypted, unauthenticated protocol — on a reactor control system in any production environment is an indefensible configuration. Telnet should be disabled and replaced with SSH or proprietary secure industrial protocols. The complete absence of authentication on TCP/666 is a critical configuration failure; SCADA system access controls should require authentication at minimum, and ideally certificate-based or MFA-backed authentication. Network intrusion detection specifically tuned for ICS protocols (Dragos, Claroty, Nozomi Networks) can detect anomalous traffic patterns on SCADA networks — including scan activity from the jump box, which should never be scanning for open ports on production control systems. The Nmap scan itself would have been detectable had any monitoring existed on the SCADA network segment.
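Passive ICS monitoring works because OT traffic is engineered and repetitive: the set of legitimate (source, destination, port) flows on a SCADA segment is small and known, so anything outside that allowlist is alert-worthy by default. A minimal sketch of that allowlist model, with an assumed baseline in which only SSH from the jump box is sanctioned:

```python
# Sketch of the allowlist model behind passive ICS monitoring (the
# Dragos/Claroty-style approach): on an OT segment every flow outside a
# small engineered baseline is an alert. The baseline here is an assumption
# made for illustration, not part of the exercise spec.

ALLOWED_FLOWS = {
    ("10.253.65.85", "1.1.1.230", 22),  # assumed: SSH from jump box only
}

def scada_flow_alerts(flows):
    """Return every observed flow not in the engineered baseline."""
    return [f for f in flows if f not in ALLOWED_FLOWS]

observed = [
    ("10.253.65.85", "1.1.1.230", 22),   # baseline traffic: quiet
    ("10.253.65.85", "1.1.1.230", 666),  # Telnet to the reactor: alert
    ("10.253.65.85", "1.1.0.17", 666),   # scan probe of another SCADA host
]
for alert in scada_flow_alerts(observed):
    print("ALERT:", alert)
```

Frank's Nmap sweep would light this up dozens of times over — one alert per probed host — which is the point: on an OT network, scan traffic has no legitimate explanation.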
Step 7 — Malware Placement on the Reactor System
With interactive access to SIDESHOW-90, Frank plants malware designed to manipulate the reactor’s core temperature over the next 30 days. This is a time-delayed, physically consequential payload. The 30-day delay is deliberate — it gives Frank time to exit the network and create deniability while the physical sabotage executes weeks later. This is the OT (Operational Technology) kill phase of the attack.
What to Show in Your Diagram
Show a malware artifact node on SIDESHOW-90 with an annotation: Time-delayed malware — core temperature manipulation — 30-day trigger. You can represent this as an action performed on the reactor node itself. The diagram should visually communicate that this is the objective of the entire attack chain — everything before it was movement, this is impact. Some students also add a secondary arrow showing the malware’s downstream physical consequence (reactor core → temperature control system) to illustrate the OT impact pathway.
Defending against malware on an ICS endpoint requires OT-specific endpoint security — traditional IT endpoint detection tools often cannot run on industrial control system hardware due to the constrained, deterministic operating environments of SCADA systems. OT-aware security platforms (Dragos Platform, Claroty, Fortinet OT Security) can monitor process control network traffic for anomalous commands and unexpected file transfers to ICS endpoints without requiring an agent on the controller itself. Application whitelisting on any general-purpose computing components of the SCADA system (using tools like Tripwire or Symantec Critical System Protection) would have prevented unauthorized file writes. Integrity monitoring on the SCADA system — hashing of system files and comparison against known-good baselines — would detect the malware placement within any subsequent baseline check cycle. The 30-day delay between placement and execution is a detection opportunity: regular OT system audits and configuration management processes (NERC CIP standards require this for critical infrastructure) should catch unauthorized files before the trigger fires. This step also reinforces the argument for network-level controls: if the Telnet connection to SIDESHOW-90 had been blocked, Frank could never have staged the malware regardless of what else he had compromised.
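The integrity-monitoring control is straightforward to express: hash every monitored file against a known-good baseline and report anything added or changed. The sketch below shows that mechanism; the paths are illustrative, and real OT integrity checks target controller project files and HMI binaries rather than arbitrary paths.

```python
# Sketch of file-integrity monitoring: compare current file hashes against
# a known-good baseline and report additions/changes. Paths and contents
# are illustrative; real OT checks cover project files and HMI binaries.
import hashlib

def baseline(files: dict) -> dict:
    """files maps path -> bytes; returns path -> sha256 hex digest."""
    return {p: hashlib.sha256(data).hexdigest() for p, data in files.items()}

def integrity_diff(base: dict, current: dict):
    added = [p for p in current if p not in base]
    changed = [
        p for p in current
        if p in base and hashlib.sha256(current[p]).hexdigest() != base[p]
    ]
    return added, changed

known_good = baseline({"/scada/ctrl.bin": b"firmware-v1"})
now = {"/scada/ctrl.bin": b"firmware-v1", "/scada/tmp/implant.bin": b"payload"}
print(integrity_diff(known_good, now))  # (['/scada/tmp/implant.bin'], [])
```

The 30-day trigger delay means even a weekly baseline comparison would have surfaced the implant with time to spare — which is exactly the audit-cycle argument NERC CIP configuration-management requirements formalize.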
Step 8 — Ransomware on the Way Out
Frank steps back through his attack chain and drops ransomware at each node on the way out. This is a secondary objective — the reactor malware is the primary payload, the ransomware is chaos designed to complicate incident response, obscure forensic evidence, and extract additional value from the compromise. It also tells you something about Frank’s tradecraft: he’s burning the infrastructure on exit because he doesn’t plan to need it again, or he wants to make attribution and analysis as difficult as possible.
What to Show in Your Diagram
Show ransomware deployment as a reverse traversal of the attack chain — draw ransomware artifact icons (or “encrypted” annotations) on each node Frank passes through on exit: SIDESHOW-90 → SCRATCHY → WS-ULLMAN → HS-CRBNBLB. Use a different arrow style or color (dashed or red) to distinguish the ransomware deployment path from the initial attack path. Some students use a reverse-direction overlay on the original attack chain arrows with ransomware labels — either approach works as long as the bidirectional nature of Frank’s movement is clear.
Ransomware defense is a layered problem. Backup and recovery infrastructure — immutable, offline or air-gapped backups of all critical systems tested regularly for restoration — is the single most important recovery control. Without it, ransomware is catastrophic; with it, it’s painful but survivable. EDR platforms with behavioral ransomware detection (file encryption activity monitoring, honeypot files that trigger alerts when modified) can interrupt active ransomware execution in real time before full encryption completes. Network detection and segmentation controls are also relevant here: a ransomware payload that can propagate laterally across the same flat network Frank used for lateral movement in Step 4 will spread faster and further than one constrained to a single network segment. The fact that Frank can reach four separate hosts to deploy ransomware is itself a segmentation failure — proper micro-segmentation would limit blast radius. Incident response planning — specifically, having a pre-defined playbook for ransomware response that does not depend on systems that may themselves be encrypted — determines whether the organization can contain and recover quickly or is effectively paralyzed. The NIST Cybersecurity Framework’s “Respond” and “Recover” functions map directly to this step and are worth referencing in your mapping.
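One of the behavioral signals EDRs use to catch encryption in progress is worth seeing concretely: encrypted output is statistically near-uniform, so its Shannon entropy approaches 8 bits per byte, while working documents sit far lower. The thresholds below are illustrative, and real products combine this with write-rate, rename-pattern, and canary-file signals rather than entropy alone.

```python
# Sketch of an entropy-based ransomware signal: ciphertext is near-uniform
# random, so freshly written files with ~8 bits/byte entropy are suspicious.
# Thresholds are illustrative; real EDRs combine multiple signals.
import math, os

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

plaintext = b"quarterly reactor maintenance report " * 50
ciphertext = os.urandom(2048)  # stand-in for an encrypted file

print(byte_entropy(plaintext) < 6.0)   # True: structured English text
print(byte_entropy(ciphertext) > 7.5)  # True: near-uniform bytes
```

A monitor that sees a process rewriting hundreds of files whose entropy jumps from ~4 to ~8 bits per byte can kill the process mid-run, which is why EDR interruption plus immutable backups together turn Step 8 from catastrophic into recoverable.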
How to Write the Defensive Controls Mapping Section
The exercise asks for paragraph-form explanations, not bullet points. That’s an explicit signal about the depth expected. A bullet-point list of tool names is a starting point, not a completed answer. What the grader wants to see is that you understand the mechanism by which each control operates — not just that you know the name of the tool.
The structure that works best for each step: start with the attack technique and why it was successful (what failure enabled it), then name the primary preventive control and explain how it would have blocked the attack before it happened, then name the primary detective control and explain what signal it would have generated, then briefly address any recovery or response controls relevant to that step. That four-part structure — why it worked, how to prevent it, how to detect it, how to recover — gives each mapping section a complete analytical arc.
Prevention Controls
Controls that stop the attack before it succeeds: email filtering, macro policy enforcement, LAPS, application whitelisting, network segmentation, strong authentication. Name the specific product or configuration setting, not just the category.
Detection Controls
Controls that generate alerts when the attack is in progress: EDR behavioral alerts, SIEM correlation rules, network traffic analysis, ICS-specific monitoring. Explain what specific indicator the control would have generated and where it would have appeared.
Response & Recovery
Controls that limit damage and enable recovery: immutable backups, incident response playbooks, network isolation capabilities, OT system restoration procedures. Particularly important for Steps 7 and 8 where the physical and data impact is most severe.
Using the MITRE ATT&CK Framework in Your Mapping
Referencing MITRE ATT&CK technique IDs in your defensive controls mapping is not mandatory in most course specifications, but it significantly strengthens your answer. It demonstrates that you can map observed attacker behavior to the standardized taxonomy used by real SOC teams, threat intelligence analysts, and security vendors. It also makes your defensive recommendations more precise — ATT&CK-aligned defenses can be cross-referenced against vendor mitigations and detection guidance documented directly in the framework.
Where Students Lose Marks on This Exercise
Diagram Without IP Addresses or Hostnames
A diagram that shows “attacker → Homer’s PC → Smithers’ PC → Jump Box → SCADA” with no hostnames, no IP addresses, and no protocol labels is a flowchart of the scenario, not a network diagram. The specific identifiers given in the exercise (HS-CRBNBLB, 172.16.22.4, etc.) exist to be used. Every node needs its exact hostname and IP as specified.
Instead
Every node carries both its hostname and IP. Every connection carries the protocol or tool. Network boundaries are drawn as labeled containers: “Corporate LAN (172.16.0.0/16)”, “SCADA Network (1.1.0.0/23)”. The diagram should be readable by a network engineer who hasn’t read the exercise description.
Defensive Mapping That Only Names Tools
“Deploy a SIEM. Use EDR. Implement MFA.” These are category names, not analytical responses. They don’t explain which specific product feature addresses this specific attack technique, what signal it generates, or why it would have been effective at this step rather than a different one.
Instead
“CrowdStrike Falcon’s LSASS protection module maintains a behavioral baseline for which processes are permitted to read LSASS memory. Mimikatz triggers this baseline violation immediately and generates a prevention event with process tree context — including the parent PowerShell session — that allows a SOC analyst to trace the full execution chain back to the original macro.” That paragraph addresses one specific technique with one specific control mechanism.
Ignoring the SCADA/OT Context
Students who treat Steps 6 and 7 like IT security problems miss the point entirely. Suggesting “patch the system” or “install antivirus” on a nuclear reactor control system demonstrates no understanding of OT/ICS security constraints — these systems often cannot be patched or run endpoint agents due to certification requirements, real-time processing constraints, and vendor support restrictions.
Instead
Address OT-specific controls: network-level monitoring (Dragos, Claroty) that operates passively without agents on the ICS endpoints, unidirectional gateways that prevent any traffic from flowing from the corporate LAN toward the SCADA segment, and NERC CIP compliance requirements for critical infrastructure that mandate specific security controls for systems like SIDESHOW-90.
Mixing Prevention and Detection Without Distinguishing Them
A mapping that blurs together “you could use X to detect or prevent this” without specifying which function X serves in which configuration makes it impossible for the reader to understand the actual control architecture being proposed.
Instead
Explicitly label which controls are preventive (would have stopped the attack from succeeding), which are detective (would have generated an alert after the attack began), and which are corrective or compensating. Real security architectures combine all three — your mapping should reflect that layered thinking.
Final Checklist Before You Submit
- Diagram includes all six named hosts with correct hostnames and IP addresses as specified in the exercise
- All three network segments (corporate LAN, jump box zone, SCADA network) are visually distinct and labeled
- Every connection between nodes is labeled with the specific protocol, tool, or technique used (not just “attack”)
- The CloudFront CDN relay architecture is shown as a two-hop chain, not a direct connection from Homer to Frank
- The defensive controls mapping covers all 8 attack steps — none skipped
- Each mapping entry is in paragraph form and explains the mechanism of each control, not just its name
- Prevention and detection controls are distinguished from each other in each mapping section
- SCADA/OT-specific controls (not generic IT security tools) are used for Steps 6 and 7
- LAPS is explicitly addressed in the Step 3 and/or Step 4 mapping as the specific control for the shared credential vulnerability
- Mimikatz is named specifically in Step 3 — not generically as “credential theft tool”
- The ransomware section addresses backup and recovery, not just prevention
- MITRE ATT&CK technique IDs are referenced where applicable (optional but recommended)