EXERCISE #039 • RANSOMWARE / CAD-DOWN

Nine Days Without CAD

The Bucks County ransomware attack and what it actually looks like to run a center by hand

📅 January 21–30, 2024 📍 Bucks County, Pennsylvania 🏛 Bucks County Dept. of Emergency Communications 🔒 Akira Ransomware • 130+ Agencies
CAD STATUS: OFFLINE  •  DAY 01 OF 09  •  FALLBACK: PEN/PAPER/RADIO  •  MDT: DARK  •  NCIC/CLEAN: NO ACCESS  •  RANSOM: NOT PAID
Days CAD offline: 9 • Agencies affected: 130+ • Ransom paid: $0 • 911 calls missed: 0 • PA Guard: deployed • Threat actor: Akira
Ransomware • CAD-Down • Cyber Infrastructure • Multi-Agency • Pre-Incident Planning • Pennsylvania • Manual Dispatch
📋
What Happened

On January 21, 2024, the Bucks County Department of Emergency Communications in Ivyland, Pennsylvania, discovered that its computer-aided dispatch (CAD) system was offline. The cause was a ransomware attack. The culprit, identified days later, was Akira — a ransomware-as-a-service operation active since March 2023 that had already hit governments, hospitals, and financial institutions across multiple countries. Akira accounted for roughly 12 percent of all ransomware incidents globally in January 2024.

Bucks County is not a small operation. The center serves a population of about 650,000 people and handles 911 calls for more than 130 police, fire, and EMS agencies. When the CAD went down, so did the in-vehicle mobile data terminals in patrol cars and fire apparatus, the alert apps that notify firefighters of active calls, station printers, and access to the National Crime Information Center and Pennsylvania’s Commonwealth Law Enforcement Assistance Network. Dispatchers lost their map overlays, their unit tracking, their pre-planned response templates, and their ability to digitally log incidents in real time.

What they did not lose was their phone system, their radio, and their training. For nine days, Bucks County dispatchers ran their center by hand — pen, paper, and spreadsheets. They received 911 calls, sorted which companies should respond, and relayed dispatch information over the radio the way centers operated before CAD existed. Fire chiefs reported slowdowns and friction. Nobody reported a catastrophic failure of emergency response. The center’s director, Audrey Kenny, told the public and partner agencies plainly: “If you call us for an emergency response, our dispatchers will get you the help you need.”

The county refused to pay the ransom. Refused to negotiate. Rebuilt from its own backups with help from the Pennsylvania National Guard, federal law enforcement, and forensic consultants. A forensic investigation found no evidence that any data had been extracted from the CAD system. Core functionality was restored on January 30 — nine days after the attack began. Full restoration of all connected systems took longer.

“It basically brought us to our knees.”

— IT Manager, Henry County, Tennessee, describing the 2016 ransomware attack on that county’s 911 center — one of the first in the country. Eight years later, Bucks County dispatchers would say the same thing about a much larger-scale version of the same problem.

The Bucks County attack did not happen in isolation. A month before the January 21 attack, on December 21, 2023, the county’s 911 system had experienced a separate technical incident that was resolved in hours. The attack itself was the second significant disruption in thirty days. Attorneys for Bucks County government advised staff not to speak publicly about the incident during the active investigation — meaning partner agencies received very limited information about the timeline, the attacker’s demands, or the restoration plan for the first several days of the outage.

This exercise is not primarily about cybersecurity. It is about what happens in your comm center on day one, day three, and day nine when your CAD is a black screen and your job is still to get help to people who are calling for it.

🕛
The Pattern: This Is Not New
PSAP Ransomware Incidents — Selected Timeline
Jun 2016
Henry County, TN — First major PSAP ransomware attack. Entry point: weak password left by a deceased former admin. CAD shut down. $1,000 ransom demanded. Refused. Three days on pen and paper. Rebuilt from scratch.
Mar 2018
Baltimore, MD — CAD system targeted after a technician left a firewall opening while troubleshooting. Attack repelled by city IT. FBI investigation initiated.
May 2023
Dallas, TX — Citywide ransomware attack took CAD offline. Courts closed. Manual dispatch for duration of outage. Dispatchers hand-wrote instructions for responding officers.
Jan 2024
Bucks County, PA — Akira ransomware. CAD offline nine days. 130+ agencies affected. MDTs dark. NCIC/CLEAN access lost. Pennsylvania National Guard deployed. No ransom paid. Rebuilt from backup.
Jan 2024
Fulton County, GA — LockBit 3.0 ransomware. Not a direct 911 center attack but paralyzed county government systems for weeks, affecting court operations and public safety infrastructure.
Jul 2025
Pennsylvania (statewide) — Not ransomware: an operating system defect caused a statewide 911 outage for several hours, forcing residents across the entire state to use non-emergency numbers. A system failure, not an attack — but the effect on dispatchers was identical.

Between 2016 and 2018 alone, cybersecurity firm SecuLore Solutions documented 184 cyberattacks on public safety agencies and local governments, with 911 centers directly or indirectly targeted in 42 of those cases. The pattern is consistent: PSAPs are targets because they are critical infrastructure, often underfunded for cybersecurity, and frequently running legacy systems with known vulnerabilities. The entry points vary — weak passwords, unpatched systems, phishing, vendor access — but the result is the same: CAD goes dark, and someone has to figure out how to keep the phones answered and the units moving.

Operational Timeline
January 21, 2024 — DAY 1 ATTACK
Bucks County Department of Emergency Communications CAD system goes fully offline. Ransomware attack confirmed. Phone and radio systems remain operational. Dispatchers immediately shift to manual fallback: pen, paper, and spreadsheets. MDTs in field units go dark. Station alert apps stop functioning. Station printers go offline. Access to NCIC and CLEAN databases cut.
January 21–22 — DAY 1–2 OPERATIONS
Dispatchers manually sort responding companies for each call — a process normally automated by CAD. Information that would display on an MDT screen is relayed verbally over the radio. Fire companies report that dispatches are slower due to the manual process. County spokesperson confirms the outage but provides no timeline for restoration. Partner agencies are advised that 911 remains operational. Attorneys advise county staff not to discuss details publicly.
January 22–23 — DAY 2–3 RESPONSE
Federal law enforcement, including the FBI, begins investigating. Pennsylvania National Guard cyber units deploy to assist with response and restoration. Forensic and legal consultants brought in. County confirms no indication that other county systems have been compromised beyond the CAD environment. No ransom amount disclosed. No indication whether contact has been made with the attackers.
January 24–28 — DAY 4–8 EXTENDED OUTAGE
No timeline for restoration provided to partner agencies. Fire chiefs describe slowdowns but characterize the situation as manageable. Police departmental data systems, which run separately from the county CAD, remain operational. One fire chief says the situation is “certainly creating a hassle for all involved” but does not believe public safety is severely impacted. County staff praised by partner agencies for their efforts under difficult conditions. Investigation continues. Akira group officially identified and disclosed to law enforcement partners.
January 29 — DAY 9 RESTORATION BEGINS
County publicly confirms Akira as the ransomware behind the attack. Notifies state and federal partners for situational awareness. Restoration work continues. County confirms no ransom payment made, no ransom negotiation entered, and no evidence that data was copied or extracted from the CAD system during the attack.
January 30, 2024 — DAY 10 AFTERNOON PARTIAL RESTORE
Core CAD functionality restored. Dispatchers use the automated dispatch system for the first time since January 21. NCIC and CLEAN database access restored. Alert apps for firefighters, station printers, and in-vehicle MDTs still not fully functional — timeline for full restoration not established. County announces it rebuilt from its own backups without paying the ransom.
📌 The Question Nobody Can Answer for You

Bucks County dispatchers ran for nine days without CAD. They kept the phones answered and the units moving. But the question this exercise is really asking is: does your center know how to do that? Not in theory. Not in a written plan. Does your current shift, tonight, know your fallback procedures well enough to execute them for nine days? When was the last time you practiced manual dispatch? Does your newest hire know how to sort responding companies without CAD? Do your partner agencies know your backup radio protocols?

💬
Discussion Questions

No right or wrong answers.

1. Your CAD goes offline right now. Walk through the first 10 minutes. Who does what, in what order, and what does a call look like when it comes in?

This question is designed to surface whether your fallback procedures are practiced or theoretical. The first 10 minutes of a CAD-down event are the most disorienting because every habitual workflow breaks simultaneously. Callers keep calling. Units keep moving. The radio keeps going. But the screen that organizes all of it is dark.

Walk your team through it literally: who picks up the call? How does that person record the address, the call type, the caller name? Where does that information go next? Who determines the appropriate response? How do they know which units are available if unit status is not displaying? How does dispatch information get to field units whose MDTs are also dark? How do you track what has been dispatched and what has not?

The answers reveal your gaps. Centers that have practiced this can describe a specific, workable process. Centers that have not practiced this tend to describe a process that works for one or two calls before it breaks down under volume. The Bucks County team did this for nine days. That does not happen without prior training on the fallback.
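
One way to make the fallback concrete is to list every field a paper call card has to capture: everything CAD normally records without anyone thinking about it. A minimal sketch of that record as a data structure (field names are illustrative, not any vendor's schema):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ManualCallCard:
    """One paper call card. Every field here is something a human must
    now write, read back, and keep current by hand. Illustrative only."""
    card_number: int                      # hand-assigned sequence number for the shift
    time_received: str                    # wall-clock time, written the moment the call lands
    address: str                          # verified verbally; no map overlay to confirm it
    call_type: str                        # e.g. "structure fire, two-story residential"
    caller_name: Optional[str] = None
    caller_phone: Optional[str] = None
    units_assigned: list[str] = field(default_factory=list)  # sorted manually, no CAD run cards
    time_dispatched: Optional[str] = None
    time_on_scene: Optional[str] = None
    time_cleared: Optional[str] = None
    notes: str = ""                       # updates relayed by radio, not MDT
```

Every field that stays blank under call volume is a step the manual process is dropping, and a training priority for the debrief.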

2. The Bucks County attack also took out MDTs, station alert apps, station printers, and NCIC/CLEAN access. These are four separate capabilities your field units rely on. How does your center communicate a change in any one of these to 130 partner agencies simultaneously and in real time?

This is a mass notification problem disguised as a technology problem. When CAD goes down, you need to reach every chief, every shift supervisor, and every unit in the field to tell them what is still working, what is not, and what the alternate procedures are. At 3 AM. With your CAD-integrated alerting system also offline.

Bucks County had to notify law enforcement, fire, and EMS chiefs across 130+ agencies that NCIC and CLEAN access were lost — meaning officers in the field running a license plate had to assume no information was coming back. That is a significant officer safety issue that required immediate communication through alternative channels.

Questions for your team: What is your current contact tree for mass notification to partner agency leadership? Is it in CAD, which would also be offline? Is it in a physical binder? Does your shift supervisor have phone numbers for key contacts in a format that does not depend on the system that just went down? How do you confirm receipt?
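
One common safeguard is keeping that contact list in a plain file outside the CAD environment and printing it on a schedule, so the binder copy never drifts far from reality. A minimal sketch, assuming a hypothetical contacts.csv with agency, role, name, and phone columns:

```python
import csv
from collections import defaultdict

def print_phone_tree(path: str = "contacts.csv") -> None:
    """Render a printable phone tree from a flat CSV export.
    The source file and the printout both live outside CAD."""
    groups: dict[str, list[tuple[str, str, str]]] = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            groups[row["agency"]].append((row["role"], row["name"], row["phone"]))
    for agency in sorted(groups):
        print(f"=== {agency} ===")
        for role, name, phone in sorted(groups[agency]):
            print(f"  {role:<22} {name:<25} {phone}")

if __name__ == "__main__":
    print_phone_tree()  # run monthly; print the output and put it in the binder
```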

3. The county’s attorneys advised staff not to speak about the incident during the active investigation. Partner agencies received very limited information for the first several days. What is the right communication posture during a CAD-down event, and who makes that call?

There is a real tension here. Legal counsel is right that public statements during an active cybersecurity investigation can compromise the investigation, reveal attack vectors to other threat actors, or create liability. At the same time, partner fire and EMS agencies operating without MDTs and without NCIC access need to know why, for how long, and what the alternate procedures are. Those two needs are not the same communication.

The answer most well-prepared centers settle on: separate the operational notification (what is down, what the fallback is, who to call with questions) from the public information (what is being said to media and the public). Operational notifications go to partner agency leadership immediately and continuously. Public information goes through the PIO with legal review. These run in parallel, not sequentially.

Who in your center has the authority to authorize operational notifications to partner agencies during a cybersecurity incident? Is that authority clearly documented, and does it survive a situation where your agency director is unreachable at 2 AM?

4. The county refused to pay the ransom and rebuilt from backups. That took nine days. What does your center’s backup infrastructure look like, and do you know how long restoration would take?

The decision not to pay the ransom was correct and ultimately successful. But it required nine days of manual operation. For some centers, nine days of manual dispatch at full call volume would be manageable. For others, it would be operationally catastrophic. The difference is almost entirely determined by two things: the quality of the backup infrastructure and the depth of the manual fallback training.

Ask your IT department: when was the CAD last backed up? Where are those backups stored, and are they isolated from the primary network (so ransomware that encrypts your CAD cannot also encrypt your backup)? How long would restoration take? Has that restoration ever been tested? What is the county or agency attorney’s guidance on ransom payment — is there a pre-established position, or will that decision be made in crisis by people who have never thought about it before?
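
Some of those questions can become a scheduled check instead of a one-time audit. A minimal sketch, assuming a hypothetical isolated backup mount and a manifest of expected SHA-256 checksums; it verifies freshness and integrity only, since restorability is proven only by an actual test restore:

```python
import hashlib
import json
import time
from pathlib import Path

BACKUP_DIR = Path("/mnt/offline-backups/cad")  # hypothetical isolated mount
MANIFEST = BACKUP_DIR / "manifest.json"        # maps filename -> expected sha256 hex
MAX_AGE_HOURS = 26                             # alert if no new backup in roughly a day

def check_backups() -> list[str]:
    """Return a list of problems; empty means fresh and intact."""
    problems: list[str] = []
    dumps = sorted(BACKUP_DIR.glob("*.dump"), key=lambda p: p.stat().st_mtime)
    if not dumps:
        return ["no backup files found"]
    age_hours = (time.time() - dumps[-1].stat().st_mtime) / 3600
    if age_hours > MAX_AGE_HOURS:
        problems.append(f"newest backup is {age_hours:.0f}h old")
    expected = json.loads(MANIFEST.read_text())
    for dump in dumps:
        # Fine for a sketch; stream the file in chunks for multi-GB dumps.
        digest = hashlib.sha256(dump.read_bytes()).hexdigest()
        if expected.get(dump.name) != digest:
            problems.append(f"checksum mismatch: {dump.name}")
    return problems

if __name__ == "__main__":
    for line in check_backups() or ["backups look fresh and intact"]:
        print(line)
```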

The Pennsylvania National Guard deployed to help Bucks County. Does your state have an equivalent capability, and do you know how to request it?

5. A fire chief in Bucks County said the manual fallback was “certainly creating a hassle for all involved” but did not believe public safety was severely impacted. On what day does that assessment change, and what would change it?

This is the hardest question in the exercise. Manual dispatch works. PSAPs did it for decades before CAD existed. But it works differently at different call volumes, under different staffing conditions, and with different levels of staff experience in manual procedures. The experienced dispatcher who worked before CAD existed handles a manual shift differently than the three-year dispatcher who has never dispatched without a computer.

The assessment changes when cumulative fatigue among dispatchers begins to affect judgment. It changes when a mass casualty incident arrives and the manual tracking system cannot scale. It changes when a dispatcher who is the only one on shift who knows the manual procedures calls in sick on day six. It changes when a partner agency makes a decision based on absent NCIC information that results in a bad outcome.

Bucks County got to day nine without a reportable failure. That is a credit to their training and their people. But the margin matters. How thick is your margin?

6. The Henry County, Tennessee attack in 2016 was traced to a weak password left by a deceased former system administrator. The Baltimore attack in 2018 came from a firewall left open during maintenance. What do these entry points have in common, and what do they tell you about your center’s most likely vulnerability?

Both entry points are human and procedural failures, not technical ones. The best firewall in the world does not protect against a technician who leaves it open. The strongest encryption does not stop an attacker who has valid credentials. The pattern across PSAP attacks is consistent: the technology is breached through a gap in the human system around it. Former employee accounts not deprovisioned. Vendor access not monitored. Phishing emails clicked by staff who have not been trained to recognize them. Software patches deferred because the center cannot afford downtime for maintenance.

For your center specifically: what happens to system access when an employee leaves? Does your IT have a documented offboarding procedure that includes credential revocation? When was the last time someone audited who has remote access to your CAD system and whether all of those access grants are still appropriate? Does your CAD vendor have remote access, and when was that access last reviewed?
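
Even without direct access to the directory, a quarterly export of accounts and last-login dates is enough to run this audit with a short script. A minimal sketch, assuming a hypothetical accounts.csv with username, last_login (ISO date), and status columns:

```python
import csv
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # the threshold is a policy choice, not a standard

def flag_stale_accounts(path: str = "accounts.csv") -> None:
    """Print every active account with no login inside the window.
    Each hit is either a person to confirm or a credential to revoke."""
    cutoff = datetime.now() - STALE_AFTER
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["status"].lower() != "active":
                continue
            if datetime.fromisoformat(row["last_login"]) < cutoff:
                print(f"REVIEW: {row['username']} last logged in {row['last_login']}")

if __name__ == "__main__":
    flag_stale_accounts()
```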

These are not glamorous questions. They are the questions that determine whether your center becomes the next entry in the historical table at the top of this exercise.

🎯
Supervisor Discussion Guide
When did your center last practice manual dispatch? Can you describe the procedure used, and would your current staff recognize it as something they have trained on?

Manual dispatch procedures exist on paper at virtually every center. The question is whether they have been practiced to the point where staff can execute them under stress, at volume, without coaching. Tabletop exercises where staff describe the procedure are not the same as actually running calls manually for two hours. If your most recent new hire has never dispatched without CAD, that is a training gap. If you have a CAD outage tomorrow, they will figure it out — but they will figure it out slower, and the margin for error during that learning curve is real.

Recommended action: schedule a two-hour manual dispatch drill during a low-volume period. Have dispatchers run calls with CAD screens turned off, using only phone, radio, and paper. Debrief on friction points. Those friction points are your training priorities.

Does your center have a written CAD-down protocol that includes notification procedures for partner agencies, and is that document stored somewhere that is accessible when CAD is offline?

A CAD-down protocol that lives inside your CAD system, on a CAD-connected computer, or on a shared drive that is part of the same network being encrypted is not accessible when you need it. The protocol needs to be physically present in the center — printed, laminated, in a binder on the wall — and it needs to include phone numbers for partner agency leadership that do not depend on a system that is currently offline.

The protocol should cover at minimum: manual call documentation procedures; radio dispatch procedures for units without MDTs; contact list for partner agency leadership; mass notification procedure; IT contact hierarchy; and the decision tree for whether to request mutual aid dispatch support from a neighboring center.

✏️
Your Notes
Knowledge Check
Operational Judgment
Question 1 of 5
Your CAD goes offline. A call comes in — structure fire, two-story residential, caller reports smoke visible. You have no CAD. What is your first dispatching action?
Question 2 of 5
During the Bucks County outage, NCIC and CLEAN database access was lost. An officer in the field asks dispatch to run a license plate. What is the correct dispatcher response?
Question 3 of 5
Bucks County refused to pay the ransom. Their CAD was offline for nine days. A neighboring center offers to take mutual aid calls during the outage. What is the most important information to provide that center before transferring call volume?
Question 4 of 5
The Henry County, Tennessee ransomware attack in 2016 was traced to credentials left behind by a deceased former system administrator. What is the most direct lesson for your center from that specific entry point?
Question 5 of 5
A fire chief in Bucks County said the manual dispatch situation was a hassle but did not believe it severely impacted public safety. From a training perspective, what does it mean when manual fallback is “manageable”?