Too Many Alerts and Unassigned Responsibility: When Alerts Pile Up and Ownership Disappears, Security Starts Guessing
- Abstract Security sees too many alerts not as a volume problem but as a trust problem.
- Camacho notes that in recent cyber incidents, the first thing to break down is usually unclear ownership.
- Teams can work through a lot of data if they know who acts, who makes the call, and what the escalation path looks like.
- A lot of incidents become harder than they needed to be because the signal was there, but the path to action wasn't.
- Without a solid playbook and a tested IR plan, even a strong team with great tools is improvising under pressure.
Chris Camacho, Co-Founder and COO of Abstract Security, explains how common security challenges show up in day-to-day operations and why they persist across teams. Before Abstract Security, Camacho spent over a decade securing institutions like the World Bank and later led security operations at Bank of America.
He says that analysts start their day already behind, spending more time sorting alerts than investigating them. Managers see different decisions being made on similar signals, and executives rely on dashboards that suggest coverage even when important issues remain buried in noise.
In many cases, the problem is not missing data but unclear ownership, where teams hesitate because responsibilities are not defined. Camacho points out that incidents often become harder than they need to be because the signal is present, but the path to action is not.
Camacho also highlights hiring gaps, noting that security work depends on judgment under pressure rather than certifications, yet hiring still focuses on keywords instead of operating ability.
At the same time, teams are working within systems and processes they did not design, which adds friction to already complex decisions.
Read on to see how raw login data stops being just a record and becomes useful, turning into a lead informed by signals like unusual access and impossible travel.
Vishwa: You've led security operations at Bank of America and now focus on reducing noise at Abstract. What does "too many alerts" look like in a real workday, based on who sees it?
Chris: It depends on where you sit.
- For an analyst, it means starting the day already behind. You spend more time sorting than investigating: clicking through duplicates, chasing low-context signals, trying to decide what's real before you can even begin to respond.
- For a manager, it shows up as inconsistency. One person closes something fast, another escalates the same thing, and nobody feels confident that the team is applying the same judgment every time.
- For an executive, it usually looks like a false sense of coverage. The dashboards say a lot is being detected, but volume is not the same as clarity. If the team is overwhelmed, important things still sit in the noise.
That's why "too many alerts" isn't really a volume problem. It's a trust problem. When people stop trusting what's in front of them, speed drops, quality drops, and the whole operation becomes reactive.
Vishwa: Looking at recent cyber incidents, where do you think things break down first: too much data, slow response, or unclear ownership?
Chris: Most of the time, unclear ownership.
Teams can work through a lot of data if they know who acts, who makes the call, and what the escalation path looks like. They can even recover from slower tooling if responsibilities are clear. But when ownership is vague, every other weakness gets amplified.
That's when you hear the familiar questions.
- Is this security's issue or IT's?
- Who owns the identity layer?
- Who approves containment?
- Who contacts the business?
A lot of incidents become harder than they needed to be because the signal was there, but the path to action wasn't. Data matters and speed matters, but ownership is what turns information into movement.
Every company is going to have incidents. That's the reality. The difference is whether you've done the work ahead of time. A solid playbook and a tested IR plan are what separate a rough day from a real crisis.
Without those, even a team with great people and great tools is improvising when it matters most.
Vishwa: At Abstract, you talk about turning security data into something usable. Can you walk us through an example of raw data vs what useful looks like?
Chris: Raw data is activity without enough context to support a decision.
Take a login event. On its own, it tells you a user authenticated from a certain IP at a certain time. Technically correct. Doesn't tell an analyst much about whether it matters.
Now shape that into something operational. The same event, but now you know it was a privileged user, from a device the company has never seen, tied to impossible travel, followed by unusual access to sensitive resources, outside normal behavior for that person. That's not a record anymore. It's a lead.
Security teams don't need more records. They need better starting points. The goal is to shrink the distance between collection and judgment.
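The enrichment Camacho describes can be sketched in a few lines. This is a minimal, hypothetical illustration, not Abstract Security's implementation: the privileged-user list, known-device map, and precomputed geo distance are stand-ins for whatever identity and asset context a real pipeline would pull in.

```python
from datetime import datetime, timezone

# Hypothetical context sources; a real pipeline would query identity
# and asset systems rather than use hard-coded sets.
PRIVILEGED_USERS = {"alice.admin"}
KNOWN_DEVICES = {"alice.admin": {"laptop-01"}}

def enrich_login(event, last_login):
    """Turn a bare authentication record into a scored lead."""
    signals = []
    if event["user"] in PRIVILEGED_USERS:
        signals.append("privileged user")
    if event["device"] not in KNOWN_DEVICES.get(event["user"], set()):
        signals.append("never-seen device")
    # Impossible travel: the distance from the previous login would
    # require moving faster than a commercial aircraft (~900 km/h).
    hours = (event["time"] - last_login["time"]).total_seconds() / 3600
    if hours > 0 and last_login["distance_km"] / hours > 900:
        signals.append("impossible travel")
    # Two or more independent signals promote the record to a lead.
    return {"event": event, "signals": signals, "lead": len(signals) >= 2}

event = {
    "user": "alice.admin",
    "device": "unknown-tablet",
    "time": datetime(2024, 5, 1, 9, 0, tzinfo=timezone.utc),
}
last_login = {
    "time": datetime(2024, 5, 1, 7, 0, tzinfo=timezone.utc),
    "distance_km": 6000,  # precomputed distance from the previous login IP
}
result = enrich_login(event, last_login)
print(result["signals"])
# ['privileged user', 'never-seen device', 'impossible travel']
```

On its own, the raw event is just a row; layered with context, the same row carries three signals and a decision-ready flag, which is the shrinking of "distance between collection and judgment" described above.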
Vishwa: You've said the problem isn't just threats but the disconnect between data and action. Where does that disconnect happen?
Chris: It happens at the handoff points.
- First, between collection and interpretation. Organizations gather huge amounts of telemetry, but much of it arrives incomplete, duplicated, or disconnected from business context. The data exists but it's not ready to support action.
- Second, between detection and ownership. An alert fires, but the next step is unclear. Who validates it? Who enriches it? Who decides whether it's urgent? If that chain isn't disciplined, good detections still stall.
- Third, between the security team and the rest of the business. Security may know something is wrong, but if the operational owner of the affected system is hard to find or slow to engage, response drags.
It's rarely one dramatic failure. It's a series of small gaps between systems, teams, and decisions that add up. This is why having a data strategy matters so much. A lot of organizations have never really defined one for security.
At Abstract, that's where we start. We focus on understanding where an organization's data strategy is today and helping them mature it.
When you get the data foundation right, the handoffs get cleaner and the gaps start to close.
Vishwa: You've worked across both enterprise and vendor sides. What changed for you when you saw the same problem from the outside looking in?
Chris: I saw how often security teams were being asked to compensate for design decisions they didn't make.
Inside an enterprise, you live the operational pain directly. The staffing limits, the tool sprawl, the political friction, the pressure to keep the business moving while reducing risk.
From the outside, I saw that many of these teams weren't failing because they lacked talent. They were working inside architectures and processes that made good outcomes harder than they should have been: stitching together data, translating between tools, and filling gaps manually.
That reinforced something I've believed for a long time. Security teams don't need more complexity. They need fewer barriers between signal, decision, and action. When you see the same pattern across enough organizations, you realize the problem is structural, not individual.
Vishwa: You've spent over two decades in this space. If you walked into a security team tomorrow that's clearly struggling, what's the first thing you would fix?
Chris: I'd go back to the basics first.
A lot of teams have gone so deep with outdated processes and legacy vendors that they've lost sight of the fundamentals. They're maintaining complexity instead of managing risk. So the first thing I'd do is ask simple questions. What matters most? What are we actually getting value from? Where are we just going through the motions?
From there, I'd take a business view of the program. Security has to be a business enabler, not a blocker. That's always been true, but it's especially urgent right now with how fast AI is moving into enterprises.
If the security team is seen as the department that slows everything down, they lose influence and they lose trust. The teams that stay relevant are the ones that figure out how to protect the business while keeping pace with it.
The fastest improvement usually doesn't come from adding more tooling. It comes from cutting what isn't working, getting back to basics, and making sure the program is aligned with where the business is headed.
Vishwa: Having built NinjaJobs and worked closely with hiring, what mistakes do companies make when hiring for security roles? Does it reflect in their work?
Chris: The most common mistake is hiring for keywords instead of operating ability.
A resume can check every box and still not tell you whether someone can make decisions under pressure, write clearly, work across teams, or separate signal from noise. Security is full of situations where judgment matters more than certifications.
Another mistake is writing job descriptions for unicorns. Companies combine three or four roles into one, then wonder why the person struggles or burns out. That usually reflects a lack of clarity about what the team actually needs.
Those hiring mistakes absolutely show up in the work. Brittle processes, weak escalation habits, inconsistent investigations, and teams that are technically busy but operationally stuck.
The best security teams I've seen are built around people who think clearly, communicate well, and stay calm in messy situations. Tools and training matter, but those core traits show up fast when things get real.