Scaling AI Without Losing Control: Ownership, Identity, and Governance in Multi-Cloud Environments

Written by: Vishwa Pandagle
Cybersecurity Staff Editor
Key Takeaways
  • Admin access is granted temporarily to enable innovation, but those permissions are rarely revisited or revoked.
  • Lacking clear visibility into who deployed what and when, teams are forced to dig through logs.
  • Parulekar believes that controls need to operate in real time.
  • Security should be embedded into the development process with clear expectations about quality.
  • Parulekar, founder of Invi Grid Inc., says that AI is easy to demonstrate but much harder to trust in production.

In this International Women’s Day interaction under our LeadHer in Security series, Yogita Parulekar, CEO and Founder of Invi Grid Inc., discusses how organizations lose control over access, ownership, and configuration as AI and traditional workloads expand across multiple cloud environments.

Parulekar previously led security and IT at Suki and held leadership roles at Oracle, Pear Therapeutics, and ThreatMetrix, with earlier experience in technology risk at Ernst & Young. She explains that temporary admin access and fragmented tooling quietly erode visibility.

Over time, exceptions become embedded in operating models, and ownership blurs. What begins as speed and innovation gradually turns into sprawl and weakened governance.

She highlights how AI agents operate with their own identities and why governance must operate in real time. From a board perspective, she reframes AI oversight around accountability and measurable value. 

Vishwa: How do teams lose control over access, ownership, or configuration changes as organizations run AI and non-AI workloads across multiple cloud environments?

Yogita: As organizations accelerate both AI and traditional workloads across multiple clouds, control over access, ownership, and configuration breaks down, leading to compounding sprawl and operational chaos. 

Teams are pressured to deliver quickly, so admin access is granted “temporarily” to enable innovation, but those permissions are rarely revisited or revoked, so resources accumulate without clear ownership. 
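One lightweight way to keep "temporary" access temporary is to record an expiry with every elevation and sweep for overdue grants automatically. A minimal sketch, assuming hypothetical grant records (the field names are illustrative, not any specific cloud provider's IAM API):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical grant records; in practice these would come from your IAM system.
grants = [
    {"user": "alice", "role": "admin",
     "granted_at": datetime(2024, 1, 10, tzinfo=timezone.utc), "ttl_days": 7},
    {"user": "bob", "role": "admin",
     "granted_at": datetime(2024, 3, 1, tzinfo=timezone.utc), "ttl_days": 30},
]

def expired_grants(grants, now=None):
    """Return grants whose time-to-live has elapsed and should be revoked."""
    now = now or datetime.now(timezone.utc)
    return [
        g for g in grants
        if now - g["granted_at"] > timedelta(days=g["ttl_days"])
    ]

# Run on a schedule; anything returned gets revoked or explicitly re-justified.
for g in expired_grants(grants, now=datetime(2024, 2, 1, tzinfo=timezone.utc)):
    print(f"revoke {g['role']} from {g['user']}")
```

The point is not the specific code but the shape of the control: the expiry is captured at grant time, so revocation no longer depends on anyone remembering to revisit the exception.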

Over time, exceptions become embedded in the operating model, reinforced by the many different ways teams can provision infrastructure.

This makes it difficult to maintain a single source of truth. Lacking clear visibility into who deployed what and when, teams are forced to dig through logs, a reactive, time-consuming process that does not scale and offers little context about ownership or intent. 

As people transition roles or leave, remaining teams hesitate to touch unfamiliar resources, accelerating a downward spiral of sprawl, loss of control, and weakened governance.

Vishwa: How do existing infrastructure processes change when AI workloads are introduced?

Yogita: For AI workloads, provisioning decisions have a direct impact on model and agent performance. Change management must evolve as continuous training and retraining drive constant updates at speed and require controls that can keep pace. 

Governance and cost management quickly become mission-critical as resource consumption grows less predictable. Identity must also be rethought as AI agents and automated workloads operate with their own identities and make decisions on behalf of users.

Beyond infrastructure itself, organizations must strengthen data provenance and ownership by tracking what data is used and how. Further, context and usage have to be tracked to support the transparency, explainability, reliability, and trust of outcomes and results.

This requires new approaches to observability, logging, and monitoring built directly into AI workloads.

Vishwa: What approach can be used to define what actions AI can take, under which conditions, when deploying agentic AI in infrastructure workflows?

Yogita: When deploying agentic AI, the starting point is recognizing that agents act with real autonomy and on our behalf. Organizations remain accountable for their actions and must design oversight into how they operate. 

The concept of “human in the loop” needs to be defined more broadly. In some cases, it means humans are directly approving decisions or actions within workflows. In others, it means establishing the policies, constraints, and guardrails under which agents can operate independently.

Safeguards must exist for when agents behave unexpectedly, including rollback mechanisms and, where necessary, emergency stop capabilities. We want to ensure AI operates within clearly defined boundaries, with accountability, resilience, and human oversight built in.

Vishwa: Could you describe a possible approach to address governance gaps when automation executes faster than controls are reviewed or updated?

Yogita: When automation moves faster than governance can be reviewed or updated, the sustainable approach is to embed governance directly into the automation itself. Controls need to operate in real time. 

The best analogy is everyday automation: a door that locks automatically or a camera built into a doorbell. Security is built into how the system functions. Infrastructure and developer workflows need the same model where guardrails, approvals, and policy enforcement are integrated into the automation so teams can move quickly without losing control. 
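Guardrails embedded in the pipeline might look like a policy check that runs before any configuration is applied, failing fast instead of alerting after the fact. A minimal policy-as-code sketch, where the rules and config fields are assumptions for illustration:

```python
def check_policy(config):
    """Return a list of violations; an empty list means the change may proceed."""
    violations = []
    if config.get("public_access", False):
        violations.append("storage must not be publicly accessible")
    if not config.get("encrypted", True):
        violations.append("data at rest must be encrypted")
    if "owner" not in config:
        violations.append("every resource needs a named owner")
    return violations

# Run in CI/CD before 'apply': block the change if any rule fails.
request = {"name": "ml-training-bucket", "public_access": True, "encrypted": True}
problems = check_policy(request)
```

Like the self-locking door, the control fires as part of how the system works, so nothing reaches production in a state that would later need an alert.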

Vishwa: How do control failures affect efforts to reduce alert fatigue without losing visibility into risk?

Yogita: Alert fatigue is often the downstream effect of upstream control failure.

As alerts multiply, noise rises, teams lose clarity, and it becomes harder to distinguish what truly matters. Critical issues can hide in plain sight simply because the volume becomes overwhelming. Many organizations respond by layering on more tooling or AI to triage the flood, but that approach treats the symptoms rather than the cause.
The sustainable path is prevention.


Governance controls reduce the volume of risk entering the environment to keep visibility intact and alerts meaningful. Instead of forcing teams to sift through growing noise, strong preventive controls limit what generates alerts in the first place. That preserves focus, improves response, and allows visibility systems to highlight real threats rather than every preventable misstep. It’s the difference between continuously filtering noise and designing systems that prevent it from building at all.

Vishwa: From a board perspective, what questions should be asked about AI and cloud security risk, including ownership and how exceptions are approved?

Yogita: Boards have a responsibility to oversee AI and cloud risk on behalf of shareholders by focusing on accountability and value. Boards are expected to keep their "noses in, fingers out," asking the right questions without operating the business.

Two questions matter most, and they anchor AI in governance, not hype.

Vishwa: What makes a security leader effective when working with engineering teams?

Yogita: An effective security leader understands how engineering teams operate.

Engineers are rewarded for shipping features and delivering customer value, so security has to align with that reality rather than compete with it. Most teams want to build things that work and that customers trust rather than ship unstable products or spend time fixing avoidable issues. 

The role of a security leader is to reframe security as an enabler of that outcome, not a blocker. Security should be embedded into the development process with clear expectations for how quality is measured. 

In the best engineering cultures, security issues are addressed as a normal part of development rather than separate work. This requires leadership alignment and frictionless execution. CEOs and CTOs must reinforce that secure code is a leadership priority and a shared responsibility. Security leaders then reinforce that message through partnership.

Automation, low-noise tooling, and workflows that minimize false positives allow security to operate at engineering speed. 

Vishwa: What problem do you think aspiring cybersecurity leaders should focus on addressing?

Yogita: Aspiring cybersecurity leaders should focus on the scale and trajectory of cyber risk itself. According to Statista, the global cost of cybercrime now exceeds $10 trillion (larger than the GDP of any country other than the U.S. and China!), and AI is accelerating both the speed and sophistication of attacks. We risk falling further behind if we continue relying on the same tools and approaches while expecting different outcomes.

AI is reshaping how every business operates, and it creates an opportunity to rethink how security operates as well. The next generation of successful leaders will focus on embedding security directly into business processes rather than treating it as a separate function layered on afterward. Preventing risk earlier, reducing friction for builders, and enabling innovation safely will define the future of effective cybersecurity leadership.
