How a Teen Found Cars in a Farming Community, Discovered a Passion for Networking, and Now Leads Security Where Employees Love to Experiment With AI Tools

Published
Written by:
Vishwa Pandagle
Cybersecurity Staff Editor
Key Takeaways
  • Robinson says that no AI agent should be used or deployed without security oversight. 
  • Netskope is aware that employees love to experiment with AI tools, often without letting IT or security teams know.
  • Robinson believes that the biggest risk in security is not the hacker, but the way we use tools.
  • To build a sound cybersecurity strategy, one must understand the attackers’ minds, putting oneself in their shoes.
  • Threat modeling involves examining apps and systems to understand how they work and to identify all potential risks. 

James Robinson, CISO at Netskope, did not start in cybersecurity but in fixing cars. Led by a curiosity for how things work, he started with wiring and networking, eventually moving into enterprise security. Robinson brings experience from Emerson and Optiv, with roles spanning product security, risk management, and enterprise security operations.

Today, he says the problem is that people work faster than policies can keep up. Employees use generative AI tools, send prompts, upload documents, and experiment in ways security teams cannot always see. No one intends harm, but the exposure still occurs.

Robinson warns that when visibility is lost, control becomes guesswork, and the pattern repeats with third-party access. New technologies are always a vector of new threats, making these gaps harder to manage.

Encryption protects data today, but attackers are already collecting what they plan to decrypt later. Security teams respond by adding tools, but more tools often mean less clarity.

Robinson’s approach stays consistent across these scenarios. Understand how systems are used, limit access to what is necessary, monitor behavior, and assume that human error will happen, then build around it.

Read on to follow a journey that begins in a farming community, moves through hands-on mechanical work, and leads to Cisco Net Academy and a happy career in cybersecurity.

Vishwa: Can you tell us a little bit about your journey to becoming a security professional and CISO?

James: As a teenager, I was living in a farming community, and because I liked to help with maintaining and fixing the family car, I ended up in mechanical school. At this stage, I was not exactly destined for a career in cybersecurity, but wiring quickly became the part that interested me most in mechanics, which led to a broader opportunity in computer networking. 

I started reading networking books with a passion few would probably have for this kind of “literature”, was able to join the Cisco Net Academy, and then landed a tech role at an insurance company. 

I rose through the echelons of Enterprise IT, focusing on networking and security architectures and strategy, and have been a happy security leader at Netskope for more than seven years! 

Vishwa: What is a key security threat or trend you think your counterparts should address as a priority this year?

James: I think genAI and agentic AI security is becoming really urgent. Our annual threat report shows that enterprise genAI usage is skyrocketing, with the proportion of workers using genAI tools on a monthly basis and the number of prompts sent to genAI tools growing threefold and sixfold, respectively, in 2025. 

As a result, attempts by employees to share sensitive data with genAI tools via prompts or documents doubled in the same period. Organisations that have no visibility into or control over the data their employees are sharing with genAI tools operate in a high-risk environment. And even those that do still need to quash shadow AI: almost half of AI users are still using personal genAI accounts at work. 

Security teams have limited to no visibility and control over this, so they need to ensure their staff shift to using corporate genAI accounts as soon as possible. 
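The shift Robinson recommends is easier when prompts are screened before they leave the corporate boundary. Below is a minimal, hypothetical sketch of such a pre-send check; the patterns and helper names (`scan_prompt`, `allow_send`) are invented for illustration, and a real data loss prevention engine uses far richer detection than a few regular expressions:

```python
import re

# Illustrative patterns only; a production DLP engine combines many more
# detectors (classifiers, fingerprinting, exact-match dictionaries).
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def allow_send(prompt: str) -> bool:
    """Allow the prompt out only if no sensitive pattern matches."""
    return not scan_prompt(prompt)
```

A check like this would sit in the path between the user and the genAI tool, so a blocked prompt never reaches a personal account in the first place.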

Vishwa: Are the risks and threats associated with the emergence of AI agents similar in nature to those of generative AI?

James: Yes and no. The broader objectives are the same, which are keeping sensitive data safe and eliminating Shadow AI. But the vectors of data exposure, and the way security teams address them, are different.

First of all, there shouldn’t be any usage or deployment of AI agents without security oversight. We know employees love to experiment, often without letting IT or security teams know. Gaining visibility into where AI agents are being deployed and by whom, and eliminating shadow AI, is an essential first step. 

From a data loss perspective, we have to consider AI agents’ behaviours. These machines have no ethical compass, so if you give them a mission and unleash them without security guardrails, they may do whatever is necessary to succeed, even through destructive means like exposing or erasing sensitive data. 

Security teams need to be involved in designing AI agents’ roles, adjusting their access levels to only the information they need to achieve their tasks, and deploying tools that can monitor their behaviours and detect any signs of compromise.
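Those three steps (defined roles, least-privilege access, behaviour monitoring) can be sketched as a simple guardrail around agent tool calls. The agent names, tool names, and `guarded_call` wrapper below are hypothetical and only illustrate the least-privilege pattern, not any specific product:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrail")

# Hypothetical per-agent allowlist: each agent may call only the tools
# its role requires (least privilege).
AGENT_PERMISSIONS = {
    "report-writer": {"search_docs", "summarise"},
    "ticket-triager": {"read_ticket", "set_priority"},
}

def guarded_call(agent: str, tool: str, invoke, *args, **kwargs):
    """Refuse and log any tool call outside the agent's allowlist."""
    allowed = AGENT_PERMISSIONS.get(agent, set())
    if tool not in allowed:
        log.warning("blocked: agent=%s tool=%s", agent, tool)
        raise PermissionError(f"{agent} may not call {tool}")
    log.info("allowed: agent=%s tool=%s", agent, tool)
    return invoke(*args, **kwargs)
```

The log lines double as the behaviour trail a monitoring tool would inspect for signs of compromise.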

Vishwa: Third-party compromise is often cited as the cause of cyber incidents and data breaches at major organisations. As digital estates and supply chains continue to expand with AI adoption, what kind of initiatives can security leaders take to reinforce their ecosystem’s security?

James: Addressing third-party compromise is complex. In many cases, attackers target partners, vendors, service providers and applications that collect, process and store large volumes of sensitive data from other organisations, but they can also leverage smaller partners with permissions to their end target’s systems to penetrate the organisation or deliver malware. 

Investigating new vendors’ and applications’ security and data protection standards should be systematic and non-negotiable if they collect and process data. Security teams should ask the tough questions, including on the policies of potential data sub-processors. 

A combination of strict access controls and authentication policies, a zero-trust framework, behaviour and threat monitoring, and data loss prevention will make it harder for threat actors to compromise a partner with permissions, and ensure that, if they do, they are detected quickly and have a limited ability to move and exfiltrate data in the meantime.
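As a rough illustration of the zero-trust idea, never trust a partner by default and check scope and expiry on every single request. The `PARTNER_GRANTS` record and `authorise` helper below are invented for the example:

```python
import time

# Hypothetical partner-access grants: a scope list plus an expiry time.
# Zero trust means every request is re-checked against these, never
# waved through because an earlier request succeeded.
PARTNER_GRANTS = {
    "vendor-a": {"scopes": {"read:invoices"}, "expires": time.time() + 3600},
}

def authorise(partner: str, scope: str, now=None) -> bool:
    """Allow a partner request only for a granted, unexpired scope."""
    grant = PARTNER_GRANTS.get(partner)
    if grant is None:
        return False
    now = time.time() if now is None else now
    return scope in grant["scopes"] and now < grant["expires"]
```

A real deployment would back this with signed tokens and central policy, but the shape of the check per-request, per-scope, time-bounded is the same.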

Vishwa: As a CISO, are you looking closely at quantum security? What can organisations do today to prepare for the quantum computing era?

James: I am getting systems and security ready for the quantum era, as it will be a long process. Security professionals should know about the “harvest now, decrypt later” threat by now: attackers steal encrypted data to decrypt it with quantum computers in the future, so preventing data breaches at all costs should be an immediate priority. 

They should also start an audit of their encryption standards (and those of their service and technology providers) and plan gradual upgrades to quantum-resistant algorithms, prioritising the most sensitive systems and databases. 

There are already five encryption standards available to organisations, thanks to a post-quantum cryptography standardisation project kicked off by the US National Institute of Standards and Technology in 2016. 

But this process can be cumbersome because these algorithms have new power and memory requirements, and building a quantum-resistant organisation will demand a migration away from legacy systems.
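An encryption audit like the one Robinson describes ultimately produces a prioritised migration queue. The sketch below is hypothetical (the inventory is hard-coded, whereas a real audit would pull it from configuration management and certificate scans), and it treats NIST's ML-KEM, ML-DSA, and SLH-DSA families as quantum-resistant:

```python
# Hypothetical inventory; a real audit would discover this automatically.
INVENTORY = [
    {"system": "customer-db", "algorithm": "RSA-2048", "sensitivity": "high"},
    {"system": "intranet-wiki", "algorithm": "RSA-2048", "sensitivity": "low"},
    {"system": "backup-vault", "algorithm": "ML-KEM-768", "sensitivity": "high"},
]

# NIST post-quantum families counted as safe; classical RSA/ECC are not.
QUANTUM_RESISTANT_PREFIXES = ("ML-KEM", "ML-DSA", "SLH-DSA")

def migration_queue(inventory):
    """Return at-risk systems, most sensitive first, for gradual upgrade."""
    at_risk = [s for s in inventory
               if not s["algorithm"].startswith(QUANTUM_RESISTANT_PREFIXES)]
    order = {"high": 0, "medium": 1, "low": 2}
    return sorted(at_risk, key=lambda s: order[s["sensitivity"]])
```

The output is exactly the "prioritise the most sensitive systems first" plan Robinson recommends, in a form a programme team can work through.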

Vishwa: You’ve spoken about the value of thinking maliciously. Alongside that mindset, what other skills or qualities do you think help security leaders and practitioners make better decisions and sustain long-term careers?

James: Sun Tzu said that if you want to win a battle, you have to understand yourself, but also your enemy. You can’t build a sound cybersecurity strategy without understanding the mind of an attacker, and putting yourself in their shoes. Developing the ability to think maliciously allows us to challenge our assumptions and strategies, and improve them accordingly. 

Another essential skill, in my opinion, is threat modeling, which very much complements malicious thinking. Threat modeling is the ability to look at an app, system, or service, understand how it works, and identify all the potential risks and vulnerabilities to either eliminate them, or develop a plan of action for each. 

The ability to adopt an attacker’s mindset in this process maximises the chances that we leave no stone unturned, though threat modeling also requires organisational skills and being good at articulating the different attack vectors to a non-technical audience if you need your plan of action approved by business executives. 
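The discipline Robinson describes can be illustrated with a minimal STRIDE-style enumeration: map each component to the threat categories it is exposed to, then emit one plan item per pair so nothing is left unaddressed. The components and mappings below are hypothetical:

```python
# The six STRIDE threat categories.
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege"]

# Hypothetical system components mapped to their applicable threats.
COMPONENTS = {
    "login-api": ["Spoofing", "Information disclosure", "Denial of service"],
    "audit-log": ["Tampering", "Repudiation"],
}

def action_plan(components):
    """One mitigation line per (component, threat) pair."""
    plan = []
    for comp, threats in components.items():
        for threat in threats:
            assert threat in STRIDE, f"unknown category: {threat}"
            plan.append(f"{comp}: mitigate {threat}")
    return plan
```

Enumerating pairs exhaustively is what turns "think maliciously" into a reviewable artefact that a non-technical executive can approve line by line.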

Vishwa: Looking across your career, including building a product security organisation at Emerson, what experiences most shaped how you evaluate risk today? What problem were you trying to solve at the time, and what guided your approach?

James: Emerson was really interesting in this regard because we broke the siloes between IT and product engineering to strengthen the security of our products. At the time, close collaboration between those two teams wasn't that common, and we had to learn each other’s working language and to align our priorities to make it work. 

The security of a product and the security of a network are two different things, so you have to change your mindset, and the experience added a whole new layer to how I evaluate risk. 

At Netskope, I added another string to my bow, because being a cybersecurity leader at a cybersecurity company, you have this dual role of vendor and user. Our team helps design and put Netskope security solutions to the test before we release them to our customers, and when groundbreaking technologies like genAI and agentic AI emerge, you have to constantly challenge what you think you know about risk to identify all the potential vulnerabilities and build products that address them.

Vishwa: Many organisations invest heavily in security controls. From your experience, how can leadership ensure those investments translate into better threat awareness across the business?

James: Employees are both cybersecurity’s most important and weakest component, and because of that, I think security investments that act as safety nets for human error are just as important as investments in education and awareness. 

The latter is essential: regular training should be combined with ongoing communication about security initiatives, emerging threats, and social engineering techniques. And employees’ knowledge should be regularly tested, with phishing simulations, for example. 

But in practice, human error is the source of more than two-thirds of cyber incidents, and this figure keeps increasing. This is not really a surprise with hyperconnected employees using data for various purposes and an ever-growing number of digital tools, including in blue-collar jobs. So deploying tools that prevent people from making mistakes, such as blocking the sharing of sensitive data with generative AI, is critical to a sound strategy. 

Vishwa: As digital environments expand through cloud and AI adoption, how should CISOs think about evolving threat management approaches?

James: New technologies are always a vector of new threats. With the adoption of cloud, many organisations kept adding new security tools designed to address cloud-specific risks, and with AI, we’re seeing the same pattern replicating. Unfortunately, this is not sustainable. 

Most organisations already have too many security tools that often overlap and don’t integrate well with each other. This is not only cumbersome to manage, but also financially inefficient, and more importantly, it creates a fragmented view of your overall security that gets in the way of efficient detection and response. 

Less is more, so I think in a context of the Internet, then cloud, and now AI sprawl, security teams should aim for leaner security operations through simplification and consolidation. Security has evolved a lot in recent years, and new frameworks and platforms can achieve what previously required multiple solutions. Opting for cloud-based security is also a way to ensure security capabilities continue to evolve as new AI and cloud threats emerge because the vendor is now responsible for those upgrades. 

Ultimately, a more integrated and simpler security stack will facilitate visibility, control, and threat management. 

Vishwa: Based on your interactions with different security teams over the years, what distinguishes teams that consistently learn from incidents from those that struggle to improve?

James: I believe a hundred percent of the teams that suffer cyber incidents learn lessons, but there’s a range of factors that influence whether they act on them or struggle to do so, and they often have little control over this. 

Cybersecurity should absolutely be a strategic priority, and some organisations do show complacency in that regard, but the majority, due to their financial health, size, or lack of skills, simply can’t afford it, or are not sure where to start. 

For those in this situation, there are a number of initiatives they can undertake.

