Celebrating women leaders in cybersecurity as part of TechNadu’s International Women’s Day campaign, we spoke with Harriet Farlow, CEO and Founder of Mileva Security Labs, about committing to AI security years before it became a boardroom priority.
Farlow was delivering dedicated AI security talks at mainstream conferences in 2021 when few were focused on the issue. She later led the development of the first AI security framework and training program in an Australian Government department.
Her career spans the Australian Signals Directorate, Deloitte Australia, and senior roles across the Australian Public Service, grounding her AI security work in national security and enterprise risk.
Drawing on her experience as acting Technical Director at the Australian Signals Directorate, Farlow now works to translate national security standards into practical AI controls for civilian organizations.
In this conversation, she discusses leadership, executive literacy gaps, and why securing the social layer of AI may matter as much as defending models from attackers.
Vishwa: Looking back on your career, what has been the most difficult hurdle you had to overcome?
Harriet: One of the hardest parts of my career was speaking about AI security long before the market was ready to hear it. I was often the only person giving dedicated AI security talks at mainstream cybersecurity conferences back in 2021, and it often felt like I was shouting into a void. Building a company in that environment is challenging — I knew the demand would come, but I was trying to bootstrap my way there organically without VC backing, riding the very lumpy growth curve of an emerging field.
In 2025 we won a major government contract to develop the first AI security framework and mandatory training program in an Australian Government department outside national security, which was an enormous milestone — but because payment was only received at the end of delivery, I had to sell my house to keep the company afloat and ensure my team were paid.
That was a massive personal sacrifice. But I did it because I genuinely believe AI security is one of the defining challenges of our generation, and I was not prepared to give up.
Vishwa: How did your cybersecurity journey and founding Mileva Security Labs change your perspective on leadership, and the kind of problems worth solving?
Harriet: Leadership at the frontier is very different from leadership in an established field — there is no template, no playbook, and often no validation for years. Founding Mileva forced me to learn how to lead when the market itself is still forming. I’ve been deeply committed to building an impact-driven business, which is why we are not VC-backed; I didn’t want pressure to prioritise short-term profit over long-term security outcomes.
That decision shapes everything — from how we grow to how we hire. I’ve built the company very intentionally, and throughout our journey, our team has consistently been more than 80% women. For me, leadership is about setting a direction based on values and then having the courage to stay the course, even when it would be easier to compromise.
The problems worth solving are the ones that matter even if they’re not yet fashionable.
Vishwa: What factors or experiences inspired you, and what mark do you want to leave on the field?
Harriet: My time as acting Technical Director at the Australian Signals Directorate profoundly shaped me. In national security environments, adversary behaviour around AI is taken seriously — exploitation is assumed, not dismissed.
What unsettled me was seeing how far behind the civilian sector was, even as organisations rapidly deployed powerful AI systems. That disconnect inspired me to build something that translated national security rigour into practical controls for everyone else. Speaking on the DEF CON main stage in 2024 about my original research attacking computer vision models was a surreal moment — a signal that the field had matured.
Being profiled in Vogue that same year was equally unexpected, but important in a different way: it showed that technical women can occupy multiple spaces. The mark I want to leave is simple — I want AI security to be embedded, practical, and globally accessible.
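For readers curious about what attacking a computer vision model can look like in practice, the sketch below implements the fast gradient sign method (FGSM), one of the simplest and best-known evasion attacks against image classifiers. It is an illustrative sketch only: the model (a stock pretrained ResNet-18 from torchvision) and the perturbation budget are assumptions for demonstration, not details of Farlow's DEF CON research.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

# Stock pretrained classifier; ImageNet normalisation constants it expects.
model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def fgsm_attack(image: torch.Tensor, label: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image`.

    `image` is a (3, H, W) tensor with values in [0, 1]; `label` is a
    scalar tensor holding the true class index. `epsilon` bounds how far
    each pixel may move (an assumed, illustrative budget).
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(((image - MEAN) / STD).unsqueeze(0))
    loss = F.cross_entropy(logits, label.unsqueeze(0))
    loss.backward()
    # Nudge every pixel one small step in the direction that most increases
    # the loss: imperceptible to humans, yet often enough to flip the
    # model's prediction.
    adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)
    return adversarial.detach()
```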
Vishwa: For women entering cybersecurity, which skills or mindset would help them?
Harriet: I always find this question difficult because women are not a homogenous group — everyone arrives with different motivations, concerns and ambitions. What I would say, for anyone entering cybersecurity, is that the ability to take a risk and then commit to it fully is crucial. When you step into a new or emerging area, it will take time — often longer than feels comfortable — for others to see what you see.
I can absolutely relate to that. It took three years of sustained field-building before the market genuinely felt ready for AI security in a meaningful way. There were moments when it would have been easier to pivot, but real change requires persistence. Don’t give up too early just because validation is slow. Innovation often looks lonely before it looks obvious.
Vishwa: When Fortune 500 organizations adopt AI quickly, what controls are most often missing at the start?
Harriet: The most common gap isn’t technical — it’s literacy. Organisations rush to deploy AI without ensuring that leaders and practitioners genuinely understand what these systems are, how they behave, and where they fail. Without education, security controls become superficial because the underlying strategy isn’t informed.
AI security cannot be operationalised effectively if executives don’t grasp the risk landscape or if practitioners don’t understand adversarial behaviour. More broadly, we also need to focus on helping people understand their uniquely human skills — judgement, ethics, creativity, critical thinking — so that AI adoption strengthens rather than destabilises the workforce.
Education is the foundation that makes every other control more effective and more sustainable.
Vishwa: What does AI security training look like for executives? How do you measure whether training is changing behavior in an organization?
Harriet: Executive AI security training should be engaging, scenario-driven and thought-provoking — not dull click-through slides. I’ve never understood why we assume technical or leadership training must be boring. When leaders experience realistic attack scenarios or adversary tradecraft in an interactive way, it changes how they think.
The goal isn’t to make them engineers; it’s to sharpen their judgement. We measure impact not through quizzes, but through behavioural change.
When training is done properly, you see a cultural shift — AI security moves from being someone else’s problem to being a board-level responsibility.
Vishwa: How do you help organizations move from general AI governance to actionable risk mitigation?
Harriet: We always start with empathy. No organisation feels like an expert in AI security — even the most sophisticated teams are navigating something new. Rather than overwhelming them with an over-engineered solution, we assess where they are on a maturity spectrum and move step by step. Not every organisation needs an elaborate AI security apparatus on day one.
The goal is not to slow innovation or discourage AI use; it’s to put guardrails in place so AI can be adopted with confidence. As maturity grows, controls become more sophisticated. Sustainable risk mitigation comes from meeting organisations where they are and helping them progress steadily, rather than imposing perfection from the outset.
Vishwa: Looking ahead, what AI security threats do you expect to emerge next?
Harriet: Agentic AI systems will certainly introduce new technical attack surfaces, but what concerns me most is the societal impact of rapid AI deployment. Workforce displacement, widening inequality, and a lack of social structures to support transition could become destabilising if we ignore them.
We need serious conversations about mass upskilling in uniquely human capabilities, about how mechanisms like universal basic income might operate, and about how the economic returns of AI are distributed globally rather than concentrated. AI security isn’t just about defending systems from attackers — it’s about building a future in which the benefits of AI are shared responsibly and equitably.
If we don’t secure the social layer as well as the technical one, we will face risks far larger than model exploits.