Artificial intelligence is on everyone’s minds — both for its vast potential and its inherent risks.
To lessen those risks, governance and regulation are critical.
With the EU AI Act entering into force in June, we spoke with Dev Stahlkopf, Cisco’s executive vice president and chief legal officer, for her thoughts on how regulation can contribute to safe, responsible, and trustworthy AI.
Thank you, Dev! There is so much talk about using AI responsibly. What is the role of regulation in making that happen?
Thanks, Kevin. At Cisco, we feel very strongly that AI must be used responsibly. Trust and safety are on the line. And regulation is a powerful tool in preserving both. Cisco supports the important work of governments around the world as they move with urgency to both create protections and foster innovation.
That’s a tall order. Can regulation actually do both?
That’s the intent. And it’s where public and private sector collaboration comes in. We believe the most effective way to ensure safe and trustworthy AI use is through globally compatible and interoperable regulations. That’s why Cisco is engaged in policy development and advocacy around the world. We serve on the U.S. Department of Homeland Security’s AI Safety and Security Board, advising on the safe and secure development and deployment of AI in U.S. critical infrastructure. We participate in the G7 Hiroshima Process, helping to develop international guiding principles for AI and a code of conduct for advanced AI developers. And we signed the Rome Call to push for the development of AI technologies that are transparent, accountable, and socially beneficial. We also participated in the development of the EU AI Act, the first set of rules around AI with global reach.
The EU AI Act takes a risk-based approach to AI regulation. What does that mean?
It's an approach where the specific requirements vary based on the level of risk the technology or system poses. For example, AI applied to biometrics, especially in something like facial and voice recognition, would be considered high risk. As would using AI to filter resumes for employment decisions. In those cases, bias could creep in and impact outcomes. But something like a simple FAQ chatbot would be a lower-risk use case. This means companies will need to be keenly aware of how they integrate AI into their processes and systems, understand the related level of risk, and build in the appropriate risk mitigation and protections.
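To make that tiering concrete, here is a minimal sketch of how an organization might triage AI use cases along those lines. The tiers, the example use-case mappings, and the default-to-review rule are simplified assumptions for illustration, not the Act’s full legal taxonomy and not legal advice:

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified tiers inspired by a risk-based approach like the EU AI Act's."""
    UNACCEPTABLE = "prohibited"           # e.g., social scoring
    HIGH = "high-risk obligations apply"  # e.g., biometrics, hiring decisions
    LIMITED = "transparency obligations"  # e.g., chatbots disclosing they are AI
    MINIMAL = "no additional obligations"

# Hypothetical triage table for common use cases (illustrative only).
USE_CASE_TIERS = {
    "facial_recognition": RiskTier.HIGH,
    "resume_screening": RiskTier.HIGH,
    "faq_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the assumed tier; unknown use cases default to HIGH pending review."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for case in ("resume_screening", "faq_chatbot", "new_unreviewed_tool"):
        print(f"{case}: {triage(case).value}")
```

The conservative default, treating anything unclassified as high risk until reviewed, reflects the broader point: companies need to know where AI sits in their processes before they can scope the right protections.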
Does this align with Cisco’s approach to AI governance?
Yes, we support thoughtful, risk-based approaches to AI regulation. In fact, it mirrors our own internal approach to AI governance, which has been a journey over several years.
In 2018, we published our commitment to proactively respect human rights in the design, development, and use of AI. We formalized this commitment in 2022 with Cisco’s Responsible AI Principles, which are built on transparency, fairness, accountability, privacy, security, and reliability, and document our position on AI in more detail. We operationalized these principles through our Responsible AI Framework, built in alignment with the NIST AI Risk Management Framework, which we believe sets a strong industry standard and a solid foundation for interoperability.
Then, in 2023, we used our Principles and Framework as a foundation to build an AI Impact Assessment process. We use this Assessment for all of our AI use cases, whether our engineering teams are developing a product or feature powered by AI, or Cisco is engaging a third-party vendor to provide AI tools or services for our own internal operations.
But good governance is a journey, not a destination. We’ll continue to update and adapt our approach to reflect new use cases — and of course, emerging standards and regulations.
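As a rough illustration of what one gate in an impact-assessment process might look like, here is a short sketch. Cisco’s actual Assessment is not public in code form; the fields, the escalation rule, and all names here are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class AIImpactAssessment:
    """Hypothetical record of the questions a governance review might ask."""
    use_case: str
    vendor: str | None = None            # set when a third party supplies the AI
    personal_data_involved: bool = False
    affects_individuals_rights: bool = False
    mitigations: list[str] = field(default_factory=list)

    def requires_escalation(self) -> bool:
        """Escalate to human review when higher-risk signals appear unmitigated."""
        risky = self.personal_data_involved or self.affects_individuals_rights
        return risky and not self.mitigations

# Example: a vendor-supplied resume screener with no documented mitigations.
assessment = AIImpactAssessment(
    use_case="resume_screening",
    vendor="ExampleVendor",
    personal_data_involved=True,
    affects_individuals_rights=True,
)
print("Escalate:", assessment.requires_escalation())  # Escalate: True
```

The point of a structure like this is less the code than the discipline: every use case, internal or vendor-supplied, passes through the same documented questions before deployment.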
It’s an ongoing journey, as you say. But what have we learned from earlier tech regulations with global impact, like the EU’s General Data Protection Regulation (GDPR)?
I think there are two main takeaways. First, Responsible AI starts with a foundation in privacy. As part of our GDPR efforts, we embedded privacy into the Cisco Secure Development Lifecycle and created a dedicated team to implement and oversee this work. Much of that work has served as an accelerator and a roadmap as we build a similarly robust approach for AI, one rooted in safety and trust.
Second, Cisco has been studying the impact of GDPR and other privacy laws since 2019. Year after year, respondents to our Cisco Data Privacy Benchmark survey have reported that privacy laws have had a positive impact on their organizations. Our annual Cisco Consumer Privacy Survey has likewise shown that consumers want governments to play a leading role in protecting data, and that strong privacy regulations boost customer confidence that organizations are handling their data appropriately. I believe we’ll see a similar level of support for AI legislation.
Getting back to the EU AI Act, it will take effect in stages over the next 6 to 36 months. How should organizations prepare?
Again, this is the first piece of AI legislation with global implications, but it most certainly won’t be the last. If you engage with a customer, partner, or vendor that uses AI, it will be crucial to understand their approach to protecting customers and mitigating risk. And if an organization doesn’t have a robust AI governance program in place, now is the time to build one.
Thoughtful governance is an enabler, not an obstacle. It can boost innovation, brand reputation, and consumer confidence. With proper AI principles and frameworks in place, organizations can quickly identify, understand, and mitigate potential risks and help preserve the trust of employees, customers, and stakeholders.
So, it all comes down to building trust.
Yes! Approaching AI in a safe, trustworthy, transparent, reliable, and fair way, and documenting your process through tools like risk assessments, will be key. AI has vast potential to do good, but only if we get it right. By working together, across tech, customers, governments, and academia, I believe we can do just that.