Building trust in the age of AI

Dev Stahlkopf, Cisco’s chief legal officer, on key results from the company’s latest Data Privacy Benchmark Study.
Nothing undermines innovation like a lack of trust.  

Especially in the age of Generative AI.  

That’s why Cisco’s 2025 Data Privacy Benchmark Study is so important. The eighth global report of its kind, this year’s results are based on surveys of more than 2,600 respondents in 12 geographies. What emerged was a collective focus on data privacy and its critical role in building AI readiness.

To learn more about this year’s Data Privacy Benchmark Study — and its essential lessons for today’s organizations — we spoke with Dev Stahlkopf, Cisco’s chief legal officer.

Welcome, Dev, and thank you for joining us. With Generative AI, the digital economy is rapidly changing. How has this transformation impacted privacy?

Thank you for having me. It's indeed a fascinating time. Generative AI has garnered significant attention from both businesses and society, on a level comparable to the excitement we experienced during the early days of the internet. Privacy plays a pivotal role in this new era of transformation, particularly around trust and safety. AI relies heavily on data, and ensuring data privacy is integral to maintaining that trust.

Which brings us to the 2025 Cisco Data Privacy Benchmark Study.  

Yes, our most recent study underscores the importance of privacy in today's digital landscape. Ninety-five percent of organizations indicated that their customers won’t buy from them if their data isn't sufficiently protected. Additionally, 97 percent of these organizations acknowledge their responsibility to use data ethically. As GenAI becomes increasingly woven into everyday operations, the demand for robust privacy measures intensifies, which makes this conversation particularly timely.

Concerns around the storage and management of data emerged in the survey. How are they affecting attitudes around data governance?

With GenAI's reliance on data, it's not surprising to see heightened attention on how data is stored, secured, and utilized. In recent years, driven by privacy and security concerns, many countries have implemented data localization laws requiring sensitive data to be stored and processed locally.  

Our study, however, highlights an interesting duality. A significant 90 percent of respondents feel that data is inherently safer when stored locally within their country's borders. At the same time, 91 percent also believe that global providers offer superior data protection, an increase of five percentage points from the previous year. Though these preferences may seem contradictory, they reflect both the growing demand from customers and governments for robust data protection measures and a recognition that global providers have the established safeguards and expertise to navigate differing regulatory regimes.

How will that duality play out? Should we expect to see more calls for data localization?  

Yes, there is a noticeable trend towards data localization driven by both customer expectations and regulatory demands, especially for sensitive data, government data, and critical infrastructure. However, the broader conversation should really center on safety and security. How can organizations effectively protect data within or beyond borders? With more jurisdictions implementing localization requirements and restrictions on transfers, compliance becomes essential for operating within those regions, demanding significant time and resources. Regulatory frameworks that are predictable, interoperable, and compatible across borders can enable global businesses to thrive more effectively.

How are governments adapting policies to support secure and effective data flows?

Some governments are proactively establishing digital agreements to facilitate smooth data flow across countries while prioritizing protection. Initiatives such as the G20's Data Free Flow with Trust (supported by the OECD), Global Cross Border Privacy Rules, and the EU-UK Trade and Cooperation Agreement are leading the way for seamless data sharing. Our survey results affirm this approach, with 85 percent of respondents agreeing that secure, interoperable data flows can significantly drive growth.

Given that AI relies on data, how should organizations think about privacy regulation in the context of GenAI?

At Cisco, we believe that establishing comprehensive and interoperable privacy regulations is a critical step in navigating the complexities of AI. Privacy is foundational to Responsible AI, as it directly impacts the secure and safe use of data. Robust data protection measures lay the groundwork for trust and transparency.

Our survey findings reinforce this perspective, with 90 percent of respondents agreeing that robust privacy laws boost customer confidence in sharing their data with GenAI tools. This highlights the importance of prioritizing strong privacy practices and AI governance as we navigate the evolving digital landscape.

Is there a gap between the need to capture the benefits of GenAI and the preparedness to manage its implications? 

That’s what the survey results indicate. Almost half (48 percent) of surveyed organizations reported very significant business value from GenAI — up from 37 percent last year. But user concerns around its unintended use remained relatively steady year over year. Organizations are still working toward AI readiness. 

One major issue is the risk of inadvertently sharing sensitive information, with 64 percent of respondents worried about this possibility. Despite these concerns, nearly half still admit to inputting personal or non-public data into GenAI tools. This emphasizes the crucial role of AI governance. By prioritizing governance, organizations can effectively leverage GenAI's benefits while minimizing risks. 

How does Cisco approach AI governance?  

At Cisco, we are committed to the responsible development and deployment of AI technologies. Our strategy is built around three interconnected pillars that guide our actions. 

We start by ensuring compliance with global regulatory requirements. By actively engaging with regulators, external working groups, and standards bodies, we not only adhere to existing policies but also contribute to shaping future AI regulations. This proactive engagement helps us navigate the rapidly evolving legal landscape and stay ahead of changes.

We are also focused on identifying and mitigating potential risks associated with AI applications. This involves conducting thorough AI impact assessments for various use cases, guided by Cisco's Responsible AI principles. By systematically evaluating these risks, we strive to ensure that AI technologies are deployed safely, reliably, and securely.

Finally, we emphasize building awareness and engagement across our organization. We believe that Responsible AI use begins with education and active participation. By fostering a culture of awareness, we empower our teams to make informed decisions that align with our policies and values. 

What’s next in data privacy, for Cisco, for the industry, and for society?

Effective AI governance is a continuous journey, not a destination. As the landscape evolves, we’re committed to adapting our approach to meet new challenges and opportunities. Looking ahead, the emergence of agentic AI — that is, AI that acts autonomously — could further reshape our approach. This will require additional considerations in governance frameworks to address new dimensions of autonomy and decision-making. AI can bring tremendous advances to society, in science, medicine, sustainability, and so much more. But only if we ensure that it’s developed and deployed in a manner that maintains trust and integrity. I’m optimistic that we can make that happen.