
Privacy, governance, and AI: global trends on data

Cisco Chief Legal Officer Dev Stahlkopf shares key insights from the company’s 2024 Data Privacy Benchmark Study.

In recent years, privacy has evolved from solely a compliance matter to a business imperative and customer demand. And with the explosive growth of artificial intelligence, the focus on privacy only continues to increase.

That's why Cisco's annual Data Privacy Benchmark Study is so important. Together with the company's Consumer Privacy Survey, it surfaces key trends in data governance, investments, and the impact of fast-changing technologies.

For more insights on Cisco's 2024 Data Privacy Benchmark Study, which is based on a survey of 2,600 security and privacy professionals in 12 countries, we turned to Dev Stahlkopf, Cisco's Chief Legal Officer.

Thank you, Dev! Cisco has been researching data privacy for seven years, including our latest report, the 2024 Data Privacy Benchmark Study. What are some key trends that have emerged?

Thanks, Kevin. Two themes stand out for me. First, organizations see a connection between privacy and trust. Customers increasingly want to buy from organizations they can trust. In fact, 94 percent of respondents said their customers would not buy from them if they did not adequately protect data. Second, organizations believe the return on privacy investment exceeds the cost. Since we started our research, privacy spending has more than doubled, yet organizations report that the ROI remains strong. Our data shows organizations are getting an estimated $160 in benefits for every $100 they spend on privacy.
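The ROI figure above is easy to unpack. As a back-of-the-envelope sketch (the $160 and $100 figures come from the study; the arithmetic itself is only illustrative):

```python
# Illustrative check of the study's figure: an estimated $160 in
# benefits for every $100 spent on privacy.
benefits, spend = 160, 100

ratio = benefits / spend                        # dollars of benefit per dollar spent
net_roi_pct = (benefits - spend) / spend * 100  # net return as a percentage

print(f"{ratio:.1f}x benefit ratio, {net_roi_pct:.0f}% net ROI")
```

In other words, the reported numbers imply a 1.6x benefit ratio, or a 60 percent net return on privacy spending.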

The research also shows support for privacy laws has increased, both from organizations and consumers. What is driving that trend?

Yes, privacy laws do put additional costs and requirements on organizations. And yet, 80 percent of our respondents shared that privacy laws have had a positive impact on their organizations, and only 6 percent said the impact has been negative.

So, why would organizations be so positive about regulations that add cost and effort? It comes back to trust. Organizations recognize privacy as a trust driver for their customers. And globally interoperable privacy laws help drive a more consistent approach to handling personal data throughout the data lifecycle and ecosystem.

Our research has also shown that consumers want governments to play a leading role in protecting data. Strong privacy regulations boost customer confidence and trust that the organizations are handling their data appropriately.

Technology is changing fast, and with those changes come new privacy concerns. How do you see privacy laws and regulations evolving in the coming years?

Today, more than 160 countries have omnibus privacy laws, and dozens more are being drafted and enacted as we speak. The next generation of privacy laws will continue to drive transparency, fairness, and accountability in spaces like data collection and use, cross-border data flows, and verifiable compliance. While each of these areas is broader than just privacy, privacy is at the core of a lot of these issues.

Not surprisingly, AI was a major topic in this year’s Data Privacy Benchmark Survey. As Cisco’s Chief Legal Officer, how do you navigate the changing intersection of AI and privacy?

Privacy is foundational to AI. Much of what we've built in privacy over the past decade – our policies, standards, tools, and frameworks – is being leveraged to build our Responsible AI program. While some of the biggest risks of AI emanate from the collection and use of personal data, AI risks extend far beyond privacy – IP, human rights, accuracy and reliability, and bias, to name a few. Our research indicates that 60 percent of consumers have already lost trust in organizations due to AI use. So, building a governance program at Cisco tailored to the novel use cases and implications of AI was a business imperative.

How is that put into practice when developing a product?

We have a dedicated privacy team that embeds privacy by design as a core component of our product development methodologies, leveraging the Cisco Secure Development Lifecycle (CSDL). As the use of AI became more pervasive, we developed an AI Impact Assessment — based on our Responsible AI Principles — to evaluate Cisco's development, use, and deployment of AI, and we include the assessment as part of CSDL and vendor due diligence. These assessments look at various aspects of AI and product development, including the model, training data, fine-tuning, prompts, privacy practices, and testing methodologies. The idea is to identify, understand, and manage AI risks and help preserve the trust of our employees, customers, and stakeholders.

What are some of the main risks you see associated with AI and how can we mitigate them?

In our latest survey, we found that 92 percent of organizations see GenAI as a fundamentally different technology with novel challenges and concerns, requiring new techniques to manage data and risk. Among their top concerns, 69 percent cited the potential for GenAI to hurt their organization’s legal and intellectual property rights. Sixty-eight percent were concerned that the information entered could be shared publicly or with competitors. And another 68 percent feared that the information it returns to the user could be wrong. These are real risks, but they are manageable with a thoughtful approach to governance.

In an AI-driven environment, how can companies ensure that they leverage its possibilities while protecting the privacy rights of customers and employees?

Companies need to do their own risk assessments. But with governance in place, I believe there is a path forward. Using Cisco as an example, we have both Responsible AI Principles and a Framework to guide our approach. We also developed a Generative AI Policy on the acceptable use of these novel tools. Before we allow the use of GenAI tools with confidential information, we conduct an AI Impact Assessment to identify and manage AI-specific risks. Once we've validated that a tool sufficiently protects our confidential information and we're comfortable with the security and privacy protections in place, the tool is opened for employees to explore and innovate further.

Any final advice for companies trying to navigate this current environment?

We're still in the very early days of AI. We need to approach this new technology with excitement and humility – there's so much we don't know yet. New concerns are being raised every day. Companies will need to be agile and adaptable to changing regulations, consumer concerns, and evolving risks. There will also need to be a strong partnership across the public and private sectors. AI has tremendous potential for good, but it will take industry, government, developers, deployers, and users all working together to promote responsible innovation without compromising privacy, security, human rights, and safety.