
Tech companies and governments will join forces to develop advanced AI detection systems, ensuring a safer online environment.

Human ingenuity boosted by AI capabilities offers new pathways to address complex societal problems—such as more efficient agriculture, medical research, and sustainable energy production. AI may even harness the power of networks at scale to finally tip the balance in favor of defenders over cyber threat actors.

However, to realize this enticing vision, more work must be done to counter the rising threat that AI-enabled disinformation poses to people, companies, and society. Advances in AI technology make it faster, easier, and cheaper than ever to manipulate and abuse digital content with the aim of misleading and deceiving on a massive scale. This is an area where those developing, using, and regulating the technology all have an important role to play if we hope to achieve the potential benefits of AI while effectively managing the new risks it inevitably introduces.

Despite efforts across the public and private sectors to detect and prevent the effects of this shadowy disinformation war, its ramifications are no longer theoretical. The material and reputational impacts of AI-powered manipulation are clear as AI plays an increasing role in scams, fraud, and opinion campaigns.

AI-powered attempts to influence hearts, minds, and election processes have been well publicized; however, public awareness has done little to address this growing concern. Neither has it prevented lesser-known individuals or organizations from being targeted. While motives vary, the weaponization of AI to discredit private and public figures, harm organizations, steal money, and distort perceptions of reality demands a coordinated, global content-provenance response.

As more and more data are generated through the normal operations of modern business and our digital-driven lives, the attack surface widens, along with the sources available for AI disinformation. Whether the goal is clicks or profit, larger volumes of data increase AI accuracy.

For example, this puts a target on recognizable individuals who have been the subject of hours of high-quality video footage, as well as holders of large amounts of data such as social media platforms, corporations, and governments.

Part of the answer is to ensure companies developing AI-powered technologies do so responsibly. At Cisco, we have extensive experience in secure software development, including the development of AI-powered technologies. And we are an industry leader in developing responsible AI principles and practices that ensure transparency, fairness, accountability, reliability, security and privacy.

We have also seen examples of governments engaging with the industry to better understand both the promise and the risk that comes from widely available AI-powered content generation tools, including the Biden administration’s Safe AI Executive Order and the UK government’s AI Safety Summit. But more work needs to be done by technology developers, implementers, users, and governments working together and in parallel.

Picking up the pace

Cisco’s recent Cybersecurity Readiness Index revealed that only 15% of organizations were in a mature state of readiness to remain resilient when faced with a cybersecurity threat. Just 22% are in a mature state of readiness to protect data. While it’s clear that the pressure is on to leverage AI capabilities, the 2023 Cisco AI Readiness Index showed that 86% of organizations around the world are not fully prepared to integrate AI into their businesses.

In 2024, we will see organizations take considerable strides to address these dual challenges. In particular, they will focus their attention on developing systems to reliably detect AI and mitigate the associated risks.

In her 2024 tech predictions, Cisco Chief Strategy Officer and GM of Applications Liz Centoni summed it up: “Inclusive new AI solutions will guard against cloned voices, deepfakes, social media bots, and influence campaigns. AI models will be trained on large datasets for better accuracy and effectiveness. New mechanisms for authentication and provenance will promote transparency and accountability.”

To date, detecting AI-generated written content has proven stubbornly difficult. AI detection tools have managed only low levels of accuracy, often misclassifying AI content as human-generated while also flagging genuine human writing as AI—a false positive. This has obvious implications for those in areas that may disallow AI. One such example is education, where students may be penalized if the content they have personally written ‘fails’ an AI detector’s algorithm.

To strengthen their guard against AI-based subversion, we can expect tech companies to invest further in this area—improving the detection of all forms of AI output. This may take the form of developing mechanisms for content authentication and provenance, allowing users to verify the authenticity and source of AI-generated content.
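To make the provenance idea concrete, here is a minimal, hypothetical sketch of content authentication: a publisher issues a tag binding a piece of content to its source, and anyone holding the verification key can later confirm both where the content came from and that it has not been altered. This illustration uses a shared-key HMAC purely for simplicity; real provenance schemes (such as standards that embed signed manifests in media files) rely on public-key signatures and are considerably more involved.

```python
import hashlib
import hmac

# Stand-in for a real publisher signing key (hypothetical value).
SECRET_KEY = b"publisher-signing-key"

def sign_content(content: bytes) -> str:
    """Return a provenance tag binding the content to the publisher."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check that the content matches a tag issued by the publisher."""
    expected = sign_content(content)
    # compare_digest avoids timing side channels during comparison.
    return hmac.compare_digest(expected, tag)

original = b"Official statement, 12 March 2024."
tag = sign_content(original)

print(verify_content(original, tag))               # authentic, unmodified
print(verify_content(b"Altered statement.", tag))  # tampering detected
```

The key property this sketch demonstrates is that any modification to the content, however small, invalidates the tag, which is what allows consumers to distinguish verified source material from manipulated copies.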

Leveraging a collective response

In 2024, we anticipate a significant increase in public-private interactions aimed at combating the misuse of AI-generated content. According to Centoni, “In keeping with the G7 Guiding Principles on AI regarding threats to democratic values, the Biden administration’s Safe AI Executive Order, and the EU AI Act, we’ll also see more collaboration between the private sector and governments to raise threat awareness and implement verification and security measures.”

That’s likely to include sanctions against those responsible for digital disinformation campaigns. To address regulatory concerns, businesses will need to double down on protecting their data and detecting threats before any damaging impact is felt. This will mean constant vigilance, regular vulnerability assessments, diligent security system updates, and thorough network infrastructure auditing.

Moreover, AI’s dual role in both exacerbating and mitigating disinformation requires transparency and a broad approach to protect democratic values and individual rights. Addressing both sides of the equation involves rethinking IT infrastructure. In fact, business leaders are now realizing that their technical infrastructure is their business infrastructure.

Early detection—through monitoring and observability across the complex tapestry of infrastructure, network components, application code and its dependencies, and the user experience—can be part of the solution. Identifying and linking potential outcomes to an effective, efficient response is crucial.

AI-powered technologies may finally unlock the answers to problems that have outpaced human innovation throughout history, but they will also unleash new problems outside the range of our own experience and expertise. Carefully developed, strategically deployed technology and regulations can help but only if we all recognize the responsibility we share.

Tech companies have an integral role to play in assisting governments to ensure compliance with new regulations. This is both in terms of developing the capabilities that make compliance possible and fostering a culture of responsible AI use. Private-public collaboration as well as the implementation of robust verification mechanisms and cybersecurity measures are emerging as the backdrop for mitigating the risks and threats posed by AI-generated content in the year ahead.


With AI as both catalyst and canvas for innovation, this is one of a series of blogs exploring Cisco Executive Vice President Liz Centoni’s tech predictions for 2024. Her complete tech trend predictions can be found in The Year of AI Readiness, Adoption and Tech Integration ebook.

Catch the other blogs in the 2024 Tech Trends series

Authors

Jeff Campbell

Senior Vice President & Chief Government Strategy Officer

Government Affairs and Public Policy