Feature Story

How the cloud we know is changing forever

A look at the new cloud model: simple, secure, agile, programmable, and agnostic.
Jun 02, 2021

Guest post by Dave Vellante

Like the universe itself, cloud infrastructure is accelerating at an ever-expanding pace.

Once considered an opaque set of remote services, somewhere “out there” in a mega data center, today’s cloud is extending down to on-premises data centers. Data centers are connecting into the cloud through adjacent locations that create hybrid interactions. And clouds are being meshed together across regions and eventually will stretch to the far edge. Suffice it to say, this new cloud will be hyper-distributed.

But, as always, the success of infrastructure is about how well it supports applications. And we’re entering a new era of application innovation. As I often say, the next ten years won’t be like the last ten.


Applications are evolving as the cloud infrastructure to support them becomes more intelligent and localized. Developers are not only building new, cloud-native apps. They’re also strategically modernizing key parts of their existing application portfolios using containers — so they won’t have to think about where the application lives.

But amid all of this complexity, application architects need an open platform that brings consistency of operations for developers, operators, admins and the security team. They need a platform where they're not locked into a proprietary system and where their output can be used with any other platform, i.e., any cloud as it's being defined here.

See also: Cloud complications, and how to fix them

Many modern systems are multi-dimensional in that they leverage data from several different platforms. For example, modern apps often comprise inputs from core transaction systems, analytics data, Web data, social inputs, telemetry and other metadata. Increasingly, AI is being injected into these applications to extract insights and automate critical functions related to operations, security, compliance and governance.

This new cloud model is defined by a unified operating experience, independent of location. The system will optimize based on KPIs around workload characteristics, geographic location, local laws, latency, and economics. These factors, along with organizational governance edicts, will determine the ideal allocation of resources and optimally configure the system. The bottom line is the cloud is moving rapidly from a remote compute, storage and network infrastructure resource to an intelligent, ubiquitous, anticipatory platform for application innovation. And this new platform is the underpinning of digital business transformation.

The customer need

Here’s the mandate: This new cloud must be simple, secure, agile, and programmable — not to mention cloud agnostic. The real value for customers comes from tapping into a layer across clouds and data centers that abstracts the underlying complexity of the respective clouds. But at the same time, it needs to be location-aware and able to accommodate a spectrum of workloads, from mission-critical to general-purpose applications. And it must do all of this cost-effectively, using open APIs that enable interactions with external software components and microservices, which can be invoked programmatically as needed.

That sounds complex...and it is. But organizations demand simplicity and expect vendors to invest in R&D and create solutions that will minimize labor costs so they can shift resources to more strategic digital initiatives. This requires world-class infrastructure that integrates hardware, software, tooling, machine intelligence/AI and partnerships within an ecosystem. And it must accommodate a variety of applications and deployment models like serverless and containers, as well as support traditional workloads running on VMs.

It also requires a roadmap that will take us well into the next decade.

Cisco’s approach to Future Cloud

“Cloud 1.0” was largely about making compute more efficient within large data centers where the data came to the compute. In “Cloud 2.0,” compute is becoming less expensive and moves to the data, which is, by its nature, distributed.

In 2009, when I heard that Cisco was “getting into the server business,” I thought to myself, “Why bother?” What I didn’t understand at the time was Cisco’s vision to disrupt the traditional server business by providing a unified compute and networking platform, with best-of-breed storage partnerships. Cisco’s decade-long efforts were the first steps toward mimicking the public cloud experience.

Cisco’s new X-Series architecture goes to the next level and promises to deliver a horizontal platform for a spectrum of workloads. It does so by embracing shared memory, heterogeneous compute and offloaded networking and storage. It will provide low-cost, high-performance compute for new applications and allow compute to be placed close to where data is created.

But the announcements at Cisco's Future Cloud event underscore that this new vision of cloud requires more. That’s why Cisco has combined organic and acquired assets to begin to deliver on the next generation of cloud, which is defined by a consistent operating model across locations and infrastructure as we’ve described.

See also: Solving the tangled web around app innovation

In many respects the hyperscale cloud companies have given the industry a gift. The big four cloud vendors spent more than $100B last year on CAPEX globally. This infrastructure, like the Internet, is a resource that can be tapped to deliver new value to customers in the form of a ubiquitous, sensing, intelligent platform that can run any workload, anywhere in the world.

With this announcement, Cisco is unveiling a comprehensive cloud strategy that leverages public-cloud resources combined with world-class converged infrastructure and can support the wide spectrum of workloads we’ve discussed. This is critical because Cisco doesn’t own a public cloud and generally, the public cloud has not been able to support the most mission-critical workloads due to their special requirements. Cisco’s system can be configured to support these workloads, while at the same time running cost-sensitive applications. This gives customers choice and provides competitive advantage for Cisco relative to public-cloud alternatives.

But Cisco is going further by combining several other assets in its portfolio to attack this hybrid-cloud opportunity. By packaging key assets including Intersight, AppDynamics, ThousandEyes, and Banzai Cloud, the company hopes to deliver on this new vision of cloud. It comprises visibility and observability not only inside corporate networks but beyond them. Moreover, the company promises to deliver automated operations and support the orchestration and management of cloud-native applications at scale.

While these systems are not deployed as a single platform, Cisco is sticking its neck out as the single throat to choke — or perhaps they’d prefer to think of it as extending a single hand to shake.

Congratulations to Cisco on pulling together this detailed strategy, backing it up with real product, and doing its part to define the future of cloud computing.


David Vellante is a long-time industry watcher and serves as chief analyst at Wikibon Research. He is the co-founder of SiliconANGLE Media, home of theCUBE TV. He produces a popular weekly program called Breaking Analysis that provides data and opinions on a range of technology topics including cloud, infrastructure, enterprise software, AI, cybersecurity, semiconductors, crypto and global competitiveness.