Two Cultures Drive Data Center Development in Higher Education
September 7, 2010
By Dave Trowbridge
Like their counterparts in business, data center managers in colleges and universities are struggling to keep costs under control while meeting ever-growing demands for computing resources.
But higher-education data center managers face a unique challenge as well: They must balance the competing demands of two vastly different sets of users. University researchers push the limits of the computing infrastructure, while university administrators require reliable and predictable performance.
Data Center Challenges in Higher Education
University researchers, who may be pursuing such complex topics as bioinformatics, atmospheric modeling, or genetic algorithms, put heavy demands on computing infrastructures.
"On the research side, they'll consume all the cycles you can give them," says Gerry McCartney, Vice President for IT, CIO and Olga Oesterle England Professor of Information Technology at Purdue University, one of the leading research institutions in the United States, with 40,000 students and 15,000 faculty and staff.
Researchers' demands for computing power may be unpredictable, varying with the availability of research grants or with new research directions dictated by experimental results, but they tend to be flexible about availability. Their focus is raw power.
"Researchers understand that we're really pushing our capabilities to meet their demands, and they know that when you're on the cutting edge, sometimes you're going to bleed a little," says Derek Masseth, Senior Director, Infrastructure Services at the University of Arizona, a leading U.S. public research university with 38,000 students and over 14,000 staff.
Data Center Virtualization on the Rise
University administrators are just the opposite. Functions like course management systems, ERP, streaming educational video, student services and building control have predictable but inflexible demands. "Availability is key. It's got to be there when they want it," says McCartney.
That means more sleepless nights for IT managers, says Masseth. "If a student service isn't working, or the course management system goes down or the ERP system is slow, thousands of users may be affected, and all of it boils down to a revenue hit, either immediate or long term. The pressure for 24/7 reliability is tremendous."
The ever-increasing computing demands of both types of users are driving higher-education data centers towards the cloud and a service-oriented architecture. To this end, IT shops in these institutions are moving towards 10 Gigabit Ethernet to support increasing virtualization of every aspect of the infrastructure via unified fabric and unified computing strategies: network, servers, storage and more.
The fundamental motivations behind this move are no different in higher education than they are in any other business, although in some cases they may be more pressing.
"Cost containment is certainly a priority in education," says Peter Brusco, Director, Information Technology Division at Deakin University, a public institution with 34,000 students and over 2,600 faculty and staff on four campuses in or near Melbourne, Australia. "At the same time, the number of services demanded is growing fast, meaning ever more applications and infrastructure to deploy and support. Our staff and students are demanding both faster service deployment and better performance."
Data Centers and Computing as a Service
Because any given set of users or projects may fall anywhere along a continuum between the two cultures, data center upgrades can't take a "one size fits all" approach, and it may be either research or administration that drives particular upgrades or new construction.
At Purdue, research has been a big data center driver, as illustrated by the university's recent adoption of Cisco Nexus switches for a 1,280-node, 10 Gigabit Ethernet research server cluster, called the "Coates Cluster" after the former head of Purdue's electrical engineering department.
Grant-driven purchases of research computing assets had historically been made without coordination between departments, leading to islands of computing all over the campus.
"Researchers would get a grant and use part of it to build their own cluster, dedicated to their own work," says McCartney. "That's horribly inefficient: We can get 95 percent utilization from our data centers, but only 35 percent from individual office/lab racks, and they're sitting idle most of the time. It's also costly and puts a huge burden on IT staff."
As a result, he says, the campus still has more than 60 data centers, despite consolidation efforts.
Controlling Data Center Costs
Such islands impose an often-overlooked cost.
"The heating, ventilating, and cooling (HVAC) systems in general-use buildings can't handle the hot spots they create," McCartney says. "You end up with rooms that are hot or cold all the time, and much higher cooling costs than a dedicated data center."
Of course, reducing the energy consumed to cool servers and other equipment can be an important part of the green initiatives launched by virtually every higher-education institution.
With the new cluster, research computing power is basically a service at Purdue.
"Faculty research grants go a lot farther, because all they have to buy is the servers," says McCartney. "We supply everything else: the Internet, the networking fabric, the racking systems, stable power, cooling, and support. The efficiency we gain saves IT a lot of money, too, so it's a win on both sides."
Beyond the up to 60 percent increase in application performance this new approach delivers, perhaps the major benefit is rapid deployment. "We do our initial deployments for clusters as sort of the computing equivalent of an Amish barn-raising, but with a lot more razzle-dazzle," he says. "In the case of the Steele cluster, with a couple of hundred volunteers working together we went from unpacking the boxes for over 800 servers to bringing up the first research application in only about four hours."
Virtualization and Unified Computing
The University of Arizona runs an average of 20 virtual servers per host using a unified fabric based on Cisco Nexus switches to converge the data and storage networks in its upgraded data center. There, virtualization is driven primarily by administrative demands.
The University expects virtualization to provide a 50 percent reduction in infrastructure costs, amounting to $1.2 million over two years. Virtualization also can greatly improve availability.
"A lot of virtualization can be justified in terms of high availability, greater fault tolerance, and faster recovery," notes Masseth. "It's good insurance, and the administrative side of the institution will buy that."
At Deakin University in Australia, a new data center based on the Cisco Unified Computing System (UCS) will not only reduce service delivery time from months to hours or minutes, and promote sustainability, but also enable vastly improved disaster recovery (DR). "We operate across four campuses over a 300-kilometer stretch of Victoria's south-west coast," says Brusco. "The cost savings we expect from this project will enable us to provide very robust disaster recovery capabilities that we simply haven't been able to afford before. In effect, we're getting world-class disaster recovery for free."
Making the Most of Data Center Upgrades
As usual in IT, some of the greatest obstacles are not technical but social or political. The drive to service-oriented architectures and cloud computing involves a lot of disruptive technologies, and CIOs cannot afford to lose sight of the fact that what's being disrupted are human attitudes and relationships.
One of the most critical components in data center upgrades and new builds is staff training. Yet, despite the educational environment, training is often shortchanged in higher education. Gene Kern is Executive Vice President and CTO at WAKE Technology Services, Inc., an infrastructure consulting firm with a large higher-education practice. "It's really strange, but especially in smaller universities, the hardest thing to get may be training dollars for IT staff."
He notes that training involves more than course work. "Something that's rarely well-enough funded is attendance at the annual industry events where IT professionals can network and get more training in new technologies."
Keeping Data Center Staff Aligned
Gerry McCartney at Purdue stresses developing a sense of community as part of the critical path to data center upgrades and new deployments.
"A community cluster build like we did with Coates and Steele is a great way to convince your clients of how important IT is for their future by making them part of the process. As for your own IT staff, it helps the technical types to understand how important project management is as a skill, which makes things run much more smoothly in the long run."
In the end, data center success depends on a CIO's ability to communicate the benefits of the many changes needed in people and processes, and to start doing so very early in the process.
"You have to be willing to look insane to your staff, because they won't buy it at first," says University of Arizona's Masseth. "The disruption a large data center effort necessarily involves can be a real challenge for your people. You have to keep your eyes open to the impact on their livelihoods and address their concerns up front."
Dave Trowbridge is a freelance writer based in Boulder Creek, CA.