
Cray stays relevant by reinventing itself

Seattle firm targets Big Data applications and a new generation of supercomputer users.
Skip Ferderber | August 2013 | FROM THE PRINT EDITION
CEO Peter Ungaro with a display of compute modules from Cray’s earliest supercomputers.

Cray has expanded beyond building supercomputers to offer the related software required to crunch Big Data, including storage systems to hold all that data.

By creating a new storage division and developing new supercomputer models that target commercial customers such as Exxon, Boeing and GE, Cray has boosted sales to nongovernment customers to a point where they account for roughly 10 percent of total revenue.

Cray’s expertise in dealing with very large-scale problems—the mother’s milk of the supercomputer business—is the key to the company’s recent growth spurt. “This whole Big Data problem is increasing the number of customers who need our class of fast data access and fast data processing,” says Barry Bolding, Cray’s vice president of corporate marketing.

Traditional Cray installations cost tens of millions of dollars. Blue Waters, a petascale supercomputer at the National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign campus, cost $188 million, including five years of support. One of the many Cray computers at Tennessee’s Oak Ridge National Laboratory cost roughly $100 million. Today, the company is offering lower-cost, “entry-level” Cray systems in the $300,000 range. And with its acquisition last year of California-based Appro International for $25 million, Cray has added a new line of high-performance computers in the $200,000 range. Those systems appeal to financial services companies as well as managers of high-end server installations.

The rising star on Cray’s horizon may well be its California-based YarcData subsidiary, which produces the Urika graph analytics system, now being used to tackle Medicare fraud. The computer has a breakthrough hardware architecture that can deal with 10 to 20 times more data at once than other computer systems. “If you can’t hold that data inside [your computer] for that big a problem, you can’t deal with the problem,” says Steve Conway, IDC’s research vice president for high-performance computing. The Urika system can also move data around and process it quickly, taking the newest data into account as it arrives. Conway believes Cray will emerge as one of the leaders in Big Data.

Supercomputing systems have traditionally excelled at mining data, as long as that data already resided in their databases. (Remember IBM’s Watson supercomputer on Jeopardy!?) YarcData’s computer is designed to keep searching through ever-growing volumes of information, looking for “patterns in shifting sands,” with the goal of uncovering relationships in the data that let the system home in more quickly on what it is looking for.

