Cray stays relevant by reinventing itself

| FROM THE PRINT EDITION |

When the federal government wanted to develop a system to root out fraud, it turned to the Urika, a platform skilled at uncovering relationships among disparate data even experts might not see. The Centers for Medicare & Medicaid Services, which makes a half-trillion dollars a year in health care payments, wanted to find a way to identify and go after more than $60 billion annually in fraudulent claims.

The Urika graph appliance comes from a subsidiary of Seattle-based Cray Inc. and is part of a broader effort by Cray to reinvent itself as it seeks to respond to explosive growth in the new market widely referred to as “Big Data.”

Big Data, which seeks to pull meaningful intelligence on consumers, trends and markets from massive amounts of information, is all the buzz these days. But for Cray, it’s old hat. For decades, the company has been supplying powerful computers and ultra-sophisticated software to address intensive computing needs of a broad range of sectors that have long handled huge amounts of data, from medical research, weather forecasting and oil exploration to aircraft modeling and government analysis.

With the advent of the internet, e-commerce and cloud computing, hundreds of companies now have access to data they can mine for competitive advantage. IDC, the market research firm, valued the supercomputer market, including hardware, software, storage and services, at $21.2 billion last year. The hardware portion accounted for $11.1 billion of that, up 29 percent. Cray has a 19 percent share of the $3.76 billion market for supercomputers that cost $3 million and up. IBM has a 45 percent share of that market.

IBM could easily eat Cray for lunch—its market cap is more than 20 times Cray’s $816 million. So how does Cray survive? “The answer is focus,” says Cray CEO Peter Ungaro.

“The way companies stay ahead of the game is by offering very specific products and at Cray, our whole focus is on supercomputing.”

Cray made its name and fortune by being the go-to supercomputer company for government agencies and research institutions. But increasingly, crunching large amounts of data has shifted from computers that used a handful of superfast processors to those that enlist hundreds or thousands of processors to sift through information. Cray is going after those new markets by offering customers a large lineup of systems at a broad range of price points. It is also responding to customer demand for related software required in crunching Big Data, including storage systems to hold all that data.

By creating a new storage division and developing new supercomputer models that target commercial customers such as Exxon, Boeing and GE, Cray has boosted sales to nongovernment customers to a point where they account for roughly 10 percent of total revenue.

Cray’s expertise in dealing with very large-scale problems—the mother’s milk of the supercomputer business—is the key to the company’s recent growth spurt. “This whole Big Data problem is increasing the number of customers who need our class of fast data access and fast data processing,” says Barry Bolding, Cray’s vice president of corporate marketing.

Traditional Cray installations cost tens of millions of dollars. Blue Waters, a petascale supercomputer at the National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign campus, cost $188 million, including five years of support. One of the many Cray computers at Tennessee’s Oak Ridge National Laboratory cost roughly $100 million. Today, the company is offering lower-cost, “entry-level” Cray systems in the $300,000 range. And with its acquisition last year of California-based Appro International for $25 million, Cray has added a new line of high-performance computers in the $200,000 range. Those systems appeal to financial services companies as well as managers of high-end server installations.

The rising star on Cray’s horizon may well be the California-based YarcData subsidiary, which produces the Urika graph analytics computer system that is being used to tackle Medicare fraud. The computer has a breakthrough hardware architecture that can deal with 10 to 20 times more data at once than other computer systems. “If you can’t hold that data inside [your computer] for that big a problem, you can’t deal with the problem,” says Steve Conway, IDC’s research vice president for high-performance computing. The Urika system also has the ability to move data around and process it quickly, taking into account the newest data. Conway believes Cray will emerge as one of the leaders in Big Data.

Supercomputing systems have long excelled at mining information, as long as the data already resided in their databases. (Remember IBM’s Watson supercomputer on Jeopardy!?) YarcData’s computer is designed to keep searching through ever more information, looking for “patterns in shifting sands” with the goal of uncovering relationships among the data that the system can later focus on to find what it is looking for more quickly.

The Medicare/Medicaid fraud issue is a good example of how YarcData’s analytics approach a problem. Each year, the Centers for Medicare & Medicaid Services processes billions of claims, which typically must be paid within 14 days. While the organization has all the information it needs to do health care fraud analysis, it doesn’t have the time to do a good job, says Jeff Nichols, associate laboratory director at Oak Ridge. “In a perfect world,” he says, “it would be great if they could do real-time analysis of the claims that are coming in, and check for fraud and abuse.”

To make that happen, Oak Ridge is mining through years of beneficiary claims to understand the relationships between beneficiaries and providers or the correlations between the health of an individual and the various costs associated with the patient’s care to find patterns that might be used in the future to identify areas where fraud could be occurring. The lab is also setting up a system that would allow administrators to ask “what if?” questions about the Medicare and Medicaid operation.
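The idea of mining beneficiary–provider relationships for outliers can be sketched, very loosely, in a few lines of Python. The claims data, the billing-average heuristic and the threshold below are all invented for illustration; they have nothing to do with CMS’s actual records or with the graph algorithms Urika runs:

```python
from collections import defaultdict

# Hypothetical claims data: (beneficiary_id, provider_id, claim_amount).
claims = [
    ("b1", "p1", 120.0), ("b2", "p1", 135.0),
    ("b1", "p2", 90.0),  ("b3", "p2", 95.0),
    ("b1", "p3", 900.0), ("b2", "p3", 880.0),
    ("b3", "p3", 910.0), ("b4", "p3", 905.0),
]

def flag_providers(claims, threshold=1.5):
    """Return providers whose average claim exceeds threshold x the overall mean."""
    per_provider = defaultdict(list)           # provider -> list of claim amounts
    for _, provider, amount in claims:
        per_provider[provider].append(amount)
    overall_mean = sum(a for _, _, a in claims) / len(claims)
    return sorted(
        p for p, amounts in per_provider.items()
        if sum(amounts) / len(amounts) > threshold * overall_mean
    )

print(flag_providers(claims))  # -> ['p3']
```

A real graph appliance works on billions of such edges and looks for far subtler structural patterns than a billing average, but the shape of the problem, linking beneficiaries to providers and hunting for anomalous relationships, is the same.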

One anticipated result of the project, which could take 10 years or more to complete, is to create a system that could identify health care fraud in real time, thus enabling the center to spot fraudulent payment requests upon submission and to stop payment on those submissions.

Nichols says YarcData’s computer is one of the few massively threaded architectures that can handle the graph analytics needed. “We’re still in a learning mode in what [YarcData] can provide and how to use it,” he adds. Market researcher Gartner put YarcData on its 2013 Gartner Cool Vendor list for content and social analytics.

All this demand for Big Data analysis has helped push Cray’s stock price to more than double where it was a year ago, although the stock fell back after the company recently released disappointing earnings. IDC’s Conway remains upbeat on the company and believes the stock’s price drop is temporary.

Five years ago, says Cray’s Bolding, the firm was dependent on serving longtime customers of its legacy systems. By investing in new software for its traditional systems, acquiring such innovative companies as YarcData and Appro, and building its storage business, Cray has found new sources of growth that will take it into new markets and bring in new customers.

“We’re a fairly conservative company,” he says, “picking markets for growth carefully and expanding year by year. But that’s our plan: to grow all these businesses to be significant.”

Tech Diversity Champions

News from WTIA

The Seattle area faces a severe shortage of qualified coders. One way local organizations such as the Washington Technology Industry Association are addressing that shortage is by helping to train women, minorities and veterans currently underrepresented in the region’s technology workforce.