It’s tempting to assume that the large clusters of commodity servers associated with open source big data and NoSQL approaches like Hadoop have made supercomputers and eye-wateringly expensive high performance computing (HPC) installations a thing of the past.
But Adaptive Computing CEO Robert Clyde argues that the world of HPC has evolved, and that the machines in HPC labs now look far more like regular computers than they used to: they use the same x86-based chipsets and run the same (often Linux) operating systems. Furthermore, Clyde argues that techniques and ideas developed in the world’s elite HPC facilities have much to offer those running today’s enterprise data centres and grappling with the new challenges posed by large volumes of data.
In this podcast, Clyde discusses the lessons that HPC experience can bring to a new generation of big data problems, before going on to outline today’s software releases from Adaptive Computing.

Image of a CRAY-XMP48 supercomputer at the EPFL (Lausanne, Switzerland) shared on Wikimedia Commons under Creative Commons licence. Original image by ‘Rama,’ cleaned by ‘Dake.’
Related articles
- The green supercomputer: Adaptive Computing is ensuring fast doesn’t mean wasteful (venturebeat.com)
- Video: How Adaptive Computing helps Power the COSMOS (insidehpc.com)
