How Affordable Supercomputers Fast-Track Data Analytics & AI Modeling

The world's most powerful supercomputers may cost $5-7 million, but there is a gradual rise of affordable alternatives that deliver comparable performance at far lower costs, sometimes as little as $10,000. Given how closely supercomputing is tied to analytics and artificial intelligence, this opens up a wide variety of use cases. Learn about the five key trends driving this industry revolution.

Despite the incredible potential of supercomputers, adoption has traditionally lagged outside a few niche use cases. One of the key reasons is the formidable cost involved, from the price of the machine itself to the housing infrastructure and recurring energy costs. Purchasing and running a supercomputer can set you back by several million dollars a year, which has traditionally limited the technology to a select community (funded research, governments, multinational enterprises, and the like).

Yet, demand for supercomputers seems to be growing steadily at a pace of 9.5%, and some reports even suggest that demand could be outpacing supply. So, what's changed in the last few years? One answer could be the arrival of cost-effective supercomputing technology that is affordable to a wider cross-section of users. In 2020, Japan-based IT company NEC announced that it would be providing supercomputers for as little as $10,000, geared toward small to mid-sized companies looking to use the technology for advanced big data analytics and artificial intelligence (AI) modeling.

In 2021, we are at a strategic fork in the evolution of supercomputing: faster and more powerful machines will continue to emerge, with correspondingly high costs, while another branch of the industry focuses on affordable supercomputers that can be used by all, trading a small amount of performance for massive cost savings.

Learn More: ARM Processors are Key for New Supercomputers

Understanding the Role of Supercomputers in Analytics and AI

Supercomputers and high-performance computing (HPC) technologies improve on the traditional data processing approach by using a large number of processor cores. We are talking about 10,000+ cores in a single machine, backed by enormous GPU capacity and power supply. This makes it possible to churn through huge amounts of complex data, whether that is meteorological information from around the world used to predict the weather in one town or images collected from Mars analyzed to design a jet propulsion system.

Supercomputers are game-changing for use cases where you need to study millions of datasets of different types to arrive at a singular conclusion.

Given the heterogeneous nature of big data and AI training data, it is easy to see why supercomputers would be a good fit. They can analyze information from unstructured text, pre-processed data, voice, video, computer vision, and more to create highly accurate forecasting models. And the growing affordability brings the technology within reach of non-government use cases. For example, a cosmetics company in Europe used it to develop a winning shampoo formula, replacing iterative lab-based experiments with machine learning (ML) algorithms and cognitive simulations and increasing the precision of the results 10,000-fold.
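To make the idea concrete, here is a toy sketch of the same pattern, many cores each working on a slice of a large dataset, using only Python's standard library; a supercomputer applies the same pattern across thousands of cores and racks of GPUs. The "sensor readings" below are synthetic placeholders, not real data.

```python
# Toy sketch: split a large dataset across all available CPU cores and
# combine the partial results. The readings are synthetic stand-ins.
import random
from multiprocessing import Pool

def analyze_chunk(chunk):
    """Pretend analysis: summarize one slice of sensor readings."""
    return sum(chunk) / len(chunk)

if __name__ == "__main__":
    readings = [random.gauss(20.0, 5.0) for _ in range(1_000_000)]  # fake weather data
    chunks = [readings[i:i + 10_000] for i in range(0, len(readings), 10_000)]

    with Pool() as pool:                    # one worker per available CPU core
        partial_means = pool.map(analyze_chunk, chunks)

    # Chunks are equal-sized, so the mean of the partial means is the overall mean.
    print("overall mean:", sum(partial_means) / len(partial_means))
```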

Learn More: Will Red Hat Rule the Supercomputing Industry with Red Hat Enterprise Linux (RHEL)?

5 Factors Leading to Affordable and Accessible Supercomputers

Affordable supercomputers and their accessibility among companies of all sizes can be attributed to five important trends:

1. The rise of P2P computing technology

Peer-to-peer (P2P) computing takes advantage of a distributed landscape to drive exponentially more powerful computing processes. Users and distributed multi-GPU clusters are connected via a bridge that also acts as the front-end interface. Using this interface, you can deploy ML and deep learning frameworks and write AI models that are trained on these high-performance computing nodes. A distributed landscape means that computing resources can be deployed on demand, fetching GPU power from remote machines to create virtual supercomputers connected to the front end. And the absence of a centralized infrastructure means that costs are a fraction of those of typical cloud-based supercomputing systems.
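The specifics vary from one P2P platform to another, but the core pattern, one model trained across many remote GPU nodes behind a single front end, can be sketched with a standard framework. The example below is a minimal illustration using PyTorch's DistributedDataParallel; the toy model, random data, and launcher settings are placeholders rather than any particular vendor's API.

```python
# Minimal sketch of training one model across distributed GPU nodes, assuming
# PyTorch is available on every peer. The rendezvous settings (MASTER_ADDR,
# MASTER_PORT, RANK, WORLD_SIZE) are normally injected by a launcher such as
# torchrun; in a P2P setup, the bridge/front end would play that role.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    backend = "nccl" if torch.cuda.is_available() else "gloo"
    dist.init_process_group(backend=backend)
    rank = dist.get_rank()

    if torch.cuda.is_available():
        local_rank = rank % torch.cuda.device_count()
        device = torch.device("cuda", local_rank)
        model = DDP(torch.nn.Linear(128, 1).to(device), device_ids=[local_rank])
    else:
        device = torch.device("cpu")
        model = DDP(torch.nn.Linear(128, 1))

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(100):
        x = torch.randn(32, 128, device=device)  # stand-in for real training data
        y = torch.randn(32, 1, device=device)
        loss = torch.nn.functional.mse_loss(model(x), y)
        optimizer.zero_grad()
        loss.backward()   # gradients are averaged across all participating peers
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched on each participating machine (for example via torchrun pointed at a shared rendezvous endpoint), the same script scales from a single workstation to a pool of remote GPUs without code changes.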

2. The move from high-performance computing (HPC) to HPCaaS

HPC is a subset of supercomputing. It is an alternative to typical desktops and mainframes, using parallel computing techniques to aggregate resources in a manner that enables supercomputing-level performance. HPC is squarely focused on enterprise applications such as advanced analytics, AI, ML, big data, and even IoT. In fact, AI and deep learning are projected to drive HPC growth, reaching $43 billion in valuation by the end of 2021, but access to these systems has traditionally been limited by cost barriers. HPC-as-a-Service (HPCaaS) addresses this by hosting the infrastructure on the cloud and offloading the need for skilled in-house resources. You can consider providers such as IBM, Dug, Advania, atNorth, and other aaS vendors for no-lock-in, affordable supercomputing.
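Consumption models differ by provider, but many HPCaaS offerings expose a familiar scheduler (often SLURM) behind a thin client library, so capacity can be requested on demand and released when the job finishes. The sketch below uses the open-source dask-jobqueue package purely as an illustration; the queue name, resource sizes, and the scoring function are hypothetical.

```python
# Illustrative only: request HPC capacity on demand through a SLURM-backed
# cluster and release it afterwards, using dask-jobqueue and dask.distributed.
from dask.distributed import Client
from dask_jobqueue import SLURMCluster

def score_candidate(params):
    """Hypothetical stand-in for an expensive simulation or model evaluation."""
    return sum(v * v for v in params.values())

cluster = SLURMCluster(
    queue="gpu",            # hypothetical queue name exposed by the provider
    cores=16,
    memory="64GB",
    walltime="01:00:00",
)
cluster.scale(jobs=10)      # ask the service for ten worker jobs, pay per use

client = Client(cluster)
candidates = [{"a": i, "b": i / 2} for i in range(1000)]
futures = client.map(score_candidate, candidates)   # fan the work out
best = min(client.gather(futures))
print("best score:", best)

client.close()
cluster.close()             # scaling back to zero releases the rented capacity
```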

3. The normalization of multiple processor types in one architecture

This is a technical feat that has helped cap supercomputer costs while improving performance. Different types of processors are suitable for different tasks: one might have hundreds of cores for parallel processing, another might have fewer, faster cores optimized for sequential processing, while yet another type could be best for floating-point calculations. Modern-day supercomputer architects yoke these various processor types together to drive better performance at lower costs.
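At a much smaller scale, the same principle shows up whenever work is split between processor types. The snippet below is a toy illustration using PyTorch: the massively parallel matrix math runs on a GPU when one is available, while a small data-dependent loop stays on the CPU; the sizes are arbitrary.

```python
# Toy illustration of splitting work across processor types: highly parallel
# matrix math on the accelerator, lightweight sequential logic on the CPU.
import time
import torch

parallel_device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

# Massively parallel portion: millions of independent multiply-accumulates.
a = torch.randn(4096, 4096, device=parallel_device)
b = torch.randn(4096, 4096, device=parallel_device)

start = time.perf_counter()
c = a @ b                       # maps naturally onto many cores / GPU units
if parallel_device.type == "cuda":
    torch.cuda.synchronize()    # wait for the asynchronous GPU kernel to finish
print(f"matmul on {parallel_device}: {time.perf_counter() - start:.3f}s")

# Sequential portion: a data-dependent loop that benefits from a fast single core.
running_max = float("-inf")
for value in c.diagonal().tolist():
    running_max = max(running_max, value)
print("running max of diagonal:", running_max)
```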

4. The development of software purpose-built for supercomputing

Another way of looking at the supercomputing cost curve is in terms of value. The greater your returns from supercomputers, the lower your long-term total cost of ownership will be. That is why purpose-built software platforms are so important, particularly in AI and big data analytics use cases. These platforms allow data scientists with little to no knowledge of supercomputing architectures to feed in data, manage ingestion, gather results, and apply the insights to their domain of expertise.
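What such a platform looks like from the data scientist's side can be hinted at with open-source tooling. The sketch below uses Dask as a stand-in for a purpose-built platform; the file path and column names are hypothetical, and the local cluster stands in for whatever back end the platform provisions.

```python
# Illustrative only: an analyst-level view of a distributed analysis. The same
# code runs whether the scheduler sits on a laptop or an HPC allocation.
import dask.dataframe as dd
from dask.distributed import Client

client = Client()   # local cluster here; in practice, point at the platform's scheduler

# Lazily read many Parquet files of experiment results (hypothetical schema).
df = dd.read_parquet("results/experiments-*.parquet")

# Average effectiveness score per formulation, computed across the cluster.
summary = (
    df.groupby("formulation")["score"]
      .mean()
      .compute()    # only now does distributed execution actually happen
)
print(summary.nlargest(10))
```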

For example, last year a cost-efficient supercomputing platform called Exscalate helped 18 pharmaceutical institutions from the EU analyze a library of 500 billion molecules, at a capacity of 3 million molecules per second, to identify molecules that could target the SARS-CoV-2 virus. Microsoft has also announced that it is working to make AI models and supercomputing infrastructure available as a platform.
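Those throughput figures are easy to sanity-check. A back-of-the-envelope calculation (assuming a sustained rate, which is an idealization) puts the screening run at roughly two days of compute:

```python
# Rough check on the Exscalate numbers quoted above: 500 billion molecules
# screened at about 3 million molecules per second (sustained rate assumed).
library_size = 500e9            # molecules
throughput = 3e6                # molecules per second
seconds = library_size / throughput
print(f"{seconds:,.0f} s = {seconds / 3600:.0f} hours = {seconds / 86400:.1f} days")
# ~166,667 s, i.e. about 46 hours of compute
```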

5. The increase in demand among the AI community

Finally, Microsoft's efforts toward a supercomputer for AI speak to a larger trend. AI's data-intensive nature and the correlation between data volumes and AI accuracy mean that supercomputing could help build much more effective, versatile, and reliable artificial intelligence systems. NVIDIA is also working on "Leonardo," billed as the world's fastest AI supercomputer, to be situated in Europe. A company called Lightmatter is working on a supercomputer that uses photonic technology to deliver AI ten times faster while using 90% less energy, dramatically reducing costs. As demand and adoption among the AI community increase, we can expect more game-changing (and cost-saving) innovations.

Learn More: Microsoft Build Goes Big on AI, Debuts New Supercomputer in Azure

Assessing the Long-term Impacts of This Trend

What does this growing affordability of supercomputers mean for enterprises? For one thing, your initial CapEx when starting a supercomputer-based analytics or AI project will shrink, even if recurring costs stay the same for a while. You will also have more options for adoption, from multi-million-dollar systems that promise the best bytes-per-FLOP ratio (memory bandwidth relative to floating-point operations per second) to affordable solutions geared toward your industry, and even as-a-Service alternatives. The result is a more democratized market in which supercomputers are more widely adopted, if not mainstreamed, extending their benefits to a larger cross-section of society.
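As a quick illustration of the bytes-per-FLOP metric, consider a hypothetical system with 1.6 TB/s of memory bandwidth and 20 teraFLOPS of peak compute (made-up numbers chosen only to show the arithmetic):

```python
# Hypothetical machine: 1.6 TB/s memory bandwidth, 20 TFLOPS peak compute.
memory_bandwidth_bytes_per_s = 1.6e12
peak_flops_per_s = 20e12
ratio = memory_bandwidth_bytes_per_s / peak_flops_per_s
print(f"{ratio:.2f} bytes per FLOP")   # 0.08: ~0.08 bytes of data per operation
```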

What does the future hold for supercomputers in enterprises? Comment with your thoughts below or tell us on LinkedIn, Twitter, or Facebook. We would love to hear from you!


