Cross Architecture Performance Prediction

Yuliana Zamora; Bethany Lusch; Murali Emani; Venkat Vishwanath; Ian Foster; Henry Hoffmann. 9 March, 2021.
Communicated by Henry Hoffmann.
Supersedes: TR-2020-12 (updated 03/09/21)


With the rate at which hardware evolves, predicting application performance on new architectures is extremely valuable. Prior frameworks for predicting application performance on new architectures have required expert input, considered only a subset of applications, or required source code changes. Our goal is a general framework that automatically predicts application performance on target hardware, given performance metrics collected on a different hardware architecture, without expert input. In this thesis, we propose such a framework and use it to compare classical machine learning approaches to deep neural networks generated through Neural Architecture Search (NAS). We implement a NAS workflow on the Theta supercomputer at Argonne National Laboratory and use it to create over 1 million deep learning models that predict performance on a target hardware architecture. Comparing the resulting models, we find little difference between a massively scaled neural architecture search and a random forest model. These results suggest that general cross-architecture performance prediction requires significantly more training data than was used in this study and/or additional feature engineering.
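To make the prediction task concrete, the classical baseline described above can be sketched as a regression problem: hardware counters measured on a source architecture are the features, and runtime on the target architecture is the label. The sketch below is illustrative only, not the authors' implementation; the counter semantics and the synthetic data are assumptions standing in for real profiling measurements.

```python
# Illustrative sketch of the cross-architecture prediction task as
# regression: features are performance metrics from a source architecture,
# the label is runtime on a target architecture. The data here is
# synthetic; real workflows would use profiled applications.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "profiles": each row is one application run on the source
# hardware (e.g., instruction mix, cache-miss rate, memory bandwidth).
n_runs, n_counters = 500, 8
X = rng.random((n_runs, n_counters))

# Synthetic target-hardware runtime: a nonlinear function of the
# source-hardware counters plus measurement noise.
y = 2.0 * X[:, 0] + np.sin(3.0 * X[:, 1]) + 0.1 * rng.standard_normal(n_runs)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Random forest baseline, analogous to the classical model the study
# compares against NAS-generated deep networks.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out runs: {model.score(X_test, y_test):.2f}")
```

In practice, each held-out application would be profiled only on the source architecture, and the model's prediction would substitute for an actual run on the target machine.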

Original Document

The original document is available in PDF (uploaded 9 March, 2021 by Henry Hoffmann).