Sixth Annual Workshop on Modeling, Benchmarking and Simulation
Held in conjunction with the 37th Annual International Symposium on Computer Architecture
June 20, 2010
Simulation is the quantitative foundation for virtually all computer architecture research and design projects, from microarchitectural exploration to hardware/software trade-offs to processor and system design. However, its continued efficacy is limited by the need to model or compensate for increasing complexity (e.g., multiple cores and peripherals), additional critical constraints (e.g., power consumption and reliability), an ever-expanding design space (e.g., chip-, system-, and data-center-scale modeling), and limits on benchmark suite quality and coverage.
Accordingly, the goals of this workshop are to accelerate the development of the technologies needed to support research on future-generation architectures and to encourage advancement in under-researched areas of computer architecture measurement. The workshop therefore places a special premium on novelty and on preliminary work. Topics of interest include, but are not limited to:
· System-level architecture modeling and measurement
· Data center level modeling and measurement
· Performance/energy/temperature/reliability measurement and analysis tools
· New or efficient techniques to model performance, power, temperature, reliability, etc.
· Simulation methodologies for multi-core and many-core architectures
· Simulator validation
· Development of parameterizable, flexible benchmarks
· New benchmark suites for emerging application areas
· Analytical and statistical modeling
The special emphasis of MoBS-6 is on performance analysis of emerging applications, in particular applications that are not amenable to conventional detailed simulation because of their scale (e.g., data-intensive applications), because their performance is determined by interactions in a multi-tiered system (e.g., three-tier web services), or because CPU-level modeling is insufficient to analyze their performance (e.g., I/O-intensive workloads).
· 2:00 - 2:05: Welcome by the organizers
· 2:05 - 3:30: Session 1
Keynote: Six blind men and the elephant: benchmarking and simulation in the Exascale era
Paolo Faraboschi (HP Labs)
As we leave the Petascale milestone behind us, the computing industry is changing rapidly to address the next challenges. Energy, dependability, cost pressure and economy of scale are pushing IT consolidation into large "cloud" datacenters where new workloads and legacy applications coexist. Data is growing at a higher exponential rate than computing, and novel data-centric architectures are starting to emerge. Heterogeneity and specialization, within and across instruction sets, are reemerging to address energy efficiency.
In light of these secular shifts in the IT industry, benchmarking and simulation techniques are lagging behind and must transform deeply to address the upcoming challenges. As in the ancient Hindu parable, current practices in the architecture community suffer from an excessive focus on narrow metrics, none of which is completely correct or totally wrong, but which together often risk missing the big picture.
This talk will discuss how computer architecture simulation and benchmarking must evolve to provide better quality decision support data for datacenter-level computing. Speed, full-system, validation and modularity are some of the fundamental characteristics of a scalable simulator. Dynamically trading off speed and accuracy, running unmodified software, and the flexibility to interface with multiple tools are other key aspects that should drive the development of the next generation simulators. As a case study, the talk will cover some of the design considerations behind COTSon, an open-source scalable full-system simulation infrastructure targeting fast and accurate evaluation of current and future computing systems.
Paolo Faraboschi is a Distinguished Technologist in the Exascale Computing Lab of HP Labs, working on next-generation data center research. He has been at HP since 1994 and recently led the COTSon full-system simulation infrastructure (http://cotson.sourceforge.net/). In the past, he was the principal architect of the Lx/ST200 family of embedded VLIW processor cores (http://en.wikipedia.org/wiki/ST200_family). Paolo is a co-author of over 30 papers, 16 patents, and the book "Embedded Computing: A VLIW Approach to Architecture, Compilers and Tools". He is an active member of the architecture community, was recently program co-chair of HiPEAC'10 and MICRO-41, and is an associate editor for TACO. Paolo received his PhD in EECS from the University of Genoa (Italy) in 1993. For more information, see http://www.hpl.hp.com/people/paolo_faraboschi
Jianwei Chen, Lakshmi Kumar Dabbiru, Murali Annavaram and Michel Dubois (University of Southern California)
· 3:30 - 4:00: break
· 4:00 - 5:40: Session 2
Mieszko Lis, Keun Sup Shim, Myong Hyon Cho, Pengju Ren*, Omer Khan and Srinivas Devadas (MIT and *Xi'an Jiaotong University, China)
Jinho Suh, Murali Annavaram and Michel Dubois (University of Southern California)
Erven Rohou and Thierry Lafage (INRIA Rennes)
Hadi Esmaeilzadeh, Stephen Blackburn*, Xi Yang* and Kathryn McKinley (The University of Texas at Austin and *Australian National University)
· 5:40 - 5:45: Concluding remarks
Paper Submission: April 16, 2010
Notification Date: May 12, 2010
Final Version Due: June 1, 2010
Workshop Date: June 20, 2010
Lieven Eeckhout, Ghent University (email@example.com)
Thomas Wenisch, University of Michigan (firstname.lastname@example.org)
David August, Princeton
Carl Beckmann, Intel
Derek Chiou, UT Austin
Hyesoon Kim, Georgia Tech
Benjamin Lee, Stanford University
Kevin Lim, U. Michigan/HP Labs
Onur Mutlu, CMU
Tim Sherwood, UC Santa Barbara
Anand Sivasubramaniam, Penn State