Modeling performance variation due to cache sharing
2013 (English). In: Proc. 19th IEEE International Symposium on High Performance Computer Architecture, IEEE Computer Society, 2013, pp. 155-166. Conference paper (Refereed).
Shared cache contention can cause significant variability in the performance of co-running applications from run to run. This variability arises from different overlappings of the applications' phases, which can be the result of offsets in application start times or other delays in the system. Understanding this variability is important for generating an accurate view of the expected impact of cache contention. However, variability effects are typically ignored due to the high overhead of modeling or simulating the many executions needed to expose them.
This paper introduces a method for efficiently investigating the performance variability due to cache contention. Our method relies on input data captured from native execution of applications running in isolation and a fast, phase-aware, cache sharing performance model. This allows us to assess the performance interactions and bandwidth demands of co-running applications by quickly evaluating hundreds of overlappings.
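The core idea of evaluating many phase overlappings can be illustrated with a toy sketch. The phase durations, slowdown factors, and contention formula below are invented for illustration and are not the paper's actual model: each application is reduced to a cyclic sequence of phases profiled in isolation, and co-run slowdown is estimated for many random start offsets instead of natively re-running the pair.

```python
import random

# Hypothetical phase profiles captured in isolation:
# (duration in cycles, slowdown factor when sharing the cache).
# Values are made up for illustration.
APP_A = [(100, 1.05), (300, 1.40), (200, 1.10)]
APP_B = [(250, 1.30), (150, 1.02), (200, 1.20)]

def phase_at(phases, t):
    """Return the slowdown factor of the phase active at time t
    (the phase list repeats cyclically)."""
    total = sum(d for d, _ in phases)
    t %= total
    for dur, slow in phases:
        if t < dur:
            return slow
        t -= dur
    return phases[-1][1]

def estimate_slowdown(app, other, offset, horizon=1000, step=10):
    """Average slowdown of `app` when co-run with `other` shifted by
    `offset` cycles, sampled over a fixed horizon. The contention
    formula below is a placeholder, not the paper's cache model."""
    samples = []
    for t in range(0, horizon, step):
        mine = phase_at(app, t)
        theirs = phase_at(other, t + offset)
        # Toy model: my phase's sensitivity scaled by how
        # cache-hungry the co-runner's current phase is.
        samples.append(1.0 + (mine - 1.0) * (theirs - 1.0) / 0.4)
    return sum(samples) / len(samples)

def slowdown_distribution(app, other, n_offsets=200, seed=0):
    """Evaluate many overlappings (random start offsets) quickly,
    exposing the run-to-run variability the paper studies."""
    rng = random.Random(seed)
    return [estimate_slowdown(app, other, rng.randrange(0, 1000))
            for _ in range(n_offsets)]

dist = slowdown_distribution(APP_A, APP_B)
print(f"slowdown range: {min(dist):.3f} - {max(dist):.3f}")
```

Because each offset is evaluated analytically from the isolated-run profiles, hundreds of overlappings cost far less than one native co-run, which is what makes the variability study tractable.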
We evaluate our method on a contemporary multicore machine and show that the performance and bandwidth demands of the same set of co-running applications can vary significantly across runs. Our method predicts application slowdown with an average relative error of 0.41% (maximum 1.8%), and it likewise predicts bandwidth consumption. Using our method, we can estimate an application pair's performance variation 213x faster, on average, than native execution.
Research subject: Computer Systems
Identifiers
URN: urn:nbn:se:uu:diva-196181
DOI: 10.1109/HPCA.2013.6522315
ISI: 000323775000014
ISBN: 978-1-4673-5585-8
OAI: oai:DiVA.org:uu-196181
DiVA: diva2:612407
Conference: HPCA 2013, February 23-27, Shenzhen, China