
ICPE 2014 Tutorials

 

The tutorials will take place on March 23, 2014, in two parallel tracks.

 


Track One:
Energy Efficiency Benchmark Framework (Chauffeur), Jeremy Arnold (4 hours)
Kieker Monitoring Framework, Andre van Hoorn and Nils Ehmke (2 hours)
Glinda: a framework for performance tuning on heterogeneous platforms, Ana Lucia Varbanescu (2 hours)

Track Two:
Pattern-Driven Performance Engineering for Multicore Systems, Jan Treibig (4 hours)
Performance Unit Testing, Vojtech Horky (2 hours)
Network Performance Engineering, Manoj Nambiar (2 hours)

 

Energy Efficiency Benchmark Framework (Chauffeur WDK)
Author: Jeremy Arnold, IBM, USA

Abstract: The Chauffeur framework was designed to measure the energy efficiency of computers. Chauffeur also serves as a general framework for measuring the performance of a variety of workloads on one or more servers, with possible applications in virtualization and cloud environments. It was originally implemented as part of SPEC's Server Efficiency Rating Tool (SERT); however, Chauffeur was also intended to be useful for research and for the development of future benchmarks. This tutorial will include an overview of Chauffeur, and attendees will learn how to customize it for their own purposes.
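As a minimal illustration of the kind of metric such a framework produces, efficiency can be expressed as work done per unit of energy (a generic sketch, not SERT's actual scoring formula; all numbers are hypothetical):

```python
def efficiency_score(throughput_ops_per_sec, power_watts):
    """Energy efficiency expressed as operations completed per joule consumed."""
    return throughput_ops_per_sec / power_watts

# Hypothetical server: 50,000 transactions/s while drawing 200 W.
print(efficiency_score(50000.0, 200.0))  # -> 250.0 operations per joule
```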

 

Pattern-Driven Performance Engineering for Multicore Systems
Author: Jan Treibig - Erlangen Regional Computing Center (RRZE), University of Erlangen-Nuremberg, Germany

Abstract: The advent of multi- and many-core chips has further widened the gap between peak and application performance for many codes. We convey the architectural features of current processor chips and multiprocessor nodes as far as they are relevant for the practitioner. Because of its complexity, performance optimization is often regarded as an expert-only task; still, most performance problems fall into a small number of categories. This tutorial introduces performance patterns as a way to present this experience in an accessible form. The most common patterns are explained using examples, and so-called signatures are introduced as a reproducible way to detect applicable patterns. Diagnostic performance modeling is used as a powerful tool to obtain quantitative performance estimates for a particular code. The overall performance engineering process is explained in detail on several practical examples.
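For a flavor of diagnostic performance modeling, a roofline-style estimate bounds a loop's performance by either the chip's peak compute rate or memory bandwidth times computational intensity (a generic sketch; the machine numbers below are hypothetical, not from the tutorial):

```python
def roofline_bound_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    """Diagnostic model: a kernel is limited either by the chip's peak
    floating-point rate or by how fast memory can feed it."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# Hypothetical node: 500 GFLOP/s peak, 50 GB/s memory bandwidth.
# A kernel doing 2 flops per 16 bytes transferred is memory-bound:
print(roofline_bound_gflops(500.0, 50.0, 2.0 / 16.0))  # -> 6.25 GFLOP/s
```

Comparing such an estimate against measured performance shows whether a code is already at its architectural limit or whether an optimization pattern applies.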

 

Kieker Monitoring Framework
Authors: Andre van Hoorn - University of Stuttgart, Germany, Nils Ehmke - Kiel University, Germany

Abstract: Kieker is an extensible open-source framework for monitoring and analyzing the runtime behavior of software systems, e.g., focusing on performance and availability. It is primarily developed as a research tool in cooperation between Kiel University and the University of Stuttgart, but it also contains contributions by our industrial partners. It is designed to provide reusable and easily extensible components and has been evaluated in several scientific and industrial projects. This tutorial introduces the Kieker framework and its application to software performance engineering and management. In a combination of lecture and hands-on experience, we start by instrumenting sample systems to gather monitoring information. This information is then analyzed, both with components provided by Kieker and by extending Kieker for custom analyses. Presented examples include performance model extraction and anomaly detection. We encourage the audience to bring their own laptops (with Java installed) for the hands-on experience; Kieker and the sample systems are provided.
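The flavor of probe-based monitoring can be sketched in a few lines (a generic Python stand-in for illustration only, not Kieker's actual Java API; names are hypothetical):

```python
import time
from functools import wraps

# Collected monitoring records; in a real framework a writer component
# would persist these to disk or send them to an analysis pipeline.
records = []

def monitored(fn):
    """Decorator playing the role of a monitoring probe: it records the
    operation name and its wall-clock duration on every invocation."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            records.append((fn.__name__, time.perf_counter() - start))
    return wrapper

@monitored
def handle_request():
    time.sleep(0.001)  # stand-in for real application work

handle_request()
print(records[0][0])  # -> handle_request
```

Analyses such as anomaly detection or performance-model extraction then consume the stream of such records rather than the application itself.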

 

Performance Unit Testing
Author: Vojtech Horky – Charles University, Czech Republic

Abstract: Despite claims in source code that certain functions are optimized and were performance tested, we rarely see functions accompanied by their performance tests. In this tutorial we demonstrate our tool and approach for writing performance unit tests. We use Stochastic Performance Logic as a formalism for expressing performance requirements and assumptions. We believe this formalism allows developers to capture practical requirements in an elegant manner while still permitting robust statistical testing for evaluation. The formulas are then bound to the source code via annotations. This combination allows the developer to write the assumptions together with the source code, and it also enables completely automatic performance testing. The tools we present are available for Java and allow integrating performance requirement tests, as well as regression tests, with any Java project.
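A simplified sketch of the underlying idea, with a plain median comparison standing in for the proper statistical tests that Stochastic Performance Logic uses (function names and thresholds here are hypothetical):

```python
import statistics
import time

def measure(fn, runs=30):
    """Collect wall-clock samples for repeated invocations of fn."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return samples

def assert_not_slower_than(fn, baseline, factor=3.0, runs=30):
    """Performance assertion: fn's median time must stay within
    `factor` times the baseline's median time."""
    t_fn = statistics.median(measure(fn, runs))
    t_base = statistics.median(measure(baseline, runs))
    assert t_fn <= factor * t_base, f"{t_fn:.6f}s exceeds {factor} x {t_base:.6f}s"

# The 'optimized' routine is checked against a reference implementation:
assert_not_slower_than(lambda: sorted(range(1000, 0, -1)),
                       lambda: sorted(range(1000, 0, -1)))
```

In the actual approach, such a requirement lives next to the method it describes (as a Java annotation) and is evaluated automatically by the test harness.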

 

Glinda: a framework for performance tuning on heterogeneous platforms
Author: Ana Lucia Varbanescu - University of Amsterdam, The Netherlands

Abstract: Heterogeneous platforms integrating different processors, such as GPUs and multi-core CPUs, have become popular in high performance computing. Despite this, most applications still use only one of the components of such platforms, either the CPU or the GPU, which leads to a potential performance loss. In many cases, this loss is either considered insignificant, or the implementation effort needed to use both devices is considered too large for the expected gain. Glinda is a framework built to enable applications to run efficiently on heterogeneous platforms. Applicable to massively parallel applications written in OpenCL (or applications easily parallelizable in OpenCL), Glinda detects the application's workload characteristics, chooses the hardware to be used (based on the available parallel solutions and hardware configurations), and automatically performs the optimal workload decomposition and distribution. In this tutorial, we will show how Glinda can be used to gain performance (by enabling a partitioning that better uses the resources of the underlying platform), as well as how Glinda can help further tune the partitioning for additional (though typically limited) gains. Finally, we discuss several tips and tricks to help users know when to stop tuning, because little further benefit is to be expected.
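The core partitioning idea can be illustrated in a few lines: split a data-parallel workload so that both devices finish at roughly the same time, i.e., in proportion to their measured throughputs (a generic sketch, not Glinda's actual algorithm; the rates are hypothetical):

```python
def partition(total_items, cpu_rate, gpu_rate):
    """Split a data-parallel workload between CPU and GPU so both devices
    finish together: each gets work proportional to its throughput
    (items per second). Returns (cpu_items, gpu_items)."""
    gpu_share = gpu_rate / (cpu_rate + gpu_rate)
    gpu_items = round(total_items * gpu_share)
    return total_items - gpu_items, gpu_items

# Hypothetical rates: the GPU processes items three times faster than the CPU.
print(partition(1000, 250.0, 750.0))  # -> (250, 750)
```

Using only the GPU in this example would leave a quarter of the platform's throughput on the table, which is exactly the loss the abstract describes.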

 

Network Performance Engineering
Author: Manoj Nambiar – TCS Performance Engineering Research Center (PERC), India

Abstract: Although one may not be conscious of it, networks are an integral part of most enterprise systems and applications, so any perception of performance has a network component. In this tutorial we will look closely at TCP/IP performance and how it is affected by network characteristics. Also covered will be the design of applications for maximum performance over the network. This will be followed by the working principles of various network devices in the data center and by network capacity planning. Network traffic inspection is a powerful method for debugging performance issues in almost any networked application; as part of troubleshooting application performance issues, we will cover the use of network sniffers and the analysis of packet information. All topics include case studies based on the author's experience, which help reinforce the concepts.
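One classic example of how network characteristics cap application performance is the bandwidth-delay product: TCP can keep at most one window of unacknowledged data in flight per round trip (a standard textbook bound, shown here as a small sketch with hypothetical link parameters):

```python
def max_tcp_throughput_mbps(window_bytes, rtt_seconds):
    """Upper bound on TCP throughput imposed by the bandwidth-delay
    product: at most one window of data in flight per round trip."""
    return window_bytes * 8 / rtt_seconds / 1e6

# A 64 KiB receive window over a 50 ms WAN link caps throughput at
# roughly 10.5 Mbit/s, regardless of the link's raw bandwidth:
print(max_tcp_throughput_mbps(65536, 0.050))  # -> 10.48576
```

This is why high-latency paths need window scaling (or parallel connections) before an application can fill the pipe, one of the effects packet-level inspection makes directly visible.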