Toni Cortes, Universitat Politecnica de Catalunya
Toni Cortes is the manager of the storage-system group at the BSC (since 2006) and an associate professor at the Universitat Politecnica de Catalunya (since 1998). He received his M.S. in computer science in 1992 and his Ph.D., also in computer science, in 1997, both from the Universitat Politecnica de Catalunya.
Since 1992, Toni has taught operating systems and computer architecture courses at the Barcelona School of Informatics (UPC), and from 2000 to 2004 he also served as vice-dean for international affairs at the same school. His research concentrates on storage systems, programming models for scalable distributed systems, and operating systems. He has published 98 technical papers (23 journal papers and 75 papers at international conferences and workshops) and 2 book chapters, and has co-edited one book on mass storage systems. In addition, he has advised 10 PhD theses since 1997.
Dr. Cortes has been involved in several EU projects (Paros, Nanos, POP, XtreemOS, Scalus, IOlanes, PRACE, MontBlanc, EUDAT, Big Storage, IOStack, Rethinkbig, and Severo Ochoa) and has collaborated with IBM (T.J. Watson research lab) on scalability issues for both MPI and UPC. He is also an editor of the Cluster Computing journal and the coordinator of the SSI task in the IEEE TCSS.
He has served on many international conference program and/or organizing committees and was general chair for the Cluster 2006 conference, LaSCo 2008, the XtreemOS summit 2009, and SNAPI 2010. Since 2011, he has also chaired the steering committee of the Cluster conference series. He was awarded a "Certificate of Appreciation" in 2007 for his involvement in the IEEE CS.
dataClay: The Integration of Persistent Data, Parallel Programming Models, and True Sharing
Persistent data and non-persistent data have traditionally been treated as two separate abstractions. A clear example is that the model used to store data in volatile memory (mainly objects and their relations) is completely different from the model used to store the same data in persistent storage (mainly tables or files). This differentiation has many negative side effects because persistent data cannot be integrated into the programming model. This lack of integration causes, among others, the following problems:
i) moving computation to the data becomes a complex task (deployment can become arduous),
ii) the extraction of potential data parallelism by the programming model is very difficult (the programming model is unaware of where the data really is), and
iii) offering a mechanism to truly share data without taking control away from the data owner becomes nearly impossible (we will show that today data is not really shared).
In this talk, we will present dataClay, a new-generation object store, and its integration with the COMPSs programming model. This new way of handling data (and code), and its close fit with a parallel programming model, eliminates the aforementioned problems, easing the task of implementing data-centric programs while taking full advantage of the available parallelism.
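To make the object/storage mismatch concrete, here is a minimal sketch; all classes and the flattening step are hypothetical illustrations, not dataClay's actual API:

```python
# Minimal illustration of the object/relational mismatch described above.
# Everything here is hypothetical; it is not dataClay's API.

class Author:
    def __init__(self, name):
        self.name = name

class Paper:
    def __init__(self, title, author):
        self.title = title
        self.author = author          # a direct object reference

# In volatile memory, data lives as objects and their relations:
p = Paper("dataClay", Author("Toni"))

# To persist it in a classic store, the object graph must be flattened
# into tables (or files), losing the direct references:
authors = [{"id": 1, "name": p.author.name}]
papers = [{"title": p.title, "author_id": 1}]

# An object store aims to remove this translation step entirely,
# keeping the in-memory model and the persistent model identical.
assert papers[0]["author_id"] == authors[0]["id"]
```

The translation code above is exactly the burden the talk argues an integrated object store removes.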
Pawel Gepner, Intel Corporation
Pawel Gepner is an Intel Corporation Platform Architect focused on High Performance Computing.
In his role as Platform Architect, he ensures that customers of server and HPC products receive world-class technical service and support, technology training, and other technology services.
Pawel Gepner joined Intel in 1996 as a Field Application Engineer for Central and Eastern Europe.
In 2001, Pawel Gepner became EMEA Architect focused on HPC. He led several server development projects, including the first IA-32-based fault-tolerant systems from Stratus Technology. He was responsible for driving the Pentium III server project at the IBM Development Center in Greenock, and he led the team of Intel architects that developed Bull’s Itanium 2 system. He was also involved in Itanium 2 projects at Siemens AG and Ericsson.
Pawel Gepner led the development team for the first teraflop computing projects in EMEA and the first Itanium 2 teraflop installations. He has driven many HPC projects, including those at TASK, SKODA, VW, CERN, and many others.
In addition to the Platform Architecture Specialist role, he is also an Intel Corporation spokesperson responsible for communication with the press regarding the technical and technology aspects of Intel’s products.
Pawel Gepner holds master's and Ph.D. degrees in Computer Science from Warsaw University of Technology, Poland.
Pawel Gepner has written 50 technical papers on Computer Science and Technology. He is also a board member and technology advisor for many international scientific and commercial HPC projects.
Intel architecture and technology for future HPC system building blocks
Intel Corporation has developed several new and enhanced technologies bolstering its leadership in high-performance computing. These include the future-generation Intel Xeon Phi processor, code-named Knights Hill, and a new high-speed interconnect technology for HPC, the Intel Omni-Path Architecture. Intel is also providing new software releases and collaborative efforts designed to ease development for the HPC community.
During the talk we will disclose details of the third-generation Intel Xeon Phi product family, code-named Knights Hill, and discuss Intel’s 10nm process technology and the integrated Intel Omni-Path Fabric technology.
The new Intel Xeon Phi Knights Hill will follow the upcoming Knights Landing product. The first commercial systems based on Knights Landing are expected to begin shipping at the end of 2015.
Intel Omni-Path Architecture is expected to offer 100 Gbps line speed and up to 56 percent lower switch fabric latency in medium-to-large clusters than InfiniBand alternatives. It will use a 48-port switch chip to deliver greater port density and system scaling than the current 36-port InfiniBand alternatives: up to 33 percent more nodes per switch chip, which is expected to reduce the number of switches required, simplifying system design and reducing infrastructure costs at every scale.
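The port-count arithmetic can be checked with a quick sketch; the two-tier fat-tree formula is a standard textbook approximation, not an Intel specification:

```python
# Illustrative check of the switch-radix arithmetic (not Intel data).
opa_ports, ib_ports = 48, 36

# More nodes attachable per switch chip: 48/36 - 1 = 33%.
gain = opa_ports / ib_ports - 1
print(f"{gain:.0%} more nodes per switch chip")

# In a two-tier fat tree, radix-k switches support up to k * k // 2
# nodes, so a larger radix also reduces the number of switches (and
# possibly tiers) a cluster of a given size needs.
def max_nodes_two_tier(radix):
    return radix * radix // 2

print(max_nodes_two_tier(opa_ports))  # 1152 nodes
print(max_nodes_two_tier(ib_ports))   # 648 nodes
```

Note that the per-chip gain (33 percent) compounds at cluster scale, which is where the claimed reduction in switch count comes from.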
We believe that the new Intel Xeon Phi processor, in conjunction with the Intel Omni-Path Architecture, will provide the next-generation building blocks for exascale computing.
Pedro Trancoso, University of Cyprus
Pedro Trancoso is an Associate Professor at the Department of Computer Science at the University of Cyprus, which he joined in 2002. He holds M.Sc. and Ph.D. degrees in Computer Science from the University of Illinois at Urbana-Champaign, USA.
His research interests are in the area of Computer Architecture and include Multi-core Architectures, Memory Hierarchy, Parallel Processing and Programming Models, Database Workloads, and High-Performance Computing.
His research team, Computer Architecture, Systems and Performance Evaluation Research - CASPER (www.cs.ucy.ac.cy/carch/casper), currently comprises 2 PhD students, 1 MSc student, and 3 undergraduate students.
The latest funding for his research includes participation in the TERAFLUX EU FP7 IP project (4 years) and the loan of a 48-core experimental processor, the Intel SCC, by Intel Corporation. He is also a member of the HiPEAC Network of Excellence.
Getting Ready for Approximate Computing: Trading Parallelism for Accuracy for DSS Workloads
Processors have evolved dramatically in recent years, and current multicore systems deliver very high performance. We are observing a rapid increase in the number of cores per processor, resulting in denser and more powerful systems. Nevertheless, this evolution faces several challenges, such as power consumption and reliability.
It is expected that, in order to improve efficiency, future processors will contain units able to operate at very low power consumption, with the drawback of not guaranteeing the correctness of the produced results. This model is known as Approximate Computing. One interesting approach to exploiting Approximate Computing is to make applications aware of the errors and react accordingly.
In this work we focus on decision support system (DSS) workloads, and in particular the standard TPC-H set of queries. We first define a metric that quantifies the correctness of a query result: the Quality of Result (QoR). Using this metric, we analyse how relaxing correctness in the DBMS affects the accuracy of the query results. To improve the accuracy of the results, we propose a dynamic adaptive technique implemented as a tool on top of the DBMS.
Using heuristics, this tool spawns a number of replica query executions on different cores and combines the results so as to improve the accuracy. We evaluated our technique using real TPC-H queries and data on PostgreSQL, with simple fault injection to emulate the Approximate Computing model. The results show that, for the selected scenarios, the proposed technique increases the QoR at a cost in parallel resources smaller than any alternative static approach. The results are very encouraging, since the QoR is within 7% of the best possible.
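The replica-and-combine idea can be sketched as follows; this is a toy emulation with random fault injection and majority voting, not the authors' actual tool or fault model:

```python
import random
from collections import Counter

# Toy emulation of the technique described above (not the authors' tool):
# an "approximate" query sometimes returns a corrupted result, and we
# combine several replica executions by majority vote to raise the QoR.

def approximate_query(correct_result, error_rate, rng):
    """Return the correct result, or a corrupted one with prob. error_rate."""
    if rng.random() < error_rate:
        return correct_result + rng.randint(1, 10)  # injected fault
    return correct_result

def run_with_replicas(correct_result, replicas, error_rate, rng):
    """Spawn `replicas` executions and keep the most frequent answer."""
    results = [approximate_query(correct_result, error_rate, rng)
               for _ in range(replicas)]
    return Counter(results).most_common(1)[0][0]

rng = random.Random(0)
trials = 1000
for replicas in (1, 3, 5):
    correct = sum(run_with_replicas(42, replicas, 0.3, rng) == 42
                  for _ in range(trials))
    print(f"{replicas} replicas: QoR = {correct / trials:.0%}")
```

The QoR rises with the number of replicas because independent faults rarely agree on the same wrong answer; the paper's contribution is deciding dynamically how many replicas are worth their cost in parallel resources.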
14th ISPDC Conference
Dates: 29 Jun - 01 Jul, 2015
Venue: St. Raphael, Limassol, Cyprus