GPI-Space

From Wikipedia, the free encyclopedia
GPI-Space
Developer(s): Fraunhofer ITWM
Initial release: 2010
Operating system: Linux
Type: Distributed run-time system
Website: gpi-space.com

GPI-Space is parallel programming development software developed by the Fraunhofer Institute for Industrial Mathematics (ITWM). The main concept behind the software is the separation of domain knowledge from HPC knowledge, leaving each part to the respective experts, while GPI-Space, as a framework, integrates the two.

GPI-Space makes use of GPI to solve big data problems more efficiently than comparable solutions.[1]

GPI-Space was first introduced in a domain-specific version for geology, under the name SDPA (Seismic Development and Programming Architecture) at SEG 2010 in Houston.[2]

Core layers

"Core Layers of GPI-Space"
GPI-Space Core Layers

GPI-Space comprises several layers that make up the core of the parallel programming development software.

Runtime engine

The runtime engine is responsible for distributing the available jobs across the available systems. In large-scale HPC clusters, these can be heterogeneous and consist of traditional compute nodes as well as nodes with accelerator cards, such as GPUs or Intel's Xeon Phi. Besides the mere scheduling and distribution of jobs, the runtime engine also adds fault tolerance: jobs are monitored after assignment and are reassigned to different resources if the initially assigned hardware fails. New hardware can be added dynamically.
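The scheduling and fault-tolerance behavior described above can be sketched as a simple reassignment loop. This is a hypothetical illustration, not GPI-Space's actual API; all class and function names are invented for the example.

```python
# Illustrative sketch of a fault-tolerant scheduling loop: jobs are
# assigned to live workers, and a job whose worker fails is put back
# into the queue to be rescheduled on a healthy node.

class Job:
    def __init__(self, name):
        self.name = name

class Worker:
    def __init__(self, wid, fail=False):
        self.wid = wid
        self.alive = True
        self.fail = fail  # simulate a hardware failure on this node

    def run(self, job):
        if self.fail:
            raise RuntimeError("simulated node failure")
        return f"{job.name}@{self.wid}"

def schedule(jobs, workers):
    """Assign each job to a live worker, reassigning on failure."""
    results = {}
    pending = list(jobs)
    while pending:
        job = pending.pop(0)
        worker = next(w for w in workers if w.alive)
        try:
            results[job.name] = worker.run(job)
        except RuntimeError:
            worker.alive = False   # mark the failed node as unavailable
            pending.append(job)    # reschedule the job elsewhere
    return results
```

In this sketch a failed node is simply marked dead and its job re-queued; the real runtime engine additionally supports adding new nodes while the system is running.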

Workflow engine

The workflow engine translates instructions from an existing workflow, written in XML format with special GPI-Space tags, into the runtime environment's internal instructions, which are based on Petri nets. Workflows can be arbitrarily modular and use other workflows as elements, allowing users to define building blocks once and then reuse them in future, more complicated workflows. A graphical editor for workflows is available.
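The Petri-net semantics underlying the workflow engine can be sketched in a few lines: a transition fires when every one of its input places holds a token, consuming one token per input place and producing one per output place. The workflow and place names below are invented for illustration and do not reflect GPI-Space's actual XML format.

```python
# Minimal Petri-net firing rule: a transition is a pair
# (input_places, output_places); a marking maps places to token counts.

def fire(marking, transition):
    """Fire a transition if enabled; return the new marking, else None."""
    inputs, outputs = transition
    if any(marking.get(p, 0) < 1 for p in inputs):
        return None  # not enabled: some input place lacks a token
    new = dict(marking)
    for p in inputs:
        new[p] -= 1                   # consume one token per input place
    for p in outputs:
        new[p] = new.get(p, 0) + 1    # produce one token per output place
    return new

# A two-step workflow, load -> process, modeled as two transitions:
t_load = (["input_ready"], ["data_loaded"])
t_process = (["data_loaded"], ["result"])
```

Because firing depends only on local token counts, independent transitions can fire concurrently, which is what makes Petri nets a natural internal representation for parallel workflows.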

Autoparallelization engine

"GPI Architecture"
GPI Architecture

The autoparallelization engine decides how best to execute, in parallel, the code that is fed into the system. This relieves domain programmers of the need to parallelize their own code and lets them focus on their domain. HPC knowledge and experience from Fraunhofer ITWM's Competence Center High Performance Computing (CC-HPC) is an essential contributor to the engine's ability to generate highly optimized parallel code.

Virtual memory layer

All computation with GPI-Space can be done using a fast parallel file system, such as BeeGFS, which is very similar to other big data solutions. Beyond this, however, GPI-Space is capable of doing all computation in memory, avoiding the higher latencies and performance bottlenecks of traditional I/O. Using Fraunhofer GPI (see also the graphic "GPI Architecture"), one large block of a partitioned global address space is dynamically allocated. Its RDMA capability allows fast, one-sided communication. Disk transfers to and from the virtual memory are completely asynchronous and hidden behind computation.
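Hiding transfers behind computation, as the virtual memory layer does, amounts to overlapping the load of the next data chunk with the processing of the current one. The double-buffering sketch below uses a background thread as a stand-in for an asynchronous RDMA or disk transfer; it is an illustration of the technique, not GPI-Space code.

```python
# Double buffering: prefetch chunk i+1 in the background while
# computing on chunk i, so transfer latency is hidden by computation.
import threading

def process_chunks(chunks, load, compute):
    """Apply compute() to load(chunk) for each chunk, overlapping loads."""
    results = []
    buf = {"next": None}

    def prefetch(chunk):
        buf["next"] = load(chunk)    # stands in for an async transfer

    t = threading.Thread(target=prefetch, args=(chunks[0],))
    t.start()
    for i in range(len(chunks)):
        t.join()                     # wait for the in-flight transfer
        data = buf["next"]
        if i + 1 < len(chunks):      # start the next transfer first ...
            t = threading.Thread(target=prefetch, args=(chunks[i + 1],))
            t.start()
        results.append(compute(data))  # ... then overlap it with work
    return results
```

When the compute step takes at least as long as a transfer, the transfers are fully hidden and the pipeline runs at compute speed.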

Seismic Development and Programming Architecture (SDPA)

The GPI-Space core plus domain-specific HPC modules for seismic processing make up SDPA, which executes user codes.

To showcase the validity of the GPI-Space approach, Fraunhofer first introduced it to the community as part of the Seismic Development and Programming Architecture (SDPA) at SEG 2010 in Houston, TX. The seismic domain contains countless legacy algorithms and codes, developed over the years in a variety of programming languages, that are not parallelized. Due to limited resources, it is often not feasible to rewrite those codes from scratch as a parallel version in one single programming language.

Developers at the CC-HPC have put together domain-specific solutions for seismic data that include:

  • highly optimized algorithms for parallel I/O,
  • fault tolerance,
  • parallelization patterns for seismic data, such as traces, gathers (which consist of several traces), and stacks, which enable the autoparallelization engine to work efficiently, and
  • general data management routines to handle seismic data.
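The parallelization patterns named above rest on the fact that seismic data decomposes into natural units of work: traces group into gathers, and gathers can be processed independently. The sketch below illustrates that decomposition with an invented, simplified data model; it does not reflect SDPA's actual data structures.

```python
# Illustrative seismic decomposition: group traces into gathers by a
# shared key, then reduce each gather independently (e.g. by stacking).

def split_into_gathers(traces, key):
    """Group traces by a gather key (e.g. a common midpoint)."""
    gathers = {}
    for tr in traces:
        gathers.setdefault(key(tr), []).append(tr)
    return gathers

def stack(gather):
    """Stack a gather: average the samples of its traces, position-wise."""
    n = len(gather)
    return [sum(s) / n for s in zip(*(tr["samples"] for tr in gather))]
```

Because each gather is self-contained, an autoparallelization engine can assign gathers to nodes without inter-node communication during the stack step.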

In addition, there is a set of basic workflows that end users can employ as building blocks for more sophisticated workflows. Together these components solve the parallelization problem for the seismic domain, so that domain developers can focus on their own problems without having to deal with parallelization.

An end user of SDPA can then simply execute existing legacy codes and modules, in any language, in parallel with SDPA, significantly reducing turnaround time for projects. SDPA is also used as a fast way to prototype new ideas and algorithms for parallel execution.

SDPA is used by several of Fraunhofer's industry partners in a production environment.

References

  1. ^ Tiberiu Rotaru; Mirko Rahn; Franz-Josef Pfreundt (2014). "MapReduce in GPI-Space". In Dieter an Mey; et al. (eds.). Euro-Par 2013: Parallel Processing Workshops. Springer Berlin Heidelberg. pp. 43–52. ISBN 978-3-642-54419-4.
  2. ^ McNaughton, Neil (11 January 2010). "Fraunhofer SDPA". OilIT.com.
