• Algorithms (Wednesday 3:30-5:00PM)
    Room A102/104/106
    Chair: Olaf Lubeck, Los Alamos National Laboratory

    • Title: Stable, Globally Non-iterative, Non-overlapping Domain Decomposition Parallel Solvers for Parabolic Problems
    • Authors:
      Yu Zhuang (Texas Tech University)
      Xian-He Sun (Illinois Institute of Technology)
    • Abstract:
      In this paper, we report a class of stabilized explicit-implicit domain decomposition (SEIDD) methods for the parallel solution of parabolic problems, based on explicit-implicit domain decomposition (EIDD) methods. EIDD methods are globally non-iterative, non-overlapping domain decomposition methods which, compared with parabolic solvers based on the Schwarz alternating algorithm, are computationally and communicationally efficient at each simulation time step, but they suffer from time-step-size restrictions due to conditional stability or conditional consistency. By adding a stabilization step to the EIDD methods, the SEIDD methods are freed from time-step-size restrictions while retaining EIDD's computational and communicational efficiency at each time step, making them excellent candidates for large-scale parallel simulations. Three algorithms of the SEIDD type are implemented and experimentally shown to exhibit excellent stability, computational and communication efficiency, and high parallel speedup and scalability.
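The EIDD idea underlying these methods can be sketched for the 1D heat equation u_t = u_xx: the interface unknowns are advanced explicitly, after which each subdomain is solved implicitly and independently (and hence in parallel). This is a minimal illustrative sketch only, not the paper's method: the function name `eidd_step`, the two-subdomain setup, and the omission of the stabilization step that makes the scheme SEIDD are all assumptions. The explicit interface update is precisely where the conditional-stability time-step restriction mentioned in the abstract arises.

```python
import numpy as np

def eidd_step(u, dt, dx, iface):
    """One explicit-implicit domain decomposition (EIDD) step for u_t = u_xx
    on a 1D grid with fixed Dirichlet end values. `iface` is the index of the
    single interface point splitting the domain into two subdomains.
    (Illustrative sketch only; the paper's stabilization step is omitted.)"""
    r = dt / dx**2
    n = len(u)
    u_new = u.copy()
    # 1. Explicit (forward Euler) update at the interface point only.
    #    This is the conditionally stable part of plain EIDD.
    u_new[iface] = u[iface] + r * (u[iface - 1] - 2 * u[iface] + u[iface + 1])
    # 2. Implicit (backward Euler) solve inside each subdomain, using the
    #    freshly computed interface value as a Dirichlet boundary. The two
    #    solves are independent, so they could run on separate processors
    #    with no further communication in this step.
    for lo, hi in ((1, iface), (iface + 1, n - 1)):  # interior index ranges
        m = hi - lo
        if m <= 0:
            continue
        A = np.zeros((m, m))       # tridiagonal (I - r*Laplacian), dense for clarity
        b = u[lo:hi].copy()
        for k in range(m):
            A[k, k] = 1 + 2 * r
            if k > 0:
                A[k, k - 1] = -r
            if k < m - 1:
                A[k, k + 1] = -r
        # Boundary contributions: an end of the full grid, or the new interface value.
        b[0]  += r * (u_new[lo - 1] if lo - 1 == iface else u[lo - 1])
        b[-1] += r * (u_new[hi] if hi == iface else u[hi])
        u_new[lo:hi] = np.linalg.solve(A, b)
    return u_new
```

For a smooth initial profile, one such step damps the solution as the heat equation should, with only the single interface value exchanged between subdomains.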

    • Title: Stochastic Search for Signal Processing Algorithm Optimization
    • Authors:
      Bryan Singer (Carnegie Mellon University)
      Manuela Veloso (Carnegie Mellon University)
    • Abstract:
      This paper presents an evolutionary algorithm for finding optimal implementations of signal transforms and compares this approach against other search techniques. A single signal processing algorithm can be represented by a very large number of different but mathematically equivalent formulas. When these formulas are implemented in actual code, however, their running times differ significantly. Signal processing algorithm optimization aims at finding the fastest formula. We present a new approach that successfully solves this problem, using an evolutionary stochastic search algorithm, STEER, to search the very large space of formulas. We empirically compare STEER against other search methods, showing that it can find notably faster formulas while timing only a very small portion of the search space.
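STEER's actual formula representation and operators are not detailed in the abstract, so the sketch below shows only the generic shape of such an evolutionary stochastic search: an elitist loop in which candidate "formulas" are scored by measured running time, so only the candidates the search actually generates are ever timed. Everything here is an illustrative assumption: the name `evolve`, the bit-string stand-in for formula representations, and the synthetic cost function used in place of real timings.

```python
import random

def evolve(pop_size, generations, random_candidate, mutate, crossover, fitness):
    """Generic elitist evolutionary search skeleton (illustrative, not STEER
    itself). Candidates are scored by a caller-supplied fitness function, e.g.
    measured running time, so only a small fraction of the space is timed."""
    pop = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        elite = scored[:pop_size // 4]          # keep the fastest quarter unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)      # breed two fast candidates
            children.append(mutate(crossover(a, b)))
        pop = elite + children
    return min(pop, key=fitness)

# Toy instantiation: 12-bit strings stand in for formula choices, and the
# number of set bits stands in for a measured running time (assumptions).
n = 12
rc = lambda: [random.randint(0, 1) for _ in range(n)]
cx = lambda a, b: a[:n // 2] + b[n // 2:]                       # one-point crossover
mu = lambda c: [bit ^ (i == random.randrange(n)) for i, bit in enumerate(c)]
cost = lambda c: sum(c)                                          # stand-in for timing
best = evolve(32, 40, rc, mu, cx, cost)
```

The elitism is what keeps the comparison fair in spirit: the best formula found so far is never re-timed away, mirroring how search-based optimizers remember their fastest implementation.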

    • Title: A Hypergraph-Partitioning Approach for Coarse-Grain Decomposition
    • Authors:
      Umit V. Catalyurek (The Ohio State University)
      Cevdet Aykanat (Bilkent University)
    • Abstract:
      We propose a new two-phase method for the coarse-grain decomposition of irregular computational domains. This work focuses on the 2D partitioning of sparse matrices for parallel matrix-vector multiplication, but the proposed model can also be used to decompose the computational domains of other parallel reduction problems. This work also introduces the use of multi-constraint hypergraph partitioning for solving the decomposition problem. The proposed method explicitly models the minimization of communication volume while enforcing an upper bound of p+q-2 on the maximum number of messages handled by a single processor, for a parallel system with P = p × q processors. Experimental results on a wide range of realistic sparse matrices confirm the validity of the proposed methods, achieving on overall average up to 25 percent better partitions than the standard graph model in terms of total communication volume, and 59 percent better partitions in terms of the number of messages.