• Material Science Applications (Tuesday 1:30-3:00PM)
    Room A201/205
    Access Grid Enabled
    Chair: Robert Eades, Pacific Northwest National Laboratory

    • Title: Scalable Atomistic Simulation Algorithms for Materials Research
    • Authors:
      Aiichiro Nakano (Louisiana State University)
      Rajiv K. Kalia (Louisiana State University)
      Priya Vashishta (Louisiana State University)
      Timothy J. Campbell (Logicon Inc. and Naval Oceanographic Office Major Shared Resource Center)
      Shuji Ogata (Yamaguchi University, Japan)
      Fuyuki Shimojo (Hiroshima University, Japan)
      Subhash Saini (NASA Ames Research Center)
    • Abstract:
      A suite of scalable atomistic simulation programs has been developed for materials research based on space-time multiresolution algorithms. Design and analysis of parallel algorithms are presented for molecular dynamics (MD) simulations and quantum-mechanical (QM) calculations based on density functional theory. Performance tests have been carried out on 1,088-processor Cray T3E and 1,280-processor IBM SP3 computers. The linear-scaling algorithms have enabled 6.44-billion-atom MD and 111,000-atom QM calculations on 1,024 SP3 processors with parallel efficiency well over 90%. The production-quality programs also feature wavelet-based computational-space decomposition for adaptive load balancing, space-filling-curve-based adaptive data compression with a user-defined error bound for scalable I/O, and octree-based fast visibility culling for immersive and interactive visualization of massive simulation data.
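The space-filling-curve ordering that underlies the scalable-I/O scheme can be illustrated with a small sketch. The snippet below is illustrative only, not the authors' code; `morton_key` and `order_atoms` are hypothetical names, and a Z-order (Morton) curve stands in for whichever curve the programs actually use. Interleaving the bits of a particle's cell indices yields a 1-D key, and sorting particles by that key makes spatially adjacent particles adjacent in memory and on disk, which is what makes contiguous domain decomposition and delta-style compression effective.

```python
def morton_key(ix, iy, iz, bits=10):
    """Interleave the bits of three cell indices into a single Z-order key."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key

def order_atoms(positions, box, cells=8):
    """Sort atoms along a Z-order space-filling curve so that atoms that are
    close in space end up close together in the 1-D output ordering."""
    def key(p):
        # Bin each coordinate into a cell index, then map 3-D -> 1-D.
        idx = [min(cells - 1, int(p[d] / box * cells)) for d in range(3)]
        return morton_key(*idx)
    return sorted(positions, key=key)
```

A Hilbert curve gives somewhat better locality than Z-order, at the cost of a more involved index computation; either preserves the property that nearby keys correspond to nearby cells.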

    • Title: An 8.61 Tflop/s Molecular Dynamics Simulation for NaCl with a Special-Purpose Computer: MDM
    • Authors:
      Tetsu Narumi (RIKEN)
      Atsushi Kawai (RIKEN)
      Takahiro Koishi (RIKEN)
    • Gordon Bell Prize Finalist
    • Abstract:
      We performed a molecular dynamics (MD) simulation of 33 million pairs of NaCl ions with the Ewald summation and obtained a calculation speed of 8.61 Tflop/s. In this calculation we used a special-purpose computer, MDM, which we have developed for the calculation of the Coulomb and van der Waals forces. The MDM enabled us to perform large-scale MD simulations without truncating the Coulomb force. It is composed of MDGRAPE-2, WINE-2, and a host computer. MDGRAPE-2 accelerates the calculation of the real-space part of the Coulomb and van der Waals forces, WINE-2 accelerates the calculation of the wavenumber-space part of the Coulomb force, and the host computer performs the remaining calculations. With the completed MDM system we performed an MD simulation similar to the one on which our SC2000 Gordon Bell Prize submission was based. With this large-scale MD simulation we can dramatically decrease the temperature fluctuation to less than 0.1 Kelvin.
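The real-space/wavenumber-space split that MDGRAPE-2 and WINE-2 accelerate in hardware is the standard Ewald decomposition of the periodic Coulomb sum into a short-ranged erfc-screened pair term, a Gaussian-filtered reciprocal-space term, and a self-energy correction. The sketch below is illustrative software only (the hardware pipelines are not modeled, and `ewald_energy` is a hypothetical name); applied to one conventional rock-salt cell it should recover the NaCl Madelung constant (≈ 1.7476).

```python
import itertools
import math

def ewald_energy(pos, q, a, alpha=None, nreal=1, kmax=6):
    """Total Coulomb energy (Gaussian units) of point charges in a cubic
    periodic cell of side a, computed as the three Ewald terms."""
    n = len(pos)
    if alpha is None:
        alpha = 6.0 / a  # splitting parameter: screens the real-space sum
    # 1) Real-space part: erfc-screened pair sum over nearby periodic images.
    e_real = 0.0
    for i in range(n):
        for j in range(n):
            for img in itertools.product(range(-nreal, nreal + 1), repeat=3):
                if i == j and img == (0, 0, 0):
                    continue
                d = [pos[i][c] - pos[j][c] + img[c] * a for c in range(3)]
                r = math.sqrt(sum(x * x for x in d))
                e_real += 0.5 * q[i] * q[j] * math.erfc(alpha * r) / r
    # 2) Wavenumber-space part: Gaussian-filtered structure-factor sum.
    e_rec, vol = 0.0, a ** 3
    for nk in itertools.product(range(-kmax, kmax + 1), repeat=3):
        if nk == (0, 0, 0):
            continue
        k = [2.0 * math.pi * c / a for c in nk]
        k2 = sum(x * x for x in k)
        s_re = sum(q[i] * math.cos(sum(k[c] * pos[i][c] for c in range(3)))
                   for i in range(n))
        s_im = sum(q[i] * math.sin(sum(k[c] * pos[i][c] for c in range(3)))
                   for i in range(n))
        e_rec += (2.0 * math.pi / vol) * math.exp(-k2 / (4.0 * alpha ** 2)) \
                 / k2 * (s_re ** 2 + s_im ** 2)
    # 3) Self-energy correction for the screening Gaussians.
    e_self = -alpha / math.sqrt(math.pi) * sum(qi * qi for qi in q)
    return e_real + e_rec + e_self
```

The real-space term (the MDGRAPE-2 side) decays like erfc and can be truncated at short range; the reciprocal-space term (the WINE-2 side) decays like a Gaussian in k, which is why the full, untruncated Coulomb interaction remains affordable.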

    • Title: Multi-teraflops Spin Dynamics Studies of the Magnetic Structure of FeMn/Co Interfaces
    • Authors:
      A. Canning (Lawrence Berkeley National Laboratory)
      B. Ujfalussy (University of Tennessee)
      T. C. Schulthess (Oak Ridge National Laboratory)
      X. G. Zhang (Oak Ridge National Laboratory)
      W. A. Shelton (Oak Ridge National Laboratory)
      D. M. C. Nicholson (Oak Ridge National Laboratory)
      G. M. Stocks (Oak Ridge National Laboratory)
      Yang Wang (Pittsburgh Supercomputing Center)
      T. Dirks (IBM)
    • Gordon Bell Prize Finalist
    • Abstract:
      We have used the power of massively parallel computers to perform first principles spin dynamics (SD) simulations of the magnetic structure of Iron-Manganese/Cobalt (FeMn/Co) interfaces. These large scale quantum mechanical simulations, involving 2016-atom super-cell models, reveal details of the orientational configuration of the magnetic moments at the interface that are unobtainable by any other means. Exchange bias, which involves the use of an antiferromagnetic (AFM) layer such as FeMn to pin the orientation of the magnetic moment of a proximate ferromagnetic (FM) layer such as Co, is of fundamental importance in magnetic multilayer storage and read head devices. Here the equation of motion of first principles SD is used to perform relaxations of model magnetic structures to the true ground (equilibrium) state. Our code is intrinsically parallel and has achieved a maximum execution rate of 2.46 Teraflops on the IBM SP at the National Energy Research Scientific Computing Center (NERSC).
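The relaxation-to-the-ground-state idea behind equation-of-motion spin dynamics can be illustrated on a toy model. The sketch below is a drastic simplification, not the first-principles scheme: a classical Heisenberg chain of unit spins evolved under purely damped Landau-Lifshitz dynamics (the precession term is dropped), with all names illustrative. Each spin rotates toward its local exchange field until the configuration reaches the (here ferromagnetic) equilibrium state.

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def relax(spins, J=1.0, lam=0.5, dt=0.05, steps=2000):
    """Damped Landau-Lifshitz relaxation of classical unit spins toward the
    ground state of a nearest-neighbour Heisenberg chain (J > 0: ferro)."""
    spins = [normalize(s) for s in spins]
    n = len(spins)
    for _ in range(steps):
        new = []
        for i, m in enumerate(spins):
            # Effective field on spin i: exchange from its chain neighbours.
            h = [0.0, 0.0, 0.0]
            for j in (i - 1, i + 1):
                if 0 <= j < n:
                    h = [h[d] + J * spins[j][d] for d in range(3)]
            # Pure damping term -lam * m x (m x H) drives m toward H.
            damp = cross(m, cross(m, h))
            m = tuple(m[d] - lam * dt * damp[d] for d in range(3))
            new.append(normalize(m))  # keep |m| = 1 after the Euler step
        spins = new
    return spins
```

In the paper's setting the effective field comes from a constrained first-principles electronic-structure calculation at every step, which is what makes the 2016-atom simulations a multi-teraflop computation; the damped equation of motion itself plays the same role as in this toy.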