• Adventure Project (Graduate School of Frontier Science)
    Booth R0565
    Shinobu Yoshimura, yoshi@q.t.u-tokyo.ac.jp

      The ADVENTURE project is one of the “Computational Science and Engineering” projects within the JSPS Research for the Future program. It is developing ADVENTURE, an advanced general-purpose finite element analysis system that can solve models with 10-100 million DOFs and will be released as freeware. The ADVENTURE system employs a module-based architecture. Each module is an independent application program that can be operated either alone or in cooperation with other modules. The domain decomposer, solvers, and visualizer are fully parallelized with domain decomposition-based parallelization techniques and can be operated in various heterogeneous parallel and distributed environments. The system also includes several optimization modules for design. The I/O format among the modules is standardized as the ADVENTURE I/O. In the exhibition, its practical analysis and design capabilities are demonstrated together with several industrial applications such as a full-scale 3D model of a nuclear pressure vessel with 60 million DOFs.

  • Today's Discoveries Benefit Humanity Tomorrow (Albuquerque High Performance Computing Center)
    Booth R0127
    Candace A. Shirley, cshirley@mhpcc.edu

      The Maui High Performance Computing Center (MHPCC) and the Albuquerque High Performance Computing Center (AHPCC) are national supercomputing centers managed by the University of New Mexico (UNM). Established under a Cooperative Agreement with the Air Force Research Laboratory (AFRL), MHPCC is a leader in scalable computing technologies and is uniquely chartered to support the Department of Defense (DOD), government, commercial, and academic communities. AHPCC provides an environment for research and education in advanced high-performance computing, interdisciplinary applications, and state-of-the-art communications. MHPCC is a Distributed Center of the DOD High-Performance Computing Modernization Program (HPCMP), and both MHPCC and AHPCC are SuperNodes of the National Computational Science Alliance. Projects featured at SC2001 include advanced image enhancement research, analyses of entity-based simulations of land combat, quantum chemistry and nanomaterials, advanced computing methods to enhance education, development of Linux Superclusters, and Access Grid demos including cyber art and 3D virtual reality environments.

  • High-Performance Cluster Computing (Ames Laboratory, Scalable Computing Lab [DOE])
    Booth R0337
    David Halstead, halstead@ameslab.gov

      The Scalable Computing Laboratory in the DOE Ames Laboratory will be showcasing work on assessing and improving communication of real-world parallel HPC applications, and on large cluster computer systems. In addition to the performance evaluation of multiple high-speed dedicated system area networks, we will be presenting an efficient, threaded, message passing benchmark to optimize the utilization of clustered SMP machines. Included in this work is the option of trading latency performance for bandwidth by using data compression. This has particular relevance to cluster computing, in which compute cycles are cheap, but internode and intersite communications are limited. Finally, research into improving real-world HPC application performance with shared memory emulation APIs and lightweight message passing techniques will be presented together with sophisticated parallel resource management tools.
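      The latency-for-bandwidth trade described above can be sketched in a few lines: compress a message before sending only when the saved bytes justify the extra CPU time. The function and threshold below are illustrative, not part of the Ames Laboratory benchmark itself.

```python
import zlib

def maybe_compress(payload: bytes, min_ratio: float = 0.9):
    """Compress a message before sending if it shrinks enough.

    Spends CPU cycles (extra latency) to reduce bytes on the wire,
    improving effective bandwidth on links where communication, not
    computation, is the bottleneck. Names and threshold are illustrative.
    """
    packed = zlib.compress(payload)
    if len(packed) < min_ratio * len(payload):
        return packed, True   # worth sending compressed
    return payload, False     # nearly incompressible: send as-is

# Highly regular data (typical of many HPC arrays) compresses well:
msg = b"0.000 " * 10000
wire, compressed = maybe_compress(msg)
assert compressed and len(wire) < len(msg)
```

      On a cluster where compute cycles are cheap and internode links are slow, even a simple adaptive scheme like this can raise effective throughput; on fast dedicated interconnects the compression latency may not pay off, which is exactly the trade-off the benchmark explores.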

  • Arctic Region Supercomputing Center
    Booth R0101
    Jenn E. Wagaman, wagaman@arsc.edu

      The Arctic Region Supercomputing Center (ARSC) supports the computational needs of researchers within the Department of Defense, the University of Alaska Fairbanks, other academic institutions and government agencies by providing high performance computing, visualization and networking resources, programming and technical expertise, and training. Areas of specialty supported by ARSC include ocean modeling, atmospheric sciences, climate/global change, space physics, satellite remote sensing, and civil, environmental and petroleum engineering. ARSC collaborates in a number of partnerships, including a joint effort with the U.S. Army Engineer Research and Development Center Major Shared Resource Center and the Albuquerque High Performance Computing Center. ARSC will also be participating in the SC Global event. The Arctic Region Supercomputing Center operates a Cray T3E, a Cray SV1ex, and an IBM Winterhawk II system as well as various visualization resources, including a Pyramid Systems ImmersaDesk and a network of SGI workstations located in a video production/training lab and three additional access labs on campus.

  • Tools and Technology for High-Performance and Collaborative Computing (Argonne National Laboratory)
    Booth R0352
    Lori Freitag-Diachin, freitag@mcs.anl.gov

      Researchers at Argonne National Laboratory are developing powerful collaborative tools and technologies that will advance the state of the art in large-scale computing and will make scientists more productive. The exhibit showcases work in the following areas: numerical libraries for large-scale computational applications; parallel programming tools; collaborative tools; scalable superclusters; advanced visualization environments; software infrastructure for the national computational grid; servers enabling problem solving over the Internet; and associated scientific computing applications in such areas as computational chemistry, computational astrophysics, and climate modeling. Closely tied with these projects is an emphasis on collaborations, including the ASCI program and the NCSA PACI Alliance.

  • ASCI DOE Tri-Lab Exhibit
    Booth R0375
    Jean Shuler, jshuler@llnl.gov

      The Accelerated Strategic Computing Initiative (ASCI) exhibit will present current research and development in such key subject areas as future working environments for computational professionals, wireless communications, and novel strategies for deploying breakthrough research. These presentations and demonstrations will exploit innovative technologies designed, developed, and implemented within ASCI. The website http://www.asci.doe.gov will feature SC2001 output. Booth visitors can expect to find a mix of general ASCI information and personally selected highlights. Expert ASCI researchers will share their enthusiasm and experience with all visitors, regardless of skill level. Knowledgeable program generalists will greet visitors courteously and promptly, while specialty scientists and engineers demonstrate or discuss program specifics. ASCI personnel will wear ASCI logo shirts to readily identify them, in and beyond the Booth.

  • R&D Activities on the Asia Pacific Grid (Asia Pacific Grid (ApGrid) / Electrotechnical Laboratory)
    Booth R0665
    Yoshio Tanaka, yoshio.tanaka@aist.go.jp

      Asia Pacific Grid (ApGrid) is a grid infrastructure around the Asia-Pacific region. The ApGrid provides a meeting point for all Asia-Pacific HPCN researchers, and it acts as a communication channel to the Global Grid Forum and other Grid communities. A region-wide testbed for global computing (Grid and/or Meta) can be established on the ApGrid. This exhibit demonstrates various HPCN research and development activities on the ApGrid, such as high-performance computing with supercomputers (including a Hitachi SR8000, an IBM RS6000/SP, and several large-scale PC clusters provided by TACC/AIST and TITECH); building a virtual supercomputer center on the ApGrid; global computing on the ApGrid using Ninf; and Grid Data Farm for petascale data-intensive computing.

  • Boston University
    Booth R0201
    Glenn Bresnahan, glenn@bu.edu

      Boston University's research exhibit features its NSF-funded project, MARINER: Mid-level Alliance Resource In the North East Region. MARINER is a partner in the National Computational Science Alliance and extends the university's efforts in advanced scientific computing and networking to organizations throughout the region. Demonstrations of current research and educational projects developed through the Center for Computational Science and the Scientific Computing and Visualization Group will be shown using graphics workstations, posters, and videos in the exhibit Booth. We will also be demonstrating distributed computing, collaboration, and visualization software with our Alliance and other partners.

  • Brigham Young University
    Booth R1152
    Quinn Snell, snell@cs.byu.edu

      Brigham Young University has recently established the Ira and Marylou Fulton Supercomputer Center. The center is home to a 188-processor IBM SP-2, a 32-processor Origin 3000, a 64-processor Origin 2000, and a 16-processor Origin 2000 with 3 Infinite Reality graphics pipes. At BYU, we are doing research in computational biology, computational chemistry, mechanical engineering, and remote sensing and satellite image processing for weather prediction. BYU has also been selected as a PACE partner with General Motors and has been involved with modeling the new Camaro and Hummer designs. The Booth will contain demos, posters, flyers describing the research and projects, and a scale model of a Camaro.

  • HP Scientific Computing at Brookhaven National Lab (Brookhaven National Laboratory)
    Booth R0749
    John Spiletic, spiletic@bnl.gov

      Brookhaven National Laboratory proposes to exhibit new computational science developments in four research areas: The Center for Data Intensive Computing (CDIC) is pursuing research in advanced scientific computing and its application to high-energy and nuclear physics, biological and environmental studies, and materials and chemical science. The Relativistic Heavy Ion Collider and future proposed ATLAS experiments require massive computational facilities for collecting and analyzing petabytes of data. We will highlight the current state of the project. BNL is a "DOE topical computing site" with the installation of the QCDSP 600 Gflop supercomputer, the Gordon Bell Prize winner in 1998. The QCDSP machine will be succeeded by QCDOC, a 10-Teraflop supercomputer, whose architecture will be described. The Brookhaven Data Visualization group will demonstrate advances in the areas above, along with research in parallel and remote visualization of large data sets. A distance learning project with two NY colleges will be highlighted.

  • Caltech Center for Advanced Computing Research
    Booth R0340
    Chip Chapman, chip@cacr.caltech.edu

      For almost two decades, the Center for Advanced Computing Research (CACR) and its predecessors at the California Institute of Technology have simultaneously provided leading-edge capabilities for computational science and engineering research collaborations and experimented with new technologies to help define the technical computing environment of the future. Recently, CACR has focused on the convergence of data-intensive applications with numerically intensive computing and the associated storage, networking, and visualization challenges. Interactive demonstrations will illustrate progress in research collaborations including Caltech's Center for Simulation of Dynamic Response of Materials, the GriPhyN and Particle Physics Data Grid (PPDG) projects, the Digital Sky and Virtual Sky projects, the Laser Interferometer Gravitational-Wave Observatory (LIGO), and CACR's participation in the National Partnership for Advanced Computational Infrastructure (NPACI). System and architectural issues, including the role of Beowulf-class clusters spanning these applications, will also be featured.

  • Research Activities in CCSE (CCSE of Japan Atomic Energy Research Institute)
    Booth R0471
    Toshio Hirayama, hirayamt@koma.jaeri.go.jp

      CCSE of the Japan Atomic Energy Research Institute was established in April 1995 with governmental guidance to promote computational science and engineering among the national and other semi-governmental research organizations. Since 2000, CCSE has been constructing the ITBL system, which integrates computing resources in geographically distributed research organizations seamlessly as well as securely. The project is proceeding in cooperation with other research organizations affiliated with the Ministry of Education, Culture, Sports, Science and Technology. We will present our research activities on the project.

  • NESNEX: Nuclear Energy Simulation, the Next Generation (CEA/DEN)
    Booth R0779
    Thierry Nkaoua, thierry.nkaoua@cea.fr

      The exhibit comprises a presentation of the French system codes for nuclear industry and research applications; demonstrations of these codes; and a presentation of NESNEX: the development of a new generation of integrated codes, from user interface to advanced numerics and physics.

  • Center for Computational Physics, University of Tsukuba
    Booth R0684
    Taisuke Boku, taisuke@is.tsukuba.ac.jp

      The Center for Computational Physics is a dedicated center for research on computational physics, including particle physics, condensed matter physics, and astrophysics, as well as computer science for high-performance parallel processing. The main resource of the center is a massively parallel processing system named CP-PACS, equipped with 2048 processing units to provide over 600 GFLOPS of peak performance. In this exhibition, we will present current research on the component technologies for the new-generation MPP system for very large-scale scientific calculations, based on a novel processor architecture, an optical interconnection network, a high-performance I/O system, and a real-time visualization system. We will also provide an on-line demonstration of our Heterogeneous Multi-Computer System, which combines a general-purpose MPP (CP-PACS for continuum simulation) and a special-purpose one (GRAPE-6 for particle simulation) with very high-performance parallel network channels to perform realistic astrophysics simulations. Other results in various fields of computational physics are also displayed.

  • PROMIS Compiler System (Center for Supercomputing Research and Development)
    Booth R0508
    Steven Carroll, scarroll@csrd.uiuc.edu

      The PROMIS compiler system is a highly retargetable, modular, and extensible compiler infrastructure. Its internal representation is well suited to loop-level and task-level parallelism. Current projects include static performance analysis, advanced symbolic analysis, system-level machine description, incremental compilation, and static scheduling of hierarchical parallelism.

  • Computational Science & Engineering at CLRC (CLRC Daresbury Laboratory)
    Booth R0861
    Mike Ashworth, m.ashworth@dl.ac.uk

      The Computational Science and Engineering Department at CLRC acts as a UK focus for the development, application, and support of research in computational science and engineering. We will overview our work with the UK academic community, focusing in particular on scientific highlights from the collaborative computational projects and our high performance computing activities, including: high performance quantum chemistry applications; modeling mechanisms for DNA fragment transport across cell membranes; first principles molecular dynamics simulations of water adsorption on oxide surfaces; modeling high temperature superconducting properties; Reynolds stress laminar flamelet models of turbulent pre-mixed combustion; parallelization of FLITE3D, an irregular grid whole aircraft Euler solver; PARASOL, an integrated environment for parallel sparse matrix solvers; Computers by Design, virtual benchmarking of parallel systems in real applications; grid computing; and micro-fluidics simulations.

  • Large-scale Windows Computing (Cornell Theory Center [CTC])
    Booth R1059
    L. Callahan, cal@tc.cornell.edu

      CTC will highlight a number of large-scale scientific projects that are running on our Velocity, Velocity+, and CMI clusters. These projects include multiscale materials modeling, structural biology, and genomics. We will also feature ongoing work at the ARS/USDA Center for Agricultural Bioinformatics, located at CTC, and a new NASA project focused on revitalizing engineering education. We will demonstrate several e-science, or Web-computing, projects—one relating to materials research and another relating to genomics. We will also demonstrate a number of Windows-based tools for high-performance computing, a Windows-based CAVE desktop development environment, as well as our scientific outreach through Web-based 3-D virtual worlds. We will discuss services we provide to sites interested in moving to a Windows-based environment.

  • Grid-enabled MEG Data Analysis System (Cybermedia Center, Osaka University, Japan)
    Booth R0567
    Susumu Date, date@rd.center.osaka-u.ac.jp

      Our project team's final goal is to reveal brain functions. The human brain is complex in comparison with other internal organs, and revealing unknown brain functions requires a variety of computationally intensive signal processing techniques. These techniques take too long for realistic analyses and diagnoses. Toward this goal, we have been building a brain data analysis system using grid technologies over the past few years. The system aims at seamless integration, over the Internet, of a data acquisition process, a data analysis process, and a process for applying the analysis results. The brain data is analyzed on multiple high-performance computers on the Internet, and the results are then transferred to and visualized on a computer on the system user's desk. In our Booth, we plan to demonstrate MEG data analysis.

  • Department of Defense High Performance Computing Modernization Program
    Booth R0309
    Ralph A. McEldowney, ralph.mceldowney@wpafb.af.mil

      The U.S. Department of Defense (DoD) High-Performance Computing Modernization Program (HPCMP) was created to modernize the DoD's high performance computing and networking resources. The program's vision is to provide DoD's scientists and engineers with advanced computational environments to solve the most challenging problems, effectively delivering science to the warfighter. The theme for this year's exhibit is “2001: A DoD HPC Odyssey.” The exhibit will highlight the DoD's nearly decade-long period of growth in HPC capabilities and reveal future HPC plans. The exhibit will also showcase the program's three major initiatives: high performance computing centers, high-speed networking, and software development. In addition, it will highlight significant DoD research conducted in ten computational technology areas. Interactive presentations, videos, demonstrations, and posters will illustrate how the DoD's HPC odyssey is successfully delivering science to the warfighter.

  • Dancing Beyond Boundaries (Digital Worlds Institute, University of Florida)
    Booth R1070
    Joella Walz, joella@ufl.edu

      “Dancing Beyond Boundaries” is a project exploring whether internationally distributed dancers, musicians, graphic artists, videographers, and choreographers can create, rehearse, and perform a new collaborative work using the Internet2, the AccessGrid, and a select number of high-quality video and audio streams. The main performance stage is located near our Booth. From New York City, an internationally renowned choreographer will interactively create the new piece, conduct the rehearsals, and oversee the final performance. In Brazil, master percussionists will compose, collaborate, perform, and transmit surround sound audio to all performers. At the University of Florida, a dance troupe will rehearse and perform with the dancers on the floor in Denver, their images displayed on large rear-projected screens on the exhibit floor stage. And finally, a computer graphics artist in another remote geographic location will visually accompany the dance and send real-time, broadcast-quality, processed video and animations to the Denver stage. The Digital Worlds Institute, located at the University of Florida, is an interdisciplinary extension of both the Colleges of Fine Arts and Engineering. The Institute's mission is to advance digital worlds technologies by drawing on the diverse talents and skills of the artist, scientist, and engineer.

  • High Performance Computation of Intelligent Optimization (Doshisha University, Afiis Project)
    Booth R0581
    Mitsunori Miki, mmiki@mail.doshisha.ac.jp

      The Academic Frontier of Intelligent Information Science and its applications to problem solving in engineering (AFISS) project is supported by Doshisha University and the Ministry of Education, Science, Sports and Culture, Japan. Problems of finding design variables that maximize or minimize the value of an objective function are called optimization problems. To solve optimization problems automatically, iterations must be performed between an optimizer, which decides the next search points, and an analyzer, which determines the value of the objective function. These iterations can incur a high calculation cost, which makes optimization one of the important applications in HPC. In our research exhibition, you can find the intelligent optimization methods and their results, organized into three subtopics: Intelligent Optimization Design of huge structures (IOD), Prediction of protein structures by Evolutionary Optimization (EO), and Optimization Design in Global Computation Environment (OD/GCE).
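      The optimizer/analyzer iteration described above can be sketched minimally as follows. Here a toy quadratic stands in for an expensive engineering analysis, and plain random search stands in for the intelligent (evolutionary) methods shown at the Booth; all names are illustrative.

```python
import random

def analyzer(x: float) -> float:
    """Evaluates the objective function. A stand-in for an expensive
    analysis (e.g. a structural or protein-energy computation)."""
    return (x - 3.0) ** 2

def optimizer(n_iters: int = 200, seed: int = 0) -> float:
    """Decides the next search point and iterates with the analyzer.
    Random search is only a placeholder for evolutionary methods."""
    rng = random.Random(seed)
    best_x = 0.0
    best_f = analyzer(best_x)
    for _ in range(n_iters):
        x = best_x + rng.uniform(-1.0, 1.0)  # propose the next point
        f = analyzer(x)                       # expensive evaluation
        if f < best_f:                        # keep the improvement
            best_x, best_f = x, f
    return best_x

assert abs(optimizer() - 3.0) < 1.0  # converges near the minimum at x = 3
```

      Because each analyzer call may be a full simulation, the cost of the loop is dominated by evaluations, which is why distributing them across HPC resources, as in the OD/GCE subtopic, pays off.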

  • Platform Architectures for Embedded HP Computing (Embedded High Performance Computing Project)
    Booth R0680
    Murakami Kazuaki, murakami@c.csce.kyushu-u.ac.jp

      The Embedded High-Performance Computing (EHPC) project, which is funded by the Ministry of Education, Culture, Sports, Science and Technology of Japan, is a collaboration among six universities, two governmental institutions, and four corporations. The primary goal of the project is to develop a platform architecture that can be customized easily to provide semi-special-purpose computers for many scientific applications. High cost-performance will be achieved by using system LSI technologies, FPGAs (Field Programmable Gate Arrays), and other embedded system technologies. The Booth will show the first prototype of the EHPC platform and some scientific applications, including molecular-orbital calculation, density functional calculation, and drug design.

  • EPCC: Edinburgh Parallel Computing Centre
    Booth R0
    Alan D. Simpson, a.simpson@epcc.ed.ac.uk

      EPCC has been one of the leading HPC centres in Europe since 1990. Today, it has 45 full-time staff and, as well as providing research support and training for academic users and European visitors, produces business solutions for UK and European industry. At SC2001, we will have videos and interactive demonstrations of the results of a number of projects in Grid and network computing. We are working with Cisco Systems on a simulator for differentiated services on the Internet. We have also recently produced a toolkit for Grid portals for HPC applications and a prototype infrastructure for a pure Java Grid built on top of Jini. In addition, EPCC currently leads the benchmarking activities of the Java Grande Forum, including language comparisons for real, parallel codes. The exhibit will highlight EPCC's European projects: the TRACS visitor programme; technology transfer; and the ENACTS consortium of HPC and Data Centres.

  • Swiss HPCN Grid (ETH-CSCS)
    Booth R0773
    Dr. D. Maric, maric@cscs.ch

      The Swiss national High Performance Computing and Networking (HPCN) vision and strategy, and its implementation in the frame of the Swiss HPCN Grid, are featured. The Swiss HPCN Grid comprises the following four sites of the Swiss ETH (Federal Institute of Technology) domain: ETH-CSCS (Swiss Center for Scientific Computing, the leading site), ETH-Zuerich, EPF-Lausanne, and PSI-Villigen. The Swiss HPCN Grid is open and serves all national academic, industrial, and governmental HPCN user communities. The presentation covers the architecture of the Grid, the resources and competencies at all four sites, and examples of projects in both computational science and engineering applications and HPCN technologies.

  • Applications Testbed for European GRID Computing (EUROGRID Project)
    Booth R0871
    Daniel Mallmann, d.mallmann@fz-juelich.de

      The EUROGRID project is a shared cost Research and Technology Development project (RTD) granted by the European Commission (grant No. IST 20247). It is part of the Information Society Technologies Programme (IST). The grant period is November 1, 2000 till October 31, 2003. Within the project, a European GRID network of leading High-Performance Computing centres from different European countries will be established. The EUROGRID software infrastructure that uses the existing Internet network and offers seamless and secure access for the EUROGRID users will be operated and supported. Important GRID software components like fast file transfer, resource broker, interface for coupled applications and interactive access, will be developed and integrated into EUROGRID. Distributed simulation codes from different application areas (biomolecular simulations, weather prediction, coupled CAE simulations, structural analysis, real-time data processing) are demonstrated. After the project end, the EUROGRID software will be available as a supported product.

  • European Center for Parallelism of Barcelona
    Booth R0765
    Jordi Torres, torres@cepba.upc.es

      The Booth will present the developments and results achieved by CEPBA in research and development projects during the last few years. The main project will be Paraver, a visualization and analysis tool for MPI, OpenMP, and Java programs. Other projects at the Booth will be Nanos (cooperation between the OpenMP compiler and OS scheduling on multiprogrammed multiprocessors) and Dimemas (a simulator of distributed memory machines that is being successfully used in tuning MPI applications). We intend to show that a careful design of different tools enables their integrated use, supporting methodologies and practices that lead to very high productivity in parallelization. Visitors interested in these topics will be able to see demonstrations of the different projects and obtain explanations of CEPBA developments and activities.

  • UPC: Unified Parallel C (George Washington University)
    Booth R547
    Tarek El-Ghazawi

      This research exhibit will demonstrate the underlying concepts of UPC, an explicitly parallel extension of ANSI C designed to provide both good performance and ease of programming for high-end parallel computers. UPC provides a distributed shared-memory programming model and includes features that allow programmers to specify and exploit memory locality. Such constructs facilitate explicit control of data and work distribution among threads so that remote memory accesses are minimized. Thus, UPC maintains the C language heritage of keeping programmers in control of and close to the hardware. Among the advanced features offered by UPC are shared and private pointers into the shared and private address spaces, shared and private data, efficient synchronization mechanisms including non-blocking barriers, and support for establishing different memory consistency models. In addition to its original open-source implementation, UPC has gained acceptance from several vendors who are producing exploratory compilers. For more information see upc.gwu.edu.

  • The Grid is Not Enough (High Performance Computing Center Stuttgart [HLRS])
    Booth R0761
    Matthias Mueller, mueller@hlrs.de

      The High Performance Computing Center Stuttgart (HLRS) is a national HPC center in Germany for research. In addition, together with debis Systemhaus GmbH and Porsche, it has formed a joint company to provide access to supercomputers for research and industry. These supercomputers comprise a wide range of platforms, and RUS/HLRS is actively pursuing the goal of a distributed working environment that allows users to see and use all resources in a seamless way. At SC2001, HLRS will demonstrate its activities in the field of Grid Computing for science and industry. Our presentation will show the main building blocks HLRS is working with, and several projects highlight how these blocks are put together. Examples from industry include applications from the car and aerospace sectors. Scientific research is demonstrated in the fields of medicine, biology, chemistry, and physics. The results will be visualized by our own collaborative visualization tool, COVISE.

  • Research@Indiana (Indiana University, Purdue University, University of Notre Dame–Rose Hulman Institute of Technology)
    Booth R1161
    David C. Hart, dhart@indiana.edu

      Indiana has become increasingly important as a center of Information Technology research, development, and commerce. Indiana is home to the Abilene and TransPAC NOCs, its universities are consistently represented in the Top 500 list, and computer scientists in Indiana are developing important new software technology. Much as the research activities of Indiana's research universities cover a great diversity of disciplines, so do accomplishments of Indiana-based researchers making use of HPCC applications. The Research@Indiana display will showcase computer science developments, including developments in areas such as cluster computing technology, collaboration, grid computing, and massive data storage systems; as well as applications in areas such as astronomy, bioinformatics, chemistry, engineering, medicine, and physics.

  • INRIA: Institut National de Recherche en Information
    Booth R0868
    Jean-Louis Pazat, Jean-Louis.Pazat@irisa.fr

      This research exhibit presents an overview of INRIA's activities in the area of high-performance cluster and Grid computing. Examples of recent accomplishments that will be demonstrated are code coupling tools: PADICO, an environment to face the heterogeneity of Grid computing and to achieve high performance that supports both CORBA and MPI, and MOME/CL, a coupling library for parallel codes based on the MOME software distributed shared memory (DSM); Java-oriented tools: CONCERTO/Do, a tool that automatically generates distributed programs from multithreaded Java programs, PROACTIVE PDC, a Java library for parallel, distributed, and concurrent computing, and IC2D, a tool to transparently monitor, control, and graphically visualize communications; cluster management and programming tools: KA, efficient tools for operating system, file, and Unix command broadcasting on large clusters and grids, ATHAPASCAN, a high-level data-flow language, and PAJE, a scalable visualization framework for MPI and Athapascan threaded programs; and TAKAKAW, a molecular dynamics application.

  • Advanced Fluid Information Research Center (Institute of Fluid Science, Tohoku University)
    Booth R0583
    Shigeru Obayashi, obayashi@ieee.org

      The Institute of Fluid Science, Tohoku University, devotes its supercomputing facility to solving complex flow phenomena for the progress of basic science and engineering. This exhibit will display our latest achievements based on the NEC SX-5 and SGI Origin 2000.

  • Special-Purpose Hardware for Linear Sys. & Stat Analysis (Institute of Statistical Mathematics)
    Booth R0463
    Makoto Taiji, taiji@ism.ac.jp

      We are developing special-purpose computers for dense matrix calculations. They can accelerate LU and QR decomposition, Gram-Schmidt orthonormalization, and other calculations on dense matrices. We have developed a new parallel CPU designed for these applications. In the exhibition, we will demonstrate the machine and display our other activities, including our statistical analysis software packages and a physical random number generator.

  • Internet2
    Booth R0849
    Elaine Lauerman, ekl@internet2.edu

      Internet2, a project of the University Corporation for Advanced Internet Development, provides leadership and direction for advanced networking development within the U.S. university community. Internet2 is focused on network research, technology transfer, and collaborative activities in related fields such as distance learning and educational technology. Internet2 is a collaborative project of over 160 U.S. research universities, in partnership with industry leaders and U.S. federal agencies, to develop a new family of advanced applications to meet emerging academic requirements in research, teaching, and learning. Internet2 is addressing this challenge by creating a leading-edge network capability that includes the nationwide high-performance Abilene network for use by its members.

  • The Earth Simulator Project (Japan Marine Science and Technology Center)
    Booth R0475
    Kiyoshi Otsuka, otsukak@jamstec.go.jp

      The Japan Marine Science and Technology Center (JAMSTEC) is an oceanographic research institution established in October 1971. JAMSTEC introduced the NEC SX-4 and SX-5 supercomputers for studying global change. These supercomputer systems are indispensable for understanding and predicting phenomena such as El Niño events, global warming, weather disasters, and tectonic structure around plate boundaries. In 1997, the Earth Simulator project was started as a cooperative project among JAMSTEC, JAERI, and NASDA under the direction of the Science and Technology Agency (STA) of Japan. The Earth Simulator is a distributed-memory parallel supercomputer composed of 640 processor nodes, each consisting of eight vector processors. The total peak performance and main memory capacity are 40 Tflops and 10 TB, respectively. The Earth Simulator will be in operation in the first quarter of 2002 in Yokohama. It is expected to run a high-resolution coupled atmosphere-ocean general circulation model being developed by the Frontier Research System for Global Change (FRSGC).
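The quoted totals are mutually consistent under a uniform per-processor peak; a quick sanity check (the 8 Gflops per-processor figure is inferred from the totals above, not stated in the text):

```python
# Sanity check of the Earth Simulator figures quoted above.
nodes = 640
procs_per_node = 8
total_procs = nodes * procs_per_node              # 5120 vector processors
peak_per_proc_gflops = 8                          # inferred from the totals
total_peak_tflops = total_procs * peak_per_proc_gflops / 1000
print(total_procs, total_peak_tflops)             # prints: 5120 40.96
```

That is, 5120 vector processors at roughly 8 Gflops each yields just over the quoted 40 Tflops aggregate peak.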

  • Japan Science and Technology Corporation (JST)
    Booth R0574
    Naoko TATARA, tatara@jst.go.jp

      The Japan Science and Technology Corporation (JST) is a semigovernmental organization that enhances the overall science and technology of Japan by building a fundamental environment for scientific and technological information and by stimulating advanced and creative research and development. Since 1996, it has been operating a supercomputer complex, the JST Super Computer Complex (SCC). The system is used for two projects: (1) ‘HOWDY’, a database system for retrieving human genome information in the bioinformatics field, and (2) a ‘Database System for Electronic Structures’ in the materials science field. JST has also undertaken three-dimensional visualization of calculation results on the SCC and developed a technique for visualizing those results in a web environment. In the exhibition, several applications on the SCC will be presented.

  • Making Supercomputers Global (John von Neumann Institute for Computing)
    Booth R0769
    Norbert Attig, n.attig@fz-juelich.de

      The John von Neumann Institute for Computing (NIC), operated mainly by the Research Center Juelich's Central Institute for Applied Mathematics (ZAM), is one of three national HPC centers in Germany. Its task is to support and further develop scientific computing in Germany in cooperation with other centers, universities, and research institutes by providing supercomputer resources nationwide, developing computational methods, and conducting interdisciplinary research. We will showcase the capabilities of uniform access to different supercomputers in Germany; the necessary software system is developed within the government-funded UNICORE project. R&D work on recent activities in the performance analysis of parallel programs will be introduced and demonstrated. Posters will explain the architecture and software environment of the latest-generation special-purpose supercomputer APEmille, operated at DESY-Zeuthen, and of the SMP clusters operated at ZAM. Furthermore, we will demonstrate recent activities in the design of parallel algorithms and in the steering and visualization of complex applications.

  • Computational Science Research at Krell Institute (Krell Institute)
    Booth R0855
    Barbara Helland, helland@krellinst.org

      At Krell Institute, we ensure that students at all levels have the opportunity to study and work in scientifically and technologically complex areas throughout their careers. Specifically, our booth will highlight research conducted by the next generation of scientists and technologists in two fellowships administered by Krell and will demonstrate a computer-based computational science curriculum for K-12 teachers. The Krell booth will focus on the Department of Energy's Computational Science Graduate Fellowship (CSGF) and the High-Performance Computer Science Fellowship (HPCSF) sponsored by Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and Sandia National Laboratories. CSGF fellows carry out research in a wide variety of resource-intensive computational science areas including turbulent combustion, protein folding, and transport theory. HPCSF fellows concentrate their research in the high-performance computing areas of scalable operating/run-time systems, hierarchical program systems, compiler design, networking research, performance modeling, and component architectures. For more information, see http://www.krellinst.org/

  • Accelerating Scientific Discovery through Advanced Computing (Lawrence Berkeley National Laboratory)
    Booth R1171
    Thomas M. DeBoni, TMDeBoni@LBL.GOV

      Lawrence Berkeley National Laboratory (LBNL), home to the Department of Energy's National Energy Research Scientific Computing Center (NERSC) and the Energy Sciences Network (ESnet), is a global leader in computing and networking research. Berkeley Lab's HPC and networking capabilities and facilities are advancing DOE research programs by providing leading resources and expertise in computational science. LBNL's display will feature scientific results obtained using NERSC's 2,528-processor IBM SP and 696-processor Cray T3E supercomputers; collaborative capabilities, including an Access Grid node, utilizing the capabilities of ESnet; telepresence via a conference-roving robot linked to the AG and capable of providing virtual tours of the SC2001 Exhibit Area and direct participation in the technical sessions, with audio, video, and remote-control functions using a wireless interface to the Access Grid; technical presentations by Berkeley Lab staff and NERSC users; and demonstrations of HPC tools developed at LBNL.

  • Leibniz Computing Center (Leibniz-Rechenzentrum, LRZ)
    Booth R0869
    Helmut Heller, heller@lrz.de

      The Leibniz Computing Center (Leibniz-Rechenzentrum, LRZ) of the Bavarian Academy of Sciences is one of Germany's national centers for technical and scientific high-performance supercomputing and also the regional computing center for the universities in Munich and Bavaria. The Competence Network for Technical and Scientific High Performance Computing in Bavaria (KONWIHR) broadens the deployment of HPC technology through research and development projects. Since the beginning of 2000, the LRZ has been running Europe's fastest supercomputer, a Hitachi SR8000-F1, with a peak CPU performance of 1.3 TFlop/s and 928 GB of memory. The machine will soon be upgraded to 2 TFlop/s. Use of the system either as an MPP or as a hybrid shared-memory system will be demonstrated with several applications exploiting such unique features as pseudo-vectorization and automatic parallelization. Grid technology may be used to steer these applications. We will also demonstrate tools to monitor performance, profile user activities, and supervise such a large-scale system.

  • High Performance Scientific Comp at LANL (Los Alamos National Laboratory)
    Booth R0451
    Alice Chapman, chapman@lanl.gov

      The exhibit will demonstrate hardware and software solutions for visualizing extremely large datasets. ParaView, a parallel visualization tool developed by Kitware Inc. and Los Alamos as part of the ASCI VIEWS program, will visualize the results of real-world scientific simulations on a commodity visualization cluster running Windows 2000. The five-node visualization cluster demonstrated at Supercomputing is representative of a 128-node visualization cluster currently being prototyped at Los Alamos National Laboratory.

      The Los Alamos Computer ARchitecture Toolkit (a la carte) project, also being demonstrated, uses scientific visualization techniques to help analyze the implications of scaling parallel supercomputing architectures.

      The a la carte project has simulated 64-node and 4096-node architectures connected by switches arranged in a fat tree. The visualization tools under development help the team understand the architecture of this network of processors, the dynamics of the virtual circuits being established, and the messages passed between them during the transmission of simulated loads running on the simulated machine.
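For readers unfamiliar with the topology, the commonly used k-ary fat tree built from k-port switches supports k³/4 end hosts. The sketch below computes host and switch counts for this generic construction; it is an assumption for illustration, and the 64-node and 4096-node machines simulated by the a la carte team need not follow this exact layout:

```python
# Host and switch counts for a k-ary fat tree built from k-port switches
# (generic construction for illustration; not necessarily the simulated one).
def fat_tree_counts(k):
    """Return (hosts, switches) for a k-ary fat tree of k-port switches."""
    hosts = k ** 3 // 4            # k pods * (k/2 edge switches) * (k/2 hosts)
    pod_switches = k * k           # each pod: k/2 edge + k/2 aggregation
    core_switches = (k // 2) ** 2  # full bisection bandwidth at the core
    return hosts, pod_switches + core_switches

print(fat_tree_counts(8))          # prints: (128, 80)
```

The key property visualized in such studies is that the aggregate link bandwidth is preserved at every level of the tree, so congestion arises from routing and traffic patterns rather than from oversubscribed links.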

      These visualization tools help the team debug their simulation as well as recognize and understand possible communication bottlenecks, resource mismatches, and other anomalous features of these systems. The same tools should be applicable to real, running environments with similar architectures.

      These tools were created in conjunction with the AHPCC at the University of New Mexico, using the Flatland visualization development environment developed there.

  • Maui Supercomputing Center
    Booth R1052
    Steve Karwoski, karwoskis@saic.com

      The Maui Supercomputing Center (MSC) is the new designation for the former Maui High-Performance Computing Center (MHPCC). MSC is managed by the University of Hawaii in association with SAIC and Boeing under contract with the U.S. Air Force Research Laboratory (AFRL). MSC is a Distributed Center within the DoD High-Performance Computing Modernization Program (HPCMP) and is nationally recognized as a leader in scalable computing technologies. Scientific focus areas include signal and image processing, modeling and simulation, and training in scalable, parallel technologies. Projects featured at SC2001 include advanced image enhancement research, new material design, mesoscale weather modeling, advanced research in wave front sensing, and development of Linux clusters.

  • National Aeronautics and Space Administration
    Booth R0317
    Patricia (Pat) A. Elson, pelson@mail.arc.nasa.gov

      NASA's research exhibit demonstrates how NASA meets its goals using high-performance computing and networking with projects from five field installations. A variety of real-time and interactive demonstrations feature the latest research in computational applications serving NASA's aerospace, Earth science, and space science missions; remote collaboration and use of virtual reality; software tools for developing, debugging, converting, monitoring, and optimizing code in grid environments; learning technologies; and high-end networking. A large collection of workstations, interactive theaters, and virtual reality devices are used to display the research and encourage visitor interaction.

  • National Aerospace Laboratory of Japan
    Booth R0465
    Naoki HIROSE, nahirose@nal.go.jp

      The CFD Technology Center of the National Aerospace Laboratory (NAL) promotes research and development of numerical simulation technologies, centered on computational fluid dynamics, and manages high-performance computer systems for internal and external users. The Center's major objectives are to develop aerodynamic simulation codes for very complicated, real aircraft configurations, as well as multidisciplinary analysis and optimization design systems for flow-structure-thermal interaction problems, that run within practical computer time so that they can serve both as practical industrial design tools and as fundamental numerical simulation technology. In line with these objectives, we will show the major achievements to date using the Numerical Wind Tunnel (NWT). The NWT started operation in 1993, and its early achievements received Gordon Bell Prize awards from 1994 to 1996. Its contribution has been significant to Japanese aerospace projects such as NEXST, the supersonic civil transport project, and HOPE, the unmanned space shuttle between the Space Station and Japan. NAL also promotes fundamental research in fluid dynamics and computational sciences. In the exhibit, we also show the next-generation NWT system project, the Multidisciplinary Simulation Concept, WANS (Web Access to NS System), and UPACS (Unified CFD software Package).

  • DataSpace—An Infrastructure for the Data Web (National Center for Data Mining/National Scalable Cluster Project)
    Booth R0443
    Robert Grossman, grossman@uic.edu

      The web today provides an infrastructure for working with distributed multimedia documents. DataSpace is an infrastructure for creating a web of data instead of documents. DataSpace is designed to support the distribution, analysis, and mining of scientific, engineering, health care, business, and e-business data. We will demonstrate open source data servers and data browsers for the data web, as well as a variety of DataSpace applications. The DataSpace infrastructure scales from the commodity internet to emerging high-performance optical testbeds, from single PCs to high-performance compute and data clusters, and from off-line computations to real-time, interactive ones. The DataSpace Project includes a number of academic and industrial partners, including the University of Pennsylvania, National Center for Atmospheric Research, Imperial College, the University of Amsterdam, Dalhousie University, CalTech, and Magnify.

  • HP Comp & Net in Nat'l Center for HP Comp (NCHC), Taiwan (National Center for High Performance Computing, Taiwan)
    Booth R0561
    Fang-Pang Lin, fplin@nchc.gov.tw

      The National Center for High-performance Computing (NCHC) is one of the national laboratories under the National Science Council (NSC) in Taiwan. It is the only research center for high-performance computing applications in Taiwan. Recently, the center was also designated the center for the next-generation research network of Taiwan. NCHC has conducted various research applications in high-performance computing and networking. In the research exhibition, we will use immersive and collaborative virtual reality to showcase the following programs: CFD design practice using a numerical wind tunnel; a virtual GIS-based 3D hydrodynamic model of the Tamshui river; the crashworthiness of Yulong's newly developed vehicle during a frontal impact; and a structure-based drug design model of a transmembrane endothelin receptor and its antagonist. Moreover, we will work collaboratively with international supercomputing centers from Germany, the U.S., Japan, and the UK to showcase global metacomputing applications.

  • National Computational Science Alliance (Alliance)
    Booth R0216
    Karen Green, kareng@ncsa.uiuc.edu

      The National Computational Science Alliance is a partnership of more than 50 institutions working to create a ubiquitous, pervasive national-scale information infrastructure. The National Center for Supercomputing Applications (NCSA) at the University of Illinois, Urbana-Champaign, anchors the Alliance, which is funded by the National Science Foundation. NCSA is also one of four sites in the TeraGrid project, a $53 million NSF effort to build the most comprehensive infrastructure ever deployed for scientific research. At SC2001, NCSA/Alliance researchers will demonstrate the TeraGrid's potential in collaborative demonstrations with partners at Argonne, SDSC, and Caltech. The demos will show the power of Linux clusters and Intel's Itanium processor in solving scientific problems. They will utilize a 40 Gb/s network, similar to the network Qwest will build to connect the TeraGrid sites.

  • Federally Funded IT R&D Programs (National Coordination Office for Information Technology Research and Development)
    Booth R0551
    Carolyn Van Damme, vandamme@itrd.gov

      The exhibit hosted by the National Coordination Office (NCO) for Information Technology Research and Development will feature demonstrations and displays about Federal information technology R&D. Additional information will be available about other Federal IT R&D efforts, the President's Information Technology Advisory Committee, and the role of the NCO.

  • National Partnership for Advanced Computational Infrastructure (NPACI)
    Booth R0206
    Michael P. Gannis, mgannis@sdsc.edu

      NPACI is an NSF-supported consortium of four dozen premier academic, industrial, and research institutions, led by SDSC at UC San Diego. Its mission is to advance science by creating a national cyberinfrastructure through capability computing: providing compute and information resources of exceptional capability to enable scientific discovery at scales not previously achievable; discovery environments: developing and deploying integrated, easy-to-use computational environments to foster scientific discovery in traditional and emerging disciplines; and computational literacy: extending the excitement, benefits, and opportunities of science to a diverse population. NPACI's exhibit will showcase cyberinfrastructure advances in bioinformatics, protein folding, telescience, multicomponent environmental modeling, scalable visualization, biological fluid dynamics, and cellular microphysiology. We will demonstrate new tools and applications being developed by the cooperating partners, present Grid-based supercomputing in action, and show how the partnership's activities, products, and services are meeting real needs of the computational science community.

  • The Virtual Earth System (NCAR Scientific Computing Division)
    Booth R0119
    Susan Cross, susanc@ucar.edu

      NCAR's Scientific Computing Division presents The Virtual Earth System, a large-format 3D electronic presentation and interaction environment in which we will showcase recent developments in large-scale simulation efforts, related algorithms, and the emerging technologies that will help us develop a better understanding of our planet. For SC2001, we will demonstrate virtual explorations of large datasets from new HPC simulation efforts, distributed wavelet-based volume rendering, advances in the Web100 project, demonstrations of the DOE/NSF Earth System Grid Project, new work in data portals, and collaborative visualization applications for the AccessGrid.

  • High Performance Computing at ORNL (Oak Ridge National Laboratory)
    Booth R0429
    Betsy (A) Riley, rileyba@ornl.gov

      ORNL highlights scientific discoveries in astrophysics, climate, fusion, genomics, and materials, made possible by advances in mathematical methods and high performance computing. Learn how performance evaluations of early systems are used to develop specialized techniques to optimize applications for terascale systems. Try out data mining tools that use intelligent agents to sift through petabytes of data and build knowledge trees. See how scalable tools help build fault-tolerant clusters that can be dynamically assembled and administered via a web browser. Learn how to detect the hidden substructure of a network and try out the CCA (Common Component Architecture)—the next best thing to cut and paste for developing large-scale multi-disciplinary simulations.

  • The State of Computing and Beyond (Ohio Supercomputer Center)
    Booth R1046
    Kathryn Kelley, kkelley@osc.edu

      OSC is Ohio's flagship center for high-performance computing, networking, educational outreach, and information technology. OSC empowers its academic, industrial, and government partners to further advance their research and training capabilities. OSC will make several scheduled presentations on its expertise in managing and coordinating the following regional and national programs: Cluster Ohio, a centralized management effort for distributed clusters, state scalable programs, and statewide software licensing; the Sun Center of Excellence in High-Performance Computing Environments, a collaboration with educational institutions, medical institutions, and industry in bioinformatics; outreach programs such as the Alliance PACS, EOT-PACI, the Platform Lab, the Technology Policy Group, and summer institutes; and national contracts that support the Department of Defense, the Maui Supercomputing Center, and ITEC-Ohio, a consortium of Ohio universities and corporate partners that is one of two national testbeds for Internet2 research.

  • Share the Excitement of Science (Pacific Northwest National Laboratory)
    Booth R0437
    N. Lee Prince, nlee.prince@pnl.gov

      Modeling and simulation on advanced computing systems are signature capabilities of the national laboratories. Pacific Northwest is unique in the diversity of Computational Science & Engineering projects currently being carried out. Our expertise includes Advanced Process Simulation; Applied Mathematics; Atmospheric Chemistry; Biology; Chemistry; Climate; Computational Materials Science; Future Technology; Imaging Science; Mechanical & Materials Engineering; Problem-Solving Environments; and Reactive Transport. Computational science has grown into a third branch of science, a partner with theory and experiment. It is becoming possible to solve the complex equations that describe natural phenomena with an accuracy comparable to, and sometimes exceeding, that of experimental measurements. These advances help address DOE's Science mission and provide the modeling and engineering capabilities for DOE's Energy Resources and Environmental Quality missions.

  • The Paradyn Parallel Tools Project (Paradyn Project–University of Wisconsin and University of Maryland)
    Booth R0502
    Barton Miller, bart@cs.wisc.edu

      We will be demonstrating the latest technology from the Paradyn and Dyninst efforts. Paradyn can efficiently measure the performance of large-scale parallel/distributed applications on SMPs and (heterogeneous) clusters of workstations. Novel techniques allow instrumentation of a program while it is running, automatically controlling the instrumentation to collect only the information needed to find current problems. Dynamic Instrumentation directly instruments unmodified applications during execution, greatly reducing the amount of performance data collected. Paradyn provides automated help to isolate performance bottlenecks to specific causes and parts of an application program (using our Performance Consultant module). A machine-independent interface, known as the dyninstAPI, is used by a wide range of research and commercial tools. We will demonstrate new security-attack applications of Dyninst, as well as kerninst, a dynamic instrumentation facility that runs on production OS kernels. In addition to kernel profiling, we dynamically optimize kernel code on the fly.

  • Pittsburgh Supercomputing Center
    Booth R0301
    Kenneth G. Hackworth, hackworth@psc.edu

      The Pittsburgh Supercomputing Center (PSC) is an NSF national terascale supercomputing center. It also receives funding from the Department of Energy, the National Institutes of Health, and the Commonwealth of Pennsylvania. PSC provides government, academic, and industrial users with access to state-of-the-art high-performance computing and communication resources. The center's educational mission, through an internship program, provides participants with real experience in a high-technology environment. Above all, PSC strives to provide a flexible environment conducive to solving today's largest and most challenging computational science problems. This year's research exhibit will demonstrate the capabilities of our resources, which include the Terascale Computing System, a Cray T3E/LC512 and other HPC platforms. PSC will feature a variety of demonstrations designed to showcase research done at the center. Particular areas of focus include computational biomedical research such as bioinformatics, high energy physics, weather modeling, computational pathology, and materials science.

  • Seamless Parallel and Distributed Computing (Real World Computing Partnership)
    Booth R0670
    Yutaka Ishikawa,ishikawa@rwcp.or.jp

      Real World Computing Partnership (RWCP), funded by the Japanese government, will show: (I) network architectures, and (II) system software and applications on seamless parallel and distributed computing environments. I) Two network architectures will be presented: (i) Comet, a clustering-over-Internet technology for information grids, and (ii) RHiNET, a local area system network for high-performance parallel computing. (II) The following system software and applications will be shown: (i) SCore cluster system software, (ii) cluster-enabled Omni OpenMP compiler for PC clusters, (iii) PROMISE programming environment for regular and irregular scientific applications, (iv) SPST Programming Tool for heterogeneous parallel and distributed systems, and (v) a parallel data mining system.

  • Research Exhibits: Directory
    Booth R0500
    James Pool, jpool@cacr.caltech.edu

  • Research Exhibits: Directory/Headquarters
    Booth R0847
    James Pool, jpool@cacr.caltech.edu

  • Research Exhibits: Directory/Villages
    Booth R0461
    James Pool, jpool@cacr.caltech.edu

  • Research Organization for Information Science & Technology
    Booth R0560
    Yoshitaka Wada, wada@tokyo.rist.or.jp

      The Research Organization for Information Science & Technology (RIST) was established in 1995 under the umbrella of MEXT (Ministry of Education, Culture, Sports, Science and Technology). Since then, RIST, located in the center of Tokyo, has, in accordance with MEXT's guidance, endeavored to advance the frontier of computational science and technology. One of its major missions is to support integrated computational environments focusing on earth science and related areas. Exhibitions center on GeoFEM (a parallel finite element solid earth simulation code) and Foo-Jing (a framework for the next-generation atmospheric model).

  • 75 Tflops Special-Purpose Comp. for Molecular Dyn. Sims [RIKEN (The Institute of Physical and Chemical Research)]
    Booth R0570
    Atsushi Kawai,atsushi@atlas.riken.go.jp

      We have completed the full Molecular Dynamics Machine (MDM) system, winner of last year's Gordon Bell prize. MDM is a computer system for MD simulations. It accelerates the calculation of the Coulombic force using two kinds of special-purpose hardware, MDGRAPE-2 and WINE-2. The peak performance of the full system is 75 Tflops. It consists of 1536 MDGRAPE-2 processors and 2304 WINE-2 processors, connected to Alpha/SPARC workstation clusters through Myrinet. We are exhibiting the building blocks of MDGRAPE-2 and WINE-2. We also present live MD simulations on a 128 Gflops subset of the MDM system.
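The kernel such hardware accelerates is the all-pairs Coulomb interaction; in plain software it is the O(N²) loop sketched below (illustrative only, with a unit force constant assumed; the special-purpose hardware pipelines this computation rather than executing code of this form):

```python
# All-pairs Coulomb force: F_i = k * sum_j q_i * q_j * r_ij / |r_ij|^3.
# This O(N^2) loop is the kernel that special-purpose MD hardware pipelines.
def coulomb_forces(positions, charges, k=1.0):
    """Return the Coulomb force vector on each particle (k assumed = 1)."""
    n = len(positions)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        xi, yi, zi = positions[i]
        for j in range(n):
            if i == j:
                continue
            dx = xi - positions[j][0]
            dy = yi - positions[j][1]
            dz = zi - positions[j][2]
            r2 = dx * dx + dy * dy + dz * dz
            s = k * charges[i] * charges[j] / (r2 ** 1.5)
            forces[i][0] += s * dx      # like charges repel: force points
            forces[i][1] += s * dy      # along r_ij away from particle j
            forces[i][2] += s * dz
    return forces
```

For example, two unit charges separated by a distance of 2 repel each other with force magnitude 1/4 (in these units), directed away from one another along the line joining them.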

  • Comp. Sci. and 3D Vis. in Ed. and Research of Saitama U (Saitama University)
    Booth R0661
    Shunji Ido,ido@poti.fms.saitama-u.ac.jp

      Computational studies and three-dimensional visualization are shown from the education and research activities of Saitama University. Major facilities are a Hitachi SR8000, an ONYX3400, and VR systems such as a CAVE and Linux PCs. Three-dimensional visualization has been a major interest in education and research at Saitama University. The advanced VR systems, such as the CAVE, are used in exercises for undergraduate students, in open school programs for middle school students, and with the public.

  • Benchmarking High-Performance Computers [Standard Performance Evaluation Corporation (SPEC)]
    Booth R1139
    Dianne Rice, Dianne_Rice@spec.org

      The booth will present the benchmarking activities of the High Performance Group of the Standard Performance Evaluation Corporation (SPEC/HPG). The exhibit pursues two goals. First, it will present SPEC's high-performance computing benchmarks, SPEChpc96 and the new SPEComp 2001 suite. These benchmarks are a service to the High-Performance Computing (HPC) community, where they can be used for machine procurement, to improve existing computer systems, and for research on software and hardware components of high-performance computing systems. Second, the booth will present several research projects that are closely related to the SPEC effort. These projects define performance evaluation methodologies, characterize computational applications, and evaluate candidate benchmarks. We will present a number of such efforts from several participating organizations. One particular highlight of this year's exhibit will be the recently released SPEComp 2001 benchmark suite and results. SPEComp 2001 provides new benchmarks written in the parallel programming standard OpenMP, which is now supported by all major high-performance computing platforms.

  • Extreme Sci.: Picoseconds & Petabytes, Teravolts & Tflops (Stanford Linear Accelerator Center and Fermi National Accelerator Laboratory)
    Booth R1060
    Robert Cowles, robert.cowles@slac.stanford.edu

      At Fermilab and SLAC, America's principal facilities for experimental high-energy physics, the world's physicists probe extremes of nature, colliding minute particles at tremendous energies. Detectors measure forces with a range of femtometers (10⁻¹⁵ meters) between particles accelerated by teravolts in interactions lasting only picoseconds. Readout and analysis of the resulting physics data require gigabit networks, petabytes of data storage, and teraflop computing resources.

      SLAC will demonstrate its high-speed network connecting a thousand-node compute farm with a half-petabyte-and-growing Objectivity database used by hundreds of physicists. Fermilab will demonstrate systems to manage and analyze even larger data volumes. Progress will be shown on the Particle Physics Data Grid's high-speed File Replication Service. Physics analysis techniques and results will be described using examples from the BaBar, CMS, CDF, and D0 experiments. Fermilab (operated by URA, Inc.) and SLAC (operated by Stanford University) are funded by the U.S. Department of Energy.

  • The Aggregate
    Booth R0227
    Hank Dietz, hankd@engr.uky.edu

      Based at the University of Kentucky, The Aggregate refers to a collection of researchers and the public domain technologies that they develop and use to make the components of a parallel computer work better together. We consider all aspects of Compilers, Hardware Architectures, and Operating Systems (KAOS) together, optimizing system performance rather than performance of the individual parts. For example, in 2000, our KLAT2 (Kentucky Linux Athlon Testbed 2) supercomputer won awards for its GA-designed asymmetric Flat Neighborhood Network (FNN) and use of 3DNow! for scientific computing. This year, our exhibit will showcase these and other new systems technologies. These advances will be demonstrated with real applications, including visualization using a Linux PC cluster video wall and our CFD (Computational Fluid Dynamics) code that was recognized in last year's Gordon Bell awards.

  • The MITRE Corporation
    Booth R0335
    David Koester, dkoester@mitre.org

      MITRE is a nonprofit national technology resource that provides systems engineering, research and development, and information technology support to the government. It operates federally funded research and development centers for the DOD, the FAA, and the IRS. Research at MITRE develops technical innovations that solve key problems for our clients. The MITRE Technology Program covers several hundred research areas, including architectures; collaboration and visualization; communications and networks; computing and software technology; decision support; electronics; human language technology; information assurance; information management; intelligent information processing; investment strategies; modeling, simulation, and training; and sensors and environment. Although much of MITRE's research and project work is unavailable for public release because of government contract obligations, a selection of publicly released information developed in support of MITRE's clients can be found at www.mitre.org/technology. Among the MITRE research we will exhibit at SC2001 are remote demonstrations of Quantum Cryptography.

  • High Performance Computing and Pervasive Computing (Universidade de Sao Paulo)
    Booth R0361
    Sergio T. Kofuji, kofuji@lsi.usp.br

      Recent advances in microelectronics, telecommunication systems, wireless communications, portable computing, system-on-chip design, MEMS technology, and Internet services provide excellent building blocks for the information society's infrastructure. Our vision of this society is an extremely interconnected world, with computers everywhere, organized in layers. At the top layer are the high-performance, high-capacity storage computing systems distributed worldwide, together implementing a huge distributed system. At the bottom layer there will be wearable computers for each citizen and small pervasive/ubiquitous computers everywhere. Our research focuses on the enabling technology for this new information infrastructure: wearable computers, virtual reality ‘holodecks,’ information security, huge advanced parallel and distributed databases, and new high-performance parallel computers with strong support for high availability, high-volume data management, multiple storage layers, high-performance file systems, and an interface to the external world that supports a high volume of secure transactions in near real time. We aim to make this computing/networking model compatible with others, such as the ad-hoc networks employed in pervasive computing. Several of these technologies, such as high-performance parallel clusters, have been transferred to Brazilian industries (Elebra, Itautec, etc.) and have been used in socially important areas such as weather forecasting. Several research centers in Brazil are acquiring our high-performance computing technologies. At our exhibit, we will demonstrate some of the technologies developed and under development at Universidade de São Paulo: high-performance parallel clusters with support for high-performance I/O (intra-cluster and extra-cluster); a low-cost virtual reality CAVE; wearable computing equipment and applications; and some research in MEMS technology. One of the applications to be demonstrated on this system will be a Multimedia Digital Library, which can store not only information from one institution or corporation but also information generated by users and communities interested in exchanging information.

      We are currently evaluating this technology within the scope of the recently launched “Cidade do Conhecimento” (“Knowledge City”) program (www.cidade.usp.br) at the University of São Paulo.


  • EZ-Grid Resource Broker / Cougar Compiler (University of Houston)
    Booth R0512
    Barbara Chapman, chapman@cs.uh.edu

      Many research and development projects currently aim to facilitate the use of computational grids for the execution of supercomputing applications. Such environments promise better utilization of existing computational resources as well as faster time to completion for the individual user's job. However, efficient utilization of grid resources is currently impeded, from both points of view, by the manual effort involved in resource selection and job submission. These tasks are supported at a high level for certain classes of applications only. Our goal is to permit the majority of grid users to specify the needs of their job in a convenient manner and to support them in the task of detecting and selecting computational resources that are likely to meet these needs. In this exhibit, we display the EZ-Grid system, an ongoing project at the Department of Computer Science, University of Houston. The aim is to design and implement a resource brokerage system coupled with user interfaces and robust information objects for multisite grid computing. We use Globus tools for grid services and are developing the software tools described above to perform resource selection and job submission that meet the time and/or cost constraints specified by the user.
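      A broker of this kind must rank candidate resources against user-supplied constraints. A minimal illustrative sketch of that idea in Python follows; all names, fields, and the scoring rule are hypothetical and do not reflect EZ-Grid's actual implementation:

```python
# Hypothetical sketch of constraint-based resource selection, in the
# spirit of a grid resource broker (not EZ-Grid's actual code).

def select_resource(candidates, max_time=None, max_cost=None):
    """Return the cheapest resource whose estimated completion time and
    cost satisfy the user's constraints, or None if none qualifies."""
    feasible = [
        r for r in candidates
        if (max_time is None or r["est_time"] <= max_time)
        and (max_cost is None or r["est_cost"] <= max_cost)
    ]
    # Prefer lower cost; break ties by faster estimated completion.
    return min(feasible,
               key=lambda r: (r["est_cost"], r["est_time"]),
               default=None)

# Example candidate sites with estimated runtime (minutes) and cost (units).
clusters = [
    {"name": "siteA", "est_time": 120, "est_cost": 40},
    {"name": "siteB", "est_time": 45,  "est_cost": 90},
    {"name": "siteC", "est_time": 60,  "est_cost": 55},
]
choice = select_resource(clusters, max_time=90)  # → siteC
```

      A real broker would obtain the time and cost estimates from grid information services rather than a static table, but the selection step reduces to a constrained ranking like the one above.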

  • University of Manchester, Manchester Computing
    Booth R0873
    Kaukab Jaffri, k.jaffri@man.ac.uk

      Manchester Computing is Europe's premier university high-performance computing facility supporting world-class research and teaching in all disciplines. It is used by the UK academic community and, increasingly, by many overseas higher education institutions. It is also a major node in the EU-sponsored EUROGRID project and is a member of the eGRID forum. MC provides computing services to the University of Manchester through the Manchester Research Centre for Computational Science (MRCCS) and the Manchester Visualization Centre. It is an international center for HPCN and visual supercomputing, with a recently installed Virtual Reality center specializing in Virtual Medicine. More than 25,000 users from over 150 UK institutions use Manchester Computing. Highlight: Manchester Computing is providing the UK SC Global Constellation site with four workshops and BoF sessions on global metacomputing, solar-terrestrial physics, GRID portals and GRID-enabled materials science. We are also part of the European Village highlighting European research.

  • University of Tennessee
    Booth R0343
    Scott Wells, swells@cs.utk.edu

      The University of Tennessee (UT), including the Computer Science Department (CS), the Center for Information Technology Research (CITR), and the Innovative Computing Laboratory (ICL), engages in high-performance computing (HPC) research. Focusing on the areas of distributed network computing, linear algebra, software repositories, and performance benchmarking, ICL delivers inventive and original solutions to problems inherent in high-performance computing applications and architectures. Sun, IBM, SGI, and Cray are among the companies we work closely with to meet the demands of parallel programming.

      The U.S. Department of Defense, the Department of Energy, NASA, and the National Science Foundation are among the organizations that sponsor our work.

  • High Performance Computing at the University of Utah (University of Utah, CHPC)
    Booth R0329
    Julia Harrison,

      The Center for High Performance Computing (CHPC) provides large-scale computer resources to facilitate advances in the field of computational science at the University of Utah. The projects supported by CHPC come from a wide array of disciplines requiring large-capacity computing resources, both for computing solutions of large-scale two- and three-dimensional problems and for graphical visualization of the results.