SC2001 Exhibitor Forum

TUESDAY November 13th
ROOM A107

10AM-10:30AM
Architecture Trends in Level 3's Network
Robert Hagens, Sr. Vice President, Global Network Engineering, Level (3) Communications
A discussion of the current and future architecture of Level 3's network, along with a view of emerging technology trends. This will include a discussion of MPLS-based services.
10:30AM-11AM
Superclusters - Superior Performance Using SCI
Keith Murphy, VP Sales & Marketing, Dolphin Interconnect
We will discuss SCI (Scalable Coherent Interface), the high-speed interconnect standard that time forgot! Developed over ten years ago, SCI is only now finding the markets its remarkable performance deserves. This is especially true in the HPC supercluster environment, where the high bandwidth and low latency of SCI make it ideal for connecting clusters of PC servers. Learn how the performance of Scali's SCI-based MPI and cluster management software, combined with Dolphin's 2D or 3D SCI card assemblies, creates superclusters that can equal or surpass the performance of most present-day supercomputers.
11AM-11:30AM
The Quadrics Interconnect
Duncan Roweth, Head of Software R&D, Quadrics
The talk will describe the architecture and implementation of the Quadrics interconnect, a high bandwidth, low latency data network for clusters of commodity servers. Performance data will be presented for a range of node types and system sizes.
11:30AM-12NN
Expanding the High Performance Computing Systems Roadmap
Richard S. Kaufmann, Technical Director, Compaq
Compaq has a comprehensive set of products for high performance technical computing. These systems are very popular in the bioscience community - they were the workhorses for the human genome sequencing efforts, and are deployed at LANL and many other laboratory and commercial sites worldwide. This talk will survey:

--Our flagship product, the AlphaServer SC. These systems span from tens to thousands of CPUs and are tied together with a very fast message-passing interconnect (MPI latency 4 microseconds!). The complete system has many desirable single-system-image and fault-resilience characteristics, and is used widely in the HPTC community, including the 6 TeraFLOP NSF machine installed at the Pittsburgh Supercomputing Center.
--Upcoming enhancements to our Alpha processors, including the EV7, a chip with 12.8 GB/s of memory bandwidth and more than 40 GB/s of chip interconnect.
--Our roadmap for the Intel IA-64 processors.
--Our Linux-based Alpha, IA-32, and IA-64 systems.
12NN-12:30PM
Massive Scalability and the Grid
Steve Campbell, Director, Computer Systems, Sun Microsystems, Inc.
Today, it is clear that the future of supercomputing is network-centric, and lies both in the shared power of smaller, parallel web-serving systems and in large, cooperating clusters of SMP servers. Rather than focusing solely on traditional islands of computing power in a single location, Sun is bringing disparate commercial and scientific communities the hardware they need to share data and resources across the globe.
12:30PM-1PM
Fujitsu's High Performance and Highly Scalable Servers and Supercomputers
Kenichi Miura, Chief Architect, Fujitsu America, Inc.
Fujitsu has two major high performance computing product lines. The first product line consists of the PRIMEPOWER scalar SMP systems and the second consists of the vector-parallel VPP5000 Series.

Fujitsu's PRIMEPOWER development strategy focuses on High Performance and Scalability. That's why the PRIMEPOWER servers are developed using a number of key Fujitsu technologies such as the SPARC64™ GP processor, which is fully SPARC V9 standard compliant, and Fujitsu's High Performance Cross-Bar Switch, which scales to support processor growth. Another key technology is the scalability of the multiprocessor systems. The PRIMEPOWER already scales higher than comparable systems, and is set to maintain that lead to ensure that growing computing requirements continue to be met.

The VPP5000 Series of vector-parallel supercomputers boasts peak performance of 9.6 Gflops per processing element, with main memory capacity of up to 16 Gbytes per PE, amounting to 2 Tbytes per system, allowing ultra large-scale operations. The VPP5000 runs the Unix SVR4-based UXP/V operating system, and supports vectorizing compilers and MPI 2.0. Fujitsu's two high performance computing product lines are highly complementary to each other and deliver a total solution for research and development users.
1PM-1:30PM
NEC's Supercomputer Product Roadmap
Joerg Stadler, Marketing Manager, NEC
One size rarely fits all - this is true in all areas of life, and supercomputing is no different: in order to meet the wide range of its customers' needs, NEC has based its strategy for the HPC market on a range of products that is outlined in this presentation. The high-performance, very-high-memory-bandwidth supercomputers of the NEC SX series are specialized tools that use the reliable, proven, and well-known vector approach to combine maximum application performance with outstanding ease of use. The SX memory subsystem feeds data to the processors orders of magnitude faster than in COTS machines, which makes the SX the perfect system for bandwidth-hungry applications. NEC augments its highest-end supercomputer offering with a series of mid-range servers based on Intel's IA-64 architecture and the Linux operating system. These are meant to support users who need the ease of use of a large SMP machine but who do not need the very high memory bandwidth of the SX systems.
1:30PM-2PM
The Case for Architectural Diversity
Burton Smith, Chief Scientist, Cray Inc.
At one time vector supercomputers effectively were the only choice in the market, so nearly everyone used them. When alternative architectures became available, including the pioneering Cray T3D and T3E, users chose the architecture best suited to their applications. Architectural diversity emerged. Today there is a risk of reverting to a situation where users are effectively limited to a single architectural choice. Clearly Cray Inc. intends to maintain architectural choices in the interest of providing the best platform for a wide variety of applications. What is the need for diversity going forward?
2PM-2:30PM
CRAY SV1ex-4 Supercluster Performance
Beata E. Sarnowska, Senior Capacity Analyst, Northrop Grumman Information Technology
This presentation reports the results of investigations into the performance of the Cray SV1ex. Relevant architectural issues are presented to develop an appreciation for the hardware and software environment of the SV1ex. The benchmark suite includes workload-specific codes to measure anticipated performance in a real-world environment, as well as standardized and kernel codes to examine memory, CPU, and cache subsystem performance. The benchmark suite will be discussed, and results from runs on the SV1, SV1e, and SV1ex will be compared against each other and against executions on "Classic" Cray architectures such as the C90 and J90se systems.
2:30PM-3PM
The Art of Commodity Computing
Bret Stouder, Director, Atipa Technologies
We will cover the diverse nature of purchasing commodity computers for Linux clusters and discuss the benefits of buying from an ISO 9002 certified manufacturer. Atipa Technologies maintains long-standing relationships at the manufacturer level to ensure the solutions we propose and build are thoroughly tested with the Linux operating system. Our research and development teams are able to offer tomorrow's technology today. By combining the power of our manufacturing capability with our Linux expertise, we are able to offer the most cost-effective solutions to our long list of satisfied customers.
3PM-3:30PM
LinuxBIOS for Beowulf Clusters
Steven M. James, CTO, Linux Labs
This presentation will cover aspects of using LinuxBIOS in a production supercomputing environment. Topics include ease of configuration, simplification of boot, reliability, diskless boot using inexpensive motherboards, maximizing performance, and the elimination of KVM switches and other hardware.
3:30PM-4PM
Today's Cost Effective Supercomputer Solution
Danny J. Harrison, Senior Corporate Account Manager, RackSaver, Inc.
High-performance, high-density supercomputing has traditionally required a large allocation of funds. However, with the leaps in clustering technology, companies such as RackSaver, Inc. are able to achieve performance once believed available only in multi-million-dollar supercomputers. RackSaver now harnesses this power in the form of its expertly engineered 1U RackSaver line of servers. The result is a machine that outperforms the competition, in a fraction of the space, at a fraction of the cost.

The HPC industry as a whole has embraced the idea of using commodity components in its offerings, in an attempt to maintain a better cost/performance relationship. RackSaver's manufacturing of custom servers for this industry has given organizations the ability to implement large server clusters, which often outperform their high-priced competition. These custom-designed servers give customers systems that meet, and often exceed, their requirements.

RackSaver, Inc. is a leader in this field, and its servers continue to be one step ahead of the competition in design and functionality.
4PM-4:30PM
CIToolkit - making Alpha Linux cluster integration more reliable and repeatable
Chris Powell, Program Manager, High Performance Technologies, Inc.
HPTi has integrated large Alpha Linux clusters for government labs and agencies, and has learned systems integration lessons from the cabling level on up to the user codes themselves. These lessons are captured in a program under development that HPTi calls "CIToolkit", for Cluster Integration Toolkit. CIToolkit is being developed with Sandia National Labs in their CPlant environment, and leverages HPTi and customer expertise already appearing in cluster tools for job scheduling and systems management. It captures the entire cluster integration and bring-up process, from initial hardware roll-out and the capture of a cluster configuration database, through installation and running of user codes and systems management tools, to cluster maintenance and change activities. This exhibitor forum will give an overview of CIToolkit, including an overview of the configuration database, the processes and tools used for cluster interconnect routing, and the tools used for cluster checkout and problem diagnosis.
4:30PM-5PM
Real-World Experiences in Building Production Computing Grids
Ian Lumb, Integration Architect, Platform Computing Inc.
The notion of federating geographically distributed compute centers for the purpose of aggregating resources, or providing remote access to specialized resources, has been brought into focus in recent times through the concept of The Grid. Much like the ubiquitous, highly available electrical power grid, the global computing grid allows challenging problems in high-performance computing (HPC) to be addressed. Various academic research ventures (e.g. the Globus and Legion projects) and commercial ventures (e.g. Applied Meta and Platform Computing) are already realizing The Grid. Because grid computing necessitates increased collaboration between all stakeholders, standardization efforts such as the Global Grid Forum and the New Productivity Initiative (http://www.newproductivity.org) are also of increasing importance.

Whereas much of the current activity in grid computing is focused on issues spanning architecture to standards, the present approach is much more pragmatic. Through the use of currently available grid technologies from Platform Computing, the realization of production grids for compute capability and capacity is illustrated. These production implementations are already allowing organizations to derive value from the promise of grid computing. After a brief overview of the relevant Platform Computing grid technologies, the bulk of consideration is given to real-world experiences in building production grids. The primary example is drawn from an ongoing project at the U.S. Department of Defense.
5PM-5:30PM
Storage Area Networking
Brad Winett, VP Market Development, DataDirect Networks
Storage Area Networking appliance leader DataDirect Networks provides rich-media communities with high-bandwidth, virtualized, scalable, and flexible network infrastructures, reducing TCO and enabling higher productivity with increased ROI. The SAN DataDirector, the first of a family of intelligent SAN appliances, which allows powerful and quick SAN deployment for rich-media encoding, streaming, and content delivery, will be exhibited.
5:30PM-6PM
The Importance of High Performance in Storage Area Networks
John R. Tibbitts, CEO, INLINE Corporation
This presentation will take a look at the scalability of Storage Area Networks. Special attention will be given to the need for high-throughput in order to accommodate more clients. There will also be a discussion of the technology that INLINE Corporation utilizes to maximize the available bandwidth in our SAN products.
WEDNESDAY November 14th
ROOM A107
10AM-10:30AM
Beyond Virtualization: Architecting the Ubiquitous Storage Utility
Wayne Karpoff, VP, CTO, YottaYotta
The storage utility concept has been an implicit promise of SANs since the mid-1990s. We are on the verge of keeping that promise to users, but pitfalls remain, and data-centric infrastructures will further impact server use.

As the necessary underpinnings for super-scale network storage emerge we can begin to think about reaching the Holy Grail of SANs: the ubiquitous storage utility. Several trends are driving this fundamental re-scaling of storage applications: content growth, resource constraints, storage networking, and server thinning. The economic benefits of super-scale storage, while compelling in themselves, are but a prelude to the next steps in a decades-long trend of moving storage related functionality from servers to storage. The era of piecemeal storage solutions is drawing to a close. Learn more about how network effects will reshape the industry and push customer expectations towards carrier-class solutions as we move beyond virtualization to the ubiquitous storage utility.

This presentation will specifically answer what unique requirements storage utilities have over and above enterprise storage and how the data-centric model will affect how you purchase and use servers.
10:30AM-11AM
High Performance Archives, Towards GigaBytes Per Second
Jim P. Hughes, StorageTek Fellow, StorageTek
The ability to move tens of terabytes in a reasonable amount of time is critical to many supercomputer applications. We examine the issues of high-performance, high-reliability tape storage systems, and present the ability to reliably move 1 GB/s to an archive that can last 20 years. We will cover the requirements, approach, hardware, application software, interface descriptions, performance, measured reliability, and predicted reliability. RAIT allows a sustained 80 MB/s of data per Fibre Channel interface that is striped out to multiple tape drives. This looks to the application like a single tape drive from both the mount and data transfer perspectives. Striping 13 RAIT systems together will provide more than 1 GB/s to tape. Reliability is provided by adding parity stripes to the data stripes. For example, adding 2 parity tapes to an 8-stripe group allows any 2 of the 10 tapes to be lost or damaged without loss of information. The reliability of RAIT with 8 stripes and 2 parities exceeds that of mirrored tapes, using 10 tapes instead of the 16 a mirror would require.
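
To make the parity idea concrete, here is a minimal sketch in C (our illustration, not StorageTek code) of striping a buffer across 8 data stripes with a single XOR parity stripe. Recovering from the loss of any one stripe works exactly as shown; tolerating two lost tapes, as in the 8+2 RAIT configuration above, requires a stronger Reed-Solomon-style code rather than plain XOR.

    /* Minimal striping-with-parity sketch (our illustration, not
     * StorageTek code). One XOR parity stripe tolerates the loss of
     * any single stripe; the 8+2 RAIT scheme described above needs a
     * Reed-Solomon-style code to survive two losses. */
    #include <stdio.h>
    #include <string.h>

    #define STRIPES 8
    #define BLOCK   16  /* bytes per tape per stripe cycle */

    int main(void) {
        unsigned char data[STRIPES][BLOCK], parity[BLOCK], rebuilt[BLOCK];

        /* Fill the data stripes with example content. */
        for (int s = 0; s < STRIPES; s++)
            for (int b = 0; b < BLOCK; b++)
                data[s][b] = (unsigned char)(s * BLOCK + b);

        /* The parity stripe is the XOR of all data stripes. */
        memset(parity, 0, BLOCK);
        for (int s = 0; s < STRIPES; s++)
            for (int b = 0; b < BLOCK; b++)
                parity[b] ^= data[s][b];

        /* Simulate losing stripe 3 and rebuild it from the survivors. */
        memcpy(rebuilt, parity, BLOCK);
        for (int s = 0; s < STRIPES; s++)
            if (s != 3)
                for (int b = 0; b < BLOCK; b++)
                    rebuilt[b] ^= data[s][b];

        printf("stripe 3 recovered: %s\n",
               memcmp(rebuilt, data[3], BLOCK) == 0 ? "yes" : "no");
        return 0;
    }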
11AM-11:30AM
Scalable Disk Archives: Are Tape Archives Obsolete?
Greg Lindahl, CTO, Conservative Computer, Inc. (Logicon Booth)
Large archives of information are conventionally stored in large tape archives, with robotic mechanisms for access. Although these archives suffer from poor performance on small files, they are much more cost-effective and reliable than large disk systems. Pundits have predicted for years that the rapidly falling cost of commodity IDE disks would eventually render tapes obsolete, but many interesting engineering problems must be solved to make a cost-effective and reliable large disk archive.

This talk describes a prototype scalable disk archive constructed from commodity components. Although somewhat more expensive than today's large tape archives, this archive provides superior small file performance, superior bandwidth for large files, and is substantially cheaper and more reliable than conventional large disk systems.
11:30AM-12NN
High Performance Global Storage Networks: The INRANGE Way
Mark Sincevich, Federal & Partner Programs Manager, INRANGE Technologies
Exploding data growth, increasing storage demands, and the need for 24x7 availability are commonplace in today's large data centers. As a result, increasing demands are being placed on your organization to manage your data. Storage Networking enables you to enhance the productivity of managing your data while increasing its availability and value in a global, 24x7 economy. To meet this need, Storage Networking has evolved from a conceptual storage utility model, to localized Fibre Channel islands, to Enterprise Storage Networks, and now, Open Global Storage Networks. This session will examine how Open Global Storage Networks are being deployed today to provide interoperability, investment protection, scalability from 100s to 1,000s of ports, global availability regardless of distance, and simplified management.

Attendees will learn more about:
--Increasing data availability by reducing downtime
--Simplifying the implementation of multi-protocol, mixed-vendor storage networks
--Increasing productivity and staff efficiency through simplified management tools
--Maximizing IT budgets to provide investment protection and ROI gains.
1PM-1:30PM
Intelligent Disaster Recovery Software Solutions
Vincent deVenoge, Senior Systems Engineer, VERITAS Software
Disaster recovery is essential. Organizations large and small need their data protected, accessible, and uninterrupted in the event of a disaster. VERITAS disaster recovery solutions are based on software products that work together efficiently and seamlessly across all platforms and applications and are flexible enough to grow along with your requirements. As a disaster recovery plan evolves, VERITAS provides a layer of protection at every stage.

Foundation Layer
The foundation layer of our disaster recovery solutions gives unparalleled control and flexibility in managing data. With proactive monitoring and maintenance of data storage systems, these tools keep organizations online 24xforever.

Backup/Recovery
Having an easily recoverable backup copy of your data is fundamental to a disaster recovery plan. VERITAS provides offline storage for basic protection of all data, and a range of other backup and recovery products for desktops to large data centers.

Replication and Clustering
VERITAS replication and clustering technology offers immediate data recovery and uninterrupted data access with replicated data and failover technologies that don't require vendor-specific hardware.

Disaster Recovery Enterprise Consulting Services
The disaster recovery experts at VERITAS Consulting can help develop and implement a disaster recovery plan. Our staff of certified disaster recovery planning engineers has years of collective experience.
1:30PM-2PM
Gigabit Ethernet and the Adapter Market
Terry Appling, Senior Manager of Technical Services, SysKonnect
The technology that is "Gigabit Ethernet" is influenced by a variety of factors. In addition to the switch and NIC hardware that drives Gigabit speed, servers, cabling, fiber connectors, and chip designs are changing the face of Gigabit Ethernet. Gigabit Ethernet will enable emerging high-bandwidth applications such as storage, video, and animation, and in turn these applications are driving the deployment of Gigabit Ethernet. As a result, the technology continues to evolve to take advantage of innovation and to keep pace with user demand. This presentation will cover trends in Gigabit Ethernet, including:
A. Bus technology: PCI-X and InfiniBand
B. 10 Gig and the MAN
C. High-growth applications such as SAN, iSCSI vs. Fibre Channel, video, and clustering
D. Evolution of the adapter market to meet these changes
2PM-2:30PM
Foundry Networks' Global Ethernet™ Solutions For Metro Networks
Jeffrey L. Carrell, Manager, Educational Services, Foundry Networks
Introducing the complete, end-to-end, next generation MAN solution. You can extend the high performance and low cost of Ethernet to your Metropolitan Area Network (MAN). Foundry Networks' Global Ethernet MAN solution gives you high-speed Internet connectivity through long-haul 10 Gigabit and Gigabit Ethernet or Packet Over SONET with speeds ranging from OC-3 up to OC-48. And, beyond bandwidth, our MAN solution comes with a rich set of service provisioning and delivery capabilities including Quality of Service (QoS), bandwidth management, security, and accounting and billing. Combined with Foundry's proven high-performance switch architecture, these value-added capabilities help you evolve your business by turning bandwidth into revenue.
2:30PM-3PM
Marconi Optical Networking
Robert J. Riehl, Technical Director, Marconi Communications Federal, Inc.
Peter Turnbull, DWDM Product Support Manager, Marconi Optical Networks
Marconi Communications Federal, Inc. presents its all-optical networking products, which support 2.5 Gb/s to 10 Gb/s per wavelength and feature the PMA-32, a thirty-two-wavelength add/drop multiplexer. Marconi will present its strategy of interconnecting its large, protocol-agnostic BXR-48000 switch - capable of native IP, ATM, and MPLS, supporting "Ships-in-the-Night" operation - with its SmartPhotonix all-optical networking products.
3PM-3:30PM
Juniper Networks Routers in Research & Education Networks
John Jamison, Consulting Engineer, Juniper Networks
Juniper Networks routers are deployed in Research & Education (R&E) networks worldwide. They have become the router of choice for network engineers supporting high performance applications. This presentation will discuss how supercomputer centers, research labs, universities, and wide-area R&E networks are taking advantage of Juniper Networks' support for features such as MPLS, CCC, Line Speed Firewall Filtering, Filter Based Forwarding, and QoS to support innovative applications.
3:30PM-4PM
InfiniBand Now
John Freisinger, VP of Sales & Marketing, Essential
Seven of the computing industry's leaders - Compaq, Dell, Hewlett-Packard, IBM, Intel, Microsoft, and Sun Microsystems - joined together to develop a new common I/O specification delivering a channel-based, switched-fabric technology. This new technology, InfiniBand, promises to simplify and lessen the costs associated with running an enterprise data center. Over 220 vendors have committed to producing products for the InfiniBand specification.

John Freisinger will discuss the realities of implementing an InfiniBand-based solution today and what products are still needed to fulfill the promise of this technology. This is an ideal primer for anyone trying to gain an understanding of InfiniBand or anyone considering implementing an InfiniBand solution in the coming months.
4PM-4:30PM
Debugging Parallel Programs Automatically
Henry A. Gabb, Staff Parallel Applications Engineer, Intel Corp.
Though it has not always been true, processor time today is considerably cheaper than programmer time. However, debugging parallel programs is a difficult, programmer-intensive endeavor. Intel's Assure for Threads transfers much of the debugging effort to the processor. A series of parallel codes, each containing subtle errors, will be presented. The codes compile and sometimes even execute correctly. The presentation will demonstrate how Assure automatically finds bugs in parallel programs with non-deterministic behavior.
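
For a feel for the kind of error involved, consider this hedged C/OpenMP sketch (a generic textbook race, not one of the codes from the talk): it compiles cleanly and often prints the right answer, yet the unsynchronized update of sum is a data race of exactly the non-deterministic kind described above.

    /* Illustrative data race (a generic example, not from the talk).
     * It compiles cleanly and often prints the right answer, but the
     * unsynchronized update of sum means increments can be lost, so
     * results vary from run to run. */
    #include <stdio.h>

    int main(void) {
        long long sum = 0;
        #pragma omp parallel for        /* BUG: sum needs a reduction */
        for (int i = 1; i <= 100000; i++)
            sum += i;                   /* unsynchronized read-modify-write */
        printf("sum = %lld (expected 5000050000)\n", sum);
        return 0;
    }

The corrected directive is #pragma omp parallel for reduction(+:sum); a race-detection tool points at the conflicting accesses to sum without the programmer having to reproduce the unlucky timing by hand.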
4:30PM-5PM
Mac OS X as a Unix Operating System for Scientists
John Martellaro, Apple Science & Technology, Apple Computer, Inc.
Apple's new Mac OS X is a Unix-based operating system with an open source Mach 3 kernel and BSD 4.4 core that creates a superb working environment for the researcher. The excellent integration of a desktop GUI with standard Unix tools creates a world-class development environment: access to the G4 vector processor, SMP, clustering, legacy Macintosh applications, new Mac OS X applications, the most modern imaging and GUI of any Unix OS, an object oriented development environment, and all the standard BSD and GNU tools. This session will demonstrate the working environment of this next generation OS and showcase research and development tools for the scientist.
5PM-5:30PM
Optimizing Compilers for Modern Architectures
Randy Allen, CEO and Founder of Catalytic Compilers, Morgan Kaufmann Publishers
Morgan Kaufmann will present an overview of the contents of their latest publication, "Optimizing Compilers for Modern Architectures", by Randy Allen and Ken Kennedy. Dr. Allen will present the major highlights of the book, including dependence-based transformations for uncovering and enhancing parallelism, exploiting memory hierarchies, and optimizing performance on advanced architectures. The presentation will be oriented towards three groups: hardware architects concerned with obtaining the best compiler-hardware trade-offs, programmers focused on implementation of automatic transformations, and application programmers interested in obtaining maximal performance out of compiler-hardware systems.
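
As a quick illustration of the dependence reasoning the book covers (a generic example of ours, not one from the text), compare the two loops below: the first carries a dependence from one iteration to the next and cannot be naively parallelized, while the second has no loop-carried dependence and is safe to run in parallel or vectorize.

    /* Generic illustration of loop-carried dependence analysis
     * (not an example taken from the book). */
    #include <stdio.h>

    #define N 8

    int main(void) {
        double a[N + 1], b[N], c[N];
        for (int i = 0; i <= N; i++) a[i] = i;
        for (int i = 0; i < N; i++)  c[i] = i;

        /* Loop-carried dependence: iteration i reads a[i-1], which
         * iteration i-1 wrote, so iterations cannot run in parallel
         * as written. */
        for (int i = 1; i <= N; i++)
            a[i] = a[i - 1] + 1.0;

        /* No loop-carried dependence: each iteration touches only its
         * own elements, so a compiler may parallelize or vectorize it. */
        for (int i = 0; i < N; i++)
            b[i] = 2.0 * c[i];

        printf("a[N] = %f, b[0] = %f\n", a[N], b[0]);
        return 0;
    }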
5:30PM-6PM
Extending Source Analysis to OpenMP
Monty Swaiss, CTO, Cleanscape Software International
While a static source code analysis tool will not eliminate the need for other bug eradication tools and tactics, it can greatly reduce the overhead on more expensive resources by catching problems early and allowing programmers to correct problems at the source while they are most familiar with their code. By helping to eliminate problems at the source, static source code analysis increases the competitive viability of the software development organization.

In "Extending Source Analysis to Open MP" Cleanscape Software International will educate Fortran developers about the most advanced software problem eradication tools and tactics, showing how using automated static source code analysis tools to identify and eliminate problems early in the software development process will improve organizational viability. The forum will identify the risks associated with latent software problems, review the alternatives development organizations might consider to reduce such risk, discuss the means by which developers can eliminate problems earlier in development, and presents the history, use, and benefits of using a static source code analysis tool. Developers who attend the presentation will receive a free copy of "Stopping bugs before they kill your software organization: Frequently asked questions about static analysis."
THURSDAY November 15th
ROOM A107
10AM-10:30AM
High-Performance BioInformatics
Gerald Lipchus, Director of Sales, Scientific Computing Associates
High-performance computing on cost-effective clusters of commodity processors is of growing importance in the life sciences. Scientific Computing Associates, Inc. and TurboGenomics, Inc. are partnering to help the life science community leverage this critical technology. This presentation describes several recent initiatives that make the power of clusters more accessible to the broad user community. A discussion of several TurboGenomics products, including TurboBLAST, a high-performance parallel version of NCBI BLAST, will be presented.
10:30AM-11AM
Cracking MPI/OpenMP Performance Problems
Karl Solchenbach, Managing Director, Pallas
Cluster computing has emerged as a de facto standard in parallel computing over the last decade. Now, researchers have begun to use clustered shared-memory multiprocessors (SMPs) to attack some of the largest and most complex scientific calculations in the world today. To program these clustered systems, people use MPI, OpenMP, or a combination of the two. However, analyzing the performance of MPI/OpenMP programs is difficult. While several existing tools can analyze the performance of either MPI or OpenMP programs efficiently, work to combine them into tightly integrated tools is only just under way.

Pallas GmbH and KAI Software have partnered with the Department of Energy through an ASCI PathForward contract to develop a tool called Vampir/GuideView, or VGV. This tool combines the richness of the existing tools - Vampir for MPI and GuideView for OpenMP - into a single, tightly integrated performance analysis tool. From the outset, its design targets performance analysis on systems with thousands of processors.
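
For reference, the following minimal C sketch (a generic illustration, not code from the talk) shows the hybrid model such a tool must analyze: MPI communication between ranks with an OpenMP parallel region inside each rank.

    /* Minimal hybrid MPI+OpenMP sketch (generic illustration).
     * Each MPI rank runs an OpenMP parallel loop; a combined tool
     * like VGV must correlate the MPI timeline across ranks with
     * the OpenMP regions inside each rank. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double local = 0.0;
        #pragma omp parallel for reduction(+:local)
        for (int i = 0; i < 1000000; i++)
            local += 1.0 / (i + 1.0);   /* per-thread work */

        double global;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);
        if (rank == 0)
            printf("global sum = %f\n", global);
        MPI_Finalize();
        return 0;
    }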
11AM-11:30AM
ChaMPIon/Pro: A Next-generation, Terascale MPI-2
Anthony Skjellum, President, MPI Software Technology, Inc.
Current MPI implementations often date back to the Argonne/Mississippi State model implementation, MPICH. This early model implementation drove the enormous uptake of MPI-1.2 throughout the world. Later, ROMIO was added as a way to provide public-domain support for most of MPI I/O, a key part of MPI-2. ChaMPIon/Pro provides a commercial, highly scalable implementation of the MPI-2 standard, based on a software design that is itself two generations newer than MPICH. It also offers a next-generation alternative to our current commercial-grade MPI-1.2 system, MPI/Pro. MPI/Pro provides a thread-safe, high-performance MPI-1.2 implementation, and the lessons learned in building it have been brought to bear on an entirely new design, one that addresses the substantial performance and interoperability requirements posed by huge-scale systems of up to 15,000 processors or more.
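
As context, here is a minimal sketch of the MPI I/O interface that ROMIO popularized (standard MPI-2 calls, not ChaMPIon/Pro-specific code), in which every rank writes its own contiguous block of a shared file.

    /* Minimal MPI I/O sketch (standard MPI-2 usage, not
     * product-specific). Every rank writes its own contiguous
     * block of one shared file. */
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int values[4] = { rank, rank, rank, rank };
        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);
        /* Each rank writes at an offset determined by its rank. */
        MPI_Offset offset = (MPI_Offset)rank * sizeof(values);
        MPI_File_write_at(fh, offset, values, 4, MPI_INT,
                          MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        MPI_Finalize();
        return 0;
    }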

ChaMPIon/Pro targets the largest DOE supercomputers and superclusters, including ASCI White, Sandia Cplant, ASCI Purple, and ASCI "Q". This talk covers the effort to create this MPI implementation and its planned future commercial uses outside of the DOE PathForward program that initially funded its creation. In particular, we compare it to our current MPI/Pro product and explain the advantages of the next-generation MPI, particularly for systems of huge scale.
11:30AM-12NN
PGI Linux Compilers and Tools for High Performance Computing
Vincent Schuster, Director AST Portland Lab, STMicroelectronics
In combination with Linux, the PGI® CDK™ Cluster Development Kit™ enables turnkey use of networked clusters of x86-compatible systems for HPC users. The PGI Fortran, C, and C++ compilers include full native support for OpenMP shared-memory parallel programming, SSE and prefetch optimizations for IA-32 processors, and optimizations for Athlon processors. The PGI CDK also includes tools for debugging and profiling shared-memory OpenMP and distributed-memory MPI applications. A custom installer builds and installs preconfigured versions of the most common open-source cluster utilities: MPI, the PBS batch queuing system, the ScaLAPACK parallel math library, and PVM. The PGI CDK also includes a large number of training, example, and tutorial programs. This presentation provides an overview of the PGI CDK, new tools, and applications and benchmark results using the PGI CDK.
12NN-12:30PM
ConsoleWorks, the Web-Based Console Management Solution for High Performance Technical Computing (HPTC)
William D. Johnson, CEO, TECSys Development, Inc.
This session will cover system architecture, configuration, integration, features, benefits, and upgrade paths, plus migration from PCM to ConsoleWorks. Coverage will include how, why, and the benefits of upgrading to or implementing a Web-based console management solution. If you have not implemented a console management solution, this session will teach you what you need to know. This is not marketing hype; it's console management at its core, and the discussion will be technical. Attendees should have a working knowledge of terminal servers, networks, the Web, firewalls, telnet, JavaScript, Java, and Web browsers.
12:30PM-1PM
Evolution of I/O Connection Technologies and Their Effect on Storage Architecture
Rip Wilson, Product Marketing Manager, LSI Logic Storage Systems, Inc.
One of the keys to high performance computing is the speed at which data is transferred from the storage system(s) to the server(s). In the last few years, we've seen this pipe evolve from 40 MB/s SCSI to 200 MB/s Fibre Channel. And with 4-Gbit and 10-Gbit Fibre Channel and InfiniBand on the horizon, it's only going to get faster.

As storage vendors strive to get the latest and fastest host I/O connections on their arrays, the storage architects are working to optimize the controller design to take full advantage of the new speeds. They must consider internal bandwidth, single bus or multiple, cache design and size, and back-end I/O. All components of the storage controller must come together to effectively and efficiently take advantage of emerging technology and ensure the storage array can deliver the performance enabled by its host I/O connectivity.

This presentation looks at the evolution of I/O connections and their effect on storage system design.
1PM-1:30PM
IBM Unparalleled Performance and Petaflop Research
Surjit Chana, VP HPC Marketing and Dave Jensen, Sr. Mgr, IBM Research
Surjit Chana will briefly review IBM's recent POWER4 and cluster announcements. Dave Jensen will then discuss the expansion of IBM's Blue Gene research project. On Friday, November 9th, IBM announced a partnership with the Department of Energy's National Nuclear Security Administration (NNSA) to expand IBM's Blue Gene research project. IBM and NNSA's Lawrence Livermore National Laboratory will jointly design a new supercomputer based on the Blue Gene architecture. Called Blue Gene/L, the machine will be 15 times faster, consume 15 times less power per computation, and be 50 to 100 times smaller than today's fastest supercomputers. Blue Gene/L is a new member of the IBM Blue Gene family, marking a major expansion of the Blue Gene project. Blue Gene/L is expected to operate at about 200 teraflops (200 trillion operations per second), which is larger than the total computing power of the top 500 supercomputers in the world today. IBM will also continue to build a petaflop-scale (one quadrillion operations per second) machine for a range of projects in the life sciences, originally announced in December 1999.
1:30PM-2PM
Meta-computing as the Key to Distributed, Multi-disciplinary Simulation
John A. Benek, Principal Scientist, Raytheon
The Department of Defense (DoD) Simulation Based Acquisition (SBA) strategy requires increasing levels of simulation and simulation fidelity. Successful competitors must demonstrate the superiority of their designs largely through simulations; therefore, selections will be made primarily on the quality of the simulations. Since major weapon systems are typically designed by teams of contractors, or by single contractors with widely dispersed design groups, a means must be found that allows the best modeling capability of the team members to be used while protecting their proprietary data and models. This methodology must also provide a simple way of connecting models at distributed sites. Building blocks for this capability include the DoD HLA initiative, the Defense Research and Engineering Network (DREN), and several modeling environments currently under development. However, the key to combining these elements into the comprehensive structure required to adequately support SBA is a meta-computing architecture that can connect heterogeneous computing resources - computers, operating systems, networks, and models - seamlessly and transparently to the user. This presentation will describe Raytheon's vision of this architecture and progress toward its implementation.
2PM-2:30PM
New Architecture and Challenges in Creating Networks for the Teragrid
Wesley K. Kaplow, CTO, Government Systems Division, Qwest Communications
The combination of teraflop supercomputing clusters and multi-gigabit wide area networks has enabled the long-awaited era of the TeraGrid to begin. However, issues such as the optimum architecture to ensure scalability still remain. Long-haul optical transmission systems can now provide around 120 ten-Gbps channels. These have been used in nationwide deployments, enabling point-to-point communication in the tens of Gbps. The key, however, is creating a network that can scale to dozens of endpoints with predictable performance. There are a variety of network standards that can be used to create TeraGrid networks; two are 10 Gigabit Ethernet and OC-192 IP Packet-over-SONET. Either can create a fully meshed network for an initial implementation. However, the cost-effective scalability of these approaches alone is uncertain, and the next generation of integration of the optical transport layer with switching and routing may be needed. The evolution of optical transport gear with the ability to provision via a mechanism such as G-MPLS may be necessary to allow sharing of the optical transport layer.

The presentation will discuss the current state of optical transport infrastructure, current vendor switch and router hardware, and cost-effectively scalable TeraGrid architectures.
2:30PM-3PM
Open Inventor/CAVELib Integration
Mike Heck, VP of R&D, TGS, Inc.
TGS and VRCO are working together to improve the immersive environment. The resulting new Open Inventor(TM)/CAVELib(TM) configuration will include a new Open Inventor/CAVELib layer that integrates CAVELib and Open Inventor making the two libraries significantly easier to use. VRCO will license Open Inventor from TGS and use it in the development of new products, including a new version of VRScape(R).

This combination of CAVELib and Open Inventor will overcome the majority of past problems faced when trying to create robust, flexible and multi-platform applications that will support advanced displays, collaboration, and interaction technologies. This solution will be enabled by new multi-threaded versions of Open Inventor and CAVELib. This breakthrough technology allows multiple rendering threads to share a single copy of the scene graph, saving memory and simplifying management of the scene graph. This capability will present to the market a cross-platform solution never before available.

Using an object-oriented scene graph API offers many advantages over programming directly to low-level graphics APIs. For example, Open Inventor provides a higher-level abstraction and built-in optimizations. Compared to platform-specific APIs, Open Inventor protects your software investment by allowing migration to many platforms. In this way it perfectly complements the portability of CAVELib. Compared to open source scene graph APIs, Open Inventor from TGS provides far more features, a history of successful use in major applications, and dedicated and highly responsive product support. Using Open Inventor also enables the use of powerful extension classes available from TGS for 3D data visualization and volume rendering.
3PM-3:30PM
A New Class of Challenges in Commercial HPC
Andrew Grimshaw, CTO, Avaki Corporation
Varied sectors of the business world are experiencing a new class of challenges in the realm of high performance computing. Some examples:
(1) Biotech: Against the backdrop of advanced biotechnologies such as genomics and proteomics exists a complex web of relationships between companies, institutions, and individuals that demand the secure sharing and management of applications and extremely large, proprietary data sets across organizational boundaries.

(2) Financial Services: F.S. organizations rely on mission-critical, deadline-contingent simulations that demand large amounts of processing power. Requirements include the ability to manage and recover from failures in real-time and federate existing enterprise computing resources.

(3) Engineering-Intensive Manufacturing: Collaboration across organizational boundaries, in the form of data sharing and complex application-to-application (A2A) interactions, is an emerging requirement for EIM enterprises, driven by the need to radically collapse product development lifecycles and increase product quality.

Dr. Andrew Grimshaw, chief architect of one of the world's leading Grid computing projects (Legion) and CTO of Avaki Corporation, will discuss how companies are approaching these challenges today and will present his vision of how a distributed, pervasive, peer-oriented architecture can elegantly address such challenges in the future.
3:30PM-4PM
Smarter AND Faster: Supercomputing with FPGAs
Richard Loosemore, Director of Research, Star Bridge Systems, Inc.
What is the point of fast computers if they are so viciously hard to program? The question is relevant for "ordinary" computers, but it seems to be even more devastating when applied to FPGA computers. In this presentation we argue three things: (a) the payoff is so huge, it is worth trying to build and program FPGA computers; (b) the problems involved in programming FPGA computers (assuming we want to squeeze the maximum performance out of them) are so huge that we are forced to rebuild all our ideas from scratch; (c) surprisingly, once the old ideas about programming massively parallel machines are torn down and rebuilt, there is a new approach that can make FPGA computers much easier to program than "ordinary" machines.

The new approach involves hyperspecificity (choosing circuitry on the fly to custom fit both the task and the required data rate), massive application of weak constraints (smart parallelism in the compiler and elsewhere), and a liberal dose of psychology in both the circuit architecture and the interface seen by the developer. With these ingredients, a thousand-fold increase in compute density is not just possible, it might actually be usable.