LACSI Symposium 2004

October 12-14, 2004
Symposium Site: 
Eldorado Hotel, 309 West San Francisco Street
Santa Fe, NM 87501


Agenda for LACSI Symposium 2004


The Fifth Symposium offered participants a wide variety of opportunities to discuss and learn about technical and policy issues in high-performance computing.
Workshop Sessions | Posters | Papers | Panel Discussions

Tuesday, October 12: Workshop Sessions
All Day Workshops: 9:00am – 5:30pm


Open Source Development and Software Engineering Practices (full day)

Rod Oldehoeft (Los Alamos National Laboratory)
This workshop explores the sometimes-conflicting goals and practices of the open-source development world and traditional software engineering processes.

Many organizations have established standard practices in software engineering (SE) for their projects.  These are often predicated on a traditional view of an in-house team of employees with a common project goal.  However, in other organizations, reliance on, and contributing to, open-source software (OSS) is important.  In that world, software development processes have origins, goals, and outputs that differ from those of traditional SE processes.  This workshop will explore the implications of these different but co-existing worldviews.  Speakers include DOE lab project managers and software developers, and well-known researchers studying open-source development phenomena.  The result will be a better understanding of issues, not a "silver bullet."

Mimetic Methods for PDEs and Applications (full day)

Mikhail Shashkov (T7, LANL), Jim Morel (CCS-2, LANL), Yuri Kuznetsov (University of Houston)

Mimetic methods are a class of methods that mimic important properties of the underlying geometrical, mathematical, and physical models, such as geometry, conservation laws, symmetry preservation, and positivity and monotonicity preservation.  This workshop focuses on presenting new mimetic discretizations for a wide variety of partial differential equations (PDEs).

Python for High Productivity Computing (full day)

Download Workshop Presentation
Craig E. Rasmussen (LANL), Matthew J. Sottile (LANL), Patrick J. Miller (LLNL)

This workshop will focus on Python as a high productivity development environment with tutorial and technical sessions.

A growing trend in the scientific community is to prototype research concepts using Python.  This is motivated by very practical reasons: it is free, available on nearly all platforms, provides easy interoperability with other languages, and has a mature base of scientific extension libraries.  This workshop will bring together users and developers to explore the use of Python in scientific computing.  The workshop will focus on programmer productivity, with a tutorial session in the morning and a technical session in the afternoon.
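
As a rough illustration of the prototyping style the workshop targets (not workshop material), the sketch below uses NumPy for array setup and ctypes to call into an existing compiled kernel.  The library name "libmodel.so" and its "advance" routine are hypothetical placeholders for legacy C or Fortran code.

    import ctypes
    import numpy as np

    # Build a small problem in pure Python/NumPy.
    grid = np.linspace(0.0, 1.0, 101)
    state = np.sin(2.0 * np.pi * grid)

    # Call an existing compiled kernel through ctypes; "libmodel.so" and its
    # "advance" routine are hypothetical stand-ins for legacy code.
    lib = ctypes.CDLL("./libmodel.so")
    lib.advance.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]
    lib.advance(state.ctypes.data_as(ctypes.POINTER(ctypes.c_double)), state.size)

    # Inspect the result interactively.
    print(state.min(), state.max())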

Performance and Productivity of Extreme-Scale Parallel Systems (full day)

Adolfy Hoisie (LANL), Dan Reed (UNC, Renaissance Computing Institute)

The topics of performance and productivity of systems at extreme-scale will be addressed in this workshop through talks and Q&A sessions by leading experts.

Building extreme-scale parallel systems and applications that can achieve high performance has proven to be incredibly difficult.  Today's systems have complex processors, deep memory hierarchies and heterogeneous interconnects requiring careful scheduling of an application's operations, data accesses and communication to achieve a significant fraction of potential performance. Furthermore, the large number of components in extreme-scale parallel systems makes component failure inevitable; therefore, achieving fault-tolerance in hardware and/or system software becomes an integral part of the performance landscape.  In addition to "classical" performance considerations, the notion of high productivity of systems at scale is now of paramount importance. Productivity encompasses availability, fault tolerance, ease of use, upward portability (including performance portability), as well as code development time.  The latter is not a focus of our workshop.

Clustermatic: An Innovative Approach to Cluster Computing (full day)

Gregory Watson (LANL), Ronald Minnich (LANL), Erik Hendriks (LANL), Matt Sottile (LANL)

Clustermatic is an award-winning, innovative software architecture that simplifies the management and deployment of cluster computer systems.

Clustermatic is an award-winning, innovative software architecture that redefines cluster computing at all levels, from the BIOS to the parallel environment.  Other cluster systems typically rely on a complicated software suite layered on top of a conventional operating system that must be installed on a local disk in every node.  The complexity and size of these systems tends to limit their deployment to small-to-mid size machines, reduces reliability, and requires significant management overhead for normal administrative activities.  In contrast, the Clustermatic design maximizes performance and availability by achieving significant improvements in system booting and application startup times, minimizing points of failure, and vastly simplifying management and administration activities.  It is suitable for use on a wide range of architectures, and has been successfully deployed on everything from tiny clusters containing only 2 diskless nodes all the way up to a 1408-node (2816-processor), 11 Tflop cluster at Los Alamos National Laboratory.  Key components of Clustermatic include LinuxBIOS, BProc, BJS, LA-MPI, and Linux.

Path to Extreme Supercomputing (full day)

Erik P. DeBenedictis (Sandia National Laboratories), Peter Kogge (University of Notre Dame), Thomas Sterling (Caltech/JPL), Michael Frank (University of Florida)

A workshop studying the feasibility of creating supercomputers that could meet the largest projections of application demand: 10^21 FLOPS (1 Zettaflops).  Workshop URL: http://www.zettaflops.org

Applications scientists envision applications for supercomputers up to 1 Zettaflops (10^21 FLOPS), yet there is little consensus on how to build them.  Recent studies of computational science applications show a continuum of truly important problems requiring supercomputers from today's 40 Teraflops to 1 Zettaflops over a period of several decades.  This represents faster growth than the historical trend of supercomputer performance.  Furthermore, this magnitude of supercomputer performance exceeds the limits set by the laws of physics for clusters and Massively Parallel Processors (MPP).  In this workshop, scientists will describe the limits of current computers and propose a constructive path for extending supercomputer power to meet applications demand.  The purpose of the workshop is to inform participants of the issues and provide a basis for interdisciplinary cooperation.
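
To make the growth comparison concrete, a back-of-the-envelope calculation (not part of the workshop materials) is sketched below in Python.  The 30-year horizon and the roughly 1.5-year historical doubling time are illustrative assumptions, not figures from the announcement.

    import math

    start_flops = 40e12    # "today's 40 Teraflops"
    target_flops = 1e21    # 1 Zettaflops
    doublings = math.log2(target_flops / start_flops)   # about 24.6 doublings

    horizon_years = 30.0   # assumed reading of "several decades"
    print("required doubling time: %.2f years" % (horizon_years / doublings))
    print("assumed historical doubling time: roughly 1.5 years")

Reaching the target within that horizon would require performance to double noticeably faster than the assumed historical pace, which is the sense in which the projected demand outruns the trend.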

High Availability and Performance Computing Workshop (full day)

Stephen L. Scott (Oak Ridge National Laboratory)

HAPCW2004 is a venue to discuss the state of the art and on-going research and development in High-Availability and Performance Computing.  Workshop URL: http://xcr.cenit.latech.edu/hapcw

HAPCW2004 is a forum for state-of-the-art and on-going research in High-Availability and Performance Computing.  High-Availability (HA) Computing has long played a critical role in commercial mission-critical applications.  Likewise, High-Performance Computing (HPC) has been an equally significant enabler of the R&D community for scientific discoveries.  Serviceability aims toward effective means by which corrective and preventive maintenance can be performed on a system.  Higher serviceability improves availability and helps sustain quality, performance, and continuity of services at expected levels.  Together, the combination of HA, Serviceability, and HPC will clearly lead to even more benefits for critical shared major HEC resource environments.  Papers will be published electronically on the web site and on CD.

Taking Your MPI Application To The Next Level:  Threading, Dynamic Processes, & Multi-Network Utilization (full day)

Richard L. Graham, Graham Fagg, George Bosilca, Jeff Squyres

The tutorial focuses on these areas of MPI:  threading, dynamic processes, heterogeneous networking, and run-time tuning of MPI applications.

Important features of the MPI-2 specification and run-time environments have only recently matured in MPI implementations.  Multi-threaded MPI programs can be exploited for useful control and computational features.  MPI-2 dynamic process models can be used for practical applications such as dynamically reporting on the status of long-running parallel codes.  Using multiple networks to communicate between processes is becoming increasingly relevant in both LAN and Grid/WAN environments.  Finally, run-time tuning of the MPI implementation itself allows performance tweaking without changing any application code.  A balance of presentations and hands-on examples aimed at users, system administrators, and developers will be used.
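
As a minimal sketch of two of these MPI-2 features (not taken from the tutorial itself), the lines below use the mpi4py Python bindings and assume an MPI installation with thread and dynamic-process support; the spawned "worker.py" script is a hypothetical placeholder.

    import sys
    from mpi4py import MPI

    # Threading: ask what level of thread support the MPI library provides.
    if MPI.Query_thread() == MPI.THREAD_MULTIPLE:
        print("MPI calls may be made concurrently from multiple threads")

    # Dynamic processes: rank 0 spawns two extra workers at run time and
    # broadcasts a request to them over the resulting intercommunicator.
    if MPI.COMM_WORLD.Get_rank() == 0:
        inter = MPI.COMM_SELF.Spawn(sys.executable, args=["worker.py"], maxprocs=2)
        inter.bcast({"request": "status"}, root=MPI.ROOT)
        inter.Disconnect()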


Half-day workshops

Adaptive Mesh Refinement (half day AM)

Bobby Philip (Computer & Computational Sciences Division, LANL)
Michael Pernice (Computer & Computational Sciences Division, LANL)

Participants in the AMR workshop will discuss challenges in developing optimal parallel solvers, interfacing frameworks and applications, and framework interoperability.

The demand for greater accuracy, detail, and complexity in computational science cannot be satisfied solely by hardware advances.  In numerical simulations, adaptive mesh refinement (AMR) can provide increased local resolution at greatly reduced cost.  Using AMR has historically been time-consuming and application-specific, limiting its use.  The workshop will highlight challenges in making AMR technology more accessible to the scientific community.  Issues of interest include optimal parallel solver capabilities, interfacing frameworks with application codes, and interoperability of different frameworks.  The workshop will provide a forum for AMR framework and application developers to highlight and propose solutions to some of these problems.

Building Scalable Simulations of Complex Socio-Technical Systems (half day, PM)

Madhav V. Marathe (Los Alamos National Laboratory), Stephen Eubank (Los Alamos National Laboratory), James Smith (Los Alamos National Laboratory)
Keith Bisset (Los Alamos National Laboratory), Christopher L. Barrett (Los Alamos National Laboratory)

We propose to organize a half-day workshop focusing on simulating detailed, extremely large complex socio-technical systems on high performance computing platforms.

Complex socio-technical systems consist of a large number of interacting physical, technological, and human/societal components.  Examples of such systems are urban regional transportation systems, national electrical power markets and grids, the Internet, ad-hoc communication and computing systems, public health, etc.  Realistic social and infrastructure networks spanning urban regions are extremely large, consisting of millions of nodes and edges.  As a result, the detailed simulations capable of representing such systems consist of millions of interacting agents.  The challenges pertaining to the design and implementation of such simulations on high performance computing platforms are unique, e.g., computation over extremely large, dynamic, unstructured composed networks.  The workshop aims to bring together some of the leading researchers with the goal of identifying fundamental issues in designing, implementing, and using such simulations on current and next-generation high performance computing architectures.  Examples of topics that will be covered include scalable HPC-oriented design of such simulations; distributed algorithms and their implementations; and formal specifications and simulation-specific HPC system software.



Welcoming Reception and Poster Presentations: 6:00pm – 7:00pm

The day featured workshops and tutorials on subjects of special interest to attendees, followed by the Welcome Reception and Poster Exhibit.  The posters also remained available over the next two days for additional inspection.

Poster Presentations
  • Using Generic Programming Techniques with Procedural Finite Element Codes
    Fehmi Cirak and Julian C. Cummings (California Institute of Technology)
  • Cost-effective Performance-scalable Workstation Accelerators for High-resolution Volumetric Imaging
    Robert Michael Lea, Aby Jacob Abraham, and Pawel Tomil Tetnowski  (School of Engineering & Design, Brunel University, UK)
  • Reliability, Availability and Serviceability Management for HPC Linux Clusters: Self-awareness Approach
    Stephen L Scott (ORNL); Chokchai Leangsuksun, Tong Liu, and Yudan Liu (Louisiana Tech University); Richard Libby (Intel); Ibrahim Haddad (Ericsson Research)
  • Design and Development of High Performance Parallel Particle in Cell (PIC)
    Stefano Markidis, Giovanni Lapenta, and W. Brian VanderHeyden (LANL)
  • MPI Collective Operation Performance Analysis
    Jelena Pjesivac-Grbovic, Thara Angskun, George Bosilca, Graham Fagg, Edgar Gabriel, and Jack Dongarra (Innovative Computing Laboratory, University of Tennessee, Knoxville)
  • High Performance Simulation of Developmental Biology on a Hybrid Grid
    Frederic R. Fairfield (Fairfield Enterprises); Giovanni Lapenta and Stefano Markidis (LANL)
  • A Sample-Driven Call Stack Profiler
    Nathan Froyd, John Mellor-Crummey, and Robert J.  Fowler (Rice University)
  • Reliability Costs in LA-MPI
    Galen M. Shipman, Arthur B. Maccabe, and Patrick G. Bridges (The University of New Mexico)
  • Design and Implementation of Adifor90: Preliminary Results
    Michael Wayne Fagan (Rice University)

Wednesday, October 13:

  • 9:00am Welcoming remarks
  • 9:30am: Keynote Address – “On Demand Processing, Query, and Exploration of Distributed Petascale Datasets”
    Dr. Joel Saltz, Ohio State University

    Abstract:
    Increasing numbers of applications communities are demanding infrastructure to support efficient on-demand analysis and query of very large heterogeneous collections of distributed data.  We will focus on recent developments that support high level language queries directed at very large datasets consisting of large numbers of files, mechanisms for global management of grid-based metadata definitions, and mechanisms for rapid definition and instantiation of databases used to cache grid-based data.  We will describe application scenarios in biomedical research, earth science, and climate modeling.  These application scenarios will be used to provide a broad view of what advances in systems software are needed to make this vision a reality; the application scenarios will also motivate a variety of focused performance studies that explore tradeoffs associated with different methods of optimizing performance of on-demand computations and queries.
  • 10:30am Break
  • 11:00am – Reviewed Papers I:  Systems
  • How To Build A Fast And Reliable 1024 Node Cluster With Only One Disk
    Erik Arjan Hendriks, Ronald Minnich, Los Alamos National Laboratory
  • An Event-driven Architecture for MPI Libraries
    Supratik Majumder, Scott Rixner, Rice University; Vijay S. Pai, Purdue University
  • Layout Transformation Support for the Disk Resident Arrays Framework
    Sriram Krishnamoorthy, Gerald Baumgartner, Chi-Chung Lam, The Ohio State University; Jarek Nieplocha, Pacific Northwest National Laboratory; P Sadayappan, The Ohio State University
  • 12:30pm – 2:00pm Lunch

Thursday, October 14: Panel Discussions

Four panel discussions took place, focusing on important issues for researchers and managers in high-performance computing, and a Closing Reception ended the LACSI Symposium.
  • 8:00am Breakfast
  • 9:00am – Panel I
  • Panel I: FPGAs in High Performance Computing
    Use of Field-Programmable Gate Arrays is expanding from specialized embedded systems to more general-purpose application accelerators.  Panelists will consider software support aspects and applications using FPGAs.

    Wim Bohm, Colorado State University; Burton Smith, Cray, Inc.; Maya Gokhale, Los Alamos National Laboratory; Keith Underwood, Sandia National Laboratories; Jeffrey Hammes, SRC Computers, Inc.
    Moderator:  Rod Oldehoeft, Los Alamos National Laboratory
  • 10:30am Break
  • 11:00am – Panel II
  • Panel II: Computer Science Innovations in ASC ASAP Centers
    The ASC Academic Strategic Alliance Program Centers pursue advances in computational science, computer systems, mathematical modeling, and numerical mathematics important to Advanced Simulation and Computing.  The panelists will discuss innovations in computer science that are contributing to the success of their Centers.

    Michael Aivazis, California Institute of Technology; Tom Henderson, University of Utah; Eric Darve, Stanford University; Sanjay Kale, University of Illinois at Urbana-Champaign; Anshu Dubey, University of Chicago
    Moderator:  Karl-Heinz Winkler, Los Alamos National Laboratory    
  • 12:30pm Lunch
  • 2:00pm – Panel III
  • Panel III: HPC Languages of the Future
  • 3:30pm Break
  • 4:00pm – 5:30pm Panel IV
  • Panel IV:  TBA at Production Time
  • 5:30pm – 7:00pm Closing Reception

Supplemental Materials

VISTAR group, Electronic and Computer Engineering
School of Engineering & Design
Brunel University, UK
  • Cost-effective performance-scalable workstation accelerators for high-resolution volumetric imaging
  • Cone-beam X-ray CT reconstruction
Bobby Philip, Michael Pernice
Computer and Computational Sciences Division
Los Alamos National Laboratory
P.O. Box 1663, MS B256
Los Alamos, NM 87545
  • Workshop on Adaptive Mesh Refinement



LACSI Collaborators include:

Rice University, LANL, UH, UNM, UIUC, UNC, UTK