
LACSI Symposium 2005

by admin last modified 2006-01-11 07:32
October 11-13, 2005
Symposium Site:
Eldorado Hotel, 309 West San Francisco Street
Santa Fe, NM 87501

Agenda for LACSI Symposium 2005

The Sixth Symposium offered participants a wide variety of opportunities to discuss and learn about technical and policy issues in high-performance computing.

Workshops | Poster Sessions | Keynote | Papers | Panels

Tuesday, October 11: Workshops, Posters, and Opening Reception

NOTE: All workshops will be held at the Eldorado or the Hilton (across the street). Room assignments will be available at check-in.
Workshop #1

High Availability & Performance Computing Workshop    (full day)
Contact person: Leangsuksun, Chokchai Box, Louisiana Tech University
Author(s): Chokchai Box Leangsuksun, Louisiana Tech U; Stephen Scott, ORNL

High availability and performance computing has recently become a recognized and important combination for organizations that require tremendous computing power to solve important problems in areas such as energy, climate, fusion, biology, and nanotechnology. These non-trivial problems are usually characterized by massive, long-running applications. Reliability, Availability and Serviceability (RAS) management is therefore becoming an increasingly important aspect of many computing environments. RAS management aims to maximize uptime and thus complements High End Computing (HEC) objectives by preventing performance degradation and loss of availability. High Availability (HA) computing has always played a critical role in mission-critical commercial applications. Likewise, High Performance Computing (HPC) has been an equally significant enabler of the R&D community and its scientific discoveries. Serviceability aims at effective means by which corrective and preventive maintenance can be performed on a system; higher serviceability improves availability and helps retain quality, performance, and continuity of services at expected levels. Together, the combination of HA, serviceability, and HPC will clearly bring even greater benefits to critical shared HEC resource environments. This third annual workshop (HAPCW2005) is a forum for the discussion of topics related to issues affecting high availability and performance computing.

Workshop #2

Advanced Numerical Methods for PDEs    (full day)
Contact person: Oleg Boyarkin, University of Houston

Participants in the workshop will discuss new developments and challenges in the construction, investigation, and application of new numerical methods and algorithms for the solution of partial differential equations relevant to LANL applications. New discretization methods for PDEs on arbitrary polyhedral meshes, their stability and accuracy, efficient preconditioned solvers for the underlying large-scale algebraic systems, and interface reconstruction algorithms are among the major topics of the workshop.

Workshop #3

Performance and Productivity of Extreme-Scale Parallel Systems    (full day)
Contact person: Darren J. Kerbyson, LANL
Author(s): Darren J. Kerbyson, PAL/LANL; Dan Reed, UNC/Institute for Renaissance Computing

This workshop will concern itself with the interplay across system architecture, network, applications, and system software design. The invited speakers, leaders in these fields, will not only cover these areas but will also address the state of the art in methodologies for performance analysis and optimization, including benchmarking, modeling, tools development, tuning and steering, as well as metrics for productivity. At this time we envision the workshop being composed of four sessions of three talks each.

Workshop #4

Models & Simulations for Large-Scale Socio-Technical Systems    (full day)
Contact person: Stephan J. Eidenbenz, LANL
Author(s): James P Smith, LANL; Stephan Eidenbenz, LANL;
Gabriel Istrate, LANL; Anders Hansson, LANL; Christian Reidys, LANL

Complex socio-technical systems consist of millions of interacting physical, technological, and human/societal components. Examples of such systems  include transportation systems, national commodity markets, telecommunication  and computing systems including the Internet, and public healthcare systems.  High-fidelity simulations capable of representing and analyzing such complex  systems require the use of high performance computing platforms and tools.  The workshop aims to bring together some of the leading researchers with the goal of identifying fundamental issues in designing, implementing and using  such simulations on high performance computing architectures. Topics include: scalable HPC oriented design of such simulations, distributed algorithms and  their implementations, and large-scale discrete event simulation systems.

Workshop #5

High Performance Computing in Beam Physics & Astrophysics    (full day)
Contact person: Salman Habib, LANL
Author(s): Salman Habib, LANL; Robert Ryne, LBNL

Particle-based codes are among the most widely used high performance computing tools today, essential components of the state of the art in fields such as astrophysics and cosmology, compressible and incompressible fluid dynamics, and plasma and beam physics. Several large-scale applications are now at a threshold where they can be used as precision tools rather than as quantitative indicators of system behavior. Certain target problems in beam physics, astrophysics, and cosmology have very stringent error control requirements for next-generation simulation frameworks and tools -- ranging from sub-percent to parts per million. Additionally, the success of major projects such as the International Linear Collider and large-scale cosmological surveys such as the Joint Dark Energy Mission, the Dark Energy Survey, and the Large Synoptic Survey Telescope depends on accurate and truly predictive simulations. That these projects represent a multi-billion dollar science investment further underscores the importance of high-performance simulation tools to their success. In this workshop we aim to bring together researchers in these fields to discuss the future challenges in high-performance simulations for beam physics and astrophysics. The workshop will enable researchers to share successful strategies that have worked in their sub-disciplines and to outline the areas where more work is clearly needed. A joint strategy for attacking these problems will be a major aim of the workshop.

Workshop #6

Automatic Tuning of Whole Applications    (full day)
Contact person: Ken Kennedy, Rice University
Author(s): Ken Kennedy, Rice University

For many years, retargeting of applications for new architectures has been a major headache for high performance computation. As new architectures have emerged at dizzying speed, we have moved from uniprocessors, to vector machines, symmetric multiprocessors, synchronous parallel arrays, distributed-memory parallel computers, and scalable clusters. Each new architecture, and even each new model of a given architecture, has required retargeting and retuning every application, often at the cost of many person-months or years of effort.

Recently a number of strategies have emerged for automating the process of tuning applications to new architectures, based on using large amounts of computation time to explore a space of different variants of the program, running each variant on the target architecture. One example of this strategy is the Atlas system, which uses substantive amounts of computation to provide versions of a computational linear algebra kernel that are highly tuned in advance to different machines. If this approach can be extended more generally to components and whole programs, it would help avoid the enormous human costs involved in retargeting applications to different machines. A major research question in this area remains: given that the space of variants can be enormous, how can we reduce tuning time to manageable levels?
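The variant-exploration strategy described above can be sketched in a few lines. The following is a minimal illustration, not any group's actual tuner: it times a blocked matrix-multiply kernel (a hypothetical stand-in for an ATLAS-style kernel) at several candidate block sizes and keeps the fastest.

```python
import time

def blocked_matmul(A, B, n, bs):
    """Multiply two n x n matrices (lists of lists) with block size bs."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):
        for kk in range(0, n, bs):
            for jj in range(0, n, bs):
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + bs, n)):
                            C[i][j] += a * B[k][j]
    return C

def autotune(n, candidate_block_sizes, trials=3):
    """Empirically pick the fastest variant by actually running and timing each."""
    A = [[1.0] * n for _ in range(n)]
    B = [[1.0] * n for _ in range(n)]
    best_bs, best_t = None, float("inf")
    for bs in candidate_block_sizes:
        # Take the minimum over several trials to reduce timing noise.
        t = min(_timed(blocked_matmul, A, B, n, bs) for _ in range(trials))
        if t < best_t:
            best_bs, best_t = bs, t
    return best_bs

def _timed(fn, *args):
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start
```

On a real system the search space also covers unroll factors, instruction schedules, and so on; the point is only that each candidate variant is run and timed rather than predicted, which is what makes the search expensive.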

Recently a number of research groups have been pursuing work in this area.  We propose to hold a workshop at the 2005 LACSI Symposium to report on these efforts and to solicit feedback from the application development community.  The goals of this workshop will be:

  1. To provide a forum for research groups working on automatic tuning to report on the status of their efforts and their future plans;
  2. To foster collaborations among different autotuning research groups and the potential users of tools that they may develop; and
  3. To initiate an activity to develop a standard set of benchmarks for use in this research.
Hank Alme (LANL X-8): Application Performance Tuning Activities in LANL X Division

A new team has formed in LANL's X Division, charged with bringing the tools and methods available to bear on improving performance of the main LANL weapons simulation codes. We will present an overview of the planned team activities, with an emphasis on the areas where we hope to be able to interact with the performance tuning community outside the lab.

Chun Chen, Jacqueline Chame, and Mary Hall (USC ISI): Combining Models and Guided Empirical Search for Memory Hierarchy Optimization

This talk will describe an algorithm for simultaneously optimizing across multiple levels of the memory hierarchy for dense-matrix computations. Our approach combines compiler models and heuristics with guided empirical search to take advantage of their complementary strengths. The models and heuristics limit the search to a small number of candidate implementations, and the empirical results provide the most accurate information to the compiler to select among candidates and tune optimization parameter values. We will present performance results and discuss future directions. Notably, our results on Matrix Multiply achieve performance comparable to ATLAS and vendor BLAS libraries.

Shirley Moore, Jack Dongarra, Keith Seymour, and Haihang You: Generic Code Optimization

Generic Code Optimization is a research effort to develop a tool that will allow a critical software segment to be analyzed and empirically optimized in a way that is similar to how ATLAS performs its optimization. It is a collaboration with the ROSE project at Lawrence Livermore National Laboratory, which involves source-to-source code transformation and optimization. The approach to optimizing arbitrary code, especially loop nests, includes the following components:
  1. machine parameters detection
  2. source to source code generation
  3. test driver generation
  4. an empirical search engine
David Padua (University of Illinois at Urbana-Champaign): Research Directions in Automatic Tuning of Libraries and Applications

I will discuss a few open problems in automatic tuning as well as our efforts at Illinois to address them. Topics include:

  1. Infrastructures to support the development of self-tuning code. We have the outline of an infrastructure built around language extensions to specify code generation and transformation as well as search strategy.
  2. Search strategies to identify the best version from the astronomical number of possibilities that are usually available. We have studied some search strategies based on Explanation Based Learning.
  3. Tuning techniques when performance depends on the input data. We have built a generator of sorting routines and shown that our approach produces what seems to be the fastest available sorting routine for sorting arrays of integers.
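Item 3, tuning to the input data, can be illustrated with a toy dispatcher (not the Illinois generator itself) that picks a sorting routine from simple characteristics of the input; the range threshold below is an arbitrary placeholder.

```python
def adaptive_sort(xs):
    """Pick a sorting routine based on simple input characteristics.

    Toy illustration only: real tuned generators search far richer
    algorithm and parameter spaces than this two-way choice.
    """
    if not xs:
        return []
    lo, hi = min(xs), max(xs)
    # Small integer value range: counting sort runs in O(n + range).
    if all(isinstance(x, int) for x in xs) and hi - lo < 4 * len(xs):
        counts = [0] * (hi - lo + 1)
        for x in xs:
            counts[x - lo] += 1
        out = []
        for v, c in enumerate(counts):
            out.extend([v + lo] * c)
        return out
    # Otherwise fall back to a general comparison sort.
    return sorted(xs)
```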
Apan Qasem, Ken Kennedy, John Mellor-Crummey (Rice): Using Direct-search in Automatic Tuning of Applications

Exploring the large and complex transformation search space is one of the main obstacles in developing efficient and practical tools for automatic tuning of whole applications. To address this issue, we have developed a prototype tool that uses loop-level performance feed-back and a direct-search strategy to effectively explore the optimization search space. In our talk, we will give an overview of our autotuning framework and present results from experiments using our search strategy.
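Direct search itself is a simple derivative-free method. The sketch below, a compass-style search over integer parameter vectors (e.g., blocking and unroll factors), illustrates the idea; in a real tuner the cost function would be measured performance of a compiled variant rather than an analytic expression.

```python
def direct_search(cost, start, steps=(1,), max_iter=100):
    """Compass-style direct search over integer parameter vectors.

    At each iteration, probe +/- step along each coordinate and move
    whenever a probe improves the cost; stop when no probe improves.
    """
    x = list(start)
    best = cost(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for step in steps:
                for delta in (step, -step):
                    y = list(x)
                    y[i] += delta
                    c = cost(y)
                    if c < best:
                        x, best, improved = y, c, True
        if not improved:
            break
    return x, best
```

The appeal in autotuning is that only cost evaluations are needed, and each evaluation is one run of the program variant, so the number of probes directly bounds tuning time.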

Dan Quinlan (LLNL): ROSE: Source-to-Source Analysis and Optimization

ROSE is an open-source tool for the optimization of C and C++ scientific applications. Specifically, ROSE is a C++ library for building source-to-source translators and analysis tools for large-scale DOE scientific applications. ROSE provides a simple object-oriented compiler infrastructure targeted at a general, non-compiler audience. Our focus is on the optimization of scientific applications, but ROSE can be and has been used by others for the development of highly specialized source-based analysis tools. A specific focus within our project is on robustness and loop optimization, so that large-scale DOE applications can be optimized. Recent work has focused on the use of ROSE as a basis for empirical optimization (automatic tuning). Work with Rice has also added initial Fortran support to ROSE.

Todd Waterman and Keith Cooper (Rice): Adaptive Inlining

Procedure inlining is a complex optimization that has been the subject of significant research over the years, but prior techniques have all had limited success. Adaptive techniques have recently emerged as a method for improving compiler performance with the primary focus being on optimization order. We present an adaptive inlining system that finds a program-specific set of inlining decisions. This results in consistently better program performance than a static inlining technique is capable of achieving.

Clint Whaley (University of Texas, San Antonio): Tuning High Performance Kernels through Empirical Compilation

There are a few application areas which remain almost untouched by the historical and continuing advancement of compilation research.  For the extremes of optimization required for high performance computing on one end, and embedded systems at the opposite end of the spectrum, many critical routines are still hand-tuned, often directly in assembly. At the same time, architecture implementations are performing an increasing number of compiler-like transformations in hardware, making it harder to predict the performance impact of a given series of optimizations applied at the ISA level. These issues, together with the rate of hardware evolution dictated by Moore's Law, make it almost impossible to keep key kernels running at peak efficiency.  Automated empirical systems, where direct timings are used to guide optimization, have provided the most successful response to these challenges. This paper describes our approach to performing empirical optimization, which utilizes a low-level iterative compilation framework specialized for optimizing high performance computing kernels. We present results showing that this approach can not only provide speedups over traditional optimizing compilers, but can improve overall performance when compared to the best hand-tuned kernels selected by the empirical search of our well-known ATLAS package.

Kathy Yelick (U.C. Berkeley and LBNL ): Automatic Tuning of Sparse Matrix Kernels

The Optimized Sparse Kernel Interface (OSKI) Library is a collection of automatically tuned computational kernels for sparse matrices, which is designed for use in solver libraries and applications. OSKI has a BLAS-style interface, providing basic kernels like sparse matrix-vector multiply and sparse triangular solve, among others. OSKI contains a set of optimizations such as data structure reorganizations that are specific to the matrix structure.  The optimizations currently target memory hierarchies on cache-based scalar processors, although work on vector processor and SMP optimizations is ongoing.  I will describe the optimizations and how the OSKI interface is designed to allow for performance information from offline analysis of the hardware performance, from user hints, and from runtime feedback.

This work is joint with Jim Demmel, Rich Vuduc, and other members of the Berkeley Benchmarking and Optimization (BeBOP) group.
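For background, the core kernel OSKI tunes can be written in a few lines over the standard Compressed Sparse Row (CSR) format. This sketch is illustrative and is not OSKI's interface; OSKI's data-structure reorganizations (register blocking, for example) specialize exactly this loop to the nonzero structure of a given matrix.

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """Compute y = A @ x for A stored in Compressed Sparse Row form.

    values  - nonzero entries, row by row
    col_idx - column index of each nonzero
    row_ptr - row i's nonzeros occupy values[row_ptr[i]:row_ptr[i+1]]
    """
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_idx[k]]
        y[i] = s
    return y
```

The indirect access `x[col_idx[k]]` is what makes this loop memory-bound and hard to optimize statically, which is why matrix-specific tuning pays off.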

Qing Yi (University of Texas, San Antonio): Parameterization of Compiler Optimizations For Empirical Tuning

Conventional compiler optimizations are based on static approximations of machine behavior and have been shown to be inadequate in many cases. Empirical tuning can compensate for the inaccuracies of static performance models by selecting optimizations based on actual runtime information. However, since conventional compilers provide very limited ways to adjust their application of optimizations, previous research has focused on tuning a very small subset of optimization opportunities, such as loop blocking and unrolling.

We believe that in order for empirical tuning to be successful, it needs to be able to fully control the application of compiler optimizations. We propose a framework where compilers systematically parameterize their optimizations and produce an intermediate form that encodes optimizations with an explicit search space. A separate code generator can then be invoked by the empirical tuner to generate various versions of the optimized code. As the empirical tuner has complete freedom to navigate the entire optimization search space available to a compiler, it will be much more effective in finding the best solution. This work is in collaboration with several research groups and is currently ongoing. I will present some preliminary results, focusing on combinations of various loop optimizations, including loop blocking, fusion, and unrolling.

Kamen Yotov (Cornell): How Oblivious can Cache-oblivious Codes be?

Cache-oblivious algorithms provide a solution to the problem of writing  programs that adapt automatically to memory hierarchies to optimize their performance. These algorithms, which are based on the divide-and-conquer paradigm, enjoy certain important theoretical properties such as I/O optimality, but there are few head-to-head comparisons of the experimental performance of cache-oblivious and cache-aware programs.  

This talk describes such a study for matrix multiplication. Starting from code that is completely oblivious to machine architecture, we  successively add "awareness" to different architectural features by optimizing the code to take advantage of those features until we get a completely cache/architecture-aware code. Our experiments show that obliviousness has a significant penalty on current architectures, and that it will not be easy to eliminate this penalty.
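As a concrete reference point for the study described above, here is a minimal cache-oblivious matrix multiply in the divide-and-conquer style the talk discusses. It is a sketch for power-of-two sizes only, with a tiny triple-loop base case standing in for the tuned micro-kernel a cache-aware code would use.

```python
def co_matmul(A, B):
    """Cache-oblivious matrix multiply by recursive quadrant splitting.

    Assumes square matrices whose dimension is a power of two. The
    recursion automatically reuses blocks at every level of the memory
    hierarchy without knowing any cache sizes.
    """
    n = len(A)
    if n <= 2:  # base case: plain triple loop
        C = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for k in range(n):
                for j in range(n):
                    C[i][j] += A[i][k] * B[k][j]
        return C
    h = n // 2
    def quad(M, r, c):
        return [row[c:c + h] for row in M[r:r + h]]
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A, 0, 0), quad(A, 0, h), quad(A, h, 0), quad(A, h, h)
    B11, B12, B21, B22 = quad(B, 0, 0), quad(B, 0, h), quad(B, h, 0), quad(B, h, h)
    C11 = add(co_matmul(A11, B11), co_matmul(A12, B21))
    C12 = add(co_matmul(A11, B12), co_matmul(A12, B22))
    C21 = add(co_matmul(A21, B11), co_matmul(A22, B21))
    C22 = add(co_matmul(A21, B12), co_matmul(A22, B22))
    return [r1 + r2 for r1, r2 in zip(C11, C12)] + \
           [r1 + r2 for r1, r2 in zip(C21, C22)]
```

The "awareness" the talk adds incrementally (vector instructions, register tiling, copying to contiguous buffers) would replace the naive base case and data layout here.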

Workshop #7

Algorithm Acceleration with Reconfigurable Hardware    (full day)
Organizers and contact persons: Rod R. Oldehoeft (505-665-3663), Maya Gokhale (505-665-9095)
Over the past 15 years, direct execution of algorithms in reconfigurable hardware has demonstrated speedups of one to two orders of magnitude over equivalent software. Reconfigurable Computers (RC) using Field Programmable Gate Arrays (FPGAs) as processors have emerged as co-processors to augment microprocessors in workstations, clusters, and supercomputers. While RC offers remarkable opportunities for performance, research challenges abound:
  • designing system architectures that balance conventional and reconfigurable processors
  • developing analysis and compiler tools to automatically map algorithm kernels to hardware
  • minimizing communications costs between hardware and software
  • designing highly parallel, fine-grained computational elements for direct hardware execution
  • scheduling and managing reconfigurable computing elements in large systems
The purpose of WS7 is to discuss successes and challenges of reconfigurable supercomputing. The workshop will be organized into two half-day sessions: the morning session will present introductory topics and applications, and the afternoon session will include research topics in FPGA-based architectures, systems, tools, and future directions.
Maya Gokhale, Introduction to Reconfigurable Computing

Reid Porter, Al Conti, Jan Frigo, Neal Harvey, Garret Kenyon, Maya Gokhale: A Reconfigurable Computing Framework for Multi-scale Cellular Image Processing


Daniel G. Chavarría-Miranda and David Chassin, A Hardware-Accelerated Steady-State Power Flow Solver

Chuan He, Guan Qin, and Wei Zhao, High-order Finite Difference Seismic Modeling on a Reconfigurable Computing Platform

Zachary K. Baker and Viktor K. Prasanna, Hardware Accelerated Apriori Algorithm for Data Mining

Chen Chang, John Wawrzynek, Robert W. Brodersen, The Design and Application of BEE2, a High-End Reconfigurable Computing System

Keith D. Underwood and K. Scott Hemmert, Implications of FPGAs for Floating-Point HPC Systems

Justin L. Tripp et al., Trident: An FPGA Compiler Framework for Scientific Computing

Workshop #8

Parallel Programming with Charm++ and AMPI    (full day)
Contact person: Mendes, Celso L., University of Illinois
Author(s): Laxmikant V. Kale, University of Illinois; Celso L. Mendes, University of Illinois

Adaptive MPI (AMPI), Charm++ and the frameworks built upon them have emerged as powerful parallel programming systems in recent years. By allowing programmers to divide the computation into a large number of entities which are mapped to the available processors by an intelligent runtime system, Charm++ enables a separation of concerns between the programmers and the computing system. This approach leads to both improved programmer productivity and higher system performance. The workshop will focus on showcasing leading research in parallel processing based on Charm++ and its frameworks. Topics will include tutorial-level introduction to Charm++ and AMPI, followed by case studies of applications developed using the frameworks, as well as advances in AMPI/Charm++ technology itself. Authors and attendees will be encouraged to share their experiences and plans for the systems built upon Charm++/AMPI.

Workshop #9

LinuxBIOS Summit    (full day)
Contact person: Ron Minnich, LANL

WS9 will include a structured set of talks and a less structured discussion period.  We will explore the current status of LinuxBIOS, including presentations by vendors on how they are using or plan to use LinuxBIOS in their products.  We will discuss successes as well as problems and draw lessons learned from both.  We will try to determine where LinuxBIOS should be taken next and to set goals and figure out how to meet them.  We plan to close by producing a consensus document on the next steps needed over the coming year.

Workshop #10

Application Development Using Eclipse & the Parallel Tools Platform    (full day)
Contact person: Gregory Watson, LANL (505-665-0726)

Eclipse is an extensible, open-source integrated development environment (IDE) meant to be a full-featured, commercial-quality platform for development of highly integrated software tools.  Eclipse offers many features:  syntax-highlighting editor, incremental code compilation, thread-aware debugger, code and class navigator, file/project manager, interfaces to standard source control systems, and support for Java, C, C++, Fortran, and other languages.  The Parallel Tools Platform (PTP) is an official Eclipse Foundation Technology Project that focuses on integrating parallel tools into the Eclipse environment for enhanced application development.  PTP supports a range of architectures and runtime systems and simplifies interaction with parallel systems.  This tutorial will introduce Eclipse and PTP, provide hands-on experience at managing and developing software, demonstrate both C/C++ and Fortran Development Toolkits, and present PTP tools.  Participants will be able to use their own laptops (Linux or OS X) with supplied Eclipse and PTP software to maximize the hands-on time at software development activities.

Welcoming Reception and Poster Presentations: 6:00pm – 7:00pm

The day featured workshops and tutorials on subjects of special interest to attendees, followed by the Welcoming Reception and Poster Exhibit. The posters remained available over the next two days for additional inspection.

Poster Presentations
  • Using Cache Models and Empirical Search for Automatic Tuning of Applications
    Contributors: Ken Kennedy, John Mellor-Crummey, Apan Qasem, (Rice University)
  • Parallel Space-Filling Curve Generation for Dynamic Load Balancing
    Justin Luitjens, Tom Henderson, and Martin Berzins (University of Utah)
  • Adaptive Performance Monitoring and Profiling On Large Scale Systems
    G. Todd Gamblin, Ying Zhang, Daniel A. Reed (Renaissance Computing Institute, University of North Carolina at Chapel Hill)
  • Support for Simultaneous Multiple Substrate Performance Monitoring
    Kevin London, Shirley Moore, Daniel Terpstra, Jack Dongarra (University of Tennessee)
  • PathScale InfiniPath Interconnect Performance
    Greg Lindahl (PathScale, Inc.)
  • An Initial Implementation of the Program Database Toolkit using the Open64 compiler
    Oscar Hernandez (University of Houston), Sameer Shende (University of Oregon), Barbara Chapman (University of Houston)
  • Improving Adaptive Compilation with Truncated Execution and Loop Unrolling
    Jeff Sandoval, Keith Cooper, Tim Harvey (Rice University)
  • Adaptive Inlining
    Todd Waterman and Keith Cooper (Rice University)
  • Compiling for Memory Constraints on Short Vector Machines
    Yuan Zhao, Ken Kennedy  (Rice University)
  • Scout: A GPU-Accelerated Language for Visualization and Analysis
    Patrick McCormick, Jeff Inman, James Ahrens (Los Alamos National Laboratory), Greg Roth, Chuck Hansen (University of Utah)
  • A Multi-platform Co-Array Fortran Compiler for High-Performance Computing
    Yuri Dotsenko and Cristian Coarfa (Rice University)

Wednesday, October 12: Keynote Address and Refereed Research Contributions

  • 9:00am Welcoming remarks
  • 9:30am: Keynote Address - New Architectures for a New Biology, David E. Shaw
    D. E. Shaw Research and Development and Center for Computational Biology and Bioinformatics, Columbia University


    Some of the most important outstanding questions in the fields of biology, chemistry, and medicine remain unsolved as a result of our limited understanding of the structure, behavior and interaction of biologically significant molecules.  The laws of physics that determine the form and function of these biomolecules are well understood.  Current technology, however, does not allow us to simulate the effect of these laws with sufficient accuracy, and for a sufficient period of time, to answer many of the questions that biologists, biochemists, and biomedical researchers are most anxious to answer.  This talk will describe the current state of the art in biomolecular simulation and explore the potential role of high-performance computing technologies in extending current capabilities.  Efforts within our own lab to develop novel architectures and algorithms to accelerate molecular dynamics simulations by several orders of magnitude will be described, along with work by other researchers pursuing alternative approaches.  If such efforts ultimately prove successful, one might imagine the emergence of an entirely new paradigm in which computational experiments take their place alongside those conducted in “wet” laboratories as central tools in the quest to understand living organisms at a molecular level, and to develop safe, effective, precisely targeted medicines capable of relieving suffering and saving human lives.
  • 10:30am Break
  • 11:00am – Reviewed Papers I:  Systems
  • 12:30pm – 2:00pm Lunch
  • 2:00pm – Reviewed Papers II:  Performance
  • 3:30pm – 4:00pm Break
  • 4:00pm - Reviewed Papers III:  Algorithms and Applications
  • Performance Analysis, Modeling and Enhancement of Sandia’s Integrated TIGER Series (ITS) Coupled Electron/Photon Monte Carlo Transport Code
    Draft that appeared on the Symposium CDROM | Final Version
    Mahesh Rajan, Brian Franke, Robert Benner, Ron Kensek and Thomas Laub, Sandia National Laboratories

Thursday, October 13: Panel Discussions

  • 9:00 – 10:30 - Panel I: The Impact of ASC on Computer Science Research
    Panelists: Sally McKee, Cornell University; Peter Eltgroth, Lawrence Livermore National Laboratory; Patrick Bridges, University of New Mexico; David Womble, Sandia National Laboratories; Ken Kennedy, Rice University; Rod Oldehoeft, Los Alamos National Laboratory

    Representatives from each of the ASC laboratories and an academic partner present overviews of the interactions and their impact on research.
  • 11:00 – 12:30 - Panel II: Diverging Architectural Directions in HPC?
    Moderator: Rob Fowler
    Panelists: Burton Smith, Allen McPherson, Steve Poole, John Gustafson

    Over the past few years, the "conventional cluster" architecture for general-purpose high-performance computing has consisted of a collection of high-end, high-power microprocessors in small SMP boxes (with disks) connected by a network, either commodity or designed specifically as a cluster interconnect. These shared the HPC space with vector machines and large shared-memory systems. Recently, there has been a consensus that space, power, reliability, communication latency, and manageability are among the issues that constrain these systems. Some of the system characteristics perceived to address these problems, and on which emerging systems are based, include special-purpose hardware, co-processors, processors that optimize a computation-versus-power function, multi-core chips, multi-threading, and vector/streaming processors. We thus appear to be entering an era in which architectural designs may be diverging. The panelists, and the audience, are invited to discuss emerging directions in high-end architectures and their implications for the user communities of these systems.
  • 12:30 – 2:00 - Lunch and Closing event - everyone is welcome


The Symposium proceedings will be distributed on CDROM and will contain material from all three days of the meeting; authors may provide additional background and supplementary information. Contributors will retain copyright to their materials, but must agree to inclusion in the proceedings and posting on the symposium web site after the event.

A few months after the Symposium, selected papers from the research presentations and workshops will be invited to appear in a special issue of The Journal of Supercomputing.

Rob Fowler, Program Committee Chair
