I’m thinking more and more about parallel CUBE and how to make it happen, ideally in a collaborative environment. The most likely partners are the UNH folks, Brian Calder et al., who were instrumental in inventing CUBE and have the knowledge and resources to enhance it. They apparently have a Dell blade server that they would like to couple tightly to network storage to try to get a speedup. That’s a good approach for the ships.

However, what could we do at the branches? If the branches have connectivity to I2 and are connected to some of NOAA’s High Performance Computing nodes, could we farm out computational jobs to a data center in Boulder or Princeton? What services does HPCC offer to the rest of NOAA? We should try to come up with a generic hydro data processing framework that can be deployed in a variety of environments, not just on a single workstation (like CARIS HIPS) or only on a specialized grid cluster (like the one in Boulder). Something along the lines of the sketch below.
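
To make that concrete, here is a minimal sketch of the kind of framework I have in mind. Everything in it is hypothetical (the function names, the tile-based decomposition); the point is just that the pipeline is written against a generic executor interface, so a real deployment could swap in an MPI- or grid-backed executor without touching the pipeline code:

    # Sketch only: a pipeline written against a generic executor interface,
    # so the same code runs on one workstation or fans out to a cluster.
    # All names (apply_correctors, cube_grid, the tiles) are hypothetical.
    from concurrent.futures import Executor, ProcessPoolExecutor

    def apply_correctors(tile):
        """Placeholder: apply tide/SVP correctors to one tile of soundings."""
        return tile

    def cube_grid(tile):
        """Placeholder: run the CUBE estimator over one corrected tile."""
        return tile

    def process_survey(tiles, executor: Executor):
        """Same pipeline regardless of backend: a local process pool today,
        a cluster-backed executor on a NOAA HPC node tomorrow."""
        corrected = executor.map(apply_correctors, tiles)
        return list(executor.map(cube_grid, corrected))

    if __name__ == "__main__":
        tiles = ["tile_%d" % i for i in range(8)]  # stand-ins for survey tiles
        with ProcessPoolExecutor() as pool:        # swap for a cluster executor
            grids = process_survey(tiles, pool)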

What do we need to do to get this project rolling?

  • Demonstrate the need. Show how much time and money is lost applying correctors, recomputing grids, computing n single-resolution grids, converting data, etc.
  • Propose High Performance Computing ("High Performance Bathy Processing"?) as the solution. This could cover a variety of optimization strategies: storage tightly coupled to the compute nodes to speed up I/O, and parallelized pre-processing and CUBE grid computation to utilize the extra processors.
  • Implement some of these (one per paper?) and run benchmarks to demonstrate feasibility (see the benchmark sketch after this list).
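
For the benchmarking bullet, even something as simple as timing the same set of tiles serially and in parallel would make the case. This is only a sketch: cube_grid here is a stand-in that burns time rather than real gridding code.

    # Sketch of a feasibility benchmark: serial vs. parallel wall time over
    # the same tiles. cube_grid is a hypothetical stand-in for the real step.
    import time
    from multiprocessing import Pool

    def cube_grid(tile):
        """Placeholder for CUBE gridding of one tile; simulates the work."""
        time.sleep(0.1)
        return tile

    def benchmark(tiles, workers):
        """Return wall time to grid all tiles with the given worker count."""
        start = time.perf_counter()
        if workers == 1:
            for t in tiles:
                cube_grid(t)
        else:
            with Pool(workers) as pool:
                pool.map(cube_grid, tiles)
        return time.perf_counter() - start

    if __name__ == "__main__":
        tiles = list(range(16))
        serial = benchmark(tiles, workers=1)
        parallel = benchmark(tiles, workers=4)
        print("speedup with 4 workers: %.1fx" % (serial / parallel))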

I need to flesh this out more.


