How to run ESTEL in parallel on a cluster

Revision as of 11:16, 11 September 2007


This article describes how to run parallel jobs in ESTEL on HPC clusters.

Beowulf clusters are dedicated high-performance facilities, such as Blue Crystal. To run ESTEL on a network of workstations instead, see the article about networks of workstations.

Prerequisites

  • TELEMAC system installed and configured for MPI.
  • PBS queuing system available on the cluster.
  • PATH set for the Fortran compiler...

Submitting a job

Once the setup is done, submitting a job is straightforward. A script in the /path/to/systel90/bin/ directory submits a TELEMAC job to the PBS queue:

$ qsub-telemac jobname nbnodes walltime code case

where:

  • jobname: a name identifying the job in the queue
  • nbnodes: the number of processors to run on
  • walltime: the maximum wall clock time allowed, as hh:mm:ss
  • code: the TELEMAC code to run (e.g. estel3d)
  • case: the name of the case file for the simulation
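The article does not show the contents of qsub-telemac itself, but a wrapper of this kind could plausibly build a small PBS script from its five arguments and hand it to qsub. The sketch below is purely illustrative: the variable values, PBS directives, and generated file name are assumptions, not the actual script.

```shell
#!/bin/sh
# Illustrative sketch of what a wrapper like qsub-telemac might do.
# All names and directives below are assumptions for demonstration.
jobname=test          # jobname argument
nbnodes=12            # nbnodes argument
walltime=10:00:00     # walltime argument
code=estel3d          # code argument
casefile=cas          # case argument ("casefile" avoids the shell keyword)

# Generate a minimal PBS script carrying the matching directives.
cat > "$jobname.pbs" <<EOF
#PBS -N $jobname
#PBS -l nodes=$nbnodes
#PBS -l walltime=$walltime
cd \$PBS_O_WORKDIR
$code $casefile
EOF

# The real wrapper would now submit it, e.g.: qsub "$jobname.pbs"
echo "generated $jobname.pbs"
```

Note that `\$PBS_O_WORKDIR` is escaped so PBS, not the generating shell, expands it at run time.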

For instance, for ESTEL-3D one could use:

$ qsub-telemac test 12 10:00:00 estel3d cas

This would submit a job on 12 processors with a walltime of 10 hours to run a case named "cas" with ESTEL-3D.
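Once submitted, the job can be watched with the standard PBS commands (these are generic PBS tools, not part of TELEMAC; replace <jobid> with the identifier qsub prints):

```shell
qstat -u $USER        # list your jobs and their states (Q queued, R running)
qstat -f <jobid>      # full details for a single job
qdel <jobid>          # remove a job from the queue if needed
```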

Note that