General use of the TELEMAC system

This page describes the general use of the TELEMAC system in Geographical Sciences.

The current modules are:

  • TELEMAC-2D, 2D flood inundation
  • SISYPHE, 2D river bed evolution
  • ESTEL-2D, 2D groundwater flow and contaminant transport
  • ESTEL-3D, 3D groundwater flow and contaminant transport

More modules (3D hydrodynamics, waves ...) could be added if necessary. Just ask.

Linux

The TELEMAC system is installed centrally on "dylan", which runs the Linux operating system (CentOS). You will need to log into dylan and use Linux commands to run TELEMAC jobs. It therefore helps to practice a bit in a Linux environment; the Pragmatic Programming course might be a good place for this. Ask the scientific computing officer for pointers if you need some, and get some training if required.

"Note that because the pre- and post-prtocessors run on MS Windows, I personally encourage people to use the Linux shell only to start the TELEMAC jobs and use the MS Windows environment for editing files etc... See note below for a word of warning about MS Windows text editors." JP Renaud 18:25, 18 September 2008 (BST)

Environment set-up

It is very easy to configure the environment to use TELEMAC: you only have to source centrally maintained files. Add the following lines to your .bashrc configuration file, then log out and back in again.

# Location of the TELEMAC system
SYSTEL90=/home/telemac
export SYSTEL90

source $SYSTEL90/intel_env
source $SYSTEL90/config/systel_env

You should then be able to "see" the Fortran compiler and the programs of the TELEMAC system, for instance:

$ which telemac2d
/home/telemac/bin/telemac2d

Note that if you log in to another machine (i.e., not dylan), you might get an error message about "/home/telemac" not existing or a file not being found. This is normal: the location probably does not exist on the other machine. Live with it, or adapt your .bashrc so that the files are sourced only on dylan. For instance, you could use:

# Location of the TELEMAC system
# Only done on dylan
if [ `hostname | grep dylan` ]; then
    SYSTEL90=/home/telemac
    export SYSTEL90

    source $SYSTEL90/intel_env
    source $SYSTEL90/config/systel_env
fi

Test

TELEMAC-2D includes some test cases. Copy one into your filespace and run it:

$ cp -r /home/telemac/telemac2d/tel2d_v5p8/test.gb/hydraulic_jump .
$ cd hydraulic_jump
$ telemac2d cas.txt

If this works, you have a well-configured environment. Now go and do some real work with your own files.

A note about ASCII files

Windows and Linux treat end-of-line characters in ASCII files differently. This means that if you edit a steering file on Windows using MS Notepad, you might have problems running the simulation on Linux. However, it is sometimes cumbersome to use text editors on Linux via an ssh session. The solution is to use a good text editor and configure it to use Unix-style end-of-line characters. SciTE is a very good text editor for the MS Windows environment.
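
If a steering file has already been saved with Windows line endings, you can check and fix it from the Linux shell. This is only a sketch: it assumes the standard file utility and dos2unix are available on dylan (the sed alternative does the same job if dos2unix is not installed). For a file saved by a Windows editor, file should report CRLF line terminators, something like:

$ file cas.txt
cas.txt: ASCII text, with CRLF line terminators

$ dos2unix cas.txt

# Alternative if dos2unix is not installed
$ sed -i 's/\r$//' cas.txt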

Parallel jobs

The TELEMAC system is configured to run in parallel mode if requested by the user. This is very simple to do and highly encouraged if you use large meshes and run long simulations. However, a few extra initial steps are required.

TELEMAC uses MPI for parallel operations. MPI requires a secret word in a hidden configuration file. Simply type the following instructions to create it. Note that "somethingsecret" below should contain no spaces.

$ cd
$ touch .mpd.conf
$ chmod 600 .mpd.conf
$ echo "MPD_SECRETWORD=somethingsecret " > .mpd.conf

Run the software once in scalar mode to get an idea of the job duration, for instance:

$ cp -r /home/telemac/telemac2d/tel2d_v5p8/test.gb/cavity .
$ cd cavity/
$ telemac2d cas.txt

The example above should run in about 55s on dylan. Now edit cas.txt so that the line about the number of processors looks like:

PARALLEL PROCESSORS = 8

Note that dylan has 8 cores, so the system is configured to run with a maximum of 8 processors.

Put "0" to run in scalar mode. "1" runs in parallel mode but with one processor only, so "0" and "1" should give the same results despite using different libraries.

Before you can run TELEMAC in parallel, you need to start the MPI daemon. Note that this needs to be done once per login, not for each job.

$ mpd &

You can now run telemac2d again:

$ telemac2d cas.txt

It should run again, faster this time: maybe 30 seconds or so instead of 55 seconds. It is not a lot faster (certainly not 8 times faster!), but that is because this is a silly example and splitting the mesh into 8 subdomains accounts for a large part of the computation time. With bigger meshes and longer simulations, you should get a better speed-up.

Before you log out, it is a good idea to kill the MPI daemon:

$ mpdallexit

It is also possible to run TELEMAC on the University cluster, bluecrystal. This is described on another page (not finished yet but will be done soon).

Changing between versions of the TELEMAC system

The basic configuration allows the user to switch transparently between versions of the TELEMAC system via the commands v5p8 and v5p9:

$ v5p9
Switched to TELEMAC version: v5p9

Version v5p9 is under development, though, and you are encouraged not to use it! A developer might ask you to test something under version v5p9; these commands merely make that easier.

A note about binary files

TELEMAC was traditionally run on large Unix machines, which store binary data differently from the PCs used today: they are "big endian" systems, whereas most PCs are "little endian" machines. By convention, TELEMAC uses files in the big endian format. Luckily, the pre- and post-processors running on the PCs can read and write big endian files.

As for the TELEMAC code itself, the Intel compiler is very handy because the big or little endian type can be changed without having to recompile the whole code. This is done with the environment variable F_UFMTENDIAN, which defaults to "big" (see /home/telemac/intel_env). It can be changed for particular applications, but you should normally not have to do anything.
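
For illustration only, here is how the setting could be inspected and overridden for a single shell session; this is a sketch, and the value set centrally by intel_env is the one you should normally leave alone:

# Check the current setting (normally "big", set by /home/telemac/intel_env)
$ echo $F_UFMTENDIAN
big

# Override for this shell only, e.g. to read a little endian file for one run
$ export F_UFMTENDIAN=little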