Grid Requirements and Mapbc Format for USM3D-ME
USM3D-ME is compatible with grids containing tetrahedral, prismatic, pyramidal, and hexahedral elements. The solver requires the grid to be in aflr3 format, written with stream I/O in double precision. Additionally, the mapbc file requires conversion from the aflr3 format to the format shown below:
# patch no.  bc     family  surfs  surfids  name
1            4      4       0      0        FUSE
2            4      4       0      0        CANARD
3            11002  11002   0      0        INLET
4            10102  10102   0      0        NOZZLE
5            4      4       0      0        WING
6            4      4       0      0        VERT
7            4      4       0      0        HORIZ
8            4      4       0      0        NAC
9            3      3       0      0        FARFIELD
10           1      1       0      0        SYMMETRY
This conversion can be performed by hand for grids containing only a few boundaries. However, the process becomes tedious for problems with a large number of boundaries specified in the mapbc file. For those problems, a utility is provided (add link to utility) to perform the conversion from aflr3 to the USM3D-ME mapbc format; a minimal scripted sketch is also given below.
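For reference, the sketch below shows one way to script the conversion. It assumes the aflr3 mapbc file consists of a one-line patch count followed by one line per patch giving the patch number, the boundary condition id, and the patch name, and that the family column duplicates the bc column as in the example above; the input and output file names are placeholders. Verify the layout of your own mapbc file before relying on anything like this.

awk 'BEGIN { print "# patch no.  bc  family  surfs  surfids  name" }
     NR > 1 { printf "%-10d %-7d %-7d 0     0       %s\n", $1, $2, $2, $3 }' \
    aflr3_style.mapbc > proj_name.mapbc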
Executing USM3D-ME
To execute USM3D-ME, the executable should be called with a single argument (project name), as shown below.
usm3dme.xxx proj_name
Note that the run directory should contain all necessary files, including proj_name.b8.ugrid, proj_name.mapbc, proj_name.inpt, and any supporting files for the specific run (see LINK to INPUT file and MAPBC files).
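Since USM3D-ME is built against an MPI library (see the compilation instructions below), parallel runs are typically launched through an MPI startup command. The launcher name and process count below are illustrative only and depend on your MPI installation and system:

mpiexec -np 64 usm3dme.xxx proj_name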
Compiling USM3D-ME
To compile USM3D-ME, several prerequisites are required:
1) Fortran compiler – we have used Intel Fortran (ifort) and GNU Fortran (gfortran)
2) MPI library – we have used OpenMPI (https://www.open-mpi.org) and HPE MPI / SGI MPT
3) ParMetis 4.0.3, available at http://glaros.dtc.umn.edu/gkhome/metis/parmetis/download
4) TecIO library, distributed with Tecplot or available at https://www.tecplot.com/products/tecio-library/
Building ParMetis and TecIO requires CMake to be installed on the system. CMake is available at https://cmake.org/download/ for various operating systems. In addition, building TecIO requires the Boost C++ library (versions 1.69-1.74 are acceptable). Boost is available at https://www.boost.org or on SourceForge. These instructions assume you are setting up to compile and run USM3D-ME on a stock system configured for basic build and compile work (i.e., "make" and "cc" are already available). You can skip over any steps that are already complete on your system.
STEP 1: Build/install a Fortran compiler. Below we will assume you have chosen gfortran.
STEP 2: Build/install an MPI library. Here we will assume you are using OpenMPI and want it installed in "/usr/local/openmpi" on your system. From within the OpenMPI source directory, the proper build commands are:

./configure --prefix=/usr/local/openmpi FC=gfortran
make all install

The "FC" flag specifies which Fortran compiler will be used for the MPI wrapper compilers (mpifort, mpif77, etc.) that greatly simplify compiling MPI programs like USM3D-ME. This also ensures OpenMPI will be built with Fortran support, which is not always the case for standard OpenMPI installations. Note that you may need to invoke "sudo" for the "make all install" command in order to gain super user privileges to install into "/usr/local/openmpi". After installing OpenMPI, be sure to add "/usr/local/openmpi/bin" to your shell path.
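As a quick sanity check (not part of the build proper), you can confirm that the wrapper compilers are on your path and report the expected underlying compiler:

which mpifort
mpifort --version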
STEP 3: Build/install CMake following instructions in the CMake download.
STEP 4: Build/install Boost following instructions in the Boost download.
STEP 5: Build TecIO following instructions in the TecIO source under “teciosrc/readme.txt” (only standard TecIO is needed, not TecIO-MPI).
STEP 6: Build ParMetis following instructions in the “BUILD.txt” file that comes with ParMetis. IMPORTANT: prior to doing this, edit the metis header file in the ParMetis distribution at “metis/include/metis.h” to change the values of IDXTYPEWIDTH and REALTYPEWIDTH from 32 to 64.
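If you prefer to script this header edit, something along the following lines works with GNU sed; the define names come from the stock metis.h, but check your copy first and adjust the path to match your ParMetis source tree:

sed -i 's/#define IDXTYPEWIDTH 32/#define IDXTYPEWIDTH 64/' metis/include/metis.h
sed -i 's/#define REALTYPEWIDTH 32/#define REALTYPEWIDTH 64/' metis/include/metis.h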
STEP 7: Locate or gather the libraries you just built in order to compile USM3D-ME. To compile, we need to link the following static libraries:

libtecio.a (from TecIO)
libmetis.a (from ParMetis)
libparmetis.a (from ParMetis)

How and where you install these libraries determines how we link to them in the USM3D-ME makefile. For convenience, we will assume you have put them in a "lib" directory inside the same directory containing the USM3D-ME source. However, they can be installed elsewhere on your system as long as you adjust the linking specifications in the USM3D-ME makefile.
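For example, assuming the hypothetical build locations below (yours will differ depending on how you configured TecIO and ParMetis):

mkdir -p lib
cp /path/to/tecio/libtecio.a lib/
cp /path/to/parmetis/build/libparmetis.a lib/
cp /path/to/parmetis/build/libmetis.a lib/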
STEP 8: At this point you should be ready to go. From within the directory containing the USM3D-ME source, simply type:

make toolset=gfortran_openmpi

and compilation will begin. The Fortran wrapper compiler will compile each of the Fortran source files. When complete, all the object files will be linked (along with the libraries above and the MPI libraries) into a USM3D-ME executable called "usm3dme".
Although the instructions above are tailored to gfortran and OpenMPI, the USM3D-ME makefile has other toolset definitions available:
toolset=gfortran_openmpi
toolset=ifort_openmpi
toolset=gfortran_mpt
toolset=ifort_mpt
toolset=custom
If no “toolset=” option is specified with the make command, then compilation will use shell defaults for various compilation flags. Advanced users can edit the makefile to override this behavior, specify a default toolset, edit toolset definitions, or define a custom toolset.
Output Files
All output files generated during a run are summarized in the table below; the individual files and their formats are described following the table.
File name | Purpose | When created? | Update frequency
---|---|---|---
proj_name.urest | Solution restart | Always | Every nwrest iterations and at end of run
tet.out | Input echo, grid statistics, forces and moments with respect to various axis systems and components, CPU time and memory information | Always | Every iteration
proj_name_hist.plt | Convergence monitoring | Always | Every iteration
proj_name_surface_bc.plt or proj_name_surface_bc.szplt | Surface solution, Tecplot format with a single zone for each BC type | ipltqn not equal to 0 | End of run
proj_name_surface_patch.plt or proj_name_surface_patch.szplt | Surface solution, Tecplot format with a single zone for each surface patch | ipltqn not equal to 0 | End of run
proj_name_surface_comp.plt or proj_name_surface_comp.szplt | Surface solution, Tecplot format with a single zone for each component specified in proj_name.fandm | ipltqn not equal to 0 and icompfm = 1 | End of run
proj_name_volume.plt or proj_name_volume.szplt | Volume solution, Tecplot format with a single zone | ipltqn not equal to 0 | End of run
fort.60 | Diagnostics: minimum and maximum values of pressure | idiagnos > 0 | End of run
fort.61 | Diagnostics: minimum and maximum values of density | idiagnos > 0 | End of run
fort.62 | Diagnostics: minimum and maximum values of temperature | idiagnos > 0 | End of run
fort.501 | Diagnostics: maximum vorticity | idiagnos > 1 | End of run
fort.502 | Diagnostics: minimum and maximum values of eddy viscosity (tnu) | idiagnos > 1 | End of run
fort.70 | Diagnostics: maximum residuals for all 6 equations | idiagnos > 1 | End of run
fort.898 | Diagnostics: information related to catastrophic violation of face realizability | idiagnos > 1 | End of run
fort.801 | Diagnostics: RMS of the preconditioner residual during the Gauss-Seidel iteration for the mean flow | idiagnos > 2 | End of run
fort.802 | Diagnostics: RMS of the preconditioner residual during the Gauss-Seidel iteration for the SA model | idiagnos > 2 | End of run
CELLS_HIGH*.dat | Diagnostics: cell information for values designated as high for density, pressure, temperature, Mach number, and eddy viscosity (tnu) | idiagnos > 2 | End of run
CELLS_LOW*.dat | Diagnostics: cell information for values designated as low for density, pressure, temperature, Mach number, and eddy viscosity (tnu) | idiagnos > 2 | End of run
Special Features
Compatibility of proj_name.mapbc and proj_name.inpt files:
At the beginning of a run, the flow solver automatically checks for consistency between the selected flow analysis option (ivisc) and the types of boundary conditions specified for the boundary patches. Therefore, if a user has selected the inviscid flow analysis option (ivisc = 0) but the proj_name.mapbc file contains a patch with a viscous boundary condition (boundary condition type 4), the flow solver detects the inconsistency, terminates the run, and writes the following message to the tet.out file:
write(lw,*)'STOP: ".mapbc" file not compatible'
write(lw,*)'Change appropriate wall BC to inviscid'
The flow solver also alerts the user if it finds that a viscous flow analysis option has been selected (ivisc > 0) but the proj_name.mapbc file contains a patch with an inviscid boundary condition (boundary condition type 5). In this case, however, it does not terminate the run. The warning message, shown below, is written to the tet.out file before the time integration loop begins.
write(lw,*)'WARNING: ".mapbc" file may not be compatible'
write(lw,*)'May have inviscid b.c. when you want viscous'
Monitoring and stopping the run execution:
The flow solver updates the proj_name_hist.plt and tet.out files at every iteration to relay convergence information to the user. It also has a provision to stop a run prematurely, before the number of cycles (ncyc) specified at the beginning of the run has completed. Before stopping a run, the flow solver writes all the relevant output files, including the solution restart file and the flow visualization files. This feature is invoked by creating a file named usm3d.stop containing an entry of "1", as illustrated below. The file must be in the same directory from which the run execution command was given. The flow solver checks for the existence of the file and its "1" entry at every iteration; if the check is positive, it stops the run immediately. This feature is helpful for a case that may be gradually diverging: the user may want to interrupt the run, analyze the flow results, and identify the cause of the convergence difficulty.
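For example, from the run directory:

echo 1 > usm3d.stop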
Writing output files without stopping run execution:
In addition to interrupting a run with the usm3d.stop file, the flow solver also allows intermediate output to be written without stopping the run. This feature is activated by creating a file named usm3d.write containing an entry of "1", as illustrated below. The file must be in the same directory from which the run execution command was given. The flow solver checks for the existence of the file and its "1" entry at every iteration; if the check is positive, it writes the intermediate output.
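As with the stop file, a single command from the run directory suffices:

echo 1 > usm3d.write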