The first two sections give general information on running MPQC: its command line options and the environmental variables it recognizes. The final sections give specific information on running MPQC in different environments: on shared memory multiprocessors (with SysV IPC or POSIX threads), with MPI, and with the special setups needed for MP2 gradients and MP2-R12 energies.

Command Line Options

MPQC recognizes the following command line options:
-o: Gives the name of the file to which output is written. The default is the console.
-i: Convert a simple input file to an object-oriented input file and write the result to the output; no calculation is done.
-messagegrp: A ParsedKeyVal specification of a MessageGrp object.
-memorygrp: A ParsedKeyVal specification of a MemoryGrp object.
-threadgrp: A ParsedKeyVal specification of a ThreadGrp object.
-integral: A ParsedKeyVal specification of an Integral object.
-l: Set a limit on the number of basis functions.
-W: Set the working directory. The default is the current directory.
-c: Check the input and exit.
-v: Print the version number.
-w: Print the warranty information.
-d: If a Debugger object was given in the input, start the debugger as soon as MPQC starts.
-h: Print a list of options.
-f: The next argument gives the input file name; the default is mpqc.in. This cannot be used if another input file is specified. This option is deprecated, as both input file formats can be read by giving the input file name on the command line without any option flags.
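For example, a typical invocation names the input file directly and redirects output with -o. The file names here are hypothetical, and the option meanings are as listed above:

  # check the input without running a calculation
  mpqc -c h2o.in
  # run the calculation, writing output to h2o.out
  mpqc -o h2o.out h2o.in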
Some MPI environments do not pass the command line to slave programs, but supply it when MPI_Init is called. To make MPQC call MPI_Init with the correct arguments as early as possible, use the configure option --enable-always-use-mpi.
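For example, at build time (a minimal sketch; the source directory and any additional configure options are assumptions, only --enable-always-use-mpi comes from this document):

  cd mpqc
  ./configure --enable-always-use-mpi
  make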
Environmental Variables

MPQC looks at the following environmental variables:

SCLIBDIR
MESSAGEGRP
MEMORYGRP
THREADGRP
INTEGRAL
By default, MPQC tries to find library files first in the lib subdirectory of the installation directory and then in the source code directory. If the library files cannot be found in either place, MPQC must be told their new location with the environmental variable SCLIBDIR.

For example, if you need to run MPQC on a machine that doesn't have the source code distribution in the same place as it was located on the machine where MPQC was compiled, do something like the following on the machine with the source code:
  cd mpqc/lib
  tar cvf ../sclib.tar basis atominfo.kv

Then transfer sclib.tar to the machine on which you want to run MPQC and do something like:

  mkdir ~/sclib
  cd ~/sclib
  tar xvf ../sclib.tar
  setenv SCLIBDIR ~/sclib

The setenv command is specific to the C-shell; you will need to do what is appropriate for your shell.
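For Bourne-compatible shells such as sh or bash, the equivalent would be (a sketch; the path is just the example location used above):

  SCLIBDIR=$HOME/sclib
  export SCLIBDIR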
The other four keywords specify objects. This is done by giving a mini ParsedKeyVal input in a string. The object is anonymous; that is, no keyword is associated with it. Here is an example:

  setenv MESSAGEGRP "<ShmMessageGrp>:(n = 4)"
Shared Memory Multiprocessor with SysV IPC

By default, MPQC will run on only one CPU. To specify more, you can give a ShmMessageGrp object on the command line. The following would run MPQC in four processes:

  mpqc -messagegrp "<ShmMessageGrp>:(n = 4)" input_file

Alternately, the ShmMessageGrp object can be given as an environmental variable:
  setenv MESSAGEGRP "<ShmMessageGrp>:(n = 4)"
  mpqc input_file

If MPQC should unexpectedly die, shared memory segments and semaphores will be left on the machine. These should be promptly cleaned up, or other jobs may be prevented from running successfully. To see if you have any of these resources allocated, use the ipcs command. The output will look something like:
  IPC status from /dev/kmem as of Wed Mar 13 14:42:18 1996
  T        ID     KEY        MODE        OWNER     GROUP
  Message Queues:
  Shared Memory:
  m    288800 0x00000000 --rw-------   cljanss      user
  Semaphores:
  s       390 0x00000000 --ra-------   cljanss      user
  s       391 0x00000000 --ra-------   cljanss      user

To remove the IPC resources used by cljanss in the above example on IRIX, type:
  ipcrm -m 288800
  ipcrm -s 390
  ipcrm -s 391

And on Linux, type:
  ipcrm shm 288800
  ipcrm sem 390
  ipcrm sem 391
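On modern Linux systems, a hedged sh one-liner can remove all of your own leftover segments and semaphores at once. This assumes the default ipcs output format, where the id is in column 2 and the owner in column 3; verify against your system before use:

  # remove this user's shared memory segments, then semaphores (assumed ipcs layout)
  for id in `ipcs -m | awk -v u=$USER '$3 == u {print $2}'`; do ipcrm -m $id; done
  for id in `ipcs -s | awk -v u=$USER '$3 == u {print $2}'`; do ipcrm -s $id; done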
Shared Memory Multiprocessor with POSIX Threads

By default, MPQC will run with only one thread. To specify more, you can give a PthreadThreadGrp object on the command line. MPQC is not parallelized to as large an extent with threads as it is with the more conventional distributed memory model, so you might not get the best performance using this technique. On the other hand, the memory overhead is lower and no interprocess communication is needed. The following would run MPQC in four threads:
  mpqc -threadgrp "<PthreadThreadGrp>:(num_threads = 4)" input_file

Alternately, the PthreadThreadGrp object can be given as an environmental variable:
  setenv THREADGRP "<PthreadThreadGrp>:(num_threads = 4)"
  mpqc input_file
Shared or Distributed Memory Multiprocessor with MPI

An MPIMessageGrp object is used to run with MPI. The number of nodes used is determined by the MPI run-time and is not specified as input data to MPIMessageGrp:
  mpqc -messagegrp "<MPIMessageGrp>:()" input_file

Alternately, the MPIMessageGrp object can be given as an environmental variable:
  setenv MESSAGEGRP "<MPIMessageGrp>:()"
  mpqc input_file

Usually, a special command is needed to start MPI jobs; typically, it is named mpirun.
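For example, with many MPI implementations a four-process job might be launched as follows (the -np flag and exact syntax vary between implementations, so consult your MPI documentation; input_file is as elsewhere in this document):

  mpirun -np 4 mpqc input_file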
Special Notes for MP2 Gradients

The MP2 gradient algorithm uses a MemoryGrp object to access distributed shared memory. The MTMPIMemoryGrp class is the most efficient and reliable implementation of MemoryGrp. It requires a multi-thread-aware MPI implementation, which is still not common. To run MP2 gradients on a machine with POSIX threads and a multi-thread-aware MPI, use:
  mpqc -messagegrp "<MPIMessageGrp>:()" \
       -threadgrp "<PthreadThreadGrp>:()" \
       -memorygrp "<MTMPIMemoryGrp>:()" \
       input_file

or
  setenv MESSAGEGRP "<MPIMessageGrp>:()"
  setenv THREADGRP "<PthreadThreadGrp>:()"
  setenv MEMORYGRP "<MTMPIMemoryGrp>:()"
  mpqc input_file
Special Notes for MP2-R12 energies

Distributed Memory

The MP2-R12 energy algorithm is similar to the MP2 gradient algorithm in that it uses a MemoryGrp object to access distributed memory. Hence MTMPIMemoryGrp is the recommended implementation of MemoryGrp for such computations (see Special Notes for MP2 Gradients).
Disk I/O
In contrast to the MP2 energy and gradient algorithms, the MP2-R12 energy algorithm may have to use disk to store transformed MO integrals if a single pass through the AO integrals is not possible due to insufficient memory. The best option in such a case is to increase the total amount of memory available to the computation, by increasing the number of tasks, the amount of memory per task, or both.
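For example, the per-task memory is typically raised through the memory keyword in the object-oriented input (a hypothetical fragment; the object name, keyword placement, and value shown are assumptions, not taken from this document):

  mole<MBPT2_R12>: (
    % bytes of memory available to each task (example value)
    memory = 256000000
  )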
When increasing memory further is not possible, the user has to specify which type of disk I/O should be used for the MP2-R12 energy algorithm. This is done through the r12ints keyword in the input for the MBPT2_R12 object. The default choice is to use POSIX I/O on the node on which task 0 resides. This kind of disk I/O is guaranteed to work on all parallel machines, provided there is enough disk space on the node; however, it is hardly the most efficient approach on machines with some sort of parallel I/O available. On machines that have an efficient implementation of MPI-IO, r12ints should instead be set to mpi-mem. This will force the MBPT2_R12 object to use MPI-IO for disk I/O. It is the user's responsibility to make sure that the MO integrals file resides on an MPI-IO-compatible file system. Hence the r12ints_file keyword, which specifies the name of the MO integrals file, should be set to a location which is guaranteed to work properly with MPI-IO. For example, on IBM SP and other IBM machines that have the General Parallel File System (GPFS), the user should set r12ints = mpi-mem and r12ints_file to a file on a GPFS file system.
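An input fragment illustrating these keywords might look as follows. Only r12ints, r12ints_file, and the mpi-mem value come from this document; the object name, the omitted keywords, and the GPFS path are assumptions for illustration:

  mole<MBPT2_R12>: (
    % ... molecule, basis, and other keywords (assumed) ...
    r12ints = mpi-mem
    % hypothetical path on an MPI-IO-capable (e.g. GPFS) file system
    r12ints_file = "/gpfs/user0/r12ints.dat"
  )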
Integral object

At the moment, MBPT2_R12 objects require a specific specialization of Integral, IntegralCints. Thus, in order to compute MP2-R12 energies, your version of MPQC needs to be compiled with support for IntegralCints. A free, open-source library called libint is a prerequisite for IntegralCints (see Compiling). In order to use IntegralCints as the default Integral object, add -integral "<IntegralCints>:()" to the command line, or set the environmental variable INTEGRAL to "<IntegralCints>:()".
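For example (input_file as in the earlier examples):

  mpqc -integral "<IntegralCints>:()" input_file

or

  setenv INTEGRAL "<IntegralCints>:()"
  mpqc input_file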