Gaussian 16 is a state-of-the-art software suite which performs ab-initio electronic structure calculations within a Gaussian basis. Here is a link that shows a summary of Gaussian 16 features and here is a link to the Gaussian 16 release notes.

W&M has purchased both the serial and fully-parallel versions of Gaussian as well as GaussView for use on the W&M HPC cluster. Currently, Gaussian 16 and GaussView are installed only on the bora, hima and vortex sub-clusters. Please email us if you need it to be installed on other sub-clusters. The Gaussian 16 site license specifically states that users must have their primary affiliation with the institution named in the license (W&M). Therefore, external collaborators will not have access to Gaussian or GaussView.

Preparing to use Gaussian/GaussView on the HPC cluster

Users need to load the gaussian/g16 module to use Gaussian 16 and/or GaussView. This can be done by putting this line in your Torque batch script or in your start-up script for the vortex sub-cluster:

module load gaussian/g16

There are a few ways to run Gaussian 16 on the cluster: serial (only one computing core), shared-memory parallel (using cores in parallel on one node), distributed-memory parallel (using cores on multiple nodes), or shared-memory/distributed hybrid (multiple cores on multiple nodes, where Gaussian's parallel execution environment, Linda, is used for communication between nodes but shared memory is used within each node).

Here is a Torque batch script for serial Gaussian 16 jobs:

#!/bin/tcsh
#PBS -N GaussianSerial
#PBS -l nodes=1:vortex:ppn=1
#PBS -l walltime=0:60:00
#PBS -j oe

cd $PBS_O_WORKDIR
module load gaussian/g16
g16 < test.com > test.out

Shared-memory parallel

This is an example of shared-memory parallel. It is the same as the serial script except that 1) multiple cores are specified (ppn=12) and 2) the extra '-p=<n>' option is given, where <n> is the number of cores to use:

#!/bin/tcsh
#PBS -N GaussianSMParallel
#PBS -l nodes=1:vortex:ppn=12
#PBS -l walltime=0:60:00
#PBS -j oe

cd $PBS_O_WORKDIR
module load gaussian/g16
g16 -p=12 < test.com > test.out

Distributed-memory parallel

Here is an example script for running a distributed-memory parallel Gaussian 16 job. The main differences from the serial and shared-memory scripts are that 1) two nodes are requested, each using 12 cores, 2) the GAUSS_WDEF variable is used, and 3) the getlinda script is run with an argument of '1' to indicate that distributed parallel is to be used for communication between all cores:

#!/bin/tcsh
#PBS -N GaussianDMParallel
#PBS -l nodes=2:vortex:ppn=12
#PBS -l walltime=0:60:00
#PBS -j oe

cd $PBS_O_WORKDIR
module load gaussian/g16
setenv GAUSS_WDEF "`getlinda 1`"
g16 < test.com > test.out

Hybrid shared/distributed memory parallel

This final approach combines the last two examples. Here, one distributed-memory process runs on each node and is responsible for communication between nodes, while shared memory is used for communication within each node.
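The batch scripts above redirect a Gaussian input file (test.com) into g16. As a minimal sketch of what such an input file contains, here is a hypothetical example; the Link 0 commands (%NProcShared, %Mem), route line, title, charge/multiplicity line, and geometry are the standard sections of a Gaussian input, but the specific job shown (an HF water optimization) is only an illustration, not part of the cluster documentation:

```text
%NProcShared=12
%Mem=4GB
#P HF/6-31G(d) Opt

Water geometry optimization (hypothetical example input)

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200

```

Note that a blank line must terminate the geometry section. For the shared-memory script, the '-p=12' command-line option should make an explicit %NProcShared line in the input unnecessary, since Gaussian's command-line options correspond to Link 0 commands.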