For information about a particular module, see its wiki page.
Raul P. Pelaez 2016
A fast, generic, multiscale CUDA Molecular Dynamics code, organized into modules for expandability and generality. The code is split into separate modules, with a SimulationConfig driver in C++ that can hold many modules in order to construct a simulation. For example, a simulation could combine a VerletNVT integrator module with a PairForces interactor module to create a molecular dynamics simulation, or a Brownian Dynamics with Hydrodynamic Interactions integrator module with a BondedForces module, etc.
There are three types of modules:
1. Integrators
2. Interactors
3. Measurables
Interactors
An Interactor is an abstract entity that can compute the forces acting on each particle due to some interaction. For example, an Interactor could compute the pair Lennard-Jones forces between each particle pair of the system, or sum the forces due to some particles being joined by springs.
Integrators
An Integrator is an entity that can advance the simulation state to the next time step. In order to do so, it can hold any number of Interactors and use them to compute the forces at any time.
Measurables
A Measurable is any computation that has to be performed between simulation steps. It can be any magnitude calculated from the simulation state (positions, forces...). A Measurable can compute the energy, the radial distribution function, or any arbitrary quantity that does not change the simulation state.
Additionally, there are other types of submodules that take care of more particular tasks, such as NeighbourList and Transverser.
These objects are abstract classes that can be derived to create all kinds of functionality and add new physics. Just create a new class that inherits from Interactor, Integrator or Measurable and override the virtual methods with the new functionality. Usually, if your new module is similar to an existing one, you will be able to inherit directly from that module, or use it as part of your own.
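The three abstract roles above can be sketched in a few lines of standalone C++. This is an illustrative mock-up, not UAMMD's actual API: the class and method names (`sumForce`, `forwardTime`, `measure`) are placeholders, and the real base classes live in the module headers.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <memory>
#include <vector>

// Illustrative sketch only: the real UAMMD base classes define their own
// method names and signatures.
struct Interactor {  // computes forces due to some interaction
  virtual void sumForce(std::vector<double>& force,
                        const std::vector<double>& pos) = 0;
  virtual ~Interactor() {}
};

struct Measurable {  // observes the simulation state without changing it
  virtual double measure(const std::vector<double>& pos) = 0;
  virtual ~Measurable() {}
};

struct Integrator {  // advances the state one time step
  std::vector<std::shared_ptr<Interactor>> interactors;
  void addInteractor(std::shared_ptr<Interactor> i) { interactors.push_back(i); }
  virtual void forwardTime(std::vector<double>& pos,
                           std::vector<double>& force) = 0;
  virtual ~Integrator() {}
};

// Toy Interactor: a harmonic tether pulling each particle toward the origin.
struct Tether : public Interactor {
  void sumForce(std::vector<double>& force,
                const std::vector<double>& pos) override {
    for (std::size_t i = 0; i < pos.size(); ++i) force[i] += -pos[i];
  }
};

// Toy Integrator: explicit Euler on overdamped dynamics, x += dt * F.
struct Euler : public Integrator {
  double dt = 0.1;
  void forwardTime(std::vector<double>& pos,
                   std::vector<double>& force) override {
    std::fill(force.begin(), force.end(), 0.0);  // reset accumulated forces
    for (auto& i : interactors) i->sumForce(force, pos);
    for (std::size_t k = 0; k < pos.size(); ++k) pos[k] += dt * force[k];
  }
};
```

The key design point carries over to the real code: the Integrator never knows which physics it is integrating; it only asks its list of Interactors for forces.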
Finally there is a Driver that puts them all together and controls the flow of the simulation.
The simulation construction is performed in Driver/SimulationConfig.cpp, where the integrator, interactors and measurables are created and the initial conditions and parameters are set. This is the "input" of UAMMD.
## Currently Implemented
Interactors:
1. Pair Forces: Implements a Morton-hash-sorted neighbour cell list construction algorithm to evaluate pair forces given some short-range potential function (e.g. LJ). Ultra fast.
2. Bonded Forces: Allows joining pairs of particles via springs (instructions in BondedForces.h).
3. Three-body angle bonded forces: Allows joining triples of particles via angular springs (instructions in BondedForces.h).
4. NBody Forces: All particles interact with every other particle via some potential.
5. External Forces: A custom force function that is applied to each particle individually.
6. Pair Forces DPD: A thermostat that uses the Pair Forces module to compute the interactions between particles as given by dissipative particle dynamics.
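The Morton hash mentioned for Pair Forces interleaves the bits of the three cell coordinates into a single Z-order index, so that sorting particles by this key groups spatially nearby cells together in memory. A standalone sketch of the bit interleaving (illustrative names; the actual kernel lives in the PairForces sources):

```cpp
#include <cassert>
#include <cstdint>

// Spread the lowest 10 bits of v so consecutive bits land 3 positions apart.
// This is the standard 3D Morton-code bit-expansion trick.
static inline uint32_t expandBits(uint32_t v) {
  v = (v * 0x00010001u) & 0xFF0000FFu;
  v = (v * 0x00000101u) & 0x0F00F00Fu;
  v = (v * 0x00000011u) & 0xC30C30C3u;
  v = (v * 0x00000005u) & 0x49249249u;
  return v;
}

// Interleave the bits of the cell coordinates (cx, cy, cz) into one key.
static inline uint32_t mortonHash(uint32_t cx, uint32_t cy, uint32_t cz) {
  return expandBits(cx) | (expandBits(cy) << 1) | (expandBits(cz) << 2);
}
```

Sorting particle indices by this key (e.g. with a GPU radix sort from CUB) is what makes the cell list construction fast and cache friendly.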
Integrators:
1. Two-step velocity Verlet NVE
2. Two-step velocity Verlet NVT with BBK thermostat
3. Euler-Maruyama Brownian dynamics
4. Euler-Maruyama Brownian dynamics with hydrodynamic interactions via Rotne-Prager-Yamakawa (the Brownian noise can be obtained in several ways: Cholesky or Lanczos)
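For reference, the Euler-Maruyama update used in Brownian dynamics is x(t+dt) = x(t) + M·F·dt + sqrt(2·kT·M·dt)·ξ with ξ ~ N(0,1). The sketch below uses a scalar (free-draining) mobility M and illustrative names; the hydrodynamic variant replaces M by the Rotne-Prager-Yamakawa mobility matrix, whose square root times noise is what Cholesky or Lanczos computes.

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

// One Euler-Maruyama step for overdamped Brownian dynamics (sketch).
// M  : scalar mobility (illustrative; BDHI uses a matrix)
// kT : thermal energy; kT = 0 disables the noise term
void eulerMaruyamaStep(std::vector<double>& pos,
                       const std::vector<double>& force,
                       double M, double kT, double dt, std::mt19937& rng) {
  std::normal_distribution<double> gauss(0.0, 1.0);
  const double noiseAmp = std::sqrt(2.0 * kT * M * dt);
  for (std::size_t i = 0; i < pos.size(); ++i)
    pos[i] += M * force[i] * dt + noiseAmp * gauss(rng);
}
```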
Measurables:
1. Energy Measurable: Computes the total, potential and kinetic energy and the virial pressure of the system.
You can select between single and double precision in globals/defines.h. Single precision is used by default; you can change to double precision by commenting out "#define SINGLE_PRECISION" and recompiling the entire code. This last step is very important, as failing to do so will result in unexpected behavior.
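The usual idiom behind such a switch looks like the following (a sketch; the actual contents of globals/defines.h may differ): one macro selects the floating-point typedef used throughout the code, which is why everything must be recompiled after changing it.

```cpp
#include <cassert>

// Sketch of a precision switch like the one in globals/defines.h.
#define SINGLE_PRECISION  // comment this line out for double precision

#ifdef SINGLE_PRECISION
typedef float real;   // every module uses "real" instead of float/double
#else
typedef double real;
#endif
```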
## USAGE
If you don't have CUB (Thrust comes bundled with the CUDA installation), clone or download v1.5.2 (see dependencies). The whole CUB repository uses 175 MB, so I advise downloading the v1.5.2 zip only. The Makefile expects to find CUB in /usr/local/cub, but you can change this. CUB doesn't need to be compiled.
Hardcode the configuration (Integrator, Interactor, initial conditions...) in Driver/SimulationConfig.cpp; set the number of particles, the size of the box, dt, etc. there. You can change the integrator at any time during the execution of the simulation; see Driver/SimulationConfig.cpp.
Then compile with make and run. You can use the --device X flag to specify a certain GPU.
You may need to adapt the Makefile to your particular system.
## DEPENDENCIES
Depends on:
1. CUB (v1.5.2 used): https://github.com/NVlabs/cub
2. Thrust (v1.8.2, bundled with CUDA, used): https://github.com/thrust/thrust
3. CUDA 6.5+ (v7.5 used): https://developer.nvidia.com/cuda-downloads
This code makes use of the following CUDA packages:
1. cuRAND
2. cuBLAS
3. cuSolver
## REQUIREMENTS
Needs an NVIDIA GPU with compute capability 2.0+ (sm_20). Needs g++ with full C++11 support; 4.8+ recommended.
## TESTED ON
- GTX980 (sm_52) on Ubuntu 14.04 with CUDA 7.5 and g++ 4.8
- GTX980 (sm_52) on Ubuntu 16.04 with CUDA 7.5 and g++ 5.3.1
- GTX980 (sm_52), GTX780 (sm_35), GTX480 (sm_20) and GTX580 (sm_20) on CentOS 6.5 with CUDA 7.5 and g++ 4.8
- GTX1080 (sm_61) and Tesla P100 (sm_60) on CentOS 6.5 with CUDA 8.0 and g++ 4.8
## BENCHMARK
Current benchmark: GTX980, CUDA 7.5
- N = 2^20
- L = 128
- dt = 0.001f
- 1e4 steps
- PairForces with rcut = 2.5 and no energy measure
- VerletNVT, no writing to disk, T = 0.1
- Starting in a cubic lattice

#### HIGHSCORE

    Number of cells: 51 51 51; Total cells: 132651
    Initializing...
    DONE!!
    Initialization time: 0.15172s
    Computing step: 10000
    Mean step time: 127.33 FPS
    Total time: 78.535s

    real  1m19.039s
    user  0m53.772s
    sys   0m25.212s
## NOTES FOR DEVELOPERS
The procedure to implement a new module is the following:
1. Create a new class that inherits from one of the parents (Interactor, Integrator, Measurable...) and overload the virtual methods. It is advised to create 4 files: a header and declarations for the CPU side, and a header and declarations for the GPU callers and kernels. But you can code and compile it any way you want, as long as the virtual methods are overloaded.
2. Include the new CPU header in Driver/Driver.h
3. Add the new sources in the Makefile.
4. Initialize them as needed in Driver/SimulationConfig.cpp, as in the examples.
Keep in mind that the compilation process is separate for CPU and GPU code, so any GPU code in a .cpp file will cause a compilation error. See any of the available modules for a guideline.
globals/globals.h contains the definitions of some variables that are available throughout the entire CPU side of the project. These are mainly parameters. It also contains the position, force and (optional) velocity arrays.
When creating a new module (Interactor or Integrator), for interoperability with the existing modules the code expects you to use the variables from globals when available: things like the number of particles, the temperature or, more importantly, the Vectors storing the positions, forces and velocities of each particle (again, when needed). These Vectors start with zero size and are initialized in Driver.cpp. However, your code should check the size of the arrays at startup with Vector::size() and initialize them if the size doesn't match the number of particles (i.e. is 0).
Currently the code initializes the pos and force Vectors in Driver.cpp, after the parameters are set. Vel should be initialized in the constructor of any module that needs it; see VerletNVT for an example.
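The startup check described above can be sketched as follows. `Vector` here is just a stand-in for UAMMD's GPU-aware container, and `ensureInitialized` is a hypothetical helper, not a real UAMMD function:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// "Vector" stands in for UAMMD's container type in this sketch.
template <class T> using Vector = std::vector<T>;

// A module should verify a global array has N elements at startup and
// initialize it if it is still zero-sized (as VerletNVT does for vel).
void ensureInitialized(Vector<double>& vel, std::size_t N) {
  if (vel.size() != N)   // e.g. still size 0 before any module touched it
    vel.assign(N, 0.0);  // allocate and zero-fill
}
```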
Guidelines
Each module should have its own namespace, or adhere to an existing one, in order to avoid naming conflicts. This also allows naming functions and parameters in a more human-readable way.
If you want to make small changes to an existing module without modifying it, you should create a new module that inherits from it and overload the necessary functions.
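Both guidelines together look like this in miniature. The class and method names are invented for illustration; they are not UAMMD's real API:

```cpp
#include <cassert>

// Illustrative stand-in for an existing module, in its own namespace.
namespace verlet_nvt {
struct VerletNVT {
  virtual double friction() const { return 1.0; }  // behavior to tweak
  virtual ~VerletNVT() {}
};
}

// A new module gets its own namespace, inherits the existing module,
// and overloads only the function it needs to change.
namespace my_module {
struct StrongFrictionNVT : public verlet_nvt::VerletNVT {
  double friction() const override { return 5.0; }
};
}
```

The rest of the code can keep using the base-class interface and will pick up the overridden behavior through the virtual call.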
## ACKNOWLEDGMENTS
UAMMD was developed at the Departamento de Física Teórica de la Materia Condensada of Universidad Autónoma de Madrid (UAM) under the supervision of Rafael Delgado-Buscalioni. Acknowledgment is made to the Donors of the American Chemical Society Petroleum Research Fund (PRF# 54312-ND9) for support of this research, and to Spanish MINECO projects FIS2013-47350-C05-1-R and FIS2013-50510-EXP.
Acknowledgment is made to NVIDIA Corporation for support of this research.
## MODULE INDEX

Interactors:
1. PairForces
2. NbodyForces
3. ExternalForces
4. BondedForces
5. AngularBondedForces
6. TorsionalBondedForces
7. Poisson (Electrostatics)

Integrators:
- MD (Molecular Dynamics)
  1. VerletNVT
  2. VerletNVE
- BD (Brownian Dynamics)
- BDHI (Brownian Dynamics with Hydrodynamic Interactions)
  1. EulerMaruyama
     - BDHI_Cholesky: Brownian displacements through Cholesky factorization.
     - BDHI_Lanczos: Brownian displacements through the Lanczos algorithm.
     - BDHI_PSE: Positively Split Ewald.
     - BDHI_FCM: Force Coupling Method.
- DPD (Dissipative Particle Dynamics)
- SPH (Smoothed Particle Hydrodynamics)
- Hydrodynamics
  1. ICM (Inertial Coupling Method)
  2. FIB (Fluctuating Immersed Boundary)
  3. Quasi2D (quasi-2D hydrodynamics)

Neighbour Lists

Programming Tools:
1. Transverser
2. Functor
3. Potential

Core classes:
1. Particle Data
2. Particle Group
3. System
4. Parameter updatable

Utils:
1. Tabulated Function
2. Postprocessing tools
3. InputFile
4. Tests
5. Allocator
6. Temporary memory
7. Immersed Boundary (IBM)

Other:
1. NBody
2. Neighbour Lists
3. Python wrappers

Superpunto