- Synopsis
- News
- Basic goals
- Background
- Basic Design
- Building and Installing
- Using MGS 4th Generation
- Contributing
- Personal Notes
This library computes the Mitchell Gravity Set. This is written in C++17, with the API exposed as C calls.
This project also includes the “client” code which is based on the Qt framework. Qt is indeed awesome, but its API is still very much of the C++ of the 90s, so I have to painfully defer to it in many places. On the other hand, it packs a wallop of power that I would otherwise have to implement by hand, so I’ll live. :)
This is a video snippet of the configuration screen for MGS 4th Generation: https://youtu.be/U-3g_VLK1kU
The planetarium show at https://www.planetarium.berlin/veranstaltungen/chaos-and-order uses the Mitchell Gravity Set (2D, softology).
We want this to make use of however many cores are available. Since the algorithm is highly parallelizable, this should not be a problem.
We wish to allow human interaction with MGS, and to provide a means of showing a progressive build-up while the computation is happening. As such, we want callbacks to fire each time the next level of detail is completed.
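As a rough sketch of what such a hook might look like on the C-callable side (the names `MgsProgressFn` and `mgs_set_progress_callback` are invented here for illustration, not the actual API):

```cpp
// Sketch only: a possible shape for the progressive-detail callback.
// MgsProgressFn and mgs_set_progress_callback are hypothetical names.
#include <cstdint>

extern "C" {
    // Invoked each time the next level of detail finishes; 'user_data' is
    // whatever the client (e.g. the Qt front end) registered alongside it.
    typedef void (*MgsProgressFn)(std::int32_t level_completed, void* user_data);

    void mgs_set_progress_callback(MgsProgressFn fn, void* user_data);
}
```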
In theory, it should be straightforward to allow distributed computation of the Gravity Set. This could be accomplished simply by instance partitioning: different ranges are delegated to different instances (machines), with all the results sent back to the rendering instance.
The main bottleneck will be with the potentially massive amounts of data that would be generated and sent across the wire.
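As a minimal sketch of the partitioning idea (illustrative only, not the project's code), the flat mesh index range could be split into contiguous chunks, one per core or per machine, with each worker computing its chunk independently:

```cpp
// Illustrative only: split [0, total) into contiguous index ranges, one per worker.
#include <cstddef>
#include <utility>
#include <vector>

std::vector<std::pair<std::size_t, std::size_t>>
partition_indices(std::size_t total, std::size_t workers) {
    std::vector<std::pair<std::size_t, std::size_t>> ranges;
    std::size_t chunk = total / workers;
    std::size_t extra = total % workers;   // spread the remainder over the first ranges
    std::size_t begin = 0;
    for (std::size_t w = 0; w < workers; ++w) {
        std::size_t end = begin + chunk + (w < extra ? 1 : 0);
        ranges.emplace_back(begin, end);   // each instance computes [begin, end)
        begin = end;
    }
    return ranges;
}
```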
Please see:
https://mitchell-gravity-set.quora.com/
for the mathematical background behind MGS.
I keep coming back to MGS over the decades because I find it intriguing. It creates fractals in a class of its own, like no other, and my interest in fractals is more mathematical curiosity than anything else. Alas, many of the so-called “fractal” projects seem to be more about the art aspect and less about the math.
MGS 4th Generation will be the first time I am doing this in 3D, and today’s multicore hardware makes this an intriguing possibility, to say nothing of the ultra-powerful graphics we now take for granted.
The “art” aspect is not lost on me here, but this is NOT “art for art’s sake”, with no disrespect intended to Sweet! :)
The original MGS was implemented as a 2D gravity field. At the time, the available computing power was limited, and generating a 3D field would have required far more time.
Today, multicore CPUs are common, and so I’ve decided to implement this solely as 3D. We are entering new territory here, as I do not know what to expect. But it makes more sense anyway, since real gravity operates in 3D space, in the classical Newtonian sense.
Besides, today we have very impressive 3D fractals rendered by others, and I am not willing to be “left out” of the fun! This will also be a serious chance to make use of the OpenGL library for rendering the results.
We are going to leverage the awesome power of C++17 to make this a reality. In short, we want to define types in a way that gives us strong typing, reminiscent of Rust, but without all the strict protection against “data races”, which we don’t care much about here. We need to be able to run on multiple cores to increase the speed of rendering the MGS, especially in 3 dimensions, as well as to deal with pipeline issues with the GPU (for display).
Basic structures (a rough sketch of these types follows the list):
- Scalar
- This could be int, float, double. It will overload basic operations to allow us to be a bit agnostic on the “primitive” numerical types.
- Coords
- Holds the x, y, z… coordinates, as either float or double.
- Basic computing elements
- Position
- derived from Coords
- Velocity
- derived from Coords
- Acceleration
- derived from Coords
- Star
- Index
- This will provide the i,j,k… indexing, and the flexibility to be dimension-agnostic.
- Space
- This is the “mesh”, internally implemented as a vector, but addressable with either Index or Coord, with an iterator too.
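Here is a rough C++17 sketch of how these pieces might hang together; the member names and signatures are guesses for illustration, not the committed design:

```cpp
// Sketch only: how the strongly-typed building blocks might look in C++17.
#include <array>
#include <cstddef>
#include <vector>

template <typename Scalar>
struct Coords {
    std::array<Scalar, 3> c{};                 // x, y, z
    Coords operator+(const Coords& o) const {
        return {{c[0] + o.c[0], c[1] + o.c[1], c[2] + o.c[2]}};
    }
};

// Distinct types keep positions, velocities and accelerations from mixing by accident.
template <typename S> struct Position     : Coords<S> {};
template <typename S> struct Velocity     : Coords<S> {};
template <typename S> struct Acceleration : Coords<S> {};

struct Index { std::size_t i, j, k; };          // dimension-agnostic in the real design

template <typename Cell>
class Space {                                   // the "mesh": a flat vector addressed by Index
public:
    Space(std::size_t nx, std::size_t ny, std::size_t nz)
        : nx_(nx), ny_(ny), nz_(nz), cells_(nx * ny * nz) {}
    Cell& operator[](Index ix) { return cells_[(ix.k * ny_ + ix.j) * nx_ + ix.i]; }
    auto begin() { return cells_.begin(); }     // iterate over the whole mesh
    auto end()   { return cells_.end(); }
private:
    std::size_t nx_, ny_, nz_;
    std::vector<Cell> cells_;
};
```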
There is a TODO here because the build instructions are a “work in progress”. For instance, I do not yet mention all the many dependencies, especially with the Qt libraries. I am currently using the latest, which may be ahead of what is available in the distro.
However, you can figure out the dependencies from the CMake files. Just keep in mind that everything, including the build, is still in flux until that TODO disappears.
We rely on Qt 5.10 for its Data Visualization module, which, sadly, was removed from open source in 5.11 and later.
- Download qt
wget https://download.qt.io/archive/qt/5.10/5.10.1/qt-opensource-linux-x64-5.10.1.run
- Change the permissions of the .run file and run it.
chmod a+x qt-opensource-linux-x64-5.10.1.run
./qt-opensource-linux-x64-5.10.1.run
- Install it to the opt directory. You should now have a /opt/Qt5.10.1 directory when all is said and done.
- Install additional packages:
apt install libfontconfig1 mesa-common-dev libglu1-mesa-dev
We use ninja instead of make:
mkdir build
cd build
cmake -GNinja .. && ninja -k3 -j8
If you wish to use make instead:
mkdir build
cd build
cmake .. && make -k -j8
You may leave off both the “-k” and the “-j8” parameters. If you use “-j”, adjust the number to match the number of cores on your computer. For instance, if you have 4 cores:
cmake .. && make -k -j4
The location of the GL headers has shifted. On 18.04, they are located in /usr/include/libdrm; on 18.10, they are located in /usr/include. To remedy this problem, as root:
cd /usr/include/libdrm
ln -s ../GL .
Recompile and it should all work.
These notes are basically for myself, having to do with building, installing and the like, so they are not “official”. When this project is all said and done, I will write formal documentation on installing and running MGS. I do not promise to keep these Personal Notes up to date, and they will most likely be deleted once the project is complete.
To cmake for debugging:
cmake -DCMAKE_BUILD_TYPE=Debug .
For release:
cmake -DCMAKE_BUILD_TYPE=Release .
Using these two (Boost and HPX) seems like massive overkill (they are both large, and all I need is parallel support!), so I will experiment with them for a time, but try to nuke them when it comes to distribution.
Or, I may not wait that long. I will attempt to implement a multithreading approach without HPX.
Building Boost:
cd $BOOST
./bootstrap.sh --prefix=<where to install boost>
./b2 -j<N> --build-type=complete
./b2 install
This is just for my environment, capturing the suggestions printed by the successful Boost build. Since I’ve also installed this beast onto my system, I will most likely not be using this unless I run into a snag. But what snag could I possibly run into? Boost has been around forever!
The following directory should be added to compiler include paths:
/development/cpp_proj/third/boost
The following directory should be added to linker library paths:
/development/cpp_proj/third/boost/stage/lib
Some notes on the installation of HPX. From: https://stellar-group.github.io/hpx/docs/html/hpx/manual/build_system/building_hpx/build_recipes.html#hpx.manual.build_system.building_hpx.build_recipes.unix_installation
Create a build directory. HPX requires an out-of-tree build. This means you will be unable to run CMake in the HPX source tree.
cd hpx
mkdir my_hpx_build
cd my_hpx_build
Invoke CMake from your build directory, pointing the CMake driver to the root of your HPX source tree.
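A minimal example of that invocation (the Boost and install paths are placeholders for my environment; the linked recipe lists the other variables that may be needed). Since my_hpx_build sits directly inside the HPX source tree, “..” is the source root:
cmake -DBOOST_ROOT=/path/to/boost -DCMAKE_INSTALL_PREFIX=/where/to/install/hpx ..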
Currently, I am using gcc, but I will eventually switch over to clang, especially for the direct tie-in to LLVM, which will make it easier to move these computations onto GPUs.
One exciting thing about clang is the ability to do optimization at the link level, which combines all the translation units at the IR level. For this project, there will probably be no significant gain, but for the upcoming ZuseNEAT project, that will be a different story.
Eventually I may want to alter the allocator to take advantage of memory locality, squeezing out more performance by using the CPU caches more efficiently.
Due to the sheer size of the compute results, it does not behoove me to keep all of those results in memory at one time. I might need to anyway, but really, they should be computed in a lazy fashion, using marching cubes or marching tetrahedra.
So the compute module remains “pure”, and a separate module is responsible for creating the millions of triangles geared toward direct visualization.
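One possible (purely illustrative) boundary between the two modules: the compute side exposes a pure sampling function, and the mesher walks the grid one cell at a time and emits triangles as they are produced, so the full field never needs to sit in RAM at once. The names below are invented for the sketch:

```cpp
// Sketch only: a possible boundary between the "pure" compute module and the
// triangle-producing module.
#include <array>
#include <cstddef>
#include <functional>

struct Vec3 { float x, y, z; };
using Triangle = std::array<Vec3, 3>;

// Compute side: a pure function from grid indices to the field value there.
using FieldSample = std::function<float(std::size_t i, std::size_t j, std::size_t k)>;

// Visualization side: walk the grid one cell at a time, handing finished
// triangles to 'emit' instead of materialising the whole field first.
void mesh_lazily(std::size_t nx, std::size_t ny, std::size_t nz,
                 const FieldSample& field,
                 const std::function<void(const Triangle&)>& emit) {
    for (std::size_t k = 0; k + 1 < nz; ++k)
        for (std::size_t j = 0; j + 1 < ny; ++j)
            for (std::size_t i = 0; i + 1 < nx; ++i) {
                // Sample the eight corners of this cube; the marching-cubes or
                // marching-tetrahedra table would then call emit(...) for each
                // triangle it produces for this cell.
                float c000 = field(i, j, k);
                (void)c000;
                (void)emit;
            }
}
```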
I have created a map for the marching tetrahedra to help codify how the cube will be dissected.
https://www.youtube.com/watch?v=ffnVCEAcOns
This weekend I want to accomplish the mesh generation from the tetrahedra, using the pipeline approach, of course. Later on, we can consider making this process lazy, and thus save on the sheer amount of RAM necessary. But for now, it is important to achieve some actual visualization at all, so that is where my focus shall lie. Optimize later.