Frequently Asked Questions about MMC

1. MMC reports 0 absorption and the normalizer is NaN, what is going on?
2. Where is MMC's executable located?
3. How can I see the percentage progress of the simulation?
4. Why can't I find a binary for my platform in the download section?
5. I downloaded the binary for Mac OSX, why is the binary named "mmc_sfmt"?
6. How can I generate a tetrahedral mesh for my problem domain?
7. When running a simulation, MMC stalls, what should I do?
8. Any important missing features I need to be aware of?
9. How can I share my changes with the upstream author?
10. Does MMC give erroneous results when simulating refractive index mismatch?
11. How do I interpret MMC's output data?
12. Can MMC output photon partial lengths at the detectors?
13. Will you consider porting MMC to MPI to run on my cluster?
14. Why doesn't using the same random number seed give me reproducible MMC solutions?

1. MMC reports 0 absorption and the normalizer is NaN, what is going on?

This often happens when you fail to set the initial element ID in the input file. An example input file is shown in the Readme. For any new simulation mesh you made, you need to find out which tetrahedron encloses your initial source position (this can be done with the tsearchn command in MATLAB or GNU Octave). Once you have the ID (an integer starting from 1), you need to enter it in the input file, right below the session ID.
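
For example, a minimal MATLAB/Octave sketch, assuming node and elem hold your mesh and srcpos is the 1x3 source position:

  % find the tetrahedron that encloses the source position
  eid = tsearchn(node(:,1:3), elem(:,1:4), srcpos);
  if(isnan(eid))
      error('the source is outside of the mesh');
  end
  % write eid (1-based) into the input file, right below the session ID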

2. Where is MMC's executable located?

After extracting the binary package downloaded from MMC's website, the main MMC executable is located inside the folder named mmc/src/bin.

3. How can I see the percentage progress of the simulation?

In the command line, you can append "-D P"; this prints a progress bar showing the percentage of completion of the simulation.
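
For example (the input file name is illustrative):

  mmc -f input.inp -D P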

4. Why can't I find a binary for my platform in the download section?

The maintainer of the software only has access to a limited number of computer types and platforms, so the prebuilt binaries cannot be exhaustive. Please download the source code package and compile the binary yourself. Recompiling the code is very straightforward: in most cases, you only need a single command, "make clean omp", inside the "mmc/src/" folder.
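
A minimal sketch of the typical steps after unpacking the source package:

  cd mmc/src
  make clean omp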

5. I downloaded the binary for Mac OSX, why is the binary named "mmc_sfmt"?

We are still working on porting all the modules to Mac OSX; currently, only the SFMT19937 RNG is supported on this platform. In fact, mmc_sfmt is slightly faster than the GNU RNG used in the default builds for other platforms, so it is not a bad idea to use mmc_sfmt to run your simulation. If the name is too long, simply renaming mmc_sfmt to mmc is perfectly fine.

6. How can I generate a tetrahedral mesh for my problem domain?

You need to use a 3D mesh generator, and there are many choices. We recommend the iso2mesh toolbox, which is designed for simplicity and generality, and is free.
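
As a hedged sketch, iso2mesh can mesh a simple box domain in a few lines (the parameter values below are illustrative):

  % mesh a 60x60x60 mm box; the last argument caps the element volume
  [node, face, elem] = meshabox([0 0 0], [60 60 60], 10);
  % save the mesh files that mmc expects, here for a session named "mysess"
  savemmcmesh('mysess', node, elem);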

7. When running a simulation, MMC stalls, what should I do?

First of all, please turn on the progress bar using the "-D P" option. If the bar advances smoothly at the beginning and then stops at some point, this may indicate an issue in your input file. Please make sure that you have used the "savemmcmesh()" command to create all the mesh files. If you obtained the mesh files from somewhere else, please make sure all the mesh elements have a consistent orientation (i.e., the 4-point determinant is positive). You can correct this with the "meshreorient" function in the iso2mesh toolbox, as sketched below.
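
A minimal MATLAB/Octave sketch, assuming node and elem hold your mesh:

  % flip any tetrahedra whose 4-point determinant (signed volume) is negative
  [elem(:,1:4), evol] = meshreorient(node(:,1:3), elem(:,1:4));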

8. Any important missing features I need to be aware of?

MMC is a work-in-progress. Please read the Known Issues section for a list of features that are currently missing or only partially tested. We will update this list as time goes on.

9. How can I share my changes with the upstream author?

You need to first create a patch containing your changes, and then send the patch file to the upstream author.

To create a patch, you need to download and install Subversion (svn). Next, download the released source code package, or check out the latest code anonymously from the official svn repository. Make your changes inside the mmc directory structure, then recompile the code and test your patch to make sure it works. When you are happy with your changes, cd to the root folder of the source code tree and type "svn diff > yourname_feature_description.patch". Then email this patch file as an attachment to the upstream author. He will review it and apply it to the official svn repository. Of course, your name will be acknowledged in the AUTHORS.txt file.
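
A minimal sketch of this workflow (replace the placeholder URL with the repository address listed on the website):

  svn checkout <official_mmc_svn_url> mmc   # anonymous checkout
  cd mmc
  # ... edit the source, then rebuild and test ...
  make -C src clean omp
  svn diff > yourname_feature_description.patch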

10. Does MMC give erroneous results when simulating refractive index mismatch?

The short answer is no. MMC has correctly implemented the refractive-index-mismatch calculations (i.e., with the -b 1 option) since its first release (v0.2). People may have come up with this question after reading the posts on this website. The authors of that website made a mistake when comparing MMC with MCML, TIM-OS and CUDAMCML: they did not realize that the error (~2.4%) observed in their simulations was due to specular reflection, a result of different assumptions regarding the initial source position.

In MCML (and MCML-inspired codes, such as CUDAMCML and TIM-OS), the source is considered to be inside the background medium. In TIM-OS, even if one positions the source in the first tetrahedron, TIM-OS still considers the source to be at the outer face of the mesh; thus, the photon weight is dropped by R=((n1-n2)/(n1+n2))^2 right before the photon starts propagating. This is different in MMC (as well as in tMCimg and MCX): we consider the source to be inside the mesh (even if it is on the surface). If you need to account for the specular reflection, you need to add extra tetrahedra (or layers of voxels in MCX) that have the background properties, and start the ray-tracing from there. With this in mind, the 2.4% difference becomes clear: it is simply R=((1-1.37)/(1+1.37))^2.
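
You can verify this number in MATLAB/Octave:

  R = ((1 - 1.37)/(1 + 1.37))^2   % = 0.0244, i.e. the ~2.4% difference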

In MMC v0.8 or later, a new command-line flag, "-V" or "--specular", was added to impose specular reflection when the source is located on the outer surface of the mesh (the flag is off by default). If the source is inside the mesh, "-V" is ignored. With this flag, the results of the simulations comparing MMC with MCML/TIM-OS become almost identical.

11. How do I interpret MMC's output data?

By default, MMC produces the Green's function of the fluence rate (or flux) for the given domain and source. Sometimes it is also known as the time-domain "two-point" function. If you run MMC with the following command

  mmc -f input.inp -s output ....
the flux data will be saved in a file named "output.dat" under the current folder. If you run MMC without "-s output", the output file will be named "input.inp.dat".

To understand this further, you need to know that the flux is measured as the number of particles passing through an infinitesimal spherical surface per unit time at a given location. The unit of the MMC output flux is "1/(mm^2 s)" if the flux is interpreted as the "particle flux", or "J/(mm^2 s)" if it is interpreted as the "energy flux".

The Green's function of the flux simply means the flux produced by a unitary source. In simple terms, it represents the fraction of particles/energy that arrives at a location per second, given 1 unit (packet or J) of particles or energy emitted at time t=0. The Green's function is computed by a process referred to as "normalization" in the MMC code, which is detailed in the MCX paper (MCX and MMC outputs share the same meaning).

Please be aware that the output flux is calculated at each time-window defined in the input file. For example, if you type

 0.e+00 5.e-09 1e-10  # time-gates(s): start, end, step
in the 5th row of the input file, MMC will produce 50 flux distributions, corresponding to the time windows [0, 0.1] ns, [0.1, 0.2] ns, ..., [4.9, 5.0] ns. To convert the flux distributions to fluence distributions for each time window, you just need to multiply each solution by the width of the window, 0.1 ns in this case. To convert the time-domain flux to the continuous-wave (CW) fluence, you need to integrate the flux over t=[0, inf). Assuming the flux after 5 ns is negligible, the CW fluence is simply sum(flux_i*0.1 ns, i=1..50). You can read mmc/examples/validation/plotcuberes.m and mmc/examples/meshtest/plotmmcsph.m for examples of this conversion and comparisons against the analytical fluence solutions.
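
The conversion itself is a one-liner once the data is loaded. A hedged MATLAB/Octave sketch, assuming flux has been loaded as a #nodes-by-50 array (the loading step depends on your output format and is omitted here):

  dt = 1e-10;                      % width of each time gate, in seconds
  fluence_tg = flux*dt;            % fluence within each 0.1 ns time gate
  cw_fluence = sum(flux, 2)*dt;    % CW fluence: sum over all time gates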

12. Can MMC output photon partial lengths at the detectors?

Yes. This is supported in MMC v0.8 and later versions. By default, MMC produces a file named "session_name.mch" for each simulation, along with the flux output "session_name.dat". The suffix ".mch" stands for "Monte Carlo history". The mch output is fully compatible with the mch output of Monte Carlo eXtreme (MCX).

To process the .mch file, you need to use the loadmch.m script provided in the MCX package. An mch file uses a binary data format. Version 1 of the mch format can contain multiple data chunks; each chunk starts with a 64-byte header, and the rest is data. The header specifies the total detected photon count and the record length for each photon. The data section contains all the detected photon records. Each record follows the format

  det_id, scat_events, plength_1, plength_2, ...., additional data ...
where det_id (starting from 1) is the ID of the detector that captured the photon; scat_events is an integer denoting the total number of scattering events that the detected photon experienced; plength_i is the partial path length (in mm) for each medium type. The additional columns can store other statistics of the photon and are reserved for future extension.
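
A hedged MATLAB sketch of reading the records with loadmch.m (the header field name below is an assumption; inspect the returned struct for your version):

  [data, header] = loadmch('session_name.mch');  % loadmch.m ships with MCX
  % column 1: detector ID, column 2: scattering event count,
  % columns 3 onward: partial path lengths (mm) per medium type
  ppath = data(:, 3:2+header.medianum);          % medianum is assumed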

13. Will you consider porting MMC to MPI to run on my cluster?

At this point, no, and it is unlikely in the near future. There are a few reasons why I am not motivated to write an MPI implementation of MMC (although it is completely doable):

1. The parallelization of Monte Carlo photon transport simulations is extremely simple (it is sometimes known as an "embarrassingly parallel" problem). There is almost no need for inter-process communication: each process/thread is almost completely independent of the others as long as it is initialized with a distinct random number seed. The only step that requires communication is merging the solutions from all processes at the end of the simulation (within a process, inter-thread communication is handled automatically by the built-in OpenMP support), and this can easily be done by writing data files and merging them in a post-processing step. Thus, the full-fledged parallel mechanisms provided by MPI are overkill.

2. On the downside, using MPI libraries adds a dependency to the code. This not only creates overhead and reduces flexibility, but also significantly limits the portability of the code.

3. There are many excellent parallel job management mechanisms available on modern cluster platforms. For example, the qsub system can perform dynamic load balancing using a priority queue, and is installed on most clusters. Another excellent free tool that can launch parallel jobs across a network is GNU parallel; examples of using GNU parallel are detailed on this page, and a minimal sketch is given below.
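
A hedged sketch of running eight independent MMC processes with GNU parallel (the "-E" seed flag is an assumption; check "mmc -h" for the exact option on your version):

  seq 1 8 | parallel mmc -f input.inp -s out{} -E {}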

The above argument also extends to MCX: if you have a cluster that includes multiple GPU devices, you can use process-level parallelization to run multiple MCX instances simultaneously. The benefit of using MPI for communication is marginal.

14. Why doesn't using the same random number seed give me reproducible MMC solutions?

This only happens when you use the multi-threaded version of MMC. Fortunately, when it happens, the differences between runs are quite small.

The problem is caused by the limited precision of floating-point operations when they run in parallel in an arbitrary order. Under finite precision, floating-point addition is not associative: the results of "(a+b)+c" and "a+(b+c)" are no longer identical. The results therefore differ slightly from run to run when the order of execution changes. Unfortunately, OpenMP does not guarantee the execution order on CPUs. In comparison, CUDA does a much better job, and the MCX results with the same seeds are reproducible.
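
You can see the non-associativity with a quick experiment in MATLAB/Octave:

  (1e20 + -1e20) + 1   % = 1
  1e20 + (-1e20 + 1)   % = 0, because (-1e20 + 1) rounds back to -1e20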

Fortunately, this difference is typically much smaller than the stochastic noise of the Monte Carlo method itself, so you do not need to worry about it. If you are extremely cautious about reproducibility, you have to run MMC with a single thread per session and launch many processes in parallel to use all the CPU resources, as sketched below. This guarantees that the results are exactly identical given the same seed.
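
A hedged sketch, assuming MMC honors the standard OpenMP thread-count environment variable (the session and input file names are illustrative):

  OMP_NUM_THREADS=1 mmc -f run1.inp -s out1 &
  OMP_NUM_THREADS=1 mmc -f run2.inp -s out2 &
  wait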
