diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 53e9e98..2e01d77 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-08-05T19:31:31","documenter_version":"1.5.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-08-19T18:38:13","documenter_version":"1.5.0"}} \ No newline at end of file diff --git a/dev/abstract_mc.html b/dev/abstract_mc.html index 75a09c9..e91e951 100644 --- a/dev/abstract_mc.html +++ b/dev/abstract_mc.html @@ -1,2 +1,2 @@ -Implementing your algorithm · Carlo.jl

Implementing your algorithm

To run your own Monte Carlo algorithm with Carlo, you need to implement the AbstractMC interface documented in this file. For an example implementation showcasing all the features, take a look at the Ising example implementation.

Carlo.AbstractMCType

This type is an interface for implementing your own Monte Carlo algorithm that will be run by Carlo.

source

The following methods all need to be defined for your Monte Carlo algorithm type (here referred to as YourMC <: AbstractMC). See Parallel run mode for a slightly different interface that allows inner MPI parallelization of your algorithm.

Carlo.init!Function
Carlo.init!(mc::YourMC, ctx::MCContext, params::AbstractDict)

Executed when a simulation is started from scratch.

source
Carlo.sweep!Function
Carlo.sweep!(mc::YourMC, ctx::MCContext)

Perform one Monte Carlo sweep or update to the configuration.

Note

Doing measurements is supported during this step as some algorithms require doing so for efficiency. Remember to check for is_thermalized in that case.

source
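For example, a sweep that also records measurements might look like the following sketch, where do_update! and current_energy are hypothetical helpers of YourMC:

function Carlo.sweep!(mc::YourMC, ctx::MCContext)
    do_update!(mc, ctx.rng)                        # hypothetical Monte Carlo update
    if is_thermalized(ctx)                         # only record data after thermalization
        measure!(ctx, :Energy, current_energy(mc))    # hypothetical observable
    end
    return nothing
end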
Carlo.measure!Function
Carlo.measure!(mc::YourMC, ctx::MCContext)

Perform one Monte Carlo measurement.

source
Carlo.write_checkpointFunction
Carlo.write_checkpoint(mc::YourMC, out::HDF5.Group)

Save the complete state of the simulation to out.

source
Carlo.register_evaluablesFunction
Carlo.register_evaluables(mc::Type{YourMC}, eval::Evaluator, params::AbstractDict)

This function is used to calculate postprocessed quantities from quantities that were measured during the simulation. Common examples are variances or ratios of observables.

See evaluables for more details.

source
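Putting these methods together, a minimal skeleton could look like the following sketch. The YourMC struct, its spins field, the :nsites parameter, and the :Magnetization observable are illustrative assumptions only; the read_checkpoint! counterpart of write_checkpoint is included as well (see Parallel run mode for its parallel variant).

using Carlo
using HDF5
using Statistics: mean

mutable struct YourMC <: AbstractMC
    spins::Vector{Int}    # hypothetical configuration
end

YourMC(params::AbstractDict) = YourMC(fill(1, params[:nsites]))

function Carlo.init!(mc::YourMC, ctx::MCContext, params::AbstractDict)
    mc.spins .= rand(ctx.rng, (-1, 1), length(mc.spins))    # random initial configuration
    return nothing
end

function Carlo.sweep!(mc::YourMC, ctx::MCContext)
    i = rand(ctx.rng, eachindex(mc.spins))
    mc.spins[i] *= -1    # placeholder update; a real algorithm applies its own moves
    return nothing
end

function Carlo.measure!(mc::YourMC, ctx::MCContext)
    measure!(ctx, :Magnetization, mean(mc.spins))
    return nothing
end

function Carlo.write_checkpoint(mc::YourMC, out::HDF5.Group)
    out["spins"] = mc.spins
    return nothing
end

function Carlo.read_checkpoint!(mc::YourMC, in::HDF5.Group)
    mc.spins = read(in, "spins")
    return nothing
end

Carlo.register_evaluables(::Type{YourMC}, eval::Evaluator, params::AbstractDict) = nothing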

Interfacing with Carlo features

The MCContext type, passed to your code by some of the functions above, enables you to use some features provided by Carlo.

Carlo.measure!Method
measure!(ctx::MCContext, name::Symbol, value)

Measure a sample for the observable named name. The sample value may be either a scalar or vector of a float type.

source
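For instance, with hypothetical quantities energy and corr computed from the current configuration:

measure!(ctx, :Energy, energy)        # scalar sample
measure!(ctx, :Correlations, corr)    # vector of floats with a fixed length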
+Implementing your algorithm · Carlo.jl
diff --git a/dev/evaluables.html b/dev/evaluables.html index cd8d3ea..9c1da76 100644 --- a/dev/evaluables.html +++ b/dev/evaluables.html @@ -1,5 +1,5 @@ -Evaluables · Carlo.jl

Evaluables

In addition to simply calculating the averages of some observables in your Monte Carlo simulations, sometimes you are also interested in quantities that are functions of these observables, such as the Binder cumulant, which is related to a ratio of moments of the magnetization.

This presents two problems. First, estimating the errors of such quantities is not trivial due to correlations. Second, simply computing functions of quantities with errorbars incurs a bias.

Luckily, Carlo can help you with this by letting you define such quantities – we call them evaluables – in the Carlo.register_evaluables(YourMC, eval, params) function.

This function gets an Evaluator which can be used to

Carlo.evaluate!Function
evaluate!(func::Function, eval::Evaluator, name::Symbol, (ingredients::Symbol...))

Define an evaluable called name, i.e. a quantity depending on the observable averages ingredients.... The function func will get the ingredients as parameters and should return the value of the evaluable. Carlo will then perform jackknifing to calculate a bias-corrected result with correct error bars that appears together with the observables in the result file.

source

Example

This is an example of a register_evaluables implementation for a model of a magnet.

using Carlo
+Evaluables · Carlo.jl
 
 function Carlo.register_evaluables(
     ::Type{YourMC},
@@ -20,4 +20,4 @@
     end
 
     return nothing
-end

Note that this code is called after the simulation is over, so there is no way to access the simulation state. However, it is possible to get the needed information about the system (e.g. temperature, system size) from the task parameters params.
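As a concrete illustration, a complete implementation for such a magnet model might look like the following sketch. The observable names :Magnetization2 and :Magnetization4 and the task parameters :T and :Lx are assumptions; substitute whatever your simulation actually measures.

function Carlo.register_evaluables(
    ::Type{YourMC},
    eval::Evaluator,
    params::AbstractDict,
)
    T = params[:T]
    Lx = params[:Lx]

    # moment ratio related to the Binder cumulant
    evaluate!(eval, :BinderRatio, (:Magnetization2, :Magnetization4)) do mag2, mag4
        return mag2 * mag2 / mag4
    end

    # magnetic susceptibility with a hypothetical normalization
    evaluate!(eval, :Susceptibility, (:Magnetization2,)) do mag2
        return Lx^2 * mag2 / T
    end

    return nothing
end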

+end
diff --git a/dev/index.html b/dev/index.html index f363036..2bf92d1 100644 --- a/dev/index.html +++ b/dev/index.html @@ -24,4 +24,4 @@ tasks=make_tasks(tm) ) -start(job, ARGS)

This example starts a simulation of the Ising model on a 10×10 lattice for 20 different temperatures. Using the function start(job::JobInfo, ARGS) enables the Carlo CLI, so we can execute the script above as follows.

./myjob --help

The command line interface allows (re)starting a job, merging preliminary results, and showing the completion status of a calculation.

Starting jobs

./myjob run

This will start a simulation on a single core. To use multiple cores, use MPI.

mpirun -n $num_cores ./myjob run

Once the simulation is started, a directory myjob.data will be created to store all simulation data. The name of the directory corresponds to the first argument of JobInfo. Usually that will be @__FILE__, but you could also collect your simulation data in a different directory.

The data directory will contain HDF5 files for each task of the job, holding checkpointing snapshots and measurement results. Once the job is done, Carlo will average the measurement data for you and produce the file myjob.results.json in the same directory as the myjob.data directory. This file contains the means and errorbars of all observables. See ResultTools for some tips on reading this file back into Julia for plotting or other postprocessing.

Job status

./myjob status

Use this command to find out the state of the simulation. It will show a table with the number of completed measurement sweeps, the target number of sweeps, the number of runs, and the fraction of them that are thermalized.

The fraction is defined as the number of completed thermalization sweeps divided by the total number of thermalization sweeps needed.

Merging jobs

./myjob merge

Usually Carlo will automatically merge results once a job is complete, but when you are impatient and you want to check on results of a running or aborted job, this command is your friend. It will produce a myjob.results.json file containing the averages of the currently available data.

Deleting jobs

./myjob delete

This deletes myjob.data and myjob.results.json. Of course, you should archive your simulation data instead of deleting them. However, if you made an error in a previous simulation, keep in mind that by default Carlo will continue it from the checkpoints.

For that case of restarting a job, there is a handy shortcut as well:

./myjob run --restart

Shortcuts

All commands here have shortcut versions that you can view in the help.

+start(job, ARGS)
diff --git a/dev/jobtools.html b/dev/jobtools.html index 1182478..2e68280 100644 --- a/dev/jobtools.html +++ b/dev/jobtools.html @@ -7,7 +7,7 @@ tasks::Vector{TaskInfo}, rng::Type = Random.Xoshiro, ranks_per_run::Union{Integer, Symbol} = 1, -)

Holds all information required for a Monte Carlo calculation. The data of the calculation (parameters, results, and checkpoints) will be saved under job_directory_prefix.

mc is the type of the algorithm to use, implementing the abstract_mc interface.

checkpoint_time and run_time specify the interval between checkpoints and the total desired run time of the simulation. Both may be specified as a string in the format [[hours:]minutes:]seconds.

Each job contains a set of tasks, corresponding to different sets of simulation parameters that should be run in parallel. The TaskMaker type can be used to conveniently generate them.

rng sets the type of random number generator that should be used.

Setting the optional parameter ranks_per_run > 1 enables Parallel run mode. The special value ranks_per_run = :all uses all available ranks for a single run.

source
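A typical construction could look like the following sketch, where YourMC is your AbstractMC implementation and tm is a TaskMaker filled as shown below:

using Carlo
using Carlo.JobTools

job = JobInfo(
    @__FILE__,                   # data is stored next to the job script
    YourMC;
    checkpoint_time = "30:00",   # checkpoint every 30 minutes
    run_time = "24:00:00",       # run for at most 24 hours
    tasks = make_tasks(tm),
)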
Carlo.JobTools.TaskInfoType
TaskInfo(name::AbstractString, params::Dict{Symbol,Any})

Holds information of one parameter set in a Monte Carlo calculation. While it is possible to construct it by hand, for multiple tasks, it is recommended to use TaskMaker for convenience.

Special parameters

While params can hold any kind of parameter, some are special and used to configure the behavior of Carlo.

  • sweeps: required. The minimum number of Monte Carlo measurement sweeps to perform for the task.

  • thermalization: required. The number of thermalization sweeps to perform.

  • binsize: required. The internal default binsize for observables. Carlo will merge this many samples into one bin before saving them. On top of this, a rebinning analysis is performed, so this setting mostly affects disk space and IO efficiency. To get correct autocorrelation times, it should be 1; in all other cases it can be much higher.

  • rng: optional. Type of the random number generator to use. See rng.

  • seed: optional. Run calculations with a fixed seed. Useful for debugging.

  • rebin_length: optional. Override the automatic rebinning length chosen by Carlo (⚠ do not set without knowing what you are doing).

  • rebin_sample_skip: optional. Skip the first N internal bins of each run when performing the rebinning analysis. Useful if thermalization was not set high enough at the start of the simulation.

  • max_runs_per_task: optional. If set, puts a limit on the maximum number of runs that will be scheduled for this task.

Out of these parameters, only sweeps may be changed for an existing calculation. This is handy for running the simulation for longer or shorter than originally planned.

source
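For illustration, a single task constructed by hand might look like the following sketch; the model parameter :T is an assumption, and for more than a few tasks TaskMaker is the more convenient route:

my_task = TaskInfo(
    "T0.50",
    Dict{Symbol,Any}(
        :sweeps => 10000,
        :thermalization => 2000,
        :binsize => 100,
        :seed => 42,      # optional: reproducible runs for debugging
        :T => 0.5,        # model-specific parameter
    ),
)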
Carlo.JobTools.result_filenameFunction
result_filename(job::JobInfo)

Returns the filename of the .results.json file containing the merged results of the calculation of job.

source
Carlo.startMethod
start(job::JobInfo, ARGS)

Call this from your job script to start the Carlo command line interface.

If for any reason you do not want to use job scripts, you can directly schedule a job using

start(Carlo.MPIScheduler, job)
source

TaskMaker

Carlo.JobTools.TaskMakerType
TaskMaker()

Tool for generating a list of tasks, i.e. parameter sets, to be simulated in a Monte Carlo simulation.

The fields of TaskMaker can be freely assigned, and each time task is called, their current state will be copied into a new task. Finally, the list of tasks can be generated using make_tasks.

In most cases the resulting tasks will be used in the constructor of JobInfo, the basic description for jobs in Carlo.

Example

The following example creates a list of 5 tasks for different values of the parameter T. This could be a scan of the finite-temperature phase diagram of some model. The first task will be run with more sweeps than the rest.

tm = TaskMaker()
+)
 tm.sweeps = 10000
 tm.thermalization = 2000
 tm.binsize = 500
@@ -18,4 +18,4 @@
     task(tm; T=T)
 end
 
-tasks = make_tasks(tm)
source
Carlo.JobTools.taskFunction
task(tm::TaskMaker; kwargs...)

Creates a new task for the current set of parameters saved in tm. Optionally, kwargs can be used to specify parameters that are set for this task only.

source
Carlo.JobTools.make_tasksFunction
make_tasks(tm::TaskMaker)::Vector{TaskInfo}

Generate a list of tasks from tm based on the previous calls of task. The output of this will typically be supplied to the tasks argument of JobInfo.

source
Carlo.JobTools.current_task_nameFunction
current_task_name(tm::TaskMaker)

Returns the name of the task that will be created by task(tm).

source
+tasks = make_tasks(tm)source
diff --git a/dev/parallel_run_mode.html b/dev/parallel_run_mode.html index 14b773f..94c04f9 100644 --- a/dev/parallel_run_mode.html +++ b/dev/parallel_run_mode.html @@ -2,4 +2,4 @@ Parallel run mode · Carlo.jl

Parallel run mode

One of Carlo’s features is to automatically parallelize independent Monte Carlo simulation runs over MPI. These runs can either share the same set of parameters – in which case their results are averaged – or have different parameters entirely.

Sometimes this kind of trivial parallelism is not satisfactory. For example, it does not shorten the time needed for thermalization, and some Monte Carlo algorithms can benefit from some sort of population control that exchanges data between different simulations of the same random process.

For these cases, Carlo features a parallel run mode where each Carlo run does not run on one but multiple MPI ranks. Parallel run mode is enabled in JobInfo by passing the ranks_per_run argument.
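For example, a job that assigns four MPI ranks to each run could be set up like the following sketch (YourMC and tm as in the JobInfo example):

job = JobInfo(
    @__FILE__,
    YourMC;
    checkpoint_time = "30:00",
    run_time = "24:00:00",
    tasks = make_tasks(tm),
    ranks_per_run = 4,    # each run spans four MPI ranks
)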

Parallel AbstractMC interface

In order to use parallel run mode, the Monte Carlo algorithm must implement a modified version of the AbstractMC interface including additional MPI.Comm arguments that allow coordination between the different ranks per run.

The first three functions

Carlo.init!(mc::YourMC, ctx::MCContext, params::AbstractDict, comm::MPI.Comm)
 Carlo.sweep!(mc::YourMC, ctx::MCContext, comm::MPI.Comm)
 Carlo.measure!(mc::YourMC, ctx::MCContext, comm::MPI.Comm)

simply receive an additional comm argument. An important restriction here is that only rank 0 can make measurements on the given MCContext, so you are responsible for communicating the measurement results to that rank.
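A sketch of such a measurement, assuming a hypothetical per-rank estimator local_energy_estimate whose results are simply summed over the ranks of the run:

using MPI

function Carlo.measure!(mc::YourMC, ctx::MCContext, comm::MPI.Comm)
    local_energy = local_energy_estimate(mc)                   # hypothetical per-rank estimator
    total_energy = MPI.Reduce(local_energy, +, comm; root = 0)
    if MPI.Comm_rank(comm) == 0                                # only rank 0 may call measure!
        measure!(ctx, :Energy, total_energy / MPI.Comm_size(comm))
    end
    return nothing
end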

For checkpointing, there is a similar catch.

Carlo.write_checkpoint(mc::YourMC, out::Union{HDF5.Group,Nothing}, comm::MPI.Comm)
-Carlo.read_checkpoint!(mc::YourMC, in::Union{HDF5.Group,Nothing}, comm::MPI.Comm)

In these methods, only rank 0 receives an HDF5.Group; the other ranks need to communicate their data to it. Carlo does not use the collective writing mode of parallel HDF5.
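For instance, a sketch that gathers a hypothetical per-rank spins field to rank 0 before writing:

using MPI
using HDF5

function Carlo.write_checkpoint(mc::YourMC, out::Union{HDF5.Group,Nothing}, comm::MPI.Comm)
    all_spins = MPI.Gather(mc.spins, comm; root = 0)    # returns nothing on nonroot ranks
    if MPI.Comm_rank(comm) == 0
        out["spins"] = all_spins
    end
    return nothing
end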

Sometimes, you also want to share work during the construction of YourMC. For this reason, Carlo will add the hidden parameter _comm to the parameter dictionary received by the constructor YourMC(params::AbstractDict). params[:_comm] is then an MPI communicator similar to the comm argument of the functions above.

Lastly, the Carlo.register_evaluables function remains the same as in the normal interface.

+Carlo.read_checkpoint!(mc::YourMC, in::Union{HDF5.Group,Nothing}, comm::MPI.Comm)
diff --git a/dev/resulttools-3b27b815.svg b/dev/resulttools-c9795009.svg similarity index 83% rename from dev/resulttools-3b27b815.svg rename to dev/resulttools-c9795009.svg index 660a33c..0a9fd84 100644 --- a/dev/resulttools-3b27b815.svg +++ b/dev/resulttools-c9795009.svg @@ -1,60 +1,60 @@ [regenerated SVG plot; path data omitted] diff --git a/dev/resulttools.html b/dev/resulttools.html index 0d436c2..72c4420 100644 --- a/dev/resulttools.html +++ b/dev/resulttools.html @@ -1,8 +1,8 @@ -ResultTools · Carlo.jl

ResultTools

This is a small module to ease importing Carlo results back into Julia. It contains the function

Carlo.ResultTools.dataframeFunction
ResultTools.dataframe(result_json::AbstractString)

Helper to import result data from a *.results.json file produced after a Carlo calculation. Returns a Tables.jl-compatible dictionary that can be used as is or converted into a DataFrame or other table structure. Observables and their errorbars will be converted to Measurements.jl measurements.

source

Using ResultTools together with DataFrames.jl to read out the results of the Ising example looks like the following.

using Plots
+ResultTools · Carlo.jl
 using DataFrames
 using Carlo.ResultTools
 
 df = DataFrame(ResultTools.dataframe("example.results.json"))
 
-plot(df.T, df.Energy; xlabel = "Temperature", ylabel="Energy per spin", group=df.Lx, legendtitle="L")
Example block output

In the plot we can nicely see how the model approaches the ground state energy at low temperatures.
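Since the observable columns contain Measurements.jl values, you can also separate the means and errorbars for further processing, for example:

using Measurements: value, uncertainty

energy_means = value.(df.Energy)          # central values
energy_errors = uncertainty.(df.Energy)   # one-sigma errorbars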

+plot(df.T, df.Energy; xlabel = "Temperature", ylabel="Energy per spin", group=df.Lx, legendtitle="L")
diff --git a/dev/rng.html b/dev/rng.html index 4b2054d..c08cbd4 100644 --- a/dev/rng.html +++ b/dev/rng.html @@ -1,2 +1,2 @@ -Random Number Generators · Carlo.jl

Random Number Generators

Carlo takes care of storing and managing the state of random number generators (RNG) for you. It is accessible through the rng field of MCContext, and the type of RNG to use can be set by the rng parameter of each task (see TaskInfo).

The currently supported types are

  • Random.Xoshiro
+Random Number Generators · Carlo.jl