The link to this GitHub package is https://github.com/FRBNY-DSGE/Replication-of-Online-Estimation-of-DSGE-Models.
The code in this repository replicates "Online Estimation of DSGE Models" by Michael Cai, Marco Del Negro, Edward Herbst, Ethan Matlin, Reca Sarfati, and Frank Schorfheide. In general, most of the computation code is structured to run in batch mode in parallel on a large cluster, while the plotting code is structured to be run using a single core.
This repository is designed primarily to replicate the figures and tables in the "Online Estimation" paper. If you are interested in using SMC for your own models, check out our independent software package SMC.jl.
Note: Re-running all exercises in the paper could take many weeks with thousands of cores. Hence, we've modularized the replication code for ease of use because:
- Many clusters impose time limits on jobs, so it may be necessary for users to run the replication chunk by chunk
- Some users may want to focus on particular sections rather than "pressing a button and having the whole shebang run."
Make sure you're using the latest versions of all our packages!
- DSGE.jl v1.1.1
- SMC.jl v0.1.4
- StateSpaceRoutines.jl v0.3.1
- ModelConstructors.jl v0.1.8
To add packages, in the Julia REPL, type
]add PACKAGENAME
To add all packages you need, enter the following into the Julia REPL, in the indicated order:
- ]add DSGE@v1.1.1
- ]add SMC@v0.1.4
- ]add StateSpaceRoutines@v0.3.1
- ]add ModelConstructors@v0.1.8
- ]add BenchmarkTools CSV Calculus ClusterManagers ColorTypes DataFrames DataStructures Dates DelimitedFiles DiffEqDiffTools DifferentialEquations Distributed Distributions FFTW FileIO ForwardDiff FredData GR HDF5 InteractiveUtils JLD2 KernelDensity LinearAlgebra MAT MbedTLS Measures Missings NLSolversBase Nullables Optim OrderedCollections PDMats PackageCompiler Plots Printf Query Random RecipesBase Roots SharedArrays SparseArrays SpecialFunctions Statistics StatsBase StatsFuns StatsPlots Test TimeZones Tracker
- At this point, you will also need to follow the instructions here for setting up a FRED API key.
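For scripted setups, the REPL commands above can also be run non-interactively through Julia's Pkg API (a sketch; assumes julia is on your PATH, and uses the pinned versions listed above). The command is only printed here; uncomment the last line to actually install.

```shell
# Pkg API equivalent of the `]add PACKAGENAME@vX.Y.Z` REPL commands:
# Pkg.add(PackageSpec(name=..., version=...)) pins each package version.
PKG_CMD='using Pkg; Pkg.add([PackageSpec(name="DSGE", version="1.1.1"), PackageSpec(name="SMC", version="0.1.4"), PackageSpec(name="StateSpaceRoutines", version="0.3.1"), PackageSpec(name="ModelConstructors", version="0.1.8")])'
echo "$PKG_CMD"
# julia -e "$PKG_CMD"   # uncomment to run the installation for real
```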
Before running any of the code below, you need to load and precompile all the packages (otherwise, if you're launching multiple jobs at once, as our batch scripts do, the various jobs will all precompile over each other and screw everything up). The easiest way to do so is to run one of the main Julia scripts within a single instance, as follows:
- Navigate to batchfiles/200202/AnSchorfheide/
- Open a Julia session
- In the Julia session, run
include("specAnSchorf_N_MH=1_3_5.jl")
- This will load and precompile all the packages that are used, but will eventually throw an error ("BoundsError: attempt to access 0-element Array") since it's not getting the arguments from the batch job submission. That's OK; the only point of running this is to precompile all the packages before running all the batch job submissions.
- Now you should be ready to go with submitting the batch scripts as discussed below.
- If it still doesn't work, you can manually precompile all the packages by copying and pasting the list of packages you added above into
using [insert list of packages here]
That should guarantee all the packages are precompiled and ready to go.
WARNING: Running all 400 simulations of AS and 200 simulations of SW takes multiple days using >12,000 cores and produces 1.1 TB of output. To reduce the number of simulations:
- Change line 9 of
batchfiles/200202/AnSchorfheide/master_script_as.sh
to the number of simulations you want (currently 400)
- Change line 10 of
batchfiles/200202/SmetsWouters/master_script_sw.sh
to the number of simulations you want (currently 200)
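Both edits follow the same pattern; here is a sketch with GNU sed, demoed on a stand-in file so nothing real is touched (the variable name N_SIM is hypothetical — check what line 9 of master_script_as.sh actually contains before editing in place):

```shell
# Create a stand-in for master_script_as.sh whose line 9 holds the
# simulation count, then rewrite just that line with GNU sed.
printf '%s\n' '#!/bin/bash' 2 3 4 5 6 7 8 'N_SIM=400' 10 > demo_master_script.sh
sed -i '9s/400/50/' demo_master_script.sh   # reduce 400 simulations to 50
sed -n '9p' demo_master_script.sh           # prints: N_SIM=50
```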
To re-run the simulations for sections 4.1 and 4.2 (this code has been run and tested using a slurm scheduler and Julia 1.1.0 on BigTex):
- Go to batchfiles/200202/AnSchorfheide
- Change line 10 of specAnSchorf_N_MH=1_3_5.jl to the path where you git cloned the repo (e.g. if the directory you git cloned to is ~/user/work/SMC_Paper_Replication/, you should set this line to ~/user/work/SMC_Paper_Replication/)
- Run master_script_as.sh (with sbatch if using a slurm scheduler). This launches the 400 estimations of AS for each alpha x N_MH combination.
- Go to batchfiles/200202/SmetsWouters
- Change line 10 of specsmetsWout_N_MH=1_3_5.jl to the path where you git cloned the repo (same example as above)
- Run master_script_sw.sh (with sbatch if using a slurm scheduler). This launches the 200 estimations of SW for each alpha x N_MH combination.
- The output is saved in save/200202. This output is read in by estimation_section.jl
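The "change line 10 to your clone's path" step can also be scripted with GNU sed's c (change-line) command; a sketch on a stand-in file (SMC_DIR is a hypothetical variable name — use whatever line 10 of the spec file actually defines):

```shell
# Stand-in for specAnSchorf_N_MH=1_3_5.jl: 12 filler lines, then point
# line 10 at the cloned repo with sed's change-line command.
seq 1 12 | sed 's/^/# filler /' > demo_spec.jl
sed -i '10c\SMC_DIR = "~/user/work/SMC_Paper_Replication/"' demo_spec.jl
sed -n '10p' demo_spec.jl   # shows the rewritten line 10
```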
Sometimes, particular jobs will segfault (by jobs, I mean iteration number by est_spec pairs). This is fine and likely just a cluster issue. You are using enormous amounts of resources, after all! To find out which job failed, you can run estimation_section.jl. If it can't find a file, that's because that job (iteration number by est_spec pair) failed. Say the (est_spec=3, iteration=206) job failed. All you need to do in this case is:
- go to batchfiles/200202/AnSchorfheide/est_spec_3/206, open up as_estimation.sh
- delete lines 20 and 21 (since you're not passing arguments from another batch script if you just re-run this particular one)
- change line 22 to julia ../../specAnSchorf_N_MH=1_3_5.jl 48 3 206 (3 is est_spec and 206 is iteration number as referenced in the comment)
- in that folder, sbatch as_estimation.sh
- This is fine to do because the only thing the higher level batch script does is just submits all the individual batch scripts which can be run and re-run individually
- For every job that failed, do the same thing but with different est_spec and iteration pairs corresponding to the job which failed
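The per-job edits above can be done in one sed invocation; because sed addresses refer to input line numbers, the delete and the change don't shift each other. A sketch on a stand-in as_estimation.sh (the real file's contents will differ):

```shell
# Stand-in batch script with 25 numbered lines.
seq -f 'line %g' 1 25 > demo_as_estimation.sh
# One pass: drop lines 20-21 (the argument-passing lines) and rewrite
# line 22 to call the spec file directly with est_spec=3, iteration=206.
sed -i -e '20,21d' -e '22c\julia ../../specAnSchorf_N_MH=1_3_5.jl 48 3 206' demo_as_estimation.sh
# After the two deletions, the rewritten line sits at line 20 of the output.
sed -n '20p' demo_as_estimation.sh
```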
To re-run the simulations for section 4.3 (this code has been run and tested using an SGE scheduler and Julia 1.1.1 on the FRBNY RAN):
- Go to specfiles/200201
- Change line 19 of specAnSchorfheideExercise.jl to the path where you git cloned the repo (see example in section above)
- Run specAnSchorfheideExercise.jl with at least 100 GB memory on the head worker. The Julia script adds 48 workers (with 3 GB memory each). You'll need to modify the lines which add workers to your local cluster.
- This saves the estimation simulations in save/200201 and also makes the plots based on these simulations (also saved in save/200201)
- If you get an error, that's fine (you probably just ran out of memory for your cluster job). All you need to do is restart the script from the iteration it failed at minus 1 (minus 1 in case any files from the last iteration were corrupted by the error), i.e. change line 91 to X-1:N_sim, where X is the iteration the job failed at. This is fine to do because all of the N_sim iterations are totally independent of each other.
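The restart edit in the last bullet, sketched with GNU sed on a stand-in file (this assumes line 91 holds the 1:N_sim loop range; here the failed iteration X is 206, so the range becomes 205:N_sim):

```shell
# Stand-in for specAnSchorfheideExercise.jl with the iteration loop on line 91.
seq -f '# filler %g' 1 90 > demo_exercise.jl
echo 'for i in 1:N_sim' >> demo_exercise.jl
# Restart from X-1 = 205 after a failure at iteration X = 206.
sed -i '91s/1:N_sim/205:N_sim/' demo_exercise.jl
sed -n '91p' demo_exercise.jl   # prints: for i in 205:N_sim
```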
To re-run the realtime estimations of SW, SWFF, and SWpi used in section 5 (this code has been run and tested using a slurm scheduler and Julia 1.1.0 on BigTex):
- Change line 6 of specfiles/200203/specAll_1991-2017.jl to the directory where you git cloned the repo
- Go to batchfiles/200203
- Run master.sh (with sbatch if using a slurm scheduler)
- The output will save in save/200203
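The "(sbatch if using a slurm scheduler)" pattern throughout this README can be wrapped in a small guard that falls back to running the script directly when no scheduler is present (a sketch; demoed with a stub instead of the real master.sh):

```shell
# Stub standing in for master.sh.
printf '%s\n' '#!/bin/bash' 'echo submitted' > demo_master.sh
chmod +x demo_master.sh
# Submit through slurm if sbatch exists; otherwise run locally.
if command -v sbatch >/dev/null 2>&1; then
  sbatch demo_master.sh
else
  ./demo_master.sh
fi
```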
To re-run the predictive densities in section 5 (this code has been run and tested using a slurm scheduler and Julia 1.1.0 on BigTex):
- Go to batchfiles/200204/
- Change line 21 of specPredDensity.jl to the path where you git cloned the repo (see example in first section)
- Run master_script.sh (with sbatch if using a slurm scheduler)
- Running this launches separate parallel jobs for different combinations of predictive densities (prior, conditional data, etc.)
- The predictive densities' raw output saves to save/200117. This data is loaded by forecasting_section.jl to produce the predictive density plots.
(The plotting code below has been run and tested using Julia 1.1.1 on the FRBNY RAN.)
To replicate AS figures/tables in section 4:
- Run estimation_section.jl with model = :AS (line 4)
- If the script can't find a file, that means that job failed (see section beginning with "Sometimes, particular jobs will segfault" above)
To replicate SW figures/tables in section 4:
- Run estimation_section.jl with model = :SW (line 4)
- If the script can't find a file, that means that job failed (see section beginning with "Sometimes, particular jobs will segfault" above)
To replicate figures/tables in section 5:
- Run forecasting_section.jl
Copyright Federal Reserve Bank of New York. You may reproduce, use, modify, make derivative works of, and distribute this code in whole or in part so long as you keep this notice in the documentation associated with any distributed works. Neither the name of the Federal Reserve Bank of New York (FRBNY) nor the names of any of the authors may be used to endorse or promote works derived from this code without prior written permission. Portions of the code attributed to third parties are subject to applicable third party licenses and rights. By your use of this code you accept this license and any applicable third party license.
THIS CODE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT ANY WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, EXCEPT TO THE EXTENT THAT THESE DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. FRBNY IS NOT, UNDER ANY CIRCUMSTANCES, LIABLE TO YOU FOR DAMAGES OF ANY KIND ARISING OUT OF OR IN CONNECTION WITH USE OF OR INABILITY TO USE THE CODE, INCLUDING, BUT NOT LIMITED TO DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL, PUNITIVE, SPECIAL OR EXEMPLARY DAMAGES, WHETHER BASED ON BREACH OF CONTRACT, BREACH OF WARRANTY, TORT OR OTHER LEGAL OR EQUITABLE THEORY, EVEN IF FRBNY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES OR LOSS AND REGARDLESS OF WHETHER SUCH DAMAGES OR LOSS IS FORESEEABLE.