This program is a data converter that preprocesses data from netCDF4 files and converts it into images for 3D visualization on https://climatearchive.org. It offers a wide range of options to customize the conversion process, and supports converting individual files as well as folders of files.
python nimbus.py [OPTIONS]
Options:
- --variables, -v: Select one or more variables (comma-separated) for conversion, or 'all' for every variable specified in the configuration file. See more
- --config, -c: Select configuration files. See more
- --experiments, -e: Select one or more experiment IDs (comma-separated) for conversion.
- --folder, -f: Convert the given file or folder.
- --output, -o: Select the output file or folder.
- --clean, -cl: Clean the output directory.
- --debug, -d: Show debug messages in the console.
- --chunkstime, -ct: Specify the number of chunks along the time dimension (horizontal chunking).
- --chunksvertical, -cv: Specify the number of chunks along the vertical dimension.
- --labels, -l: Specify labels for the given experiments, for later use in the climate archive API.
- --publication, -p: Only used with the climate archive API; specify a folder, a file, or a URL containing information about published papers, for more precise filtering of experiments in the API.
Usage Examples:
- To select an experiment for conversion:
python nimbus.py -v variable1 -e expid -c config.toml
- Send publication information to the database:
python nimbus.py -p file.html
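For scripted workflows, the converter can also be driven from another Python program. Here is a minimal sketch using the standard library's subprocess module; the variable names, experiment ID, and configuration file are placeholders taken from the examples in this document.

```python
# Minimal sketch: invoking the converter from a Python script.
# Flags are those documented above; "tas,pr", "expid" and the TOML file are placeholders.
import subprocess

subprocess.run(
    ["python", "nimbus.py",
     "-v", "tas,pr",              # comma-separated variables
     "-e", "expid",               # experiment ID
     "-c", "BRIDGE_monthly.toml"],
    check=True,                   # raise if the converter exits with an error
)
```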
To convert netCDF files, the variables contained in those files must be defined. Many variables are already defined in the BRIDGE configuration file. Configuration files are written in TOML; here is a simple example:
[Model]
dir="your/directory/containig/data"
output_dir="your/output/directory"
name="your_name"
[variable1]
preprocessing="PREPROCESS_REF"
processing="PROCESS_REF"
files=["file1.nc", "file2.nc"]
variable = "variable_name_in_netcdf_file"
[Model.metadata]
file = "file/containing/experiment/metadata"
parser="type of parser for the file [bridge, dat, json]"
tags = ["key of metadata that you want to store"]
Any additional key/value pair will be treated as a default metadata tag.
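For reference, such a file parses into nested tables. The sketch below reads the example with Python's standard tomllib module (available from Python 3.11); Nimbus's own loader may work differently.

```python
# Minimal sketch: loading the example configuration with the standard library.
# Requires Python 3.11+ for tomllib; Nimbus's actual loader may differ.
import tomllib

with open("config.toml", "rb") as fh:
    config = tomllib.load(fh)

print(config["Model"]["dir"])               # "your/directory/containing/data"
print(config["variable1"]["files"])         # ["file1.nc", "file2.nc"]
print(config["Model"]["metadata"]["tags"])  # metadata keys to store
```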
The preprocessing and processing attributes reference the processing steps that should be applied to the data before conversion (see more). The following specifications can be added in the Model section of the configuration file:
- specific levels, for example:
[Model.Atmosphere]
levels = [1000, 850, 700, 500, 200, 100, 10]
unit = "hPa"
resolutions = [["default", "default"], [3.5, -5]]
Here we specify the list of levels we want to display in the output images, the unit of these levels, and the resolutions. The resolutions are a list of pairs, where each pair gives new longitude and latitude spacings in degrees; for each pair, an image is produced at the requested resolution (see the regridding sketch after this list). If no resolutions are specified, or if the pair ["default", "default"] is in the list, the input data is not resized.
- resolutions, as a list of pairs where each pair contains new values for longitude and latitude spacing in degrees. For example:
resolutions = [[3.5, -5]]
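To make the meaning of a resolution pair concrete, here is a purely illustrative regridding sketch with xarray. It is not Nimbus's actual implementation and assumes the dataset's coordinates are named lon and lat.

```python
# Illustrative only: apply one (lon, lat) spacing pair by interpolation.
# Assumes coordinates named "lon" and "lat"; Nimbus's internal resampling may differ.
import numpy as np
import xarray as xr

def regrid(ds: xr.Dataset, lon_step: float, lat_step: float) -> xr.Dataset:
    """Interpolate the dataset onto a grid with the requested spacing (degrees)."""
    new_lon = np.arange(0.0, 360.0, abs(lon_step))
    # A negative latitude step simply means the new latitudes run from north to south.
    if lat_step < 0:
        new_lat = np.arange(90.0, -90.0 + lat_step, lat_step)
    else:
        new_lat = np.arange(-90.0, 90.0 + lat_step, lat_step)
    return ds.interp(lon=new_lon, lat=new_lat)

# e.g. ds_low = regrid(xr.open_dataset("file1.nc"), 3.5, -5)
```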
This program outputs images in the png folder, the processed versions of the input netCDF files in the netcdf folder, and logs in a log folder. These three folders are placed inside a folder named after the expID.
The images are named as follows:
{configNAME}.{expID}.{variableNAME}.avg.png
for the time mean image, and
{configNAME}.{expID}.{variableNAME}.ts.png
for the time series image.
If resolutions are specified in the config file (and different from "default"), the resolution is included in the image name:
{configNAME}.{expID}.{variableNAME}.rx{xVALUE}.ry{yVALUE}.[avg|ts].png
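As an illustration of this naming scheme, a small helper (not part of Nimbus itself) could build such names as follows.

```python
# Illustrative helper reproducing the naming scheme described above.
def image_name(config_name, exp_id, variable, kind, rx=None, ry=None):
    """kind is 'avg' for the time mean or 'ts' for the time series image."""
    parts = [config_name, exp_id, variable]
    if rx is not None and ry is not None and (rx, ry) != ("default", "default"):
        parts += [f"rx{rx}", f"ry{ry}"]
    parts.append(kind)
    return ".".join(parts) + ".png"

print(image_name("BRIDGE_monthly", "expid", "tas", "avg"))          # BRIDGE_monthly.expid.tas.avg.png
print(image_name("BRIDGE_monthly", "expid", "tas", "ts", 3.5, -5))  # BRIDGE_monthly.expid.tas.rx3.5.ry-5.ts.png
```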
To add a new variable, one must define a new Python file in the supported_variables folder. The preprocessing and processing functions can be defined in this file, as well as the realm that corresponds to the variable. For example, the variable oceanCurrents has a specific preprocessing function annotated with:
@preprocessing(OceanCurrents,'BRIDGE')
The argument 'BRIDGE' corresponds to the value assigned to 'preprocessing' in the TOML configuration file (see BRIDGE_monthly.toml).
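For readers unfamiliar with this pattern, the annotation behaves like a registry decorator: the decorated function is stored under a (variable, source) key and looked up at conversion time. The sketch below shows the general mechanism with illustrative names only; it is not Nimbus's actual internals.

```python
# Generic registry-decorator sketch; names are assumptions, not Nimbus's internals.
_PREPROCESSORS = {}

def preprocessing(variable_name, config_key):
    """Register a preprocessing function for (variable, configuration key)."""
    def register(func):
        _PREPROCESSORS[(variable_name, config_key)] = func
        return func
    return register

@preprocessing("oceanCurrents", "BRIDGE")
def preprocess_ocean_currents(data):
    # Placeholder: clean or combine the raw fields before conversion.
    return data

# At conversion time, the 'preprocessing' value from the TOML file selects the function.
handler = _PREPROCESSORS[("oceanCurrents", "BRIDGE")]
```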
The following variables are currently supported, with their corresponding names in BRIDGE netCDF files:
- clt: named "totCloud_mm_ua" in BRIDGE netCDF files
- tas: named "temp_mm_1_5m" in BRIDGE netCDF files
- pr: named "precip_mm_srf" in BRIDGE netCDF files
- winds: named "u_mm_p" in BRIDGE netCDF files
- snc: named "snowCover_mm_srf" in BRIDGE netCDF files
- liconc: named "fracPFTs_mm_srf" in BRIDGE netCDF files
- pfts: named "fracPFTs_mm_srf" in BRIDGE netCDF files
- tos: named "temp_mm_uo" in BRIDGE netCDF files
- mlotst: named "mixLyrDpth_mm_uo" in BRIDGE netCDF files
- siconc: named "iceconc_mm_uo" in BRIDGE netCDF files
- oceanCurrents: named "ucurrTot_ym_dpth" in BRIDGE netCDF files
- height: converted from inidata files (qrparm.orog)